Complex Systems and Cognitive Processes (Serra, Zanarini, 1990)

Roberto Serra, Gianni Zanarini

"A relatively cautious proposal is that of 'heterogeneous systems', i.e. systems composed of two or more modules, some symbolic and others dynamical, which specialize in different tasks... Such a 'loose coupling' solution is the least dangerous one, since the two approaches are kept separate, interacting only through the exchange of data: the output of a given module is (a part of) the input of another module... Another different suggestion is that of using the results which can be achieved by training... in order to provide hints and suggestions for the knowledge engineering phase"

JLJ - Cilliers, in Complexity and Postmodernism, cited p.26 of this work (in a definition of rule-based systems), and out of curiosity I found a copy (for $0.03 on amazon.com) to see if anything else was of interest.

The description of dynamical systems provided by Serra and Zanarini is interesting: "Classical AI has obtained important results... however, things turned out to be more difficult than the enthusiasts tended to think. Thus, there is currently a tendency to see dynamical systems, now more credible and interesting... as a possible solution to some of the difficulties in AI."

An interesting thought is this: do you really *need* to be able to predict the future, or do you instead need to be *ready* for whatever future arises? This reframing changes the nature of AI from precise prediction, which in my opinion is impossible, to building adaptive capacity. AI involves answering the question, "how do I 'go on'?"

When I get onto I-66 every morning, I do not try to predict exactly what is going to happen. I read the richly detailed cues present, and place myself (and the car I am driving) in positions where I am reasonably sure that I can adapt to whatever arises. This is usually good enough, when I apply some margin and some common sense. Of course I do some anticipation, but I never "decide" what to do based on a single scenario. I imagine a range of actions, and a range of behaviors from other drivers, and position myself accordingly. I also use the "rule of 1.1": if the gap between your car and the vehicle in front of you is more than 1.1 times the length of the car alongside you, then inevitably, that car will try to cut in front of you.
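A minimal sketch of this "rule of 1.1" heuristic; the function name and the example gap and length values (in metres) are hypothetical illustrations:

```python
# Minimal sketch of the "rule of 1.1" heuristic; the example values are invented.

def expect_cut_in(gap_ahead_m, alongside_car_length_m):
    """True if the gap ahead of me exceeds 1.1 times the length of the
    car alongside, i.e. large enough that a cut-in should be expected."""
    return gap_ahead_m > 1.1 * alongside_car_length_m

print(expect_cut_in(25.0, 4.5))   # True  -> leave margin, expect a merge
print(expect_cut_in(4.0, 4.5))    # False -> gap too small for a cut-in
```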

p.4 Self-organization... may arise not only from the interaction between unknown microscopic dynamics, but also from the practical impossibility of foreseeing all possible kinds of collective behaviour which can emerge from the interactions between microscopic elements.

p.16 the capability of adapting to a changing environment, and of learning from past experience is a key property of cognitive systems.

p.17 From the "internal" point of view we can consider the adaptive modification of the parameters of the dynamical system as a particular form of interaction with the environment: the system is not isolated and modifies its own parameters (and thus its own space) due to this coupling with the exterior.

p.25 Classical AI has obtained important results... however, things turned out to be more difficult than the enthusiasts tended to think. Thus, there is currently a tendency to see dynamical systems, now more credible and interesting... as a possible solution to some of the difficulties in AI.

p.25-26 it is possible to outline some of the fundamental aspects of classical AI which the various schools hold in common... The first common aspect is the choice of operating with "entities" of a high conceptual level... direct use of symbols is made to represent concepts... The basic characteristic of classical AI is the use, for the manipulation of these symbols, of chains of inferences or "rules" of the "If A, then B" type, as in logic programming or production systems. AI systems reach their goal through chains of inferences, that is through explicit reasoning... any particular rule... draws its motivation from the meaning of the symbols it manipulates... AI systems also show centralized "control", i.e. a set of rules... which are used to decide which rules of inference to apply in a given computational stage... Another aspect common to the various approaches to classical AI is the use of dedicated symbols to represent each concept... there must be a symbol for each concept.
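The "If A, then B" style of explicit inference described here can be made concrete with a toy forward-chaining production system. This is only a minimal sketch of the general idea, not code from the book, and the rules and facts are invented:

```python
# Minimal forward-chaining production system: "If A, then B" rules applied
# to a working memory of symbols until no new facts can be derived.
# The rules and facts are invented purely for illustration.

rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                              # fire rules until a fixed point
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)           # one step in the inference chain
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# derives "possible_flu" and then, by chaining, "see_doctor"
```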

p.26-27 On the contrary to AI programs, dynamical systems reach their conclusions by applying evolutive rules to numerical variables, instead of applying inference rules to logical variables. Also, in the systems considered these evolutive rules are the same for each element and are justified on the basis of plausibility considerations, ... analogies and, above all, a posteriori, because of their effectively demonstrated capacity to memorise and generalise... Dynamical systems also lack a "control" in the classical sense... in dynamical systems the order reached derives from direct interactions amongst the elementary units... it should be noted that while AI systems possess a well-defined termination condition, the goal, for dynamical systems this is unnecessary... it is... possible to let the system evolve towards the asymptotic state, and consider the latter as a terminal state.

In conclusion, it can be stated that dynamical systems perform cognitive tasks by relying on their self-organizing properties much more than classical AI systems do.
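The contrast can be illustrated with a small numerical example: the same update rule is applied to every unit and iterated until an asymptotic (fixed-point) state is reached, which is then treated as the terminal state. The tiny Hopfield-style network below is my own illustration, not an example taken from the book:

```python
# The same numerical update rule applied to every unit, iterated until an
# asymptotic (fixed-point) state is reached and treated as the terminal state.
# A tiny Hopfield-style network storing a single pattern; my own illustration.
import numpy as np

pattern = np.array([1, -1, 1, -1, 1])          # stored +/-1 pattern
W = np.outer(pattern, pattern).astype(float)   # Hebbian-style weights
np.fill_diagonal(W, 0.0)

state = np.array([1, -1, 1, -1, -1])           # stored pattern with the last unit flipped
for _ in range(10):                            # identical update rule for every unit
    new_state = np.sign(W @ state)
    if np.array_equal(new_state, state):       # asymptotic state reached: stop
        break
    state = new_state

print(state)   # converges back to the stored pattern
```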

p.28 One of the main reasons for the interest in cognitive dynamical systems arises from the consideration that they show promising behaviour in precisely those aspects which are the most problematic ones for AI.

p.28-29 Another critical aspect of AI systems is the elicitation of this specific knowledge about the problem under examination, which is crucial for the success of the system. Definition of the rules to be loaded into an expert system is a result of the process known as "knowledge engineering"... Knowledge engineering is thus a crucial task for the success of the project, and is often difficult and unnatural because it requires, above all, the making explicit of the knowledge and working methods of the expert, which are often implicit.

p.39 Knowledge of the initial conditions could then allow prediction of the successive behaviour only if the former were known with infinite precision. The unpredictability of deterministic systems with chaotic behaviour then arises from the fact that any real observation is characterised by a finite precision.
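A standard way to see this (my illustration, not necessarily the book's) is the logistic map in its chaotic regime: two initial conditions that agree to nine decimal places diverge within a few dozen iterations, so any finite-precision observation loses its predictive power:

```python
# The logistic map x -> r*x*(1-x) in its chaotic regime (r = 4): two initial
# conditions that agree to nine decimal places diverge after a few dozen
# iterations. A standard illustration, not taken from the book.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.200000001            # differ only in the ninth decimal place
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x-y):.6f}")
# within roughly 30-40 steps the two trajectories are effectively uncorrelated
```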

p.64 It is worth recalling once again that dynamical systems with extremely simple rules often lead to complex and unexpected behaviours.
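One familiar illustration of this point (again mine, not the book's) is an elementary cellular automaton such as Rule 30, where each cell is updated from only itself and its two neighbours, yet the evolution from a single seed cell looks disordered:

```python
# Elementary cellular automaton Rule 30: each cell is updated from just
# itself and its two neighbours, yet the pattern grown from one seed cell
# looks disordered. My illustration, not an example from the book.

RULE = 30
WIDTH, STEPS = 64, 24

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                                  # single seed cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (4 * cells[(i - 1) % WIDTH]
                  + 2 * cells[i]
                  + cells[(i + 1) % WIDTH])) & 1       # look up the new cell value
        for i in range(WIDTH)
    ]
```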

p.132-133 In this perspective, the very concept of error loses its meaning. When an input pattern is coded... it does not, in fact, have any meaning, from the system's point of view, to ask whether the coding is right or wrong... The key for introducing the concept of error in an unsupervised system lies, according to Grossberg, in the internal structure of the system itself and in the communication between the various layers of the system.

p.182 The central theme of this book is that complex dynamical systems are useful and important for the comprehension and the artificial simulation of cognitive behaviour.

p.184-185 So far, we have stressed the advantages of the dynamical approach, which avoids some of the main problems of classical AI... We must now outline a critical examination of the main problems raised by this type of approach... the dynamical approach is still in an embryonic stage... The application of these dynamical models to real size problems is just in a beginning phase... One fairly obvious difficulty of dynamical systems regards the handling of long inference chains... it is not realistic to expect that, in this sector, dynamical systems will be as efficient and powerful... It has not been possible, so far, to prove that neural networks are superior with respect to the more usual statistical methods in the fields of signal recognition and analysis

p.185 The state of the art in the application of the dynamical approach to high level knowledge is at a much lower state of development than that for low level development... In all these systems the case under examination is described using a certain set of variables - like e.g. symptoms in the case of medical diagnosis, information about the financial situation of the applicant... This set of variables constitutes the input pattern, while the output is given by the corresponding diagnosis, or decision. The network learns a set of symptom-diagnosis associations; its ability to generalize then allows it to operate also on cases which are not included in the training set.
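A minimal sketch of this symptom-diagnosis setup, with invented data and a single logistic unit standing in for the network:

```python
# Minimal sketch of the symptom -> diagnosis association described above:
# binary symptom vectors in, a diagnosis out, with generalization to a case
# not in the training set. Data and labels are invented; a single logistic
# unit trained by gradient descent stands in for the network.
import numpy as np

# columns: fever, cough, rash          label: 1 = "flu-like", 0 = "not flu-like"
X = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]], dtype=float)
y = np.array([1, 1, 0, 0], dtype=float)

rng = np.random.default_rng(0)
w, b = rng.normal(size=3) * 0.1, 0.0

for _ in range(2000):                          # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted probabilities
    grad = p - y                               # d(cross-entropy)/d(logit)
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

unseen = np.array([1, 1, 1], dtype=float)      # symptom pattern not in the training set
p_unseen = 1.0 / (1.0 + np.exp(-(unseen @ w + b)))
print(f"predicted probability of 'flu-like' for the unseen case: {p_unseen:.2f}")
```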

p.186 A serious drawback of the use of dynamical systems for handling high level knowledge lies in their limited explanation capabilities.

p.189-190 The difficulties of connection engineering point to the need for integrating the dynamical systems approach and the classical approach... A relatively cautious proposal is that of "heterogeneous systems", i.e. systems composed of two or more modules, some symbolic and others dynamical, which specialize in different tasks... Such a "loose coupling" solution is the least dangerous one, since the two approaches are kept separate, interacting only through the exchange of data: the output of a given module is (a part of) the input of another module... Another different suggestion is that of using the results which can be achieved by training... in order to provide hints and suggestions for the knowledge engineering phase
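A toy illustration of this "loose coupling": a numerical (neural-style) module produces a score, and that score becomes the input of a separate symbolic rule module, the two interacting only through the exchange of data. The weights, thresholds and rules below are invented for illustration:

```python
# Toy illustration of the "loose coupling" idea: a numerical (neural-style)
# module and a symbolic rule module interact only by exchanging data, the
# output of one becoming the input of the other. Weights, thresholds and
# rules are invented for illustration.
import numpy as np

def dynamical_module(features):
    """Stand-in for a trained network: maps raw features to a risk score."""
    w = np.array([0.7, -0.4, 0.2])             # pretend these were learned
    return float(1.0 / (1.0 + np.exp(-(features @ w))))

def symbolic_module(risk_score):
    """Rule-based module whose input is the dynamical module's output."""
    if risk_score > 0.8:
        return "reject application"
    if risk_score > 0.5:
        return "refer to a human expert"
    return "approve application"

features = np.array([2.0, 0.5, 2.0])           # hypothetical applicant data
print(symbolic_module(dynamical_module(features)))
```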