
The Science of Self-Organization and Adaptivity (Heylighen, 2001)


Francis Heylighen

In: Kiel, L.D. (ed.) Knowledge Management, Organizational Intelligence and Learning, and Complexity, The Encyclopedia of Life Support Systems. EOLSS Publishers (2003)

http://pespmc1.vub.ac.be/papers/eolss-self-organiz.pdf

"The theory of self-organization and adaptivity has grown out of a variety of disciplines, including thermodynamics, cybernetics and computer modelling. The present article reviews its most important concepts and principles."

JLJ - valuable insight - perhaps even for game theory. Complex games of strategy need a theory to support analysis, and Heylighen's work might just be the foundation we need. Of course, the game board and pieces are not really a self-organizing, adaptive system, but perhaps we can think of them as one, and follow where our thoughts take us.

The concepts of self-organization and adaptivity provide a framework for visualizing the problem of playing such a game and lead us in directions which can only advance our understanding. Discovering this paper by Heylighen made today a good day. Why can't all days be like this? 28 August 2015

Note to self: this paper will serve as excellent foundational material for any future paper on machine-based games of complex strategy. There has to be something underneath the argument, support that converts what would otherwise be an unfounded opinion into reasoned thought, and this paper will provide it.

p.1 The theory of self-organization and adaptivity has grown out of a variety of disciplines, including thermodynamics, cybernetics and computer modelling. The present article reviews its most important concepts and principles.

p.1 Self-organization can be defined as the spontaneous creation of a globally coherent pattern out of local interactions. Because of its distributed character, this organization tends to be robust, resisting perturbations. The dynamics of a self-organizing system is typically non-linear, because of circular or feedback relations between the components. Positive feedback leads to an explosive growth, which ends when all components have been absorbed into the new configuration, leaving the system in a stable, negative feedback state.

p.1 To adapt to a changing environment, the system needs a variety of stable states that is large enough to react to all perturbations but not so large as to make its evolution uncontrollably chaotic. The most adequate states are selected according to their fitness, either directly by the environment, or by subsystems that have adapted to the environment at an earlier stage. Formally, the basic mechanism underlying self-organization is the (often noise-driven) variation which explores different regions in the system’s state space until it enters an attractor.

p.1 Around the middle of the [twentieth] century, researchers from different backgrounds and disciplines started to study phenomena that seemed to be governed by inherent creativity, by the spontaneous appearance of novel structures or the autonomous adaptation to a changing environment. The different observations they made, and the concepts, methods and principles they developed, have slowly started to coalesce into a new approach, a science of self-organization and adaptation.

p.2 self-organization: the appearance of structure or pattern without an external agent imposing it.

p.3 Prigogine noted that such self-organization typically takes place in non-linear systems, which are far from their thermodynamic equilibrium state... Prigogine... sees the universe as an irreversible "Becoming", which endlessly generates novelty.

p.4 What Adam Smith, the father of economics, called "the invisible hand" can nowadays simply be called self-organization.

p.6-7 In the Benard phenomenon, a liquid is heated evenly from below, while cooling down evenly at its surface, like the water in an open container that is put on an electric hot-plate. Since warm liquid is lighter than cold liquid, the heated liquid tries to move upwards towards the surface. However, the cool liquid at the surface similarly tries to sink to the bottom. These two opposite movements cannot take place at the same time without some kind of coordination between the two flows of liquid. The liquid tends to self-organize into a pattern of hexagonal cells, or a series of parallel "rolls", with an upward flow on one side of the roll or cell and a downward flow on the other side... The most obvious change that has taken place... is the emergence of global organization... The liquid as a whole has started cycling through a sequence of rolls. Yet, initially the elements of the system (... molecules) were only interacting locally... A liquid molecule would only influence the few molecules it collides with.

The locality of interactions follows from the basic continuity of all physical processes: for any influence to pass from one region to another it must first pass through all intermediate regions... in the original disordered state of the system, distant parts of the system are basically independent: they do not influence each other... In the self-organized state... all segments of the system are strongly correlated.

p.8 The correlation length can be defined as the maximum distance over which there is a significant correlation.

p.8 When we consider a highly organized system, we usually imagine some external or internal agent that is responsible for guiding, directing or controlling that organization... The controller... exerts its influence over the rest of the system... we may say that control is centralized.

In self-organizing systems... "control" of the organization is typically distributed over the whole of the system.

p.9 a general characteristic of self-organizing systems: they are robust or resilient. This means that they are relatively insensitive to perturbations or errors, and have a strong capacity to restore themselves... One reason for this fault tolerance is the redundant, distributed organization: the non-damaged regions can usually make up for the damaged ones.

p.9 Another reason for this intrinsic robustness is that self-organization thrives on randomness, fluctuations or "noise"... a certain amount of random perturbations will facilitate rather than hinder self-organization.

p.10 Most of the systems modelled by the traditional mathematical methods of physics are linear. This means basically that effects are proportional to their causes: if you kick a ball twice as hard, it will fly away twice as fast. In self-organizing systems, on the other hand, the relation between cause and effect is much less straightforward: small causes can have large effects, and large causes can have small effects.

p.10 non-linearity can be understood from the relation of feedback that holds between the system's components... A process of self-organization... starts with a positive feedback phase, where an initial fluctuation is amplified, spreading ever more quickly, until it affects the complete system. Once all components have "aligned" themselves with the configuration created by the initial fluctuation, the configuration stops growing: it has "exhausted" the available resources... Since further growth is no longer possible, the only possible changes are those that reduce the dominant configuration. However, as soon as some components deviate from this configuration, the same forces that reinforced that configuration will suppress the deviation, bringing the system back to its stable configuration. This is the phase of negative feedback.
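This two-phase dynamic can be illustrated with a minimal numerical sketch; the logistic growth rule below is an assumption chosen for illustration, not a model taken from the paper. Here x stands for the fraction of components already aligned with the new configuration, and r is an assumed growth-rate parameter.

```python
# Illustrative sketch (not from the paper): logistic growth as the simplest
# model of a positive-feedback phase that saturates into negative feedback.

def step(x, r=0.5):
    # growth proportional to x (positive feedback, amplification) and to the
    # remaining unconverted fraction 1 - x (negative feedback, saturation)
    return x + r * x * (1.0 - x)

x = 0.001                      # a tiny initial fluctuation
for t in range(40):
    x = step(x)
print(round(x, 4))             # explosive growth at first, then settling near 1.0
```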

p.10 In more complex self-organizing systems, there will be several interlocking positive and negative feedback loops, so that changes in some directions are amplified while changes in other directions are suppressed. This can lead to very complicated, difficult to predict behavior.

p.11 Organization can be defined as the characteristic of being ordered or structured so as to fulfil a particular function. In self-organizing systems, this function is the maintenance of a particular configuration, in spite of disturbances. Only those orders will result from self-organization that can maintain themselves... Organizational closure turns a collection of interacting elements into an individual, coherent whole. This whole has properties that arise out of its organization, and that cannot be reduced to the properties of its elements. Such properties are called emergent.

p.12 Linear systems of equations normally have a single solution. Non-linear systems, on the other hand, have typically several solutions, and there is no a priori way to decide which solution is the "right" one. In terms of actual self-organizing systems, this means that there is a range of stable configurations in which the system may settle... Which of the possible... will depend on a chance fluctuation. Since small fluctuations are amplified by positive feedback, this means that the initial fluctuation that led to one outcome rather than another may be so small that it cannot be observed. In practice, given the observable state of the system at the beginning of the process, the outcome is therefore unpredictable.
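A small worked example of this sensitivity, using Newton iteration on an arbitrary non-linear equation (my choice, not the paper's): x² = 1 has two solutions, and an imperceptibly small difference in the initial state decides which one the iteration settles on.

```python
# Worked example (my construction): the non-linear equation x**2 = 1 has two
# solutions, +1 and -1. Newton iteration from two nearly identical initial
# states ends in different solutions, so the outcome hinges on a fluctuation
# too small to observe.

def newton_step(x):
    return x - (x * x - 1.0) / (2.0 * x)

for x0 in (+0.001, -0.001):
    x = x0
    for _ in range(60):
        x = newton_step(x)
    print(x0, "->", round(x, 6))   # +0.001 -> 1.0, -0.001 -> -1.0
```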

p.15 A configuration of a system may be called "fit" if it is able to maintain or grow given the specific configuration of its environment... adaptation can be considered as achieving a fit between system and environment.

It follows that every self-organizing system adapts to its environment. The particular stable configuration it reaches by definition fits its particular circumstances... when the boundary conditions change... Systems may be called adaptive if they can adjust to such changes while keeping their organization as much as possible intact.

p.15 Cybernetics has shown that adaptation can be modelled as a problem of regulation or control: minimizing deviations from a goal configuration by counteracting perturbations before they become large enough to endanger the essential organization. This means that the system must be able to: 1) produce a sufficient variety of actions to cope with each of the possible perturbations (Ashby’s "law of requisite variety"); 2) select the most adequate counteraction for a given perturbation. Mechanical control systems, such as a thermostat or an automatic pilot, have both variety and selectivity built in by the system designer. Self-organizing systems need to autonomously evolve these capabilities. Variety can be fostered by keeping the system sufficiently far from equilibrium so that it has plenty of stationary states to choose from. Selectivity requires that these configurations be sufficiently small in number and sufficiently stable to allow an appropriate one to be "chosen" without danger of losing the overall organization.
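A toy sketch of Ashby's condition, with disturbance and action values invented purely for illustration: a regulator whose repertoire of counteractions is smaller than the set of possible perturbations cannot keep the essential variable at its goal value.

```python
# Toy illustration of Ashby's "law of requisite variety" (values invented for
# illustration): each perturbation shifts an essential variable away from its
# goal value of 0, and the regulator must pick an action that cancels the shift.
# With fewer distinct actions than distinct perturbations, regulation fails.

disturbances = [-2, +1, +3]        # possible shifts caused by the environment
actions_low_variety = [-1, +2]     # regulator repertoire that is too small
actions_requisite = [-3, -1, +2]   # one exact counteraction per disturbance

def can_regulate(actions, disturbances):
    # the essential variable stays at 0 only if every disturbance has a counter
    return all(any(d + a == 0 for a in actions) for d in disturbances)

print(can_regulate(actions_low_variety, disturbances))  # False
print(can_regulate(actions_requisite, disturbances))    # True
```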

p.16 The system needs a fitness criterion for choosing the best action for the given circumstances. The most straightforward method is to let the environment itself determine what is fit: if the action maintains the basic organization, it is fit, otherwise it is not. This can be dangerous, though, since trying out an inadequate action may lead to the destruction of the system. Therefore, complex systems such as organisms or minds have evolved internal models of the environment. This allows them to try out a potential action "virtually", in the model, and use the model to decide on its fitness. The model functions as a vicarious selector, which internally selects actions acting for, or in anticipation of, external selection. This "shortcut" makes the selection of actions much more reliable and efficient. It must be noted, though, that these models themselves at some stage must have evolved to fit the real environment; otherwise they cannot offer any reliable guidance.

p.16 Both examples [JLJ - the circumstances in an ecosystem change and (in an economy) a particular good becomes scarce] assume that the initial variety in species or in firms is sufficient to cope with the perturbation. However, if none of the existing species or firms would be really fit to cope with the new situation, new variations would arise, e.g. by mutation or sexual recombination in the case of species, and by R&D efforts or new start-ups in the case of firms. If the system is sufficiently rich in inherent diversity and capacity to evolve, this variation will sooner or later produce one or more types of component that can "neutralize" the perturbation, and thus save the global system.

p.17 In practice, only some components will adapt to the external changes, but the resulting internal changes in the system will trigger a further round of adaptations, where a change in one component will create change in the other components that interact with it, which in turn will change the components they interact with, and so on, until some kind of equilibrium has been restored.

p.17 The modelling approach that will be introduced here can be used to build precise mathematical models or computer simulations for specific systems. However, since the variety and complexity of such models is unlimited, and since the more typical examples are discussed in depth in the literature, the present overview will focus merely on the general principles that these models have in common.

p.17 Every system capable of change has certain variable features, that can take on different values. For example, a particle can have different positions and move with different speeds, a spin can point in different directions, and a liquid can have different temperatures. If such a feature can be described as a real number varying over a finite or infinite interval, it is called a degree of freedom. All the values for the different variables we consider together determine the states of the system.

Any change or evolution of the system can be described as a transition from one state to another one. [JLJ - elsewhere, the concept of critical success factors is discussed. We should be concerned with the change in these particular parameters over time, especially with "how might we proceed" to change such a factor which has slipped out of range, back within range. Factors of lesser importance can still be considered, but perhaps indirectly, as a result of affecting the critical ones.]

p.19 To model the evolution of a system, we need rules that tell us how the system moves from one state to another in the course of time t. This might be expressed by a function f_T: S → S: s(t) ↦ s(t+T), which is usually the solution of a differential or difference equation. According to classical mechanics, the evolution of a system is deterministic and reversible. In practice, though, the evolution of complex systems is irreversible: the future is fundamentally different from the past, and it is impossible to reconstruct the past from the present.

p.19 Most models still assume the dynamics to be deterministic, though: for a given initial state s, there will in general be only one possible later state f(s). In practice, the lack of information about the precise state will make the evolution unpredictable.
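A minimal sketch of such an evolution rule f: S → S, using the logistic map as an arbitrary stand-in dynamics (the paper names no specific map): the rule is fully deterministic, yet two states closer together than any realistic measurement could distinguish quickly drift apart, which is why the evolution is unpredictable in practice.

```python
# Sketch of a deterministic evolution rule f: S -> S, s(t) -> s(t+1), using the
# logistic map as an arbitrary stand-in dynamics (not taken from the paper).

def f(s, r=3.9):
    return r * s * (1.0 - s)

s1, s2 = 0.500000, 0.500001        # practically indistinguishable initial states
for t in range(50):
    s1, s2 = f(s1), f(s2)
print(abs(s1 - s2))                # the initial gap of 1e-6 has typically grown
                                   # by several orders of magnitude
```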

p.20 the reaching of an attractor... can be viewed as a general model of self-organization [JLJ - the business community thinks in terms of critical success factors that influence decisions made now, as in "how do I go on, from this point in the present, where the path forward is unclear?" You simply decide what the factors are, then monitor them continuously, over time. When one of the critical success factors decreases, you respond by creating a "project" which seeks to bump it back up, which you then "execute". If that project is not working, you think of something else that might work.]

p.20 In many cases, the rather abstract and mathematically complex structure of a system of attractors and basins can be replaced by the more intuitive model of a fitness landscape. Under certain mathematical conditions, the deterministic dynamics of a system can be represented by a potential function F on the state space, which attaches a certain number to each state: F: S → R: s ↦ F(s), such that the trajectory of the system through the state space will always follow the path of steepest descent, i.e. move from a given state s to that neighboring state for which F is minimal. In mechanics, this function corresponds to the potential energy of the system. More generally, the function represents the degree to which a certain state is "preferable" to another state: the lower the value of F, the "better" or the more "fit" the state. Thus, the potential can be seen as the negative of the fitness function. It is unfortunate that the convention in physics sees systems as striving to minimize a potential function, whereas the convention in biology sees systems as striving to maximize a fitness function.

p.21 The fitness function transforms the state space into a fitness landscape, where every point in the space has a certain "height" corresponding to its fitness value.

p.21 In the fitness landscape representation all attractors are not equal: those with a higher fitness are in a sense "better" than the others. For self-organizing systems, "better" or "fitter" usually means "more stable" or "with more potential for growth". However, the dynamics implied by a fitness landscape does not in general lead to the overall fittest state: the system has no choice but to follow the path of steepest descent. This path will in general end in a local minimum of the potential, not in the global minimum.
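A small sketch of the path of steepest descent on an assumed one-dimensional potential with a shallow and a deep valley (my example; Heylighen gives no explicit landscape): pure descent halts in whichever local minimum the starting basin leads to, not necessarily in the global minimum.

```python
# Sketch of steepest descent on an assumed double-well potential F (my example).
# The system always moves to whichever neighboring state lowers F, so it halts
# in a local minimum, which need not be the global one.

def F(s):
    return (s * s - 1.0) ** 2 + 0.3 * s   # shallow valley near +1, deep valley near -1

def descend(s, step=0.01):
    while True:
        candidates = (s - step, s, s + step)
        best = min(candidates, key=F)
        if best == s:                      # no neighbor is lower: a local minimum
            return s
        s = best

print(round(descend(0.5), 2))    # ends near +0.96, the shallow local minimum
print(round(descend(-0.5), 2))   # ends near -1.04, the deeper, globally fittest valley
```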

p.21-22 Apart from changing the fitness function, the only way to get the system out of a local minimum is to add a degree of indeterminism to the dynamics, that is, to give the system the possibility to make transitions to states other than the locally most fit one. This can be seen as the injection of "noise" or random perturbation into the system, which makes it deviate from its preferred trajectory.

p.22 In general, the deeper the valley, the more difficult it will be for a perturbation to make a system leave that valley. Therefore, noise will in general make the system move out of the more shallow, and into the deeper valleys. Thus, noise will in general increase fitness.

p.22 The most effective use of noise to maximize self-organization is to start with large amounts of noise which are then gradually decreased, until the noise disappears completely. The initially large perturbations will allow it to escape all local minima, while the gradual reduction will allow it to settle down in what is hopefully the deepest valley. This is the principle underlying annealing, the hardening of metals by gradually reducing the temperature, thus allowing the metal molecules to settle in the most stable crystalline configuration. The same technique applied to computer models of self-organization is called simulated annealing.
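A minimal simulated-annealing sketch on the same kind of double-well potential (the parameters and cooling schedule are assumptions for illustration): large early perturbations let the state escape the shallow valley, and the gradual reduction of the "temperature" lets it settle in the deeper one.

```python
import math
import random

# Minimal simulated-annealing sketch (parameters are assumptions, not from the
# paper), reusing the double-well potential from the previous sketch.

def F(s):
    return (s * s - 1.0) ** 2 + 0.3 * s   # shallow valley near +1, deep valley near -1

def anneal(s, temp=2.0, cooling=0.999, steps=20000):
    for _ in range(steps):
        candidate = s + random.gauss(0.0, 0.1)
        dF = F(candidate) - F(s)
        # always accept improvements; accept worse states with a probability
        # that shrinks as the temperature is gradually lowered
        if dF < 0 or random.random() < math.exp(-dF / temp):
            s = candidate
        temp *= cooling
    return s

print(round(anneal(1.0), 2))   # starts in the shallow valley; typically ends
                               # near -1.04, the deeper valley
```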

p.22 The theory of self-organization and adaptivity has grown out of many disparate scientific fields, including physics, chemistry, biology, cybernetics, computer modelling, and economics. This has led to a quite fragmented approach, with many different concepts, terms and methods, applied to seemingly different types of systems. However, out of these various approaches a core of fundamental concepts and principles has slowly started to emerge which seem applicable to all self-organizing systems... The present article has attempted to bring the most important of these ideas together without going into the technical details.

p.22 Self-organization is basically the spontaneous creation of a globally coherent pattern out of the local interactions between initially independent components. This collective order is organized in function of its own maintenance, and thus tends to resist perturbations. This robustness is achieved by distributed, redundant control so that damage can be restored by the remaining, undamaged sections.

The basic mechanism underlying self-organization is the deterministic or stochastic variation that governs any dynamic system, exploring different regions in the state space until it happens to reach an attractor, i.e. a configuration that closes in on itself. This process can be accelerated and deepened by increasing variation, for example by adding "noise" to the system.

p.23 To adapt to a changing environment, the system needs a sufficiently large variety of possible stable states to cope with likely perturbations. This variety, however, must not be so large as to make its evolution uncontrollably chaotic. Given this variety, the most adequate configurations are selected according to their fitness, either by the environment, or indirectly by subsystems that have already adapted to the environment at an earlier stage. Thus, the system can adjust its internal configuration to external perturbations, while minimizing the changes to its overall organization.

p.23 The theory of self-organization has many potential - but as yet relatively few practical - applications. In principle, it provides an insight in the functioning of most of the complex systems that surround us... Such an understanding does not necessarily lead to a better capability of prediction, though, since the behavior of self-organizing systems is unpredictable by its very nature. On the other hand, getting a better insight into the relevant sources of variation, selection and intrinsic attractor structures will help us to know which behaviors are likely, and which are impossible.

p.23 Managing or controlling self-organizing systems runs into similar limitations: since they intrinsically resist external changes, it is difficult to make them do what you want. Increasing pressure will eventually result in a change, but this may be very different from the desired effect, and may even result in the destruction of the system. The best approach seems to consist in identifying "lever points", that is, properties where a small change may result in a large, predictable effect.

p.23 Most practical applications until now have focused on designing and implementing artificial self-organizing systems in order to fulfil particular functions. Such systems have several advantages over more traditional systems: robustness, flexibility, capability to function autonomously while demanding a minimum of supervision, and the spontaneous development of complex adaptations without need for detailed planning. Disadvantages are limited predictability and difficulty of control.

Most such applications have been computer programs, such as neural networks, genetic algorithms, or artificial life simulations, that solve complex problems. The basic method is to define a fitness function that distinguishes better from worse solutions, and then create a system whose components vary relative to each other in such a way as to discover configurations with higher global fitness.
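A toy sketch of that basic method (the representation, mutation rate and selection scheme are my own choices; the paper names neural networks, genetic algorithms and artificial life but gives no code): candidate solutions are varied, and a fitness function selects the better ones, so the population drifts toward configurations of higher global fitness.

```python
import random

# Toy evolutionary-search sketch (illustrative assumptions only).
# Candidates are bit strings; the assumed fitness function simply counts ones.

def fitness(candidate):
    return sum(candidate)                   # "better" solutions contain more 1s

def mutate(candidate, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in candidate]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)   # selection: keep the fitter half
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(fitness(c) for c in population))  # typically at or near the maximum of 20
```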

p.24 Perhaps the most challenging application would be to design a complex socio-economic system that relies on self-organization rather than centralized planning and control. Although our present organizations and societies incorporate many aspects of self-organization, it is clear that they are far from optimal. Yet, our lack of understanding of social self-organization makes it dangerous to introduce radical changes, however well intended, because of their unforeseen side-effects. Better models and simulations of social systems may be very useful in this respect.

Future developments in the science of self-organization are likely to focus on more complex computer simulations and mathematical methods. However, the basic mechanisms underlying self-organization in nature are still far from clear, and the different approaches need to be better integrated. Although researchers such as Kauffman have started exploring the structure of fitness landscapes for various formally defined systems by computer simulation, we should at the same time try to understand which types of variations, fitness functions and attractor dynamics are most common in natural systems (physical, biological or social), and why. This may help us to focus on those models, out of the infinite number of possible mathematical models, that are most likely to be useful in understanding and managing every-day phenomena.