John L Jerz Website II Copyright (c) 2015

Complexity & Postmodernism (Cilliers, 1998)


Paul Cilliers

"there is no denying that the world we live in is complex and that we have to confront this complexity if we are to survive, and, perhaps, even prosper. The traditional (or modern) way of confronting complexity was to find a secure point of reference that could serve as foundation... a master key from which everything else could be derived. Whatever that point of reference might be... my claim is that following such a strategy constitutes an avoidance of complexity. The obsession to find one essential truth blinds us to the relationary nature of complexity, and especially to the continuous shifting of those relationships. Any acknowledgement of complexity will have to incorporate these shifts and changes, not as epiphenomena, but as constitutive of complex systems."

"Derrida's concept of differance can be used to describe the dynamics of complex neural networks... If an ensemble of neurons (whether real or artificial) generates a pattern of activity, traces of the activity reverberate through the network. When there are loops in the network, these traces are reflected back after a certain propagation delay (deferral), and alter (make different) the activity that produced them in the first place. Since complex systems always contain loops and feedback, delayed self-altering will be one of the network's characteristics... Meaning is determined by the dynamic relationships between the components of the system."

JLJ - Cilliers does a good job of simplifying complexity. Um... that is not what I meant.

Perhaps the best deconstruction of complexity that I have come across, and better yet, a good starting point for self-education. If you are dealing in any way with a complex system (elitists from Harvard Business Review - take note here) this work by Cilliers can only deepen the insight you oh-so-critically need into the nature of the beast.

Possible weaknesses include an over-emphasis on neural networks (he does, however, cover symbolic rule-based systems) and little discussion of "adaptive capacity" - the system must instead be 'plastic' - whatever that means.

Missing is my concept of an intelligently selectable portfolio of 'tricks that work' as essential tools for reducing the complexity of our predicament to the level of a child playing with blocks, at which point we merely 'play' with the possibilities that 'emerge' - in order to determine what is going on in enough detail to form plans which we use, essentially, to 'go on.' Perhaps we initially form a 'sketch' and fill in the details from prior experience or from what typically happens. We do not critically need full understanding in order to 'go on' - merely a practical scheme which experiments with the possibilities before evolving and producing a 'launch point' for our daily activities, later guiding us with daily mid-course corrections as events unfold differently from what we imagined. There is a clock ticking in our life world - opportunities available today might not be available tomorrow, or in the same form they are available today. A certain 'reveling in the present' is therefore called for - guided, of course, by wisdom from the past and 'premonitions' of the future. Today is a day that will come to a close - along with the opportunities of today that were not seized - and will never happen again. Tomorrow, we and those around us will be one day older, and we will eventually have new responsibilities as the world continues to evolve and change.

Cilliers, meanwhile, (Did I get distracted? Must have been the post-modern world) has other ideas which you ought to read for yourself.

viii it is not possible to tell a single and exclusive story about something that is really complex.

[JLJ - But instead we can tell a story of what typically happens, and be ready for that. We can build and deploy adaptive capacities (or even insurance) against what typically happens. Ultimately, it does not critically matter that we do not know what will eventually happen - we just need to be ready for it - whatever it is.]

viii-ix In a complex system... the interaction among constituents of the system, and the interaction between the system and its environment, are of such a nature that the system as a whole cannot be fully understood simply by analysing its components. Moreover, these relationships are not fixed, but shift and change, often as a result of self-organization. This can result in novel features, usually referred to in terms of emergent properties... The problem of understanding this kind of complexity is a central issue throughout the [JLJ - this] book.

ix it is exactly the robust nature of complex systems, i.e. their capability to perform in the same way under different conditions, that ensures their survival.

[JLJ - It would perhaps be better to speak of their capability to 'adequately perform the system-level goal of the designer, under different and unexpected conditions'. A system can be robust, yet fail due to a 'trick that works' which is deployed by an opponent or competitor, and is not foreseen, and therefore not prepared for in advance. One cannot be 'robust' to all possibilities, at all times, and one does not need to be. One selects an appropriate level of paranoia, then plans accordingly.]

ix If something is really complex, it cannot be adequately described by means of a simple theory. Engaging with complexity entails engaging with specific complex systems. Despite this we can, at a very basic level, make general remarks concerning the conditions for complex behaviour and the dynamics of complex systems. Furthermore, I suggest that complex systems can be modelled.

p.2 the study of complex dynamic systems has uncovered a fundamental flaw in the analytical method. A complex system is not constituted merely by the sum of its components, but also by the intricate relationships between these components. In 'cutting up' a system, the analytical method destroys what it seeks to understand. Fortunately this does not mean that the investigation of complexity is hopeless. Modelling techniques on powerful computers allow us to simulate the behaviour of complex systems without having to understand them.

[JLJ - Unfortunately, this is one "answer" to the question of how machines can "play" complex games of strategy such as chess. A programmer develops an effective modelling technique - a scheme - which the machine blindly executes - without any understanding. There is no reason why we cannot refine these schemes in order to make them useful.]

p. 3-5 it is useful in developing a description of the characteristics of complex systems. I offer the following list:

  1. Complex systems consist of a large number of elements...
  2. ...In order to constitute a complex system, the elements have to interact, and this interaction must be dynamic...
  3. The interaction is fairly rich, i.e. any element in the system influences, and is influenced by, quite a few other ones...
  4. The interactions themselves... are non-linear...
  5. The interactions usually have a fairly short range... Long-range interaction is not impossible, but practical constraints usually force this consideration...
  6. There are loops in the interactions. The effect of any activity can feed back onto itself...
  7. Complex systems are usually open systems, i.e. they interact with their environment...
  8. Complex systems operate under conditions far from equilibrium...
  9. Complex systems have a history. Not only do they evolve through time, but their past is co-responsible for their present behaviour...
  10. Each element in the system is ignorant of the behaviour of the system as a whole... Complexity is the result of a rich interaction of simple elements that only respond to the limited information each of them are presented with...
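The list above can be illustrated with a toy system (a sketch of my own, not from the book): many simple elements arranged in a ring, each responding non-linearly to its immediate neighbours only, with the ring itself closing a feedback loop.

```python
import math

# A toy system exhibiting several items from the list above (an
# illustrative sketch, not from Cilliers): many simple elements (1),
# dynamic interaction (2) that is local and short-range (3, 5),
# non-linear (4), with loops (6). Each element sees only its
# neighbours' activity, never the system as a whole (10).

N = 100
state = [math.sin(i) for i in range(N)]   # arbitrary initial activity

def step(state):
    """Each element responds non-linearly to local information only."""
    return [math.tanh(state[(i - 1) % N] + state[(i + 1) % N])
            for i in range(N)]

for _ in range(50):   # the ring closes the loop: activity feeds back
    state = step(state)

# activity stays bounded by the non-linearity, yet the global pattern
# is not a simple sum of the individual elements
print(len(state), all(-1 < x < 1 for x in state))  # → 100 True
```

No element in this sketch 'knows' the global pattern; the macroscopic behaviour is carried entirely by the local, looped interactions.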

p.9 Chaitin's analyses help us to realize that truly complex problems can only be approached with complex resources.

p.10 The complex systems we are interested in... contain a lot of spare capacity or redundancy. This is necessary for more than one reason: it provides robustness, space for development and the means for plasticity.

p.10 Complex systems have to grapple with a changing environment. Depending on the severity of these changes, great demands can be made on the resources of the system. To cope with these demands the system must have two capabilities: it must be able to store information concerning the environment for future use; and it must be able to adapt its structure when necessary. The first of these will be discussed as the process of representation; the second, which concerns the development and change of internal structure without the a priori necessity of an external designer, as the process of self-organisation.

These capabilities are vital... Any model of a truly complex system will have to possess these capabilities. In other words, the processes of representation and self-organisation must be simulated by the model.

p.11 In order to respond appropriately to its environment, a complex system must be able to gather information about that environment and store it for future use... the structure of the system... must have some meaning... this means that the system must somehow 'represent' the information important to its existence.

p.11 Meaning is conferred... by the relationships between the structural components of the system itself... Meaning is the result of a process, and this process is dialectical [JLJ - concerned with or acting through opposing forces.] - involving elements from inside and outside

p.12 A complex system... has to develop its structure and be able to adapt that structure in order to cope with changes in the environment... we have to find mechanisms by which a system can acquire and adapt its internal structure on an evolutionary basis.

The key concept here is the notion of self-organization... Since the system has to cope with unpredictable changes in the environment, the development of the structure cannot be contained in a rigid programme that controls the behaviour of the system. The system must be 'plastic'.

[JLJ - I prefer to think in terms of adaptive capacity. Being 'plastic' works - initially - as long as the system adapts to the coercive force - eventually.]

p.12 Why would we want to model complexity? ...To be effective, however, these models have to work, they have to produce results.

p.13 we wish to model complex systems because we want to understand them better. The main requirement for our models accordingly shifts from having to be correct to being rich in information.

p.13 in order to model complexity, we will need both the scientific and the philosophical perspectives... science without philosophy is blind, and philosophy without science is paralysed. Co-operation between the two will benefit both.

p.14 The range of things that can be done on a computer gives us an indication of the power of formal systems. This range is so bewildering that we often forget that it is a formal system, consisting only of a set of tokens manipulated by rules (in this case called a program).

p.15 Symbolic rule-based systems constitute the classical approach to the modelling of complexity. The behaviour of the complex system has to be reduced to a set of rules that describes the system adequately. The problem lies in finding those rules, assuming that they exist... let me summarise the main characteristics of rule based systems (following Serra and Zanarini 1990: 26):

  • Rule-based symbol systems model complex systems on an abstract (semantic) level. The symbols are used to represent important concepts directly. In this way a lot of the contingent aspects of the complex systems, i.e. the unnecessary detail of the implementation, can be ignored. The model consists of the set of logical relationships between the symbols (the production rules).
  • The set of rules are governed by a system of centralised control, known as the meta-rules of the system. This control system decides which of the production rules should become active at every stage of the computation...
  • Each concept has a symbol dedicated to it, or, conversely, each symbol represents a specific concept. This is known as local representation...
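The three characteristics above can be sketched as a minimal production system (all rule names and facts here are my own illustrative assumptions, not taken from Cilliers):

```python
# A minimal sketch of a symbolic rule-based system, following the three
# characteristics listed above: local representation (one symbol per
# concept), production rules (logical relationships between symbols),
# and a centralised control deciding which rule fires.

# Local representation: each symbol stands directly for one concept.
facts = {"raining"}

# Production rules: (conditions, conclusion).
rules = [
    ({"raining"}, "wet_ground"),
    ({"wet_ground"}, "slippery"),
]

def meta_rule(applicable):
    """Centralised control: decide which applicable rule fires next.
    Here, simply the first one found (a deliberately trivial strategy)."""
    return applicable[0]

# Forward-chaining: fire rules until no rule adds a new fact.
changed = True
while changed:
    changed = False
    applicable = [(cond, concl) for cond, concl in rules
                  if cond <= facts and concl not in facts]
    if applicable:
        _, conclusion = meta_rule(applicable)
        facts.add(conclusion)
        changed = True

print(sorted(facts))  # → ['raining', 'slippery', 'wet_ground']
```

The meta-rule here is deliberately trivial; in a real expert system the control strategy (conflict resolution) is where much of the design effort goes.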

p.18 [Connectionist model] Complex behaviour emerges from the interaction between many simple processors that respond in a non-linear fashion to local information.

p.22 My argument is that post-structuralism is not merely a subversive form of discourse analysis, but a style of thinking that is sensitive to the complexity of the phenomena under consideration.

p.23 When dealing with complex phenomena, no single method will yield the whole truth. Approaching a complex system playfully allows for different avenues of advance, different viewpoints, and, perhaps, a better understanding of its characteristics.

p.24 when modelling complex systems...

  • A closer look at the characteristics of complex systems clearly shows the limitations of the analytical method when dealing with complexity...
  • Algorithmic and information-theoretical approaches to complexity fail in their attempts to reveal the true nature of complexity, but provide us with valuable insight, namely that complexity is 'incompressible'. A complex system cannot be 'reduced' to a simple one unless it was not really complex to start with...
  • Complex systems have a special relationship with their environment as far as the manner of processing information, and developing and changing of internal structure, are concerned.
  • Computer technology has opened up new possibilities for the modelling of complex systems...
  • It was suggested that there are interesting links between connectionist models and post-structural theory...

p.25 I argue that the traditional rule-based and analytical approaches to complex systems are flawed, and that insights from postmodern and post-structural theory can help us to find novel ways of looking at complexity... these insights can influence our models of complex systems.

[JLJ - Not surprising, since Cilliers is a connectionist. But he fails to understand that rule-based systems can succeed if an intentioned human programmer creates an effective scheme which can 1) determine adaptive capacity, and position itself effectively, 2) create "ideas" for what to do next based on what might work, and develop a scheme to test those ideas, discarding or delaying those that score poorly, and 3) properly recognize richly-detailed signs and symbols from within the environment, guiding action now (and in just such a way) so that it is not surprised later.

Well duh, a computer just does what we tell it to do. It has no "intention" and is a means to an end, like a hammer driving a nail. If it becomes "intelligent", it does so indirectly and 'artificially', due to the effective scheme programmed by the human programmer - including schemes of competitive performance testing where operating parameters might be adjusted to make performance more effective. A computer's intelligence will always be artificial, an emergent part of an effective scheme of a programmer, and limited to wherever we ignore the hidden hand of the human, and the 'tricks that work' which can be leveraged.]

p.37 Saussure... His primary insight - that meaning is generated through a system of differences - remains an excellent way of conceptualising the relationships in a complex system.

[JLJ - Yet, this is merely a trick that works, which we use in order to 'go on'. We are free to use other tricks - they might not work as well.]

p.38 'Where there are signs there is a system' (Culler 1976: 91).

p.39 The sign is determined by the way in which it differs from all the other signs in the system... The sign is a node in a network of relationships. The relationships are not determined by the sign; rather, the sign is the result of interacting relationships.

p.42-43 Meaning is never simply present and therefore we cannot escape the process of interpretation, even when the speaker is in front of us.

p.43 The play of signifiers does... create 'pockets of stability' (Stofberg 1988: 224)... Within these pockets a more rigorous analysis of relationships is possible, as long as it is understood that the stability is not permanent or complete... meaning remains a result of the process of interaction between signifiers. This interaction is explained by Derrida in terms of, amongst others, two concepts: 'trace' and 'differance'.

p.44-45 Saussure defines the meaning of a sign in terms of the relationships it has with all the other signs in the system... The sign has no component that belongs to itself only; it is merely a collection of the traces of every other sign running through it... Traces are traces of difference. In the play of differences meaning is generated. However, as this play is always in progress, meaning is never produced finally, but continuously deferred. As soon as a certain meaning is generated for a sign, it reverberates through the system... The characteristics of the system emerge as a result of the differance [JLJ - essentially, conflict or interaction] of traces, not as a result of essential characteristics of specific components of the system... In order for signs to interact by means of traces and differance, they cannot... be stacked tightly against each other. Space is required as a site of action... differance can only take place if there is a certain space, a space maintained by the dynamics of differance. Differance thus has both spatial and temporal characteristics

p.46 Derrida's concept of differance can be used to describe the dynamics of complex neural networks... If an ensemble of neurons (whether real or artificial) generates a pattern of activity, traces of the activity reverberate through the network. When there are loops in the network, these traces are reflected back after a certain propagation delay (deferral), and alter (make different) the activity that produced them in the first place. Since complex systems always contain loops and feedback, delayed self-altering will be one of the network's characteristics... Meaning is determined by the dynamic relationships between the components of the system.
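The loop dynamic described in this passage can be sketched numerically (the delay, weight, and run length are my own illustrative assumptions): a unit's activity travels around a loop and returns after a propagation delay, altering the activity that produced it.

```python
import math

# A sketch of the 'deferral and difference' dynamic described above:
# activity reverberates around a loop and is reflected back after a
# propagation delay (deferral), altering (making different) the
# activity that produced it. All parameter values are illustrative.

delay = 5            # propagation delay around the loop (deferral)
feedback = -0.8      # loop weight; negative, so the trace 'makes different'
history = [1.0] + [0.0] * delay   # an initial burst of activity

for t in range(200):
    # present activity is shaped by its own delayed trace
    new = math.tanh(0.5 * history[-1] + feedback * history[-(delay + 1)])
    history.append(new)

# delayed self-altering keeps the activity bounded but never lets it
# settle back into the original pattern
print(all(abs(x) <= 1.0 for x in history))  # → True
```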

p.54 Searle's... Chinese Room argument... I am of the opinion that any conclusions drawn from such a simplistic theory will be of little or no help in saying things about the brain or machines.

[JLJ - In my opinion, what is overlooked in the Chinese Room experiment by both sides is the fact that we are deconstructing in great detail what is at essence a complex trick that works. Such a competition- or performance-based trick simply works - whether it is performed by a human or executed by a machine. Intelligently, we latch onto such a proven trick that works in order to 'go on.' The machine does not choose to execute the trick that works, or in any way intend for the results to come about - it merely executes machine code in a reliable way. Intelligence in its most deconstructed form is perhaps a portfolio of tricks that work which lie mostly dormant, and which are 'intelligently' or otherwise selected or selectable, based on their ability to perform in a predicament of sorts, in order to 'go on.' A machine could not care less what happens to it in the next moment or next year, even when it is following a programmed set of tricks which achieve a high level of performance in a competition, test or predicament of some sort. A machine that is programmed to execute code which results in its own destruction will simply do so, when commanded to run the code. In conclusion, one can execute competition-proven 'tricks that work' either 1) with full understanding, or 2) blindly, with no understanding: the result in many cases is - unbelievably, and due to the nature of such a 'trick that works' - the same. 'Intelligence' often reduces (deconstructs) to a portfolio of tricks that work - and that are selected and selectable in a predicament in a timely manner in order to 'go on'. All other definitions are critically missing this component and are therefore incomplete.]

p.54-55 One of the contributions made by Austin (1980) in his development of speech-act theory was to argue that the meaning of an utterance depends on the context in which the speech act is performed, where 'context' includes the social conventions pertaining to the act.

p.56 'the category of intention will not disappear; it will have its place, but from that place it will no longer be able to govern the entire scene and system of utterance' (Derrida 1988: 18).

p.57 Complex issues demand complex descriptions, and a certain humility.

[JLJ - I would argue that there is a practical way to describe every situation which is simpler than what appears on first impression, in order to better determine how to 'go on.' In fact, this would be a central function, and even purpose, of cognition. If there is no way to practically reduce the complexity of a predicament, then one ought to consider avoiding the situation in advance, or to better prepare adaptive capacities to reconfigure one's posture or stance in order to deal with it in an appropriate way.]

p.58 Models of complex systems will have to be as complex as the systems themselves.

[JLJ - Not necessarily - it depends on what you are ultimately trying to do. If you are playing a complex game of strategy, your models of complex interaction among the game pieces just need to be performance-wise better than those of your opponent. They might actually be quite simple and mistake-prone. Or consider: we do not need to have a complex 'model' for the inner workings of a car, in order to drive it. We know 'generally' how it works, as well as 'generally' what will happen when we step on the gas pedal or brake pedal or turn the steering wheel. Effectively, we create an abstract layer of understanding in order to practically operate the vehicle. We also build adaptive capacity into our driving performance, so that we can react in real time to our present situation, including the performance of the vehicle we are driving.]

p.58 A theory of representation is essentially a theory of meaning... Unfortunately, the ease with which symbols can be made to represent something vanishes when we deal with complex problems

p.62

There can be no informational sensitivity without representation. There can be no flexible and adaptive response to the world without representation. To learn about the world, and to use what we learn to act in new ways, we must be able to represent the world, our goals and options. Furthermore we must make appropriate inferences from these representations. (Sterelny 1990: 21)

p.62 Representation is the process whereby the two levels of description - the symbol and its meaning - are related... the core of a generally accepted theory of representation has been worked out by Jerry Fodor (1975, 1981)

p.62 The fundamental proposition of Fodor's model is that we think in a special inner language, often referred to as 'mentalese' (Fodor 1975). Mentalese is not an equivalent of the language we speak (e.g. French or German), but prior to it. Our capacity for mentalese is not something we acquire or learn, it is an innate capacity of our brains. Like other languages, mentalese is medium-independent. Thus, there are no written or spoken words in our heads; the language is implemented in our neural structure.

p.62 mentalese must share with language that one thing that best explains productivity and structure: a formal syntax. This conclusion is summarised by Sterelny (1990: 26) in the following way:

For this model, and any based on it, requires an agent to represent the world as it is and as it might be, and to draw appropriate inferences from that representation. Fodor argues that the agent must have a language-like symbol system, for she can represent indefinitely many and indefinitely complex actual and possible states of her environment. She could not have this capacity without an appropriate means of representation, a language of thought.

[JLJ - Well, perhaps we can go further by saying that we have the equivalent of an internal conversation in 'mentalese' (perhaps we should rename this concept Fodorese). Ideas do not usually emerge in the mind fully formed, crystal clear, and ready for immediate practical use in a predicament. Perhaps instead we simultaneously produce Peirce-like 'musings of the moment' of various kinds (How might I proceed, now that I am in this situation? Well what about this, then? Who is that? What is going on? What is this? Is he/she upset? What do I do now? How do I feel about that idea? etc) inspired by richly detailed and critically-experienced perceptions, then ask ourselves how much we should care about them - essentially, which of our musings actually matter to me - now that I am in this particular predicament? What emerges from these mutterings is an effective understanding of the predicament in order to develop and execute plans, in order to 'go on.'

Thought is perhaps a search through impressions or musements for things to care about, followed by stacking together the building blocks of plans into a master plan, which allows us to specifically care for (or attend to) those things that need (or ought to have) our attention, based on the predicament that we are in. Remember that the "us" of a few minutes ago decided that the "me" of "right now" ought to be positioned in just about the way we are, in order to make progress in our plans, or else "he" would not have placed "us" in the general area where "we" are right now. Kind of confusing. The other way to think about thought is that we are carefully executing a scheme that we had planned previously, including fallback positions, and we need to observe and improvise a series of changes to stay on track with what we had previously planned. Otherwise, like the ever-present message from a GPS navigator, we are endlessly "recalculating"...]

p.67 A neural network consists of a large collection of interconnected nodes or 'neurons'. Each neuron receives inputs from many others. Every connection has a certain 'strength' associated with it, called the 'weight' of that connection.
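The network element Cilliers describes can be written out directly (the particular weights and activation function below are illustrative choices, not from the book):

```python
import math

# A minimal version of the network element described above: a 'neuron'
# receives inputs over connections, each with an associated 'strength'
# (weight), and produces an output. Weights and the sigmoid squashing
# function are illustrative assumptions.

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs, squashed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

out = neuron([1.0, 0.5, -1.0], [0.4, 0.6, 0.2])
print(round(out, 3))  # → 0.622
```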

p.69 In the process of solving a complex problem by using a network, it is not necessary to have an explicit theory about the structure of the problem.

p.73 Lloyd selects the problem of representation as the central one that has to be solved in order to understand the mind... His meta-theory consists of a number of constraints applicable to any theory of representation... Lloyd's constraints... prevent him from seeing that distributed representation is really not representation at all

p.74-75 expert systems... Since they work with formal symbols in logical relationships, they employ the conventional methods of local representation. This does not mean that there is anything wrong with them per se, it just means that you need a very good formal model of the domain you are describing. Adequate models often employ many ad hoc rules to cope with exceptions. The more ad hoc rules there are, the more 'distributed' the model becomes. Minimal neural networks, on the other hand, begin to look more like expert systems. There are therefore a range of options between fully local and fully distributed representation.

p.76 The amount of training examples that would be adequate for a certain problem will be determined by the complexity of that problem... The ability of neural networks to operate successfully on inputs that did not form part of the training set is one of their most important characteristics.

[JLJ - This all assumes that there is no hidden complexity - that the 'training examples' are practically effective - meaning that the adaptive capacities developed will be effective in critical future scenarios - even after we have learned new ways for the components to interact and operate.]

p.80 Despite the fact that we cannot represent the essence of a complex system in determinate terms, we cannot resist, or perhaps even avoid, the construction of some kind of interpretation of the nature of the system at a given moment. These interpretations, however, are in principle limited. We are always constrained to taking snapshots of the system.

p.81 The danger lies in falling under the spell of a specific picture and claiming a privileged position for it... this spell must be broken by relentlessly showing the contradictions that result from fixing the boundaries from one perspective.

p.81 In a system of distributed semiotics the sign is constituted by the sum of its relationships to other signs. Derrida calls the relationship between any two signs, a 'trace' ...a trace is equivalent to a weight in a neural network... The weight, just like the trace, does not stand for anything specific.

p.89 In this chapter the focus will be on how that structure [of complex systems] comes about, develops and changes. The notion of 'structure' pertains to the internal mechanism developed by the system to receive, encode, transform and store information on the one hand, and to react to such information by some form of output on the other. The main burden of the argument will be to show that internal structure can evolve without the intervention of an external designer or the presence of some centralised form of internal control.

p.91-93 General attributes of self-organising systems include the following:

  1. The structure of the system is not the result of an a priori design, nor is it determined directly by external conditions. It is a result of interaction between the system and its environment.
  2. The internal structure of the system can adapt dynamically to changes in the environment, even if these changes are not regular.
  3. Self-organisation... involves higher-order, non-linear processes that cannot be modelled by sets of linear differential equations...
  4. Self-organisation is an emergent property of a system as a whole... The macroscopic behaviour emerges from microscopic interactions... Simple, local interactions can result in complex behaviour when viewed macroscopically.
  5. Self-organising systems increase in complexity...
  6. Self-organisation is impossible without some form of memory...
  7. Since the self-organising process is not guided or determined by specific goals, it is often difficult to talk about the function of such a system...
  8. Similarly, it is not possible to give crudely reductionist descriptions of self-organising systems...

p.93 In a nutshell, the process of self-organisation in complex systems works in the following way. Clusters of information from the external world flow into the system. This information will influence the interaction of some of the components in the system - it will alter the weights in the network. Following Hebb's rule... if a certain cluster is present regularly, the system will acquire a stable set of weights that 'represents' that cluster, i.e. a certain pattern of activity will be caused in the system each time that specific cluster is present. If two clusters are regularly present together, the system will automatically develop an association between the two... As the system encounters different conditions in the environment, it will generate new structures to 'represent' those conditions, within the constraints determined by the amount of memory available to the system. This process can be described mathematically... but it does not differ in principle from Freud's neurological model of how the brain develops its structure
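The Hebbian process described here can be sketched in a few lines (the network size, learning rate, and cluster patterns are my own illustrative assumptions): when two clusters are regularly presented together, the weights linking their units grow, and the system comes to 'represent' the association.

```python
# A sketch of the Hebbian process described above: connections between
# co-active units are strengthened, so regularly co-occurring clusters
# develop an association. All sizes and rates are illustrative.

N = 4
weights = [[0.0] * N for _ in range(N)]
rate = 0.1

cluster_a = [1, 1, 0, 0]   # cluster A activates units 0 and 1
cluster_b = [0, 0, 1, 1]   # cluster B activates units 2 and 3
together = [1, 1, 1, 1]    # A and B present at the same time

def hebb(weights, pattern):
    """Hebb's rule: strengthen the connection between co-active units."""
    for i in range(N):
        for j in range(N):
            if i != j:
                weights[i][j] += rate * pattern[i] * pattern[j]

# A and B each appear often on their own; they co-occur less frequently
for _ in range(10):
    hebb(weights, cluster_a)
    hebb(weights, cluster_b)
for _ in range(5):
    hebb(weights, together)

# within-cluster weight vs the weaker (but non-zero) A-to-B association
print(round(weights[0][1], 2), round(weights[0][2], 2))  # → 1.5 0.5
```

No external designer set these weights; the structure 'representing' the clusters and their association emerged from the presentation history alone.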

p.94 the most important aspect of self-organisation is the emergence of structure through the activity of microscopic units that do not have access to global patterns

p.95 In the following section an argument will be presented that claims not only that complex systems will organise their structure, but that they will tend to do so in an optimal way.

p.96 Complex systems - in which many factors interact in an asynchronous way - display unexpected, often unpredictable behaviour. Any analysis that ignores the possibility of self-organising behaviour by a complex system will be seriously lacking in explanatory power.

p.96-98 A very useful concept in the analysis of complex systems, introduced by Per Bak, Kan Chen and colleagues (Bak and Chen 1991), is that of self-organised criticality... the system organises itself towards the critical point where single events have the widest possible range of effects. Put differently, the system tunes itself towards optimum sensitivity to external inputs... with the system poised at the point of criticality... the system will... be able to change its state with the least amount of effort. It should be clear that the principle of competition is the driving force behind this behaviour.
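[JLJ - The standard toy model of self-organised criticality is the Bak-Tang-Wiesenfeld sandpile, which the Bak and Chen (1991) article uses as its central example. A minimal sketch: grains are dropped one at a time, and any site holding four or more grains topples, passing one grain to each neighbour. Without any tuning from outside, the pile organises itself towards the critical state where a single grain can trigger avalanches of any size - 'optimum sensitivity to external inputs.']

```python
import numpy as np

# Minimal Bak-Tang-Wiesenfeld sandpile (a sketch of self-organised
# criticality, not taken from Cilliers' text).
rng = np.random.default_rng(1)
size = 10
grid = np.zeros((size, size), dtype=int)

def drop_grain(grid):
    """Drop one grain at a random site; return the avalanche size."""
    x, y = rng.integers(0, size, 2)
    grid[x, y] += 1
    topples = 0
    while (unstable := np.argwhere(grid >= 4)).size:
        for i, j in unstable:
            grid[i, j] -= 4
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni, nj] += 1  # grains toppled off the edge are lost
    return topples

avalanches = [drop_grain(grid) for _ in range(5000)]
# once the pile reaches criticality, most drops cause nothing, but
# occasional single grains trigger large, system-wide avalanches
```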

p.103 The brain is pre-structured in a way that is general and non-specific, but with enough differentiation... to allow external influences a 'foothold'. The 'general' structure is then modified through experience and behaviour in order to reflect the specific circumstances encountered in the history of the organism in question. The brain thus organises itself so as to cope with its environment.

p.106 In point of fact, self-organisation provides the mechanism whereby complex structure can evolve without having to postulate first beginnings or transcendental interventions... As a result of complex patterns of interaction, the behaviour of a system cannot be explained solely in terms of its atomistic components, despite the fact that the system does not consist of anything else but the basic components and their interconnections. Complex characteristics 'emerge' through the process of interaction within the system.

p.107 Heraclitus... For him, the basic principle of the universe was strife: war is common to all and strife is justice, and all things come into being and pass away through strife... Heraclitus placed everything in mutual competition. In this dynamic tension 'all things come into being and pass away'.

p.109 In complex systems... novel, unpredicted behaviour need not be a result of chance. It can be 'caused' by the complex interaction of a large number of factors - factors that may not be logically compatible. Complexity is not to be confused with randomness and chance, but cannot be described in first-order logical terms either... It is the interaction of complex constraints that produces interesting behaviour - behaviour that cannot be described as chance events or instabilities.

p.110 Since the certainty with which the future can be predicted has been greatly reduced, any plan of action has to be adapted continuously. If the plan is too rigid - too much central control - the system will not be able to cope with unpredictable changes. On the other hand, it will also be disastrous if the system tries to adjust itself to every superficial change... The system will waste its resources in trying to follow every fluctuation instead of adapting to higher-order trends. Being able to discriminate between changes that should be followed and changes that should be resisted is vital to the survival of any organisation (or organism). This is achieved optimally when the control of the system is not rigid and localised, but distributed over the system, ensuring that the positive dynamics of self-organisation is utilised effectively.
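[JLJ - Cilliers' point about following higher-order trends while resisting superficial fluctuations can be sketched as a simple low-pass filter. This is my illustration, not his: a slowly adapting estimate tracks the underlying drift in a noisy signal without wasting resources chasing every jitter, at the cost of lagging slightly behind.]

```python
import random

# A slow-adapting estimate (exponential moving average) as a sketch of
# 'adapting to higher-order trends' rather than 'following every
# fluctuation'. My illustration, not Cilliers'.
random.seed(0)

alpha = 0.05        # plasticity: how quickly the estimate adapts
estimate = 0.0
errors_raw, errors_filtered = [], []

for t in range(1000):
    trend = t / 100.0                           # the slow, higher-order change
    observation = trend + random.gauss(0, 2.0)  # plus superficial fluctuation
    estimate += alpha * (observation - estimate)
    errors_raw.append(abs(observation - trend))
    errors_filtered.append(abs(estimate - trend))

# the filtered estimate stays closer to the real trend than the raw
# observations do, despite never seeing the trend directly
```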

p.112 there is no denying that the world we live in is complex and that we have to confront this complexity if we are to survive, and, perhaps, even prosper. The traditional (or modern) way of confronting complexity was to find a secure point of reference that could serve as foundation... a master key from which everything else could be derived. Whatever that point of reference might be... my claim is that following such a strategy constitutes an avoidance of complexity. The obsession to find one essential truth blinds us to the relationary nature of complexity, and especially to the continuous shifting of those relationships. Any acknowledgement of complexity will have to incorporate these shifts and changes, not as epiphenomena [JLJ - secondary phenomena that occur alongside or in parallel to primary phenomena], but as constitutive of complex systems.

p.116 discourses... cannot isolate themselves... There are always connections to other discourses. The different local narratives interact... no discourse is fixed or stabilised by itself. Different discourses... may grow, shrink, break up, coalesce, absorb others or be absorbed... What we have is a self-organising process in which meaning is generated through a dynamic process... discourses are in constant interaction, battling with each other for territory, the provisional boundaries between them being the very stakes in the game.

p.117 Lyotard is quite clear on the point that the complexity of the social system does not automatically lead to randomness or noise... the system 'combats entropy', that it generates meaning, not noise or chaos. To optimise this process, the system has to be as diverse as possible, not as structured as possible.

p.118-119 If it is granted that all knowledge is embedded in the larger social network... the proliferation of meaning and discourses is an inevitable characteristic of a complex, self-organising network... the conditions of knowledge in a complex society... Dissenting voices receive no special privilege; they have to enter into the 'agonistics of the network', where their relevance is dynamically determined through competition and co-operation in terms of the history as well as the changing needs and goals of the system.

p.119-122 postmodern society (seen as a system) can be described in terms of the ten characteristics of complex systems...

  1. Complex systems consist of a large number of elements...
  2. The elements in a complex system interact dynamically...
  3. The level of interaction is fairly rich...
  4. Interactions are non-linear...
  5. The interactions have a fairly short range...
  6. There are loops in the interconnections...
  7. Complex systems are open systems...
  8. Complex systems operate under conditions far from equilibrium...
  9. Complex systems have histories...
  10. Individual elements are ignorant of the behaviour of the whole system in which they are embedded...

p.125 Self-organisation describes how a complex system can develop and change its internal structure. The process is driven by competition for the resources of the system. Information from the environment enters the system (through some sensing mechanism) and interacts with information already encoded and stored in the system, causing the system to adapt and change its responses to the environment... information from the environment has a direct, though non-determinate, influence on the system: it causes certain changes in the system, but it does not fully determine the nature of these changes. Information from the environment interacts in a non-linear way with information already stored in the system.

p.126 Meaning flows from a complex process of interaction between information from the world, on the one hand, and a web of already existing relationships, built up through previous interactions, on the other hand... If certain aspects of the environment are of great importance, the system will organise itself towards a robust, accurate interpretation of these aspects.

p.127 [Wilden] Organized complexity is the fount of life, liberty, and novelty on the planet earth.

p.127 We need to come to grips with complexity in order to ensure our survival.

[JLJ - We all have to 'go on' from where we are right now, in our current predicament, with the skills and abilities we have and can further develop, and with the help we can obtain from our social circles. It would be best, it would seem, to ponder those things that we can control.]

p.131 Using Wittgenstein's notion of family resemblances, she [Mary Hesse] argues that we gain cognitive knowledge not by exhaustively calculating all the logical relations at stake in a particular instance, but rather by finding enough analogies to place this instance relative to others we are already happy with.
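[JLJ - Hesse's 'enough analogies' strategy resembles nearest-neighbour classification - my analogy, not hers: a new instance is placed by its resemblance to cases we are already happy with, rather than by exhaustively calculating logical relations.]

```python
# Placing an instance 'relative to others we are already happy with',
# sketched as k-nearest-neighbour classification. The cases and labels
# here are made up for illustration.
def classify(instance, known_cases, k=3):
    """Label an instance by majority vote of its k most similar known cases."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(known_cases, key=lambda case: distance(instance, case[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

known = [((1.0, 1.0), "bird"), ((1.2, 0.9), "bird"),
         ((5.0, 4.8), "fish"), ((5.2, 5.1), "fish"), ((4.9, 5.3), "fish")]
```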

p.134 Churchland... Another important conclusion he draws from the model of the network is that all empirical observations are theory-laden. Since information entering the network can only be processed in terms of the patterns of weights already existing, 'no cognitive activity whatsoever takes place in the absence of some theory or other'

[JLJ - ...with the exception of things like coming up with ideas, or enjoying a music concert or dancing. Perhaps Cilliers can explain to me the 'theory' involved here, in these cases. Is he implying that there is a 'theory' involved in activities such as reveling in the present?]

p.139 since we cannot fully predict the effects of this event, the future has to be considered as well, despite the fact that we have no idea what this future might be... we should take responsibility for this unknowable future... How do we deal with this...? To fall back on universal principles is to deny the complexity of the social system... Cornell's suggestion... is to take present... principles seriously... but to be keenly aware of when they should not be applied, or have to be discarded. We therefore do follow principles as if they were universal rules... but we have to remotivate the legitimacy of the rule each time we use it.

[JLJ - Rescher (Scientific Understanding, 1970) has spoken of such principles as quasi-laws. Remotivate or revalidate?]

p.139-140 To make a responsible judgement... would therefore involve at least the following components:

  • Respecting otherness and difference as values in themselves.
  • Gathering as much information on the issue as possible, notwithstanding the fact that it is impossible to gather all the information.
  • Considering as many of the possible consequences of the judgement, notwithstanding the fact that it is impossible to consider all the consequences.
  • Making sure that it is possible to revise the judgement as soon as it becomes clear that it has flaws, whether it be under specific circumstances, or in general.

[JLJ - Yes, these should ideally all be part of our scheme for 'going on.' If we do not have the time to continuously ask ourselves these questions consciously about everything we do (and we likely do not, because of the predicament we are in), we should instead embed them within our scheme and simply execute the scheme, in order to 'go on'. IMHO, we make a judgement as part of our scheme for 'going on.' The scheme calls for the judgement and may even specify how we are to perform it, the information cues to examine, and even the consequences, should the judgement go one way or the other. We have in effect pre-decided, when we execute a scheme for going on. With regard to gathering information or examining consequences, we need to do so in a way which impacts performance - if we cannot answer the question 'How much should I care about that (piece of information, or consequence)?' in a positive way, based on prior performance or prior experience, we likely should turn our attention to other pressing matters - we are in a predicament, remember.]

p.141 The aim of this book has been to stimulate transdisciplinary discussion on the subject of complexity...  It must be clearly understood that the general understanding of complexity developed here does not supply a complete description of any specific complex system... the ideas presented here merely provide a framework. The framework will have to be filled in with the... detail relevant to the specific case.

p.142 Derrida (1981: 26)... claims 'there are only, everywhere, differences and traces of traces'.

[JLJ - This may be what you see, or what any attempt at understanding must work with, but it does not imply that this is all there is to reality. Living beings continually confuse the real with the tricks and techniques we use to critically understand the real in order to go on - in the mind they often are one and the same, due to the predicaments we are constantly in. The purpose of cognition, nevertheless, is to understand in enough detail to determine how to 'go on,' within the current predicament, and ideally, future ones as well. IMHO, fundamental statements about the nature of reality are therefore irrelevant to a large degree if they do not tell you or advise you on how to 'go on' within it.]

p.146 There is an interesting tension between the need for structure and the need for plasticity. Some form of structure is a prerequisite for the encoding of information in the system, but if the structure is too rigid, it cannot adapt. Similarly, plasticity is necessary for adaptation, but if change can take place too easily - if the memory of the system is too short - the system merely reflects the surroundings, and cannot interpret it.
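[JLJ - The structure/plasticity tension can be made concrete - my illustration, not Cilliers': run the same adaptive estimate with three learning rates on a signal whose true level jumps halfway through. Too rigid never registers the jump; too plastic merely reflects the surroundings, noise and all; an intermediate rate both adapts and interprets.]

```python
import random

# Three settings of the same adaptive tracker, sketching the tension
# between structure (low learning rate) and plasticity (high rate).
random.seed(2)

def track(alpha, n=400):
    """Return mean tracking error for a given plasticity (learning rate)."""
    estimate, total_error = 0.0, 0.0
    for t in range(n):
        level = 0.0 if t < n // 2 else 5.0   # the environment changes
        observation = level + random.gauss(0, 1.0)
        estimate += alpha * (observation - estimate)
        total_error += abs(estimate - level)
    return total_error / n

rigid, balanced, plastic = track(0.001), track(0.1), track(1.0)
# the balanced rate tracks the environment best: rigid cannot adapt,
# plastic merely mirrors every fluctuation
```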

[JLJ - 'Tricks that work,' in order to 'work', must contain some kind of balance of structure and plasticity.]