John L Jerz Website II Copyright (c) 2015

The Engine of Complexity: Evolution as Computation (Mayfield, 2013)


John E. Mayfield

"The central message of this book is that a particular computational strategy, which I call the engine of complexity, is a remarkable method for efficiently building up sizeable and otherwise wildly improbable bodies of information that are purposefully useful, and that this method appears in various venues in our world."

"Nature has demonstrated in other contexts that engine of complexity-type computations are quite capable of solving open-ended problems as complex as those faced by the brain."

JLJ - Get lectured by a know-it-all who is out to prove, like most of his ilk, that he knows it all. He asks questions that he just KNOWS you want answered, then conveniently answers them.

There is nothing that Mayfield does not know. Nothing. Just sitting on the table, Mayfield's book actually oozes knowledge - dripping from within the pages, out onto the table, down onto the floor, where it pools.

Silence, please. The class has started. Are you taking notes? Mayfield has begun to speak, and you are to silence yourself and listen up. There is much to learn from the man who knows that he knows everything. It is hard work, asking and answering questions, in order to remove your ignorance. You should be grateful that Mayfield has written this book. You will be smarter because he has written it.

Let's see what Mayfield has to offer concerning managing attention and problem solving in a complex environment - specifically, how we might effectively program a machine to play a complex game of strategy. Might our know-it-all know all the answers to our difficult problems? Let's take a look and see if engine of complexity-type solutions work in this context. It appears that they will - we just have to figure out how to do it.

Occasional bursts of wisdom, modulated by enough hot air to inflate several hot air balloons. However, his "engine of complexity" concept - elsewhere known as variation and selection - might just be useful for "programming" machines to play complex games of strategy.

p.89 a basic principle of structure formation: when appropriate simple rules are applied to certain situations, structure results as a consequence of interacting rules.

p.115 Instructions, by their nature, have the purpose of prespecifying something... Instructions are always motivated; they represent investment in an outcome.

p.116 If we set aside human creativity for the moment, the only known strategy for creating purposeful bodies of information, which instructions certainly are, is a probabilistic computation that is naturally carried out by populations of replicating entities subject to repeated cumulative selection. I call this computational strategy the engine of complexity. It is the same computational strategy that underlies biological evolution and several other phenomena we will discuss in later chapters.

p.123 When rules are followed, a property of paths over time through the space of all possibilities is that future states are contingent on past events. Structure builds on past structure.

p.124 When chance events are combined with selection, the resulting outcomes are nonrandom... evolution should be seen as a nonrandom method for finding new paths through possibility space even when the initial changes to be selected are randomly generated.

p.125 An opportunity made possible by the nature of computation is having the output of one computation serve as the input to another... Such a process is diagrammed as a cycle... We say it is iterated (repeated over and over). Most computations cannot be iterated because their outputs are not structured to serve as inputs; but it is not difficult for a programmer to design an iterated computation.

p.127 Any system that incorporates the engine of complexity cycle will normally accumulate information in the form of input/output structure that improves entity chances of being copied again. The cycle extracts information from whatever random changes are made by retaining any change that better satisfies the selection criteria and rejecting any change that does not. I call such accumulated information "purposeful" because it exists to satisfy the selection criteria. Every time a probabilistic change is made, the resulting output is evaluated according to the selection rules. If a change is favored, it becomes input to the next cycle, if not, it is discarded.
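The cycle described above - copy, probabilistically vary, evaluate against selection criteria, feed favored outputs back in as inputs - can be sketched in a few lines. This is a minimal toy of my own, not Mayfield's code: the population size, mutation scheme, and string-matching selection criterion are all illustrative choices.

```python
import random

def engine_of_complexity(seed, mutate, score, pop_size=20, generations=500):
    """Iterated cycle: copy each entity with a probabilistic change,
    evaluate against the selection criteria, retain favored outputs as
    inputs to the next cycle. Names and parameters are illustrative."""
    population = [seed] * pop_size
    for _ in range(generations):
        # Copy with random change (variation).
        offspring = [mutate(p) for p in population for _ in range(3)]
        # Retain changes that better satisfy the criteria; discard the rest.
        population = sorted(population + offspring, key=score, reverse=True)[:pop_size]
    return population[0]

# Toy selection criterion: match a target string, one character per position.
TARGET = "ENGINE OF COMPLEXITY"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(1)
best = engine_of_complexity(" " * len(TARGET), mutate, score)
```

Note how the accumulated information is "purposeful" exactly in Mayfield's sense: the final string exists only because it satisfies the selection criterion, and no single random change was ever likely to produce it in one step.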

p.130 a system featuring the engine of complexity will adapt over time responding to whatever nonrandom selection rules are in play. As it adapts, the system naturally accumulates information pertinent to satisfying the selection rules. If new selection rules are imposed, the system will immediately begin accumulating information pertinent to the new rules.

p.130 For completeness I need to make clear that the engine of complexity does not require random change. The basic cycle shown in figure 5.2 works with any source of change. Because the spectrum of possible outcomes is restricted whenever the range of possible changes is restricted, the creativity of the process is reduced by predetermining the changes made... The beauty and power of the IPCS strategy is that it exhibits maximum creativity in the absence of preplanning.

p.207 complex systems... are made up of many parts interacting with one another in specific ways. Outcomes of these interactions are often hard to predict... Clearly, the opportunity for a system to exhibit complex behavior must be greater when there are more parts. One surprising finding of research in this area is that complicated behavior does not require complicated parts. When systems have lots of interacting parts, even simple parts and simple interaction rules are sometimes sufficient to produce complex structure and behavior.
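A classic illustration of this point - my own choice of example, not taken from the book - is an elementary cellular automaton: each part is a single bit, each rule looks only at a cell and its two neighbors, yet the resulting behavior can be intricate and hard to predict. A minimal sketch (Rule 110, chosen purely as an example):

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton: each cell's next
    state depends only on itself and its two neighbors (a 3-bit pattern
    used to index into the 8-entry rule table)."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

# One live cell plus a single 8-entry lookup rule is enough to produce
# complex structure from simple parts and simple interaction rules.
row = [0] * 40
row[20] = 1
for _ in range(20):
    row = step(row)
```

Printing each row as it evolves shows the familiar irregular triangle of structure growing from a single seed - complicated behavior without complicated parts.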

p.208 An important aid for visualizing complex systems is the concept of a network. Whenever systems of objects can be characterized in terms of interactions between parts, the systems can be described or, if simple enough, drawn as a network.

p.231 We have seen that the engine of complexity is inherently creative and is a powerful strategy for finding improbable solutions to difficult problems, that is, problems whose solutions occur in a vast space of possible solutions, most of which are useless. The brain must solve such problems all the time.

p.243 In a broad sense, the central activity of any brain is to "make sense" of an individual's current state vis-a-vis the local environment and to formulate actions based on this understanding.

p.248 Minimizing surprise could provide the criterion by which alternative hypotheses are selected.

p.249 Dr. Ed Sobey, first director of the National Inventors Hall of Fame and founder of the Northwest Invention Center... All great inventors, he claims, invent by trial and error... When confronted by a problem for which the solution is not obvious, inventive people try things out, test the better ideas, modify them, test them, modify them again, and test and re-modify again and again. [JLJ - yes, and our machine agent should ideally behave exactly the same way when effectively playing a complex game of strategy. Except I would call it idea, then trial, then failure, which leads to a new idea.]

p.249 Dr. Sobey is quite clear that quick-and-dirty first attempts followed by successive rounds of improvement constitute a far more effective invention technique than prolonged pondering leading in one step to a finished product. Not explicitly stated in this description of the invention process is the much larger number of potential modifications that are considered and discarded in the inventor's mind before each physical attempt is made.

p.250 The central message of this book is that a particular computational strategy, which I call the engine of complexity, is a remarkable method for efficiently building up sizeable and otherwise wildly improbable bodies of information that are purposefully useful, and that this method appears in various venues in our world.

p.252-253 Nature has demonstrated in other contexts that engine of complexity-type computations are quite capable of solving open-ended problems as complex as those faced by the brain. If the brain does indeed implement some version of the engine of complexity, the implication is that there must be an unconscious effervescence of ever-changing combinations of thought and action fragments that provide the grist for selection of potentially functional patterns to be modified, tested, re-modified, retested, re-modified, and retested again until one matches the job at hand. [JLJ - or time for creative construction runs out - then we select whatever best concept we have.]

p.253-254 Figure 10.1 shows the engine of complexity redrawn using vocabulary appropriate to brain action. The diagram illustrates abstractly how the engine of complexity would function in the brain, cumulatively selecting networks that yield useful outcomes. The cycle starts with the generation of a modest number of "guesses," tentative models in the Bayesian brain scheme ("outputs" in the figure) in response to a problem. Physically, these guesses would have the form of parallel neuronal communication patterns or networks. The guesses are tested by matching projected outcomes against the problem at hand. The poorest matches (those having the greatest surprise) would fade away. Those providing the best match would be probabilistically modified, resulting in multiple new outputs. These, in turn, are again tested against projected outcomes. Repeated cycles would rapidly result in better and better matches between the problem and projected outcomes. At some point a decision is made to "go with the best." [JLJ - Ok, fairly similar to my current proposed approach of asking over and over in the internal conversation "How might I proceed?" followed by "How much should I care about that?" If we think of this process as what the brain does, over and over, then thought is explained as emergent from reality, perception, experience, sketch-producing behavior and capability. Hmmm.]
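The cycle of Figure 10.1 as described can be approximated in code. Everything below is my own framing, not Mayfield's: hypotheses are just numbers, "surprise" is plain prediction error against a few observations, and a fixed budget of rounds stands in for the moment a decision is made to "go with the best."

```python
import random

def select_hypothesis(observations, n_guesses=10, rounds=60):
    """Sketch of the Figure 10.1 cycle (illustrative framing, not
    Mayfield's): hold a small population of tentative models, score each
    by 'surprise' (mismatch with the data), let the poorest matches fade
    away, probabilistically modify the best, and finally go with the best
    when the deliberation budget runs out."""
    guesses = [random.uniform(0, 100) for _ in range(n_guesses)]  # initial guesses

    def surprise(model):
        return sum(abs(model - obs) for obs in observations)      # prediction error

    for _ in range(rounds):
        guesses.sort(key=surprise)
        survivors = guesses[: n_guesses // 2]                     # poorest matches fade away
        modified = [g + random.gauss(0, 2.0) for g in survivors]  # probabilistic modification
        guesses = survivors + modified                            # new outputs, tested next cycle
    return min(guesses, key=surprise)                             # decision: go with the best

random.seed(0)
best_model = select_hypothesis(observations=[41.0, 43.0, 42.0])
```

With observations clustered around 42, repeated cycles drive the surviving hypotheses toward that value - which also matches the bracketed note above: when time runs out, we select whatever best concept we have.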

p.263 science does not deal in ultimate truths but rather in "best explanations available."

p.296 A powerful but general way to analyze the costs and benefits of complexity is to associate success of entities in complicated situations with their ability to maintain control... In 1956 W. Ross Ashby put forward what he called the Law of Requisite Variety. This states that "In order to achieve control, the variety of actions a control system is able to execute must be at least as great as the variety of environmental perturbations that need to be compensated."

p.297 Phrased in the terminology of complexity, Ashby's law of requisite variety says that in order to assure internal stability, a controller must be at least as complex as the environment it contends with... For systems characterized by unpredictable environmental change, other things being equal, favored entities will tend to evolve structures and strategies that succeed under a large number of conditions. In the face of unpredictable environments, entities with more capabilities have a higher probability of success than entities with fewer capabilities.
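Ashby's law can be illustrated with a toy counting argument - my own construction, not from the book: if each available action cancels exactly one kind of disturbance, a controller with fewer actions than there are disturbances must leave some perturbation uncompensated.

```python
def controllable(actions, disturbances, target=0):
    """True if, for every disturbance, some available action restores the
    target state. Outcome = (disturbance - action) mod 4 is a purely
    illustrative dynamic chosen so each action cancels one disturbance."""
    return all(any((d - a) % 4 == target for a in actions) for d in disturbances)

disturbances = [0, 1, 2, 3]            # four kinds of environmental perturbation

few_actions_ok = controllable([0, 1], disturbances)         # action variety too small
full_variety_ok = controllable([0, 1, 2, 3], disturbances)  # requisite variety met
```

With only two actions against four disturbances, control fails; with four, every perturbation can be compensated - the variety of actions must be at least as great as the variety of perturbations.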

p.318 I speculated in chapter 10 that thought is possible because brain action incorporates the computational strategy formalized by the engine of complexity.

p.319 It is incredibly useful to have a mental model of one's environment. Without it every movement is effectively random with respect to the environment... A mental model is necessary for even rudimentary thought. Asking mental questions involves placing oneself in different places or posing alternative actions.

p.323 One natural consequence of the computational viewpoint is that you, your body, your thoughts, your accomplishments, and your interactions with others are current states of ongoing computations.

p.324 The engine of complexity is a simple yet remarkable computational strategy that allows the improvement of instructions when nothing is known in advance about what changes would constitute improvements. [JLJ - perhaps as well, the improvement of a position in a complex game of strategy.]

Abductive Inference (Josephson & Josephson, 1994)

p.25 one can explain without being in a position to predict... one can be in a position to predict without being in a position to explain.

p.25 Predictions, then, are neither abductions nor (typically) deductions... Rather, predictions and abductions are two distinct kinds of plausible inference... predictions go from hypothesis to expected data.

p.28 Learning is the acquisition of knowledge. [JLJ - yes, including knowledge that can be executed as a scheme, which accomplishes something of value. Consider an experienced salesman. He will select a sales approach based on what he can see and infer from the customer, possibly making more money than his colleagues. The knowledge has to be executed as part of a scheme to go on, before it can generate performance results which can be measured.]

p.37 Potentially, an expert system might be able to reason even better than people, since the reasoning pattern, once put into a machine, can be refined, and experiments can be run to see which strategies work best. Thus, for example, since many abductive systems do diagnosis, people can study how these machines do it and then potentially learn to do diagnosis better.

p.47 Roger Schank... The claim is that intelligence arises largely because memory holds knowledge in the form of organized chunks corresponding to scripts, goals, plans, and various other higher level packages... the script, a frame that holds a stereotypical representation of the temporal sequence of events in a generic episode... the... script is retrieved and a number of default expectations that are encoded in the script are set up. These expectations... can be used to predict what will happen and help to decide what actions to take.

p.54 In diagnosis, the elementary generic tasks into which many diagnostic problems can be decomposed are the following: hierarchical classification, hypothesis matching, knowledge-directed data retrieval, and abductive assembly of hypotheses.

p.54-55 Hierarchical classification... The concepts in the hierarchy can be represented by active agents... Because each such agent contains knowledge about establishing or rejecting only one particular conceptual entity, it may be termed a specialist, in particular, a classification specialist. One can view the inference machinery as being embedded directly in each specialist. That is, each classification specialist has how-to knowledge, for example, in the form of two clusters of rules: confirmatory rules and exclusionary rules. The evidence for confirmation and exclusion is suitably weighed and combined to arrive at a conclusion to establish, reject, or suspend judgment of the concept... The entire collection of specialists engages in distributed problem solving. The control regime that is implicit in the structure can be characterized as an establish-refine type. That is, each concept specialist first tries to establish or reject itself (i.e., its concept). If it establishes itself, the refinement process consists of determining which of its successors can establish themselves.
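The establish-refine scheme described above can be sketched directly: each specialist weighs its confirmatory rules against its exclusionary ones and, if established, hands refinement to its successors. The class layout, rule representation, and the toy diagnosis hierarchy are my own illustrative choices, not the authors' implementation.

```python
class Specialist:
    """A classification specialist in an establish-refine hierarchy
    (illustrative sketch). Each node holds confirmatory and exclusionary
    rules for exactly one concept."""

    def __init__(self, concept, confirm, exclude, successors=()):
        self.concept = concept
        self.confirm = confirm            # rules voting for the concept
        self.exclude = exclude            # rules voting against it
        self.successors = list(successors)

    def establish(self, data):
        # Weigh confirmation against exclusion to establish or reject.
        return (sum(r(data) for r in self.confirm)
                > sum(r(data) for r in self.exclude))

    def classify(self, data):
        if not self.establish(data):
            return []                     # rejected: prune this whole subtree
        # Established: refinement asks which successors establish themselves.
        refined = [c for s in self.successors for c in s.classify(data)]
        return refined or [self.concept]

# Tiny toy hierarchy: "engine fault" refines into two sub-diagnoses.
overheating = Specialist("overheating", [lambda d: d["temp"] > 100], [])
low_oil = Specialist("low oil", [lambda d: d["oil"] < 0.2], [])
engine = Specialist("engine fault",
                    [lambda d: d["warning_light"]], [],
                    [overheating, low_oil])

result = engine.classify({"temp": 120, "oil": 0.8, "warning_light": True})
```

The root specialist establishes itself, then only the successors that can establish themselves survive refinement - distributed problem solving with each agent responsible for a single concept.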

p.56-57 Hypothesis matching... Each concept in the classification process is evaluated by appealing to the appropriate concept-matching structures that map from relevant data to a symbolic confidence value for that particular concept hypothesis... overall, hypothesis matching works by mapping patterns of situation features to confidence values (or applicability values)

p.57 Knowledge-directed data retrieval. A hypothesis matcher requires values for specific data items, and this is the function of knowledge-directed data retrieval.

p.114 A method can be described in terms of the operators that it uses, the objects upon which it operates, and any additional knowledge about how to organize operator application to satisfy the goal. At the knowledge level, the method is characterized by the knowledge that the agent needs in order to set up and apply the method.