Complexity and the Economy (Arthur, 2015)

W. Brian Arthur

"Structural change in the economy is not just the addition of a novel technology and replacement of the old, and the economic adjustments that follow these. It is a chain of consequences where the arrangements that form the skeletal structure of the economy continually call forth new arrangements."

JLJ - A collection of papers written by Brian Arthur over the years. Arthur trashes an entire branch of economics - classical equilibrium economics. He gets away with it because this is his book.

Economics fails us when it fails to predict complex interactions among agents - specifically when it declares that searches for such interactions (when designing and setting policies, for example) are unnecessary because the economy will seek an equilibrium where agents have no incentive to modify their behavior.

Advances in technologies (perhaps as simple as combinations of existing mechanisms) also disrupt any balance, and are difficult to predict. Perhaps "market forces" represent merchants (and their clients) responding to a standard array of "tricks" to move merchandise. Supply and demand comes to mind as an example. But people often behave emotionally or irrationally - perhaps in purchasing over-priced real estate in the middle of a speculation "bubble" (price has to go up, doesn't it? Or does it?). Everyone in an economy tries to "game" the system - and some will be more successful than others.

Arthur perhaps could take this work one step further and respond to criticism - it is too much of the "Brian Arthur show". Yes, yes, your readers will say, maybe interesting. But as written, it is a yawn. We find you and your ideas less interesting than you find yourself. We ask - so what? Where do we go from here? What does this tell us about economists in general, absorbed in their indicators and metrics and models, the divining rods of what will be, that somehow fail to predict what will be? "Normal" people look at economists much as they look at weathermen/women: sometimes right, a necessary evil, unaware of what they are unaware of. We are told on p.22 that "Studies here are still at their beginning". Ok, so go run your studies. We will be waiting for the results.

Arthur touches on themes which I find interesting and important for the developing science of programming machines to play games of strategy, especially in difficult positions where no clear plan is obvious. Arthur's insight just needs a slight rephrasing to apply in these situations - watch how I format and present his words on p.110-111 as a procedure to apply for game theory. Not too different from what I proposed in my (unpublished) academic paper A Proposed Heuristic, and follow-on notes on this website. My current insight has been that we need to look to social science for our answers - not to the hard, physical, lab-coat-and-clipboard-and-experiment-and-results sciences. The social sciences have come a long way - much of this progress has come from embracing complexity and complexity science.

Maybe we can learn as well from new trends in economics - that is, newly infused with insights from complexity science. Arthur's efforts to re-invent economics look promising - what can we borrow from his (quixotic?) efforts for our own?

[Complexity Economics, Brian Arthur, written for this volume, p.1-29]

p.3 To look at the economy, or areas within the economy, from a complexity viewpoint then would mean asking how it evolves, and this means examining in detail how individual agents' behaviors together form some outcome and how this might in turn alter their behavior as a result. Complexity, in other words, asks how individual behaviors might react to the pattern they together create, and how that pattern would alter itself as a result. This is often a difficult question. In some cases agents are well informed... but in many other cases - in fact in most cases - they have no basis to do this, they simply do not know... I must make a move but I have genuine not-knowingness - fundamental uncertainty. There is no "optimal" move. [JLJ - yes, but there are things such as diagnostic tests, richly detailed cues, and probes for information. There is not knowing, and then there is not being able to scheme a plan. If you do not know, maybe you just did not try hard enough to arrive at a good enough scheme.]

p.5 First, fundamental uncertainty. All problems of choice in the economy involve something that takes place in the future, perhaps almost immediately, perhaps at some distant time. Therefore they involve some degree of not knowing.

[JLJ - yes, but when I decide to go to the store to buy food, I can expect that the store is still there, and that the food prices will be generally the same as the last time I was there. If the store has unexpectedly closed, or prices have risen 50 percent, I can still go to a nearby store. When I drive to work in the morning, I can expect Interstate 66 to still be there. And so on. Most schemes for "going on" in life can still work if an unexpected substitution or "patch" has to be made because someone was unaware that something had or has changed. An employee quits? Hire another one. Turnover rate too high? Give people nice benefits and good pay so they will stay. There is not knowing, and then there is not being robust enough to handle an unexpected change in the makeup of things. Being robust or resilient makes it 'recoverable' to read the cues present and then guess wrong, not knowing that something unexpected is about to happen. You will likely be able to immediately re-organize and smoothly handle the situation. Making reasonable assumptions, from a robust position, is 'almost' as good as knowing, and perhaps an acceptable position to take, in a crisis.]

p.5 rationality... is not well-defined... for the simple reason that there cannot be a logical solution to a problem that is not logically defined. [JLJ - there are practical solutions and then there are logical solutions. What we often need is a practical solution.]

p.6 as Shackle (1992) puts it, "The future is imagined by each man for himself and this process of the imagination is a vital part of the process of decision."

p.7 agents must explore their way forward, must "learn" about the decision problem they are in, must respond to the opportunities confronting them... these very explorations alter the economy itself and the situation agents encounter. So agents are not just reacting to a problem they are trying to make sense of; their very actions in doing so collectively re-form the current outcome, which requires them to adjust afresh. [JLJ - Arthur could very well be speaking of game theory - with two players (perhaps even generals) confronting each other in a complex game of strategy.]

p.8 The economy is a system whose elements are constantly updating their behavior based on the present situation.

p.14 Complexity, as I said, is the study of the consequences of interactions; it studies patterns, or structures, or phenomena, that emerge from interactions among elements

p.15 Nonequilibrium in the economy forces us to study the propagation of the changes it causes; and complexity is very much the study of such propagations.

p.19 let us define the economy as the set of arrangements and activities by which a society fulfills its needs... The economy we can then say emerges from its arrangements, its technologies: it is an expression of its technologies. Seen this way, the economy immediately becomes an ecology of its means of production (its technologies), one where the technologies in use need to be mutually supporting and economically consistent.

p.20 the economy... forms from its technologies and mediates the creation of further technologies and thereby its own further formation. Here again we are very much in complexity territory.

p.21 Notice that the theory I have outlined is algorithmic: it is expressed as a set of processes triggered by other processes, not as a set of equations... Equations do well with changes in number or quantities within given categories, but poorly with the appearance of new categories themselves... Biology then is theoretical but not mathematical; it is process-based, not quantity-based. In a word it is procedural. By this token, a detailed economic theory of formation and change would also be procedural.

p.22 Studies here are still at their beginning. The overall view we end up with is one of creative formation: of new elements forming from existing elements, new structure forming from existing structure, formation itself proceeding from earlier formation. This is very much a complexity view.

p.22 It should be clear by now that we have a different framework for thinking about the economy, one that emphasizes not the physics of goods and services, but processes of change and creation.

p.24 Complexity economics... teaches us that the economy is permanently open to response and that every part of it is open to new behavior - to be exploited for gain, or to abrupt changes in structure.

p.25 Complexity economics... shows us an economy perpetually inventing itself, perpetually creating possibilities for exploitation, perpetually open to response. An economy that is not dead, static, timeless, and perfect, but one that is alive, ever-changing, organic, and full of messy vitality.

[Inductive Reasoning and Bounded Rationality - The El Farol Problem, Brian Arthur, 1994, p.30-38]

p.34 N people decide independently each week whether to go to a bar that offers entertainment on a certain night... let us set N at 100. Space is limited, and the evening is enjoyable if things are not too crowded - specifically, if fewer than 60% of the possible 100 are present... a person or agent goes... if he expects fewer than 60 to show up or stays home if he expects more than 60 to go.

[JLJ - An exercise that is meant to be an entry point to agent-based (non-equilibrium) economics. What apparently goes unnoticed is that everyone in the problem shares the same requirement: that the total number of patrons not exceed 60. This is unreasonable to assume, making the problem itself moot. What if the bar sold tickets, claiming that they would not let the total number of patrons exceed 60? What if drink prices were discounted when more than 60 were present? Would that encourage more to stay? It was a mistake to build a 100-person bar if no-one wanted to be there with more than 60 people.]
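
[JLJ - For readers who want to see the mechanics, here is a minimal sketch of the El Farol setup in Python. The predictor rules, the error-scoring scheme, and the parameter names are my own illustrative choices, not Arthur's exact specification; in his 1994 paper mean attendance settles near the threshold of 60 even though no agent knows what the others will do, and a simulation like this typically reproduces that.]

```python
import random

N, THRESHOLD, WEEKS = 100, 60, 52   # 100 agents, comfort threshold of 60, one year

# A small pool of hand-made predictors. Each one maps the recent attendance
# history to a forecast of next week's attendance. (These particular rules
# are illustrative, not Arthur's exact set.)
PREDICTORS = [
    lambda h: h[-1],                  # same as last week
    lambda h: sum(h[-4:]) / 4,        # average of the last 4 weeks
    lambda h: 100 - h[-1],            # mirror image around 50
    lambda h: h[-2],                  # same as two weeks ago
    lambda h: 2 * h[-1] - h[-2],      # continue the recent trend
]

class Agent:
    def __init__(self):
        # each agent carries a random subset of predictors plus a running error score
        self.pool = random.sample(range(len(PREDICTORS)), 3)
        self.error = {i: 0.0 for i in self.pool}

    def forecast(self, history):
        best = min(self.pool, key=lambda i: self.error[i])   # currently most accurate
        return PREDICTORS[best](history)

    def update(self, history, actual):
        for i in self.pool:                                   # score every predictor
            self.error[i] += abs(PREDICTORS[i](history) - actual)

agents = [Agent() for _ in range(N)]
history = [random.randint(0, N) for _ in range(4)]            # seed history

for week in range(WEEKS):
    attendance = sum(1 for a in agents if a.forecast(history) < THRESHOLD)
    for a in agents:
        a.update(history, attendance)                         # learn from the outcome
    history.append(attendance)
    print(f"week {week + 1:2d}: {attendance} went")
```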

[Process and Emergence in the Economy, W. Brian Arthur, Steven N. Durlauf, David A. Lane, 1997, p.89-102]

p.93 we see agents as having to cognitively structure the problems they face - as having to "make sense" of their problems - as much as solve them. And they have to do this with cognitive resources that are limited. To "make sense," to learn, and to adapt, agents use a variety of distributed cognitive processes. The very categories agents use to convert information about the world into action emerge from experience, and these categories or cognitive props need not fit together coherently in order to generate effective actions. Agents therefore inhabit a world that they must cognitively interpret... agents generally do not optimize... because the very concept of an optimal course of action often cannot be defined. [JLJ - in reality, agents have other problems within their "predicament", and optimizing a solution that is already "good enough" might take time away from other activities.]

p.94 the viewpoint we are outlining... asks how new "things" arise in the world - cognitive things... physical things... social things... if we posit a world of perpetual novelty, then outcomes cannot correspond to steady-state equilibria... The only descriptions that can matter in such a world are about transient phenomena - about process and about emergent structures.

p.95 The central cognitive issues raised in this volume are ones of interpretation... The two papers... explore problems in which a group of agents take actions whose effects depend upon what the other agents do. The agents base their actions on expectations they generate about how other agents will behave. Where do these expectations come from?

p.96 both papers suppose that each agent has access to a variety of "interpretive devices" that single out particular elements in the world as meaningful and suggest useful actions on the basis of the "information" these elements convey. Agents keep track of how useful these devices turn out to be, discarding ones that produce bad advice and tinkering to improve those that work. In this view, economic action arises from an evolving ecology of interpretive devices that interact with one another through the medium of the agents that use them to generate their expectations.
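
[JLJ - This "evolving ecology of interpretive devices" can be sketched directly. In the toy code below, a device is just a linear forecasting rule; devices that give bad advice are discarded, and the survivors are "tinkered with" by small mutations. The device form, the mutation sizes, and the stand-in signal are my own illustrative assumptions, not the authors' model; the running error printout is just there to show the ecology adapting.]

```python
import math
import random

def make_device():
    """An interpretive device: a linear rule forecast = a * last_value + b."""
    return {"a": random.uniform(-1, 1), "b": random.uniform(0, 100), "error": 0.0}

def tinker(device):
    """Keep a working device but nudge its parameters - the 'tinkering' step."""
    return {"a": device["a"] + random.gauss(0, 0.1),
            "b": device["b"] + random.gauss(0, 5.0),
            "error": 0.0}

def world(t):
    """Stand-in for the quantity being interpreted (structure plus noise)."""
    return 50 + 30 * math.sin(t / 10) + random.gauss(0, 5)

devices = [make_device() for _ in range(20)]
last = world(0)

for t in range(1, 401):
    actual = world(t)
    for d in devices:                        # every device forecasts and is scored
        d["error"] += abs(d["a"] * last + d["b"] - actual)
    if t % 40 == 0:                          # periodically cull and tinker
        devices.sort(key=lambda d: d["error"])
        print(f"t={t:3d}  best device error over last 40 steps: {devices[0]['error'] / 40:.1f}")
        survivors = [{**d, "error": 0.0} for d in devices[:10]]   # discard the worst half
        devices = survivors + [tinker(random.choice(survivors)) for _ in range(10)]
    last = actual
```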

p.97 The idea that "interpretive devices" such as explicit forecasting models and technical-trading rules play a central role in agent cognition fits with a more general set of ideas in cognitive science, summarized in Clark. This work rejects the notion that cognition is all "in the head." Rather, interpretive aids... provide a "scaffolding," an external structure on which much of the task of interpreting the world is off-loaded. Clark argues that the distinctive hallmark of in-the-head cognition is "fast pattern completion,"...

p.97 Lane and Maxfield consider the problem of interpretation from a different perspective... cognition has an unavoidable social dimension. What interpretations are possible depend on who interacts with whom, about what. They also argue that new functionality attributions cannot be foreseen outside the particular generative relationships in which they arise. This unforeseeability has profound consequences for what constitutes "rational" action in situations of rapid change in the structure of agent-artifact space.

[All Systems Will Be Gamed, Brian Arthur, 2010, published here for the first time, p.103-118]

p.109-110 In general we have a given policy system and a mental model or analytical studies of how it is expected to work, and we would like to anticipate where the system might in real life be exploited. So how do we proceed in general? How would we go about failure mode analysis in a particular economic situation? There is no prescribed answer to these questions, but we can usefully borrow some directives from engineering failure analysis. [JLJ - numbering added for clarity]

  1. An obvious first step is to have at hand knowledge of how similar systems have failed in the past... we need a failure mode analysis of how policy systems have been exploited in the past...
  2. Second, we can observe that in general the breakdown of a structure starts at a more micro level than that of its overall design. Breakdown in an engineering design happens... because stresses cause hairline cracks in some part of an assembly, or some component assembly fails, and these malfunctions propagate to higher levels, possibly to cause eventual whole-system degradation... we will have to have detailed knowledge of the options and possibilities agents possess if we want to understand how manipulation may happen.
  3. Third... we can look for places of high "stress" in the proposed system and concentrate our attention there... Typically, in an analytical model, points of behavioral action are represented as rates... or as simple rules... The modeler needs to query whether simple rates or rules are warranted, given patterns of incentives agents face.

p.110-111 All of this would suggest that if we have a design for a social system and an analytical model of it, we can "stress test" it by [JLJ - numbering added for clarity]

  1. first identifying where actual incentives would yield strong inducements for agents to engage in behavior different from the assumed behavior. [JLJ - I have looked at this concept as "coercion"] These might... be places where agents have power to affect other players' well-being...
  2. Next we construct the agents' possibilities from our sense of the detailed incentives and information the agents have at this location. That is, we construct detailed strategic options for the agents. The key here is "detailed": the options or opportunities possible here are driven by the imagination and experience of the analyst looking at the system, they are drawn from the real world... we will need to have knowledge of the fine-grained information and opportunities the agents will draw from to create their actions... [JLJ - I have looked at this previously as asking-and-answering the question "how might we proceed?"]
  3. Once we have identified where and how exploitation might take place, we can break open the overall economic model of the policy system at this location, and insert a module that "injects" the behavior we have in mind. We now have a particular type of exploitation in mind, and a working model of it that we can use to study what difference the strategic agents will make in the behavior of the overall system... [JLJ - the problem is that someone out there might be better at gaming the system than our ability to predict s/he is there, gaming it. Now what? Are we claiming that we are smarter than everyone, everywhere? This is like instructing a police department to make a model of a city, and predict where the crimes will happen, based on agent-based models, then put policemen there to arrest the people as they commit the crimes. While this might catch some, the sophisticated will just post lookouts. The 1972 Watergate burglars were done in by police who arrived at the scene in an unmarked car, with one non-uniformed officer wearing shoulder-length hair. The look-out team did not think they were policemen. Exactly how will Brian Arthur respond to similar-style schemes to defraud which are not predictable by models? Unbelievably, for this reason, it appears that we need economic police, much like we have metropolitan police.]
  4. What is important here is that we are looking for weak points in a policy system and the consequences that might follow from particular behaviors that system might be prone to. It is important that this testing not be rushed...
  5. Things can be speeded up if multiple designers work in parallel and are invited to probe a model to find its weak points... Here the overall simulation model or overall policy situation would be given, and we would be inviting outside participants to submit strategies to exploit it... [JLJ - I have looked at this as conceptually building adaptive capacity]
  6. To do this in the more general systems context, participants would need to study the system thoroughly, identify its myriad incentives, home in on the places where it leaves open opportunities for exploitations, and model these. [JLJ - I have looked at this previously as "how much should we care" about the ideas just generated for how to "go on". A simulation sketch of this stress-testing loop follows below.]
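
[JLJ - A minimal sketch of the stress-testing loop above, assuming the policy system is already represented as a toy agent-based model. The "policy" here (a subsidy paid on self-reported output, with rare audits) and every name and number in it are my own illustration of the procedure, not Arthur's example: identify the weak point (reports are rarely checked), construct a detailed strategic option (over-report), inject that behavior as a module, and compare the system-level outcome with the baseline.]

```python
import random

SUBSIDY_RATE = 0.5    # toy policy: pay agents half of their reported output
AUDIT_PROB = 0.05     # identified weak point: reports are rarely verified
N_AGENTS, ROUNDS = 200, 50

def honest_report(true_output):
    """Assumed behavior in the analytical model of the policy."""
    return true_output

def inflated_report(true_output):
    """Injected module: the detailed strategic option an agent might actually use."""
    return 3 * true_output if AUDIT_PROB < 0.1 else true_output

def run_policy_model(reporting_behavior, seed=0):
    """Run the toy policy system and return total subsidy paid out,
    the system-level outcome we want to stress test."""
    rng = random.Random(seed)
    total_payout = 0.0
    for _ in range(ROUNDS):
        for _ in range(N_AGENTS):
            true_output = rng.uniform(0, 10)
            report = reporting_behavior(true_output)
            if rng.random() < AUDIT_PROB and report > true_output:
                continue                     # caught over-reporting: no payment
            total_payout += SUBSIDY_RATE * report
    return total_payout

baseline = run_policy_model(honest_report)
stressed = run_policy_model(inflated_report)
print(f"baseline payout: {baseline:10.0f}")
print(f"stressed payout: {stressed:10.0f}  ({stressed / baseline:.1f}x the baseline)")
```

[JLJ - The point of the exercise is step 3 of the list: the same overall model runs twice, once with the assumed behavior and once with the injected exploitation, and the difference in the aggregate outcome is what the designer studies.]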

p.115 If some randomly generated strategy is monitored and proves particularly effective, certain agents will quickly "discover" it. To an outsider it will look as if the strategy has suddenly been switched on - it will suddenly emerge and have an effect. In reality, the agents are merely inductively probing the system to find out what works, thereby at random times "discovering" effective strategies that take advantage of the system. Exploitation thus "appears."

[JLJ - Arthur perhaps fails to consider that support for a particular economic policy (such as one moving through Congress) might happen if a particular group knows in advance how they are going to exploit that policy. Exploitation "appears", yes, but it was likely plotted and schemed long before. Also, concerning exploitation, the same effect is observed if we have an effective process which asks-and-answers the idea-generating question "how might I proceed?", followed by asking-and-answering the practical (and strategic) question "how much should I care about that?" But in chess as in business, we can do better than "randomly generated strategies" - we can use critical success factors (such as king safety, multiple-move piece mobility, piece development, light-or-dark-square control, constraint effectiveness) to effectively and practically "orient" our efforts at generating "maybe" moves, which we then test. The exponential explosion of move possibilities penalizes exploration efforts ("searching") that do not operate this way, except in tactical positions, where more-thorough-yet-possibly-more-shallow strategies often yield better results.]
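
[JLJ - A sketch of what "orienting" move generation by critical success factors might look like in code, using the python-chess library (an assumption on my part) and crude proxies for two of the factors named above: how many replies the move leaves the opponent (constraint/mobility) and how much pressure remains on our own king (king safety). The weights and the move budget are illustrative, not tuned values; in a tactical position one would relax the budget or exempt checks and captures from the cut.]

```python
import chess  # assumes the python-chess package is available

# illustrative weights for a few of the critical success factors named above
W_CONSTRAINT, W_KING_SAFETY, W_CHECK = 1.0, 2.0, 3.0

def factor_score(board: chess.Board, move: chess.Move) -> float:
    """Score a candidate move by crude proxies for the critical success factors."""
    gives_check = board.gives_check(move)
    board.push(move)
    try:
        opponent_replies = board.legal_moves.count()   # constraint: fewer is better
        our_king = board.king(not board.turn)          # king of the side that just moved
        pressure = len(board.attackers(board.turn, our_king)) if our_king is not None else 0
    finally:
        board.pop()
    return (W_CHECK * gives_check
            - W_CONSTRAINT * opponent_replies
            - W_KING_SAFETY * pressure)

def maybe_moves(board: chess.Board, budget: int = 8):
    """Return the top `budget` 'maybe' moves, ordered by the factor score,
    instead of searching every legal move with equal effort."""
    return sorted(board.legal_moves,
                  key=lambda m: factor_score(board, m),
                  reverse=True)[:budget]

if __name__ == "__main__":
    board = chess.Board()
    print([board.san(m) for m in maybe_moves(board)])
```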

p.115 But I want to emphasize my main point. Nothing special needs to be added by way of "scheming" or "exploitive thinking" to agent-based simulation models when we want to model system manipulation. Agents are faced with particular information about the system and the options available to them when it becomes available, and from these they generate putative actions. From time to time they discover particularly effective ones and "exploitation" - if we want to call it that - emerges. Modeling this calls for standard procedures already available in agent-based modeling.

[JLJ - ummm... what if the schemer is smarter (or more clever, say) than you are? The Tour de France did not catch schemer Armstrong until they outsmarted him by creating a test which detected that his blood products had (at one time) been placed in a plastic bag of the type typically used for blood doping. NASCAR deals with teams who race illegal cars. There is fraud in the Social Security disability system. People keep cashing the checks of relatives on Social Security who have died. There is sophisticated fraud and cheating everywhere (Enron, for example), especially where there are greedy smart people with time on their hands. Look at viruses and malware and ransomware - people spend time to write this code because it works - it has payoff. What if you had access to Microsoft's bug database for Windows, especially of known bugs that are not yet patched? If there is a prize of any kind, there will always be those who will plot ways to win it, legal or not. So good luck trying to discover and undermine such schemes using just a computer program. Swing and a miss, Brian. Better to examine the concept of resiliency, and the adoption of a resilient position in the face of the unknown.]

p.116 What we really want, in this case, is to have the computer proceed completely without human prompting, to "ponder" the problem... and "discover" the category... or invent some other plausible way to defeat obstacles, without these being built in... There is more than a whiff of artificial intelligence here. We are really asking for an "invention machine" [JLJ - not really, just a machine that muses, using pre-programmed human-crafted schemes. These might be based on "practical tricks" that "often work", and "cues" which predict when they might apply.]

p.116 In the case of the 2003 US invasion of Iraq, such computation or simulation would have run through previous invasions in history, and would have encountered previous insurgencies that followed from them, and would have warned of such a possibility in the future of Iraq, and built the possibility into the simulation. It would have anticipated the "emergent" behavior. [JLJ - It is likely that such a "simulation" as proposed would be thrown out, and another substituted that showed the US winning. An argument can always be made that our weapons are much stronger now, that we have learned from previous conflicts, etc. And so it will go through history...]

p.116-117 economics... what it hasn't been able to do is prevent financial and economic crises, most of which are caused by exploitative behavior... Many economists - myself included - would say that unwarranted faith in the ability of free markets to regulate themselves bears much of the blame... But so too does the absence of a systematic methodology in economics of looking for possible failure modes in advance of policy implementation. Failure-mode studies are not at the center of our discipline for the simple reason that economics' adherence to equilibrium analysis assumes that the system quickly settles to a place where no agent has an incentive to diverge from its present behavior, and so exploitive behavior cannot happen... I suggest that it is time to revise our thinking on this... We need to... anticipate where the systems we study might be exploited. We need to stress test our policy designs, to find their weak points and see if we can "break" them. [JLJ - yes, but ultimately you create just one "voice" in a sea of other voices. Anyone can have an opinion, or create a stress test of their own which passes. Now what?]

[The Economy Evolving as Its Technologies Evolve, Brian Arthur, 2009, p.134-143]

p.135 I will define the economy as the set of arrangements and activities by which a society satisfies its needs. (This makes economics the study of this.)

p.140 Structural change in the economy is not just the addition of a novel technology and replacement of the old, and the economic adjustments that follow these. It is a chain of consequences where the arrangements that form the skeletal structure of the economy continually call forth new arrangements.

p.141 the economy is never quite at stasis... From within, the system is always poised for change... The economy... exists always in a perpetual openness of change - in perpetual novelty. It exists perpetually in a process of self-creation... The economy is perpetually constructing itself.

[On the Evolution of Complexity, Brian Arthur, 1994, p.144-157]

p.156 I have suggested three ways in which complexity tends to grow as evolution takes place. It may grow by increases in diversity that are self-reinforcing; or by increases in structural sophistication that break through performance limitations; or by systems "capturing" simpler elements and learning to "program" these as "software" to be used to their own ends.

p.156 As we study evolution more deeply, we find that biology provides by no means all of the examples of interest. Any system with a lineage of inherited, alterable structures pressured to improve their performance shows evolutionary phenomena. And so, it is likely that increasingly we will find connections between complexity and evolution by drawing examples not just from biology, but from the domains of economics, adaptive computation, artificial life, and game theory.

[Cognition: The Black Box of Economics, Brian Arthur, 2000, p.158-170]

p.159 Human decision makers do not back off from a problem because it is difficult or unspecified. We might say that when problems are too complicated to afford solution, or when they are not well-specified, agents face not a problem but a situation. They must deal with that situation; they must frame the problem, and that framing in many ways is the most important part of the decision process.

p.162 our brains are "associative engines"... We're wonderful at association, and in fact, in cognition, association is just about all we do. In association we impose intelligible patterns.

p.163 I'm not saying that association is all the human brain does, but cognitively, association is the main thing we do. And we do it fast.

p.163 Our minds then are extremely good at associating things, using metaphors, memories, structures, patterns, theories. In other words, the mind is not given. It's not an empty bucket for pouring data in. The mind itself is emergent.

p.164 How might we model the thinking process in problems that are complicated or ill-defined? ...economic agents... try to associate temporary internal models or patterns or hypotheses to frame the situation... where agents face problems of complication or ill-definition, they use clues from the situation to form hypothetical patterns, frameworks and associations. These hypothetical patterns fill the gaps in the agent's understanding... to deal with complication: we construct plausible, simpler models that we can cope with. It enables us to deal with ill-definedness [JLJ - yes, but we also construct and execute diagnostic tests. Imagine a company hiring a new employee. The job interview is a standard way a company estimates how competent a person is, combined with perhaps phone calls to previous managers or co-workers.]

p.169 Economic agents bring to their actions not just their preferences and endowments, but also their understandings - the associations and meanings they have derived from their history of previous actions and experiences. [JLJ - yes, and their assorted array of tricks that might work. I was recently in line to purchase a book at a store that was going out of business and was liquidating its large inventory of books. A large sign was posted at checkout which said "no returns".  The books remaining in the store all had been marked down. A lady in front of me was trying to exchange a book she had purchased for another one - she tried a variety of "tricks that might work" - at first she tried to tell the clerk that it wasn't a return but an "exchange" - she had got the wrong one by mistake. Then she tried to tell the store that she had supported it over the years, how about doing a favor for a long-term customer? The clerk patiently held her ground - returns of any kind were not permitted, and in fact, the assets of the store had been purchased by a holding company - the store itself she knew was no longer in business. In the end, the exchange did not happen. In another example, I read a book which advised you, when getting a hotel room, to ask the clerk if the price quoted was "the best he could do." Then stop and say nothing. The silence often prompts the clerk to offer a further discount. Other tricks that work include discount coupons, "loss leader" sales (a Toys-R-Us store I worked in sold diapers at a loss, which were kept in the back of the store. Parents had to walk through aisles of expensive toys to get them, often making additional purchases). And so on. Expert salesmen read books on "closing the deal", learning to overcome objections, learning tricks that might work.]

[The End of Certainty in Economics, Brian Arthur, 1999, p.171-181]

p.175 There is a logical hole in standard economic thinking. Our forecasts co-create the world our forecasts are attempting to predict... The idea that we can separate the subjects of the economy - the agents who form it - from the object, the economy, is in trouble. [JLJ - and so "agent based modeling" is born - just like that.]

p.181 The view I am giving here... says that the economy itself emerges from our subjective beliefs. These subjective beliefs, taken in aggregate, structure the micro-economy... They are the DNA of the economy. [JLJ - ok, so in a complex game of strategy the position on the board emerges from the subjective beliefs of the players.]

[Complexity and the Economy, Brian Arthur, 1999, p.182-187]

p.183 Conventional economic theory chooses not to study the unfolding of the patterns its agents create but rather to simplify its questions in order to seek analytical solutions.

p.184 Once we adopt the complexity outlook, with its emphasis on the formation of structures rather than their given existence, problems involving prediction in the economy look different.