p.xvi Questioning our own beliefs... isn't easy, but it is the first step in forming new, hopefully more accurate,
beliefs... learning to think like a sociologist means learning to question precisely your instincts about how things
work, and possibly to unlearn them altogether.
p.10-11 People who lack common sense... never seem to understand what it is that they should be paying attention
to, and they never seem to understand what it is that they don't understand. And for exactly the same reason that
programming robots is hard, it's surprisingly hard to explain to someone lacking in common sense what it is that they're doing
wrong.
p.14-15 among humans, what seems reasonable to one might seem curious, bizarre, or even repugnant to another...
"...that, so far as they are concerned, is the beginning, the middle, and the end of the matter: it is precisely what they
think has occurred, it is all they think has occurred, and they are puzzled only at my puzzlement at their lack of puzzlement."
Disagreements over matters of common sense, in other words, are hard to resolve because it's unclear to either side
on what grounds one can even conduct a reasonable argument... whatever it is that people believe to be a matter of
common sense, they believe it with absolute certainty. They are puzzled only at the fact that others disagree.
p.26 The basic problem here is that whenever people get together in groups... they interact with one another...
these influences pile up in unexpected ways, generating collective behavior that is "emergent" in the sense that it cannot
be understood solely in terms of its component parts.
p.27 Regardless of which trick we use, however, the result is that our explanations
of collective behavior paper over most of what is actually happening.
p.28 By providing ready explanations for whatever particular circumstances the world throws at us, commonsense
explanations give us the confidence to navigate from day to day and relieve us of the burden of worrying about whether what
we think we know is really true, or is just something we happen to believe... we think we have understood things that
in fact we have simply papered over with a plausible-sounding story... this illusion of understanding... actually inhibits
our understanding of the world.
p.32 it should come as little surprise that theories about how people make choices are also central to most
of the social sciences.
p.34 all human behavior can be understood in terms of individuals' attempts to satisfy their preferences.
p.41-42 people digest new information in ways that tend to reinforce what they already think.
In part, we do this by noticing information that confirms our existing beliefs more readily than information that does not.
And in part, we do it by subjecting disconfirming information to greater scrutiny and skepticism than confirming information.
Together, these two closely related tendencies - known as confirmation bias and motivated reasoning respectively - greatly
impede our ability to resolve disputes... in which the different parties look at the same set of "facts" and come away with
completely different impressions of reality.
p.42 the physicist Max Planck famously acknowledged, it is often the case that "A new scientific
truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die."
p.42,43-44 the evidence from psychological experiments makes clear that there are a
great many potentially relevant factors that affect our behavior in very real and tangible
ways but that operate largely outside our conscious awareness... The ability to know what
is relevant to a given situation is of course the hallmark of commonsense knowledge
p.44 there is a big difference between knowing what is relevant in practice and being able to explain how
it is that we know it.
p.45 The intractability of the frame problem effectively sank the original vision of AI, which was to replicate
human intelligence more or less as we experience it ourselves.
p.46 We don't think the way we think we think
p.61-62 arguably the central intellectual problem of sociology... The problem, in a nutshell,
is that the outcomes that sociologists seek to explain are intrinsically "macro" in nature... At the same time, however,
it is necessarily the case that all these outcomes are driven in some way by the "micro" actions of individual humans...
So how do we get from the micro choices of individuals to the macro phenomena of the social world? ...This is the micro-macro
problem... something like the micro-macro problem comes up in every realm of science, often under the label "emergence."
p.63 Increasingly, however, the questions that scientists find most interesting... are forcing them
to consider more than one scale at a time, and so to confront the problem of emergence head-on.
p.63 Individual plants and animals interact with each other in complex ways, via prey-predator relations,
symbiosis, competition, and cooperation, to produce ecosystem-level properties that cannot be understood in terms of individual
species.
p.64 In the kinds of systems that sociologists study, in fact, the interactions come in so many
forms and carry such consequence, that our own version of emergence - the micro-macro problem -
is arguably more complex and intractable than in any other discipline... Emergence, remember,
is a hard problem precisely because the behavior of the whole cannot easily be related to the behavior of its
parts
p.66 Unfortunately, attempts to construct the kind of rock-bottom explanations that methodological individualists
imagined have run smack into the micro-macro problem.
p.123 it may indeed be difficult or even impossible to understand what is happening at the time it is happening.
But the difficulty derives solely from a practical problem about the speed with which one can realistically assemble the relevant
facts.
p.126 History cannot be told while it is happening... because what is happening can't be made sense
of until its implications have been resolved.
p.129 In much of life... the very notion of a well-defined "outcome," at which point we can evaluate,
once and for all, the consequences of an action, is a convenient fiction. In reality, the events that we label as outcomes
are never really endpoints. Instead, they are artificially imposed milestones, just as the ending of a movie is really
an artificial end to what in reality would be an ongoing story.
p.130-131 at no point in time is the story ever really "over." Something always happens afterward, and what
happens afterward is liable to change our perception of the current outcome, as well as the perception of the outcomes that
we have already explained. It's actually quite remarkable in a way that we are able to completely rewrite our previous explanations
without experiencing any discomfort about the one we are currently articulating, each time as if now is the right
time to evaluate the outcome... there is no reason to think that now is any better time to stop and evaluate than any other.
p.135 Humans love to make predictions
p.141 Nobody really agrees on what makes a complex system "complex," but it's generally
accepted that complexity arises out of many interdependent components interacting in nonlinear ways.
p.141-142 In complex systems, tiny disturbances in one part of the system can get amplified to produce large
effects somewhere else - the "butterfly effect" from chaos theory... When every tiny factor in a complex system can
get potentially amplified in unpredictable ways, there is only so much a model can predict. As a result, models of
complex systems tend to be rather simple - not because simple models perform well, but because incremental improvements make
little difference in the face of the massive errors that remain... all the models of complex systems are bad.
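[Illustration, not from the book: the "butterfly effect" can be made concrete with the logistic map, a standard one-line example from chaos theory. In the sketch below, two runs whose starting values differ by one part in a million soon bear no resemblance to each other; the map and the numbers are chosen purely for illustration.]
```python
# The logistic map x -> r*x*(1-x) with r = 4 is a classic chaotic system:
# a tiny disturbance in the starting value gets amplified until the two
# trajectories are effectively unrelated.

def logistic_map(x: float, r: float = 4.0) -> float:
    """One step of the logistic map."""
    return r * x * (1.0 - x)

x_a, x_b = 0.200000, 0.200001   # initial conditions differing by one millionth
for step in range(1, 31):
    x_a, x_b = logistic_map(x_a), logistic_map(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: {x_a:.6f} vs {x_b:.6f} (gap {abs(x_a - x_b):.6f})")
```
By step 30 the gap between the two runs is of the same order as the values themselves, which is why, past a short horizon, only probabilistic statements about such systems remain meaningful.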
p.143 For complex systems... the best that we can hope for is to correctly predict the probability
that something will happen.
p.149 what is relevant cannot be known until later. The kinds of predictions we
most want to make, that is, require us to first determine which of all the things that might happen in the future will turn
out to be relevant, in order that we can start paying attention to them now... This relevance problem is fundamental,
and can't be eliminated simply by having more information or a smarter algorithm.
p.151 If our prediction does not somehow help to bring about larger results, then it is of little interest
or value to us.
p.155 the future is more like a bundle of possible threads, each of which is assigned some
probability of being drawn, where the best we can manage is to estimate the probabilities of the different threads.
p.171 when it comes to complex systems... there are strict limits to how accurately we can predict what
will happen... But... it seems that one can get pretty close to the limit of what is possible with relatively simple
methods.
p.172 Predictions about complex systems, in other words, are highly subject to the law
of diminishing returns: The first pieces of information help a lot, but very quickly you exhaust whatever potential
for improvement exists.
Of course, there are circumstances in which we may care about very small improvements in
prediction accuracy... Under these circumstances, it's probably worth the effort and expense to invest in sophisticated methods
that can exploit the subtlest patterns. But in just about any other business... where the predictions you are making
are usually just one aspect of your decision-making process, you can probably predict about as well as possible with the help
of a relatively simple method.
p.174 As the psychologist Robyn Dawes once pointed out, "the whole trick is to know what variables to look
at and then know how to add."
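[Illustration, not from the book: a toy simulation of the two previous points, that simple methods get close to the limit of what is possible and that the "whole trick" is knowing which variables to add. The data are synthetic and every number is invented; the comparison is between an equal-weights "just add" score and a fully fitted least-squares model.]
```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=(n, 3))                                          # the relevant variables
y = x @ np.array([1.0, 0.8, 0.6]) + rng.normal(scale=3.0, size=n)    # outcome is mostly noise

# "Improper" model: standardize the right variables and just add them up.
z = (x - x.mean(axis=0)) / x.std(axis=0)
unit_score = z.sum(axis=1)

# "Proper" model: weights fitted by least squares.
w, *_ = np.linalg.lstsq(x, y, rcond=None)
fitted = x @ w

print(f"unit-weight score, correlation with outcome: {np.corrcoef(unit_score, y)[0, 1]:.3f}")
print(f"fitted model,      correlation with outcome: {np.corrcoef(fitted, y)[0, 1]:.3f}")
# Both correlations are modest, because most of the variation is irreducible
# noise, and the gap between them is small: knowing what to look at matters
# far more than how cleverly the weights are estimated.
```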
p.176 there is simply a level of uncertainty about the future that we're stuck with, and this uncertainty
inevitably introduces errors into the best-laid plans.
p.180 The main cause of strategic failure, Raynor argues, is not bad strategy, but great strategy that just
happens to be wrong... Whether great strategy succeeds or fails depends entirely on whether the initial vision happens to
be right or not. And that is not just difficult to know in advance, but impossible.
p.180-181 This is the strategy paradox... The solution to the strategy
paradox, Raynor argues, is to acknowledge openly that there are limits to what can be predicted, and to develop
methods for planning that respect those limits. In particular, he recommends that planners look for ways to integrate
what he calls strategic uncertainty - uncertainty about the future of the business you're in - into the planning
process itself. Raynor's solution, in fact, is a variant of a much older planning technique
called scenario planning... scenario planners attempt to sketch out a wide range of these hypothetical
futures, where the main aim... is... to challenge possibly unstated assumptions that underpin existing strategies... Once
these scenarios have been sketched out, Raynor argues that planners should formulate not one strategy, but rather
a portfolio of strategies, each of which is optimized for a given scenario... Managing strategic uncertainty
is then a matter of creating "strategic flexibility" by building strategies around the core elements and hedging
the contingent elements through investments in various strategic options.
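[Illustration, not Raynor's own procedure: one standard decision-theory way to turn a set of sketched scenarios into a choice among strategies is a payoff table scored by worst-case regret, which rewards the kind of hedged, flexible strategy the passage describes. The scenarios, strategies, and payoffs below are entirely invented.]
```python
# Score each candidate strategy under each hypothetical scenario, then prefer
# the strategy whose worst-case regret (shortfall versus the best achievable
# payoff in that scenario) is smallest.

payoffs = {
    # strategy: payoffs under the (boom, stagnation, disruption) scenarios
    "aggressive expansion": (90, 10, -40),
    "focused core product": (50, 40, 10),
    "hedged portfolio":     (60, 35, 25),
}

best_per_scenario = [max(row[s] for row in payoffs.values()) for s in range(3)]

def worst_case_regret(row) -> int:
    """Largest shortfall versus the best achievable payoff in each scenario."""
    return max(best - got for best, got in zip(best_per_scenario, row))

for name, row in sorted(payoffs.items(), key=lambda kv: worst_case_regret(kv[1])):
    print(f"{name:22s} worst-case regret = {worst_case_regret(row):3d}")
# The hedged portfolio gives up some upside in the boom scenario in exchange
# for never being badly wrong under any of the scenarios considered.
```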
p.182 Raynor's approach to managing uncertainty through strategic flexibility is certainly
intriguing. However, it is also a time-consuming process - constructing scenarios, deciding what is core and what
is contingent, devising strategic hedges, and so on
p.182 [Raynor] argues that they [senior management] should devote all their time to managing strategic
uncertainty, leaving the operational planning to division heads... the only way to deal adequately with strategic
uncertainty is to manage it continuously - "Once an organization has gone through the process of building scenarios,
developing optimal strategies, and identifying and acquiring the desired portfolio of strategic options, it is time to do
it all over again."
p.185 when we look to our own future, what we see... is myriad potential trends, any one of which could
be game changing and most of which will prove fleeting or irrelevant. How are we able to know which is which? And without
knowing what is relevant, how wide a range of possibilities should we consider?
p.186 an emphasis on strategic flexibility can help them [managers] manage the uncertainty that
the scenarios expose. But no matter how you slice it, strategic planning involves prediction, and prediction runs
into the fundamental "prophecy" problem... that we just can't know what it is that we should be worrying about until after
its importance has been revealed to us. An alternative approach, therefore... is to rethink the whole philosophy of
planning altogether, placing less emphasis on anticipating the future, or even multiple futures, and more on reacting to the
present.
p.187-188 Rather than trying to anticipate what shoppers will buy next season, [Spanish clothing
retailer] Zara effectively acknowledges that it has no idea. Instead, it adopts what we might call a measure-and-react strategy.
First, it sends out agents to scour shopping malls, town centers, and other gathering places to observe what people are already
wearing, thereby generating lots of ideas about what might work. Second, drawing on these and other sources
of inspiration, it produces an extraordinarily large portfolio of styles, fabrics, and colors - where each
combination is initially made in only a small batch - and sends them out to stores, where it can then measure directly
what is selling and what isn't. And finally, it has a very flexible manufacturing and distribution
operation that can react quickly to the information that is coming directly from stores, dropping those styles that
aren't selling... and scaling up those that are.
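[Illustration, not Zara's actual system: a toy measure-and-react loop. Many candidate styles start as small batches; each round, observed sell-through is measured, the worst-selling half is dropped, and production of the rest is scaled up. Demand is simulated and every number is invented.]
```python
import random

random.seed(1)
# Unknown-in-advance probability that a unit of each style sells.
true_demand = {f"style_{i:02d}": random.uniform(0.05, 0.6) for i in range(20)}
capacity = {style: 50 for style in true_demand}        # small initial batches

for week in range(4):
    # Measure: units sold is a noisy function of the (unknown) true demand.
    sales = {s: sum(random.random() < true_demand[s] for _ in range(capacity[s]))
             for s in capacity}
    # React: drop the worst-selling half, scale up production of the rest.
    ranked = sorted(capacity, key=lambda s: sales[s] / capacity[s], reverse=True)
    capacity = {s: capacity[s] * 2 for s in ranked[: max(2, len(ranked) // 2)]}

print("styles still in production:", sorted(capacity))
print("largest batch now being made:", max(capacity.values()), "units")
```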
p.188 Mintzberg recommended that planners should rely less on making predictions about long-term
strategic trends and more on reacting quickly to changes on the ground. Rather than attempting to anticipate correctly
what will work in the future, that is, they should instead improve their ability to learn about what is working right now.
Then, like Zara, they should react to it as rapidly as possible, dropping alternatives that are not working - no matter how
promising they might have seemed in advance - and diverting resources to those that are succeeding, or even developing new
alternatives on the fly.
p.196,199 In many circumstances, however, merely improving our ability to measure things does not,
on its own, tell us what we need to know... Without experiments, it's actually close to impossible to ascertain cause and effect,
and therefore to measure the real return on investment of an advertising campaign.
p.201 finding out that something doesn't work is also the first step toward learning what does work.
p.202 the only way to improve one's marketing effectiveness over time is to first know what is working and
what isn't. Advertising experiments, therefore, should not be viewed as a one-off exercise that either yields
"the answer" or doesn't, but as part of an ongoing learning process that is built into all advertising.
A small but growing community of researchers is now arguing that the same mentality should
be applied not just to advertising but to all manner of business and policy planning
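[Illustration, not from the book: a minimal sketch of the kind of advertising experiment described above. A control group is randomly held out from the campaign, and the difference in purchase rates between exposed and control customers estimates the causal lift, from which a return on the ad spend can be computed. The rates, costs, and margins are invented.]
```python
import random

random.seed(7)
baseline_rate, ad_effect = 0.020, 0.004        # unknown in reality; simulated here
cost_per_exposure, margin_per_sale = 0.10, 30.0

def purchasers(n: int, rate: float) -> int:
    """Simulated number of buyers among n customers with a given purchase rate."""
    return sum(random.random() < rate for _ in range(n))

n = 200_000
exposed = purchasers(n, baseline_rate + ad_effect)    # saw the campaign
control = purchasers(n, baseline_rate)                # randomly held out

lift = exposed / n - control / n                      # causal effect per customer
incremental_profit = lift * n * margin_per_sale
spend = n * cost_per_exposure
print(f"estimated lift: {lift:.4%} of exposed customers")
print(f"estimated return on ad spend: {(incremental_profit - spend) / spend:+.1%}")
# Without the randomized control group, the exposed group's purchase rate by
# itself says nothing about what those customers would have bought anyway.
```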
p.204 In many circumstances, it may well be true that realistically all one can do is pick the course of
action that seems to have the greatest likelihood of success and commit to it.
p.205 the knowledge on which plans should be based is necessarily local to the concrete situation
in which it is to be applied.
p.209-210 Bootstrapping... The basic idea is that production systems should be
engineered along "just in time" principles, which assure that if one part of the system fails, the whole system must stop
until the problem is fixed... it forces organizations to address problems quickly and aggressively. It also forces
them to trace problems to their "root causes" - a process that frequently requires looking beyond the immediate cause of the
failure to discover how flaws in one part of the system can result in failures somewhere else... bootstrapping goes one step
further [than a technique previously discussed called bright spots], sniffing out not only what is working, but also
what could work if certain impediments were removed, constraints lifted, or problems solved elsewhere in the system.
p.210 planners must recognize that, no matter what the problem is... somebody out there already
has part of the solution and is willing to share it with others... planners can... devote their resources to finding
the existing solutions, wherever they occur, and spreading their practice more widely.
p.211 A Planner thinks he already knows the answer... A Searcher admits he doesn't know
the answers in advance... and hopes to find answers to individual problems by trial and error.
p.211 As different as they appear on the surface, in fact, all these approaches to planning... are really
just variations on the same general theme of "measuring and reacting."
p.228, 229 Much of life, however, is characterized by what the sociologist Robert Merton called the
Matthew Effect, named after a sentence from the book of Matthew in the Bible, which laments "For to all those who
have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken
away." ...Merton argued that the same rule applied to success more generally... Success... leads ... to more opportunities
to succeed, more resources with which to achieve success
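[Illustration, not from the book: a toy simulation of cumulative advantage. Each new unit of "success" goes to an actor with probability proportional to the success that actor already has; even when everyone starts out essentially equal, the final distribution is highly skewed. All parameters are invented.]
```python
import random

random.seed(42)
success = [1] * 100                     # 100 actors, starting essentially equal

for _ in range(10_000):                 # allocate 10,000 units of success
    # "For to all those who have, more will be given": the chance of winning
    # this unit is proportional to the success already accumulated.
    winner = random.choices(range(len(success)), weights=success)[0]
    success[winner] += 1

success.sort(reverse=True)
share = sum(success[:10]) / sum(success)
print(f"share of all success held by the top 10% of actors: {share:.0%}")
```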
p.244 Sociologists, for example, have long believed that the meaning of individual action can only
be properly understood in the context of interlocking networks of relationships - a concept that is called embeddedness.
p.263 When the subject is human behavior, in other words, it is actually hard to
imagine anything that social scientists could possibly discover that wouldn't sound obvious to a thoughtful person, no matter
how difficult it might have been to figure out.
p.264 At some level, we accept that the future is unpredictable, but we do not
know how much of that unpredictability could be eliminated simply by thinking through the possibilities more carefully,
and how much is inherently random in the way that a roll of the dice is random. Even less clear to us is how this
balance between predictability and unpredictability ought to change the kinds of strategies we deploy to prepare for future
contingencies, or the kinds of explanations we come up with for the outcomes we observed.
p.265 Nor is it necessarily the case that the most complicated and pressing real-world problems...
can ever be "solved" in an engineering sense, no matter how much basic science we acquire. For problems like these,
we can still discover that some solutions work better than others... the exact cause-and-effect mechanisms
may remain forever elusive.
p.266 three hundred years after Alexander Pope argued that the proper study of mankind should lie not in
the heavens but in ourselves, we have finally found our telescope. Let the revolution begin...
back flap Only by understanding how and when common sense fails, Watts argues, can we improve how we plan
for the future and understand the present - an argument that has important implications... in science and everyday life.