p.1 Introduction: The End of the Information Revolution
p.2-3 This book is about telling in an age of information overload what's relevant to what you're trying to do. The trouble with an overload of information isn't just that it's confusing. It's that the data have conflicting implications... This book helps find what information is most relevant despite the overload.
p.6-7 A major idea in this book is that the usefulness of any piece of information depends on two complementary values: its specificity - which you can think of as its content, or simply how surprising it is - and its relevance. Specificity gives you a sense of the power of a piece of information because you can draw more conclusions from precise outcomes than from vague ones... Relevance complements specificity. While specificity is intrinsic to information - it's the same no matter what your goal is - relevance depends entirely on that goal, or at least on the assumptions behind it... specificity and relevance drive the value of information
p.16 the testable strategies, relevant metrics, and reviews of results proposed in the following pages are more than a guide for coping with performance information overload. They embody an experimental approach to management and problem solving. It's an approach that reflects a thorough fallibilism about the strategies we construct to meet our goals and an optimism that we can always do better.
p.17 Chapter 1: Growing to Learn
p.18-19 A strategy is testable if it spells out goals and assumptions about how to achieve them that could conceivably prove wrong... lack of a testable clarity in a company's strategy can impair its ability to learn and adapt.
p.24-25 The rarity of strategic clarity helps explain why it's so hard to tell what matters in our burgeoning performance reports... A strategy is testable if it spells out goals and assumptions about how to achieve them that could conceivably prove wrong... clear strategies are few and far between... Strategies often become stagnant because their goals are stagnant.
p.26 Some ideas take a long time to bear fruit... The problem with this defense of long-term goals is that it affords no way to tell exactly which one really is your best new initiative... To see whether one of several possible initiatives really is most promising, you need to devise some near-term indicator that it's working. Such an indicator lets you test and adapt the strategy behind the initiative.
p.29 clear strategies are rare... they're hard to specify. They need to set aggressive goals to avoid stagnation. And they need to propose a way of achieving them that experience can help refine over time. Clear strategies are hard because they really force you to find out, if not to know, what you're talking about.
p.39 Chapter 2: Goals, Assumptions, and the Eight-Line Strategy
p.40-41 To learn anything from the heap of data our management information systems grind out, we need a basis for telling which metrics matter. The examples of Chapter One suggest that strategic assumptions - basically, our biggest bets or guesses - provide such a basis. The idea is to focus on the performance metrics that best test those assumptions and to try out new assumptions in place of the ones that fail... Three things distinguish these organizations [JLJ - clients of McKinsey consulting services that are deemed to be adaptable]: they make bold assumptions about the unknown as a matter of course, they spend their time looking for errors rather than trying to prove themselves right, and when they find an error in an assumption, they're quick to revise it and change course.
p.63 Chapter 3: Knowing What Matters
p.64 A performance indicator is relevant to a strategic assumption, according to the definition, if the assumption's truth or falsity greatly affects the results expected.
p.71 Derive performance metrics from your assumptions... a concrete definition of relevance:
A performance indicator is relevant to a strategic assumption if the assumption's truth or falsity greatly affects the results you expect.
If the results from an indicator are equally likely whether your assumptions are true or false, then it is irrelevant.
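The relevance definition above can be sketched as a simple comparison of likelihoods. This is my own toy illustration, not code or terminology from the book: the names and probabilities are invented, and "relevance" here is just the gap between how likely a result is if the assumption is true versus false.

```python
# Hedged sketch of the book's relevance idea: an indicator is relevant
# to a strategic assumption when the results you expect differ sharply
# depending on whether the assumption is true or false. All names and
# numbers below are illustrative assumptions, not from the book.

def relevance(p_result_if_true: float, p_result_if_false: float) -> float:
    """Gap between the expected result's likelihood under the
    assumption's truth versus its falsity.

    0.0 means the indicator is irrelevant (the result is equally
    likely either way); values near 1.0 mean it strongly tests
    the assumption.
    """
    return abs(p_result_if_true - p_result_if_false)

# Assumption (invented): "customers will pay a premium for same-day delivery."
# Indicator A: share of orders choosing same-day at a surcharge.
print(relevance(0.60, 0.10))   # large gap: a strong test
# Indicator B: total web traffic - likely the same either way.
print(relevance(0.40, 0.40))   # 0.0: irrelevant to this assumption
```

The second call returns exactly 0.0, matching the book's criterion: results equally likely whether the assumption is true or false carry no information about it.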
p.74-75 most organizations use a definition of relevance that's starkly different from the one I set out in the previous section. Their performance metrics either ignore strategy altogether, track red herrings, or get lost in endless demands for more data. As a result, they tend to become less adaptable or mired in complexity.
p.91 Chapter 4: Key Performance Indicators and the Metrics Matrix
p.99-103 To test a strategic assumption, you need to find an indicator that's relevant to it... If you want to focus on data that can improve decisions, you must find a way to screen indicators for relevance... a performance indicator is relevant to a strategic assumption if the assumption's truth or falsity greatly affects the results you expect... Choose a strategic assumption to test... The first part of the relevance screen... asks whether an indicator's results could possibly disprove your assumption... The second half of the screen focuses on possible results that would be favorable to an assumption... The main purpose of this screen is to pick out the handful of indicators most relevant to each key assumption you want to test... Even if you find an indicator that's highly relevant to a key assumption, it may be too vague to be useful. All other things equal, a more specific performance measure will be more valuable than a less specific one.
p.111-112 Imagine what effects you would see if your assumptions were right... Look for ways to measure those effects... Make sure what you measure would be unlikely if your assumptions were wrong.
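The three steps on pp. 111-112 can be sketched as a screening loop over candidate indicators. Everything here is a hypothetical illustration under my own assumptions: the indicator names and probability estimates are invented, and the 0.25 threshold is an arbitrary stand-in for "unlikely if your assumptions were wrong."

```python
# Toy sketch of the three-step screen: (1) imagine the effects you'd
# see if an assumption were right, (2) estimate how likely a favorable
# result is in that case, and (3) keep only indicators whose favorable
# results would be unlikely if the assumption were wrong.
# All names and numbers are invented for illustration.

candidates = {
    # name: (P(favorable result | assumption true), P(favorable | false))
    "repeat-purchase rate": (0.70, 0.15),
    "overall site visits":  (0.55, 0.50),   # happens anyway: weak test
    "pilot-region margin":  (0.65, 0.20),
}

def passes_screen(p_true: float, p_false: float,
                  max_if_false: float = 0.25) -> bool:
    """Keep an indicator only if a favorable result is more likely when
    the assumption holds AND unlikely (<= max_if_false) when it doesn't."""
    return p_true > p_false and p_false <= max_if_false

relevant = [name for name, (p_t, p_f) in candidates.items()
            if passes_screen(p_t, p_f)]
print(relevant)   # the site-visit metric is screened out
```

In this invented example "overall site visits" fails the screen because its favorable result is nearly as likely when the assumption is false, which is exactly the false-positive trap the second half of the relevance screen is meant to catch.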
p.116 "Only infrmtn esentl to undrstandg mst b transmtd."
p.121 Chapter 5: Three Kinds of Performance Gaps and the Strategy Review
p.122-123 Cisco... has... designed a short, elegant list of... indicators to test a wide range of assumptions about demand... Here's a firm that has learned to use performance indicators to test strategy as well as manage execution. This chapter and the next argue that unless organizations radically change how they use performance results, their expanding information resources will paralyze them. Performance results may still assess the execution of strategic plans - their traditional focus - but should do so only as a by-product of testing strategic assumptions.
p.140 my advice for closing a strategy gap... is to be creative... Setting up your planning and performance system to learn from mistaken assumptions may look like a losing proposition over the next performance period, but it is the only proven way to make progress over many iterations and longer periods of time. The implication for closing strategy gaps is to try new stuff.
p.141 Wells Fargo shifted from profit per loan to profit per employee, anticipating the rise of fee-based services in banking.
[JLJ - Yes, and Wells Fargo is now struggling to regain the trust of customers who were betrayed by employees fraudulently opening new accounts in their names... all to meet metrics and quotas which called for new accounts as a way of measuring employee performance.]
p.146 Of course, there is no such thing as a purely rational goal.
[JLJ - Well, I would say that having enough food to eat and a roof over your head, and a body that mostly works, and friends and family and occasional fun, and some margin to play with on these, some useful skills and a job, would certainly be good candidates.]
p.151 Chapter 6: Planning to Evolve
p.151 This chapter... argues that our organizations won't sustain growth unless we use the patterns of gaps in our performance results to review strategy and revise tactics continuously.
p.153 By telling us which of our assumptions are in most need of repair... allows us to allocate scarce talent to our most persistent problems. The result is a fully engaged management team that evolves strategy continuously by testing its strategic assumptions in parallel and revising the weakest ones. These assumptions - the guesses of the guess-test system - largely replace formal planning. We will still plan, but only to evolve.
p.155 the answer may be to identify new risk factors and try to forecast and measure them. If a new risk factor helps explain performance surprises, it will increase the variability of your uncontrollable gaps... you should measure only what you're willing to forecast.
p.157 Truly predictive metrics are indeed leading indicators but they are rarely interesting.
p.158 our best metrics... Their real value is their power to improve gradually our understanding of our jobs and organizations by explaining what just happened.
p.160 often... you just need a better way of thinking about how to achieve your goals... In principle, you should be willing to rethink your assumptions every time results surprise you... if it means reviewing an eight-line strategy, you're more likely to rebuild your top-line model of the business whenever it breaks down.
p.170 More broadly, you can never be sure which of your key assumptions may be responsible for a performance surprise.
p.171 Since the risk factor is uncontrollable by definition, all you can do is mitigate it, work around it, or set a level of expectations high enough to clear your goal most of the time in spite of it.
p.175 Conclusion: The Beginning of the Relevance Revolution
p.175 This book argues we must use performance results to sharpen strategy instead of pursuing them... solely as ends in themselves. Sharpening strategy through performance results basically requires three things: the development of performance strategies explicit enough to be testable, the derivation of performance metrics from the assumptions behind those strategies, and the use of performance gaps to reveal errors in goals and assumptions as well as execution.
p.175 Learning through experience starts with performance strategies explicit enough to be testable. A strategy is testable if it spells out goals and assumptions about how to achieve them that could conceivably prove wrong.
p.181 The metrics that matter are the ones that are relevant to assumptions that matter... The best metric for testing a strategic assumption is both relevant - in the sense of avoiding false positives and negatives - and specific. Relevance makes sure your expectations are sensitive to the truth or falsity of the assumption. Specificity makes sure the metric really captures those expectations. Together, relevance and specificity reflect the real value of a metric for someone proceeding on the basis of the assumption.