
Evaluation in the Face of Uncertainty (Morell, 2010)


Anticipating Surprise and Responding to the Inevitable

Jonathan A. Morell

"Our problem is that we react only when surprise falls upon us, when we need to put out a fire... What we need is a framework for anticipation and for systematic action."

"This book is about surprises that challenge evaluation."

"I think about organizing my evaluation work in a manner that will help me detect surprise as early as possible"

"If we really cannot predict how a program will unfold, or what impact it will have, or how the program will affect an evaluation plan, then we should favor evaluation plans that combine 'quick hit' methods (short cycle time from design to findings), with an emergent design approach to inquiry."

"For truly unforeseeable change, however, it is only in retrospect that one can identify the assumptions that, had they been anticipated, would have kept an evaluation out of trouble. For these kinds of situations, minimizing trouble requires early detection and quick response."

"We may not be able to escape surprise but we can appreciate how it works, and by so doing, we can develop strategies that will leave us better off. Unfortunately, no matter how much better off we become, there is no magic elixir that will turn the invisible visible, or that will make known all that cannot be predicted, or that will always allow us to react to change in a timely fashion. The best that we can do is to increase the range of what we can see. We can give ourselves longer lead times to react. We can find ways to glimpse the hazy outlines of what was previously invisible."

JLJ - Morell is talking about evaluating social programs for effectiveness, but he just as easily could be speaking about techniques useful for machines playing complex games of strategy.

Note how the quotes below, when interpreted as new directions for game theory, apply both to Morell's issues and to my own research interests - and perhaps to your difficult problems as well.

This book is surprisingly appropriate for game theory.

On a deeper note, we shouldn't be surprised that we are occasionally surprised. In order to 'go on,' we must constantly gather information, categorize, simplify, double- and cross-check, but ultimately we must act - at every waking moment - without full insight into our present state of affairs. That is the nature of the complex predicament we live, move, breathe, and act in. On the other hand, we should be surprised if we are always surprised, or if we are never surprised.

Then again, it is one thing to be 'surprised' by a nuclear power plant that suddenly acts in an unexpected way, and quite another to be 'surprised' in a pick-up game of American football when a team calls a running play in what is normally a passing situation. In the first case we ought never to be truly surprised, because multiple layers of protection and warning systems should already be in place. In the second case, so what - it is just a game...

xii I want to help the field of evaluation develop a community of interest that is dedicated to understanding why surprises occur in evaluation, how to anticipate them, and how to deal with those that cannot be anticipated.

p.1 This book is about surprises that challenge evaluation.

[JLJ - We have no business 'evaluating' in a professional way if we have not made intelligent efforts to minimize surprise in our 'evaluations'.]

p.2 Until recently, evaluators have not stepped back to look at surprise in a systematic manner, as a field of inquiry in its own right... The systems view is important because much surprise comes from relationships, fluctuations, and uncertainties in how parts of the whole affect each other.

p.2 The one article I could find that dealt with how evaluators might treat unintended consequences was titled "Identifying and Measuring Unintended Outcomes" (Sherrill, 1984).

p.3 complex systems by their nature can yield unpredictable behavior because of the interplay of factors such as uncertain environments, cross-linkages, self-organized behavior, ecological adaptation, and feedback loops of different lengths... For our purposes, though, we must accept the fact that the potential for unpredictable behavior is inherent in the settings where we work.

[JLJ - ...and play.]

p.3-4 In addition to the principles of complex systems, three behavioral/organizational dynamics are at play. First, our decision making is always based on less information than we are able to collect. But to say that we can get "more relevant information" is to say that we know what information is relevant and how much information is enough. We can only know this in retrospect. Second, we are not as vigilant as we could be in scouting for developing changes in the system we are working with... Third, the nature of the planning process is such that opportunity for major intervention occurs only infrequently along a program's life cycle... Thus it is almost certain that knowing that action is needed will not synchronize with the ability to act.

p.4 In sum, our efforts to change systems take place during intermittent windows of opportunity, at which time we scramble to organize relationships among a finite (and often small) number of components (e.g., staff, money, time, client characteristics, material, procedure, information, and treatments).

p.4-5 As evaluators our problem is not that the unexpected occurs. Our problem is that we react only when surprise falls upon us, when we need to put out a fire... What we need is a framework for anticipation and for systematic action... For that, we need an answer to three questions.

  1. When is the likelihood of surprise high?
  2. Under what circumstances will surprise disrupt evaluation?
  3. When the probability of surprise and disruption is high, what can we do about it?

A word of caution is in order. No matter how well we answer these questions we will always find ourselves in trouble. One problem we cannot escape is that any solution we conjure will entail overhead... our evaluations are often themselves composed of many tightly linked elements. As we tinker with them, other surprises will occur. Our situation is like the problem faced by safety managers who must confront what Perrow calls a "normal accident" (Perrow, 1999)... interactions among many tightly linked elements of complicated systems... Error in system design is not the problem. Rather, the very existence of those elements and their dependencies creates conditions where small changes in parts of the system can cause major disruptions.

[JLJ - ...well no, error in the design is the problem. More margin for contingency should have been planned into the system - even if this is inconvenient. Look at the recent Fukushima reactor disaster in Japan for a similar 'it probably won't happen, so I won't plan for it' style of thinking. Or the Challenger's 'it is too inconvenient to launch only in warm temperatures, so we will do what we please' line of thinking. Face it - when you play with fire, you will get burned; it is just a question of when. Safety systems are pushed aside when they are inconvenient - cars have to beep at you if you do not wear your seatbelt. Volkswagen installed defeat devices to evade EPA emissions tests because the controls hurt performance. NASCAR cheating is historic. Where money and ego are involved, people just do whatever they damn well please.]

p.5 We may not be able to escape surprise but we can appreciate how it works, and by so doing, we can develop strategies that will leave us better off. Unfortunately, no matter how much better off we become, there is no magic elixir that will turn the invisible visible, or that will make known all that cannot be predicted, or that will always allow us to react to change in a timely fashion. The best that we can do is to increase the range of what we can see. We can give ourselves longer lead times to react. We can find ways to glimpse the hazy outlines of what was previously invisible.

[JLJ - How about constructing diagnostic tests that make the invisible, interlocking forces at work visible?]

p.5 In fact, surprise comes in different flavors. "Unexpected" can be divided into events that might have been foreseen had proper mechanisms been in place, and events that can never be foreseen. Different procedures are needed to buffer against each of these two types of surprise.

[JLJ - Yes, but we need only see enough of the unexpected to make progress in whatever it is we want to accomplish. We do not need to cross the goal line, so to speak, only move 5 yards down the field.]

p.7 I am convinced that we will all be better evaluators if we amass as many cases as possible and use the collection to build a corpus of knowledge.

p.9 Evaluation is intimately bound up with innovation both because innovation is often the subject of evaluation, and because the act of evaluation can itself be regarded as an innovation.

p.11 I believe that, above all, evaluation must be practical. It must inform action in real-world settings... evaluation must guide practical action.

[JLJ - Practical action - perhaps even in times of extreme crisis - must be based on experienced, proven tricks that work, which themselves rest on strategic schemes for assessing characteristics, and ultimately on diagnostic tests and estimates of the likelihood of counter-schemes. Evaluation, you say? Show me your (necessarily simplified) evaluation scheme, and I will find a sophisticated way to corrupt it, and therefore misdirect your 'practical action.']

p.11 As Jarvie (1972) puts it: "The aim of technology is to be effective rather than true, and this makes it very different from science."

p.11 I believe that evaluation can be practical because while change is ever present and the future is always unknowable, time horizons can be set at which a reasonable degree of certainty exists and within which evaluation can provide a reliable guide for further action.

[JLJ - Ultimately, we evaluate in order to guide (practical, or effective) action. We do this instinctively as part of our human nature, using the full scope of our experience and analytical capacity.]

p.11 My view is that programs represent investments in a course of action that are designed to achieve specific objectives.

p.11 evaluation must conform to a basic assumption that is made by program planners: that their programs as designed will make a difference.

p.14 In this chapter I tried to make the case that we would be better off if we moved our efforts to deal with surprise from the realm of crisis management to the realm of systematic inquiry.

p.17 So far I have used the terms "unexpected" and "surprise" as if they were simple and unitary, that is, that either something happened that was expected, or that something happened that was not expected. Either we are surprised, or we are not. In fact, surprise can come in shades of gray.

[JLJ - Surprise...! Surprise can also be withdrawn. For example, for years I was surprised at how poorly my lawn responded to constant watering and often expensive lawn care. Then I learned that the heavy clay soil underneath needs amendments and aerating, two treatments I had not been providing. In sum, I am no longer surprised at the lack of progress, and I am now providing new types of treatments.]

p.19 In contrast to unexpected but foreseeable events, there are events that are not just difficult to discern, but for theoretical reasons are impossible to discern. These are the kinds of events that flow from the behavior of complex systems - nonlinear interactions, feedback loops of different lengths, evolutionary adaptation to continually changing circumstances, sensitive dependence on initial conditions, and rich cross-linkages. Because of these dynamics, complex systems can exhibit self-organization, emergence, and phase shift behavior as they continue to adapt to ever-changing circumstances.

p.22 It would help... if we could distinguish problems that were flights of fancy... from more realistic possibilities that we might want to take seriously... we would do better evaluation if as many potential surprises as possible were driven as far as possible toward the "foreseeable" end of the continuum.

p.22 I visualize the situation as an exercise in herding invisible cats. We are faced with invisible cats wandering within poorly defined boundaries. Our challenge is to herd as many of them as possible into a particular corner of the territory.

p.23 Can we know in advance how much surprise to expect? ...The question is both unanswerable and important. It is unanswerable because innovations are embedded in complex systems, complete with interacting feedback loops of various lengths, dense network nodes, self-organization, shifting environments, co-evolution of programs and their fitness landscapes, and all the other aspects of systems that induce uncertainty and make prediction impossible... The question is important because it has implications for methodology.

p.23 If we really cannot predict how a program will unfold, or what impact it will have, or how the program will affect an evaluation plan, then we should favor evaluation plans that combine "quick hit" methods (short cycle time from design to findings), with an emergent design approach to inquiry. Short evaluation cycle time and emergent design are not the same. [JLJ - bullets below added for readability]

  • Short-cycle approaches can provide rapid feedback to help planners and evaluators adjust to changing circumstances. There is nothing to preclude these designs from being rapid and fixed.
  • Emergent designs can be used in long-term evaluation. (In fact, that is where they are most useful because the more protracted the evaluation horizon, the greater the likelihood that programs, or their boundaries, or their environments, will change.) Because emergent designs are built to be flexible, they can be used to explain why a program is operating as it is at any given point in time, and how it got there.

p.25 How much surprise should we expect? The answer is both important and unanswerable. So what to do? As in many unanswerable questions, it is still useful to seek an answer. The struggle itself provides guidance.

p.26 There are techniques we can apply to minimize surprise and to maximize adaptability to what we cannot predict.

p.27 Design, implementation, and management are exercises in foresight. They involve actions taken now with the intent of either eliciting a future state, or responding to likely future states.

p.28 Much of what we do is in settings where a program is not proven, where outcomes are uncertain, and where alternate approaches are sensible. This is the realm of innovation that cannot be understood or controlled with traditional approaches to planning and management (Stacey, 1992).

p.33 For minimal surprise we need to combine fidelity with robustness.

p.34 Time erodes predictability because time acts as a window that continually widens to admit the effects of shifting environments, longer feedback loops, changes in programs' internal operations, new customer needs, shifts in stakeholder demands, and the multitude of other factors that make open systems unpredictable.

p.34 When is the probability of evaluation surprise high? The answer is that surprise increases with the amount of R&D content and the length of time between action and observation. The likelihood of surprise decreases with fidelity to proven program models, robustness, and knowledge of operating context.

p.35 As evaluators, we need to understand both the conditions in which uncertainty is high and the conditions in which uncertainty poses difficulties for doing good evaluation. Once those situations are identified, it becomes possible to develop appropriate evaluation tactics.

p.35 Whatever else evaluation requires, it requires data and it requires methodology... one way to ask the question of when evaluation is threatened by unanticipated change is to ask the question: Under what circumstances will program change make it difficult for the evaluator to use data and methodology in the service of assessing a program?

p.39 In methodological terms, rapid-cycling evaluation design puts less of a burden on assessing causal relationships. The burden is lower because the shorter the time between measurements, the less chance there is for outside factors to affect observations, and controlling for outside influences is one of the main reasons for having to keep an evaluation design intact over time.

[JLJ - Ok, let's say we can pin one of our opponent's pieces. How much should we value this? Rather than obsessing over things like a 'pin is worth 1/6 of a pawn', we simply generate promising moves and 'see' how damaging this 'pin' becomes down the road. Of course, at the end of our search-exploration, we can assess a 1/6 pawn penalty on a 'pin' if we wish, and if self-run tournaments of many games indicate that such a penalty has performance value. It is unlikely that our exact search path will be followed in the actual game played. Our 'pin' becomes absorbed by, and precisely evaluated to be worth, the intelligently-assessed, typical played-out consequences of the pin.]
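
A minimal sketch of that idea in Python - my illustration, not Morell's. The hooks generate_promising_moves, static_eval, and apply_move are hypothetical placeholders, not a real chess-engine API: the worth of a feature such as a pin is taken from the averaged consequences of a few played-out promising lines rather than from a fixed static bonus.

    from statistics import mean

    def consequence_value(position, generate_promising_moves, static_eval,
                          apply_move, depth=3, breadth=4):
        # Score a position by shallow exploration of a few promising moves.
        # Any feature (a pin, an open file, ...) is "worth" whatever its
        # played-out consequences turn out to be.
        if depth == 0:
            return static_eval(position)
        moves = generate_promising_moves(position)[:breadth]
        if not moves:
            return static_eval(position)
        return mean(
            consequence_value(apply_move(position, m), generate_promising_moves,
                              static_eval, apply_move, depth - 1, breadth)
            for m in moves
        )

    if __name__ == "__main__":
        # Toy demonstration: "positions" are numbers and "moves" nudge them.
        gen = lambda p: [+1, -1, +2]
        ev = lambda p: float(p)
        push = lambda p, m: p + m
        print(consequence_value(5, gen, ev, push))

If self-run tournaments then show that a small residual static penalty for the pin still has performance value, it can be layered on top of this search-based value, as the note above suggests.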

p.40 Earlier I claimed that two essential elements of evaluation are data and methodology.

[JLJ - I would say the two essential elements of evaluation are: prior experience making practical evaluation, and prior experience learning from successes and mistakes, in the evaluations made in the past. An evaluation is useful only if it is generated from an intelligent experience, which 'knows' practically which tea leaves to read.]

p.48 Because of its explanatory power, theory can be useful in pointing us in the right direction to get a sense of what might happen.

p.51 The field of economics provides a set of concepts that can reveal unique insight into a very wide range of human and system behavior.

p.57 the choice of particular theories does matter over the long run. The choice matters because, over time, any particular theoretical orientation will guide an evaluation in a particular direction.

p.63 One way to reduce the probability that an unexpected event will spoil an evaluation is to reduce the “distance” between the innovation being evaluated and the outcomes that are measured... Decreasing “distance” between implementation and measurement decreases opportunity for system forces to come into play.

p.71 The methods I discuss here begin to have a greater range of application. They are applicable along the evaluation life cycle, and begin to shift from an emphasis on advance planning to early detection of the need for change. I begin with a discussion of detecting and using leading indicators of change. I conclude with a discussion of system-based logic modeling, which can be useful in its own right, and also valuable as a method of organizing leading indicator data.

p.72 A great deal of change is preceded by a collection of signs and indicators that, if detected and interpreted correctly, would presage the impending new circumstances... Forecasting is needed because the essence of dealing with unexpected events is to detect impending change as early as possible. Thus any method to help with early detection is worth consideration.

p.72 Planning is a prospective exercise. It seeks to answer the question: What can we do now to bring about a desired state in the future?

p.73 Assumption-Based Planning (Dewar, 2002) begins by identifying critical assumptions that must be realized if a plan is to be effective. It then goes on to identify load-bearing assumptions (those that are susceptible to breakage by future events), signposts (leading indicators of potential problems), shaping actions (to prepare for the possibility of failure)... These planning methods can be useful in detecting evaluation surprise by applying them continually (or at least at defined intervals) as an evaluation proceeds; that is, they need to be embedded in a systematic program-monitoring process.
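
A minimal sketch, in Python, of what embedding Assumption-Based Planning in a recurring monitoring process might look like - my illustration, not Dewar's or Morell's. The Assumption fields, the example signposts, and the indicator names are hypothetical.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Assumption:
        description: str                  # a load-bearing assumption of the plan
        signpost: Callable[[Dict], bool]  # returns True when the leading indicator warns
        shaping_action: str               # what to prepare or do if the signpost fires

    def review_assumptions(assumptions: List[Assumption], indicators: Dict) -> List[str]:
        # One monitoring cycle: return the shaping actions whose signposts fired.
        return [a.shaping_action for a in assumptions if a.signpost(indicators)]

    if __name__ == "__main__":
        plan = [
            Assumption("Sites keep reporting monthly data",
                       signpost=lambda ind: ind["reporting_rate"] < 0.8,
                       shaping_action="Line up an alternate data source now"),
            Assumption("Staff turnover stays low enough to preserve fidelity",
                       signpost=lambda ind: ind["staff_turnover"] > 0.25,
                       shaping_action="Re-check fidelity measures; widen observation"),
        ]
        # Run at defined intervals as the evaluation proceeds.
        print(review_assumptions(plan, {"reporting_rate": 0.7, "staff_turnover": 0.1}))

Run at each defined interval, the list of fired shaping actions is what turns the one-time planning exercise into the embedded monitoring process described above.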

p.78 I think about organizing my evaluation work in a manner that will help me detect surprise as early as possible

p.78 Weick and Sutcliffe (2001) [JLJ - Managing the Unexpected] offer five characteristics of organizations with reputations for being able to react successfully to surprise. First, they have a preoccupation with failure... Second, there is a reluctance to simplify... Third, they maintain high degrees of situational awareness within their organizations... Fourth, high-reliability organizations are committed to keeping errors small and to developing alternate methods of keeping their organizations going... Finally, in high-reliability organizations there is respect for expertise.

p.83 As we delve into the intricacies of agile evaluation, it is worth keeping in mind what part of the evaluation business I am in. Put overly simply, evaluators can try to answer one of two questions: (1) How can I provide information to help planners innovate in constructive ways as needs and circumstances evolve? and (2) How can I provide information to help planners know how well their predetermined decisions are working? ...The data available to an evaluation define the boundaries of what can ultimately be said about the program. The choices are critical.

p.86 “Burden” refers to the amount of work needed to collect data.

p.87 By “methodology” I mean the logic in which observations are embedded... some... logic is needed to interpret the data. By “agile” I mean the ability to change quickly in the face of new circumstances.

p.87 I regard evaluation as... a collection of processes, resources, and structures constructed in a manner that allows both a logic of analysis to exist and a data acquisition mechanism to feed the analysis.

p.90 The real question is whether enough of the design is agile enough to allow adaptation to unforeseen circumstance. By thinking of evaluation in this way we will be led to deliberate efforts at identifying specific aspects of our evaluations that affect their agility. We could work at making our designs more, rather than less, agile.

p.90 When I devise evaluation plans I find it useful to think in terms of sequencing and cross-linkage timing.

p.95 As with all the tactics I have suggested, CI [JLJ - Continuous Improvement] has its limitations for helping evaluators deal with surprise. It is not useful in cases of long latencies between program action and impact, or when an external assessment of a program is required... rapid feedback is often appropriate... the essential logic of CI is to rapidly respond to emerging trends. As a methodology, CI is inherently agile.

p.95-96 The arguments above imply that there is a time at the beginning of an evaluation when all efforts should be made to anticipate as much of the “foreseeable” as possible, and thereafter, effort should be focused on early detection of surprise and adaptation to it... Without this view, one is guaranteed to get into trouble... Evaluators do have more choice when data collection is just beginning than they do after all data are collected and stakeholders' expectations are firmly set.

p.98 Despite all our efforts to anticipate and minimize surprise, some surprise is inevitable because programs and evaluations are embedded in complex systems.

p.104 Any change in time, complexity, or resources has the potential to affect some of the others.

p.106 Complexity increases the chance that something will go awry.

p.113 Any tactic that can be invoked to minimize the impact of surprise on an evaluation carries the risk of making things worse. This risk exists because the very same tactics that can minimize surprise or increase adaptability to it might also adversely affect the time needed to implement an evaluation, the complexity of the evaluation, or the resources needed to carry out the evaluation

p.115-116 The truth is that when we plan our evaluations we are looking into a future... in which chance events can change what seemed like a sure thing. When considering events in a complicated future, our vision is dim, our strategic and tactical choices are many, and we are always gambling. We try to identify all the relevant critical choices, but if we are too good at identifying those choices, we end up dealing with so many eventualities that we risk producing an unworkable evaluation.

p.118 the essential purpose of CI... is to improve continuously by continually monitoring system behavior, implementing corrective actions, and testing the consequences of the innovation. To this end CI has developed tools and analysis logic that will serve evaluators' needs for agility in the face of unexpected change (Morell, 2000).

p.119 one might argue that CI is designed to handle unintended consequences. It is the method that should be chosen when one knows in advance that: (1) there is uncertainty about what will happen, (2) program design allows for rapid midcourse corrections, and (3) determining longer-term program effects is not a priority.
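
A minimal sketch, in Python, of that essential CI logic - monitor continually, correct midcourse when a trend emerges, and let the next cycle test the consequences of the correction. The metric, threshold, and correction below are hypothetical, not from Morell.

    def continuous_improvement(observe, correct, cycles=6, threshold=0.1):
        # Short monitoring cycles: act on emerging trends rather than waiting
        # for a long-latency, end-of-program assessment.
        history = []
        for cycle in range(cycles):
            value = observe(cycle)            # rapid-feedback measurement
            history.append(value)
            if len(history) >= 2 and history[-1] - history[-2] > threshold:
                correct(cycle, value)         # midcourse correction
                # the next observation tests the consequences of the correction
        return history

    if __name__ == "__main__":
        import random
        random.seed(0)
        state = {"bias": 0.0}
        observe = lambda cycle: 0.5 + state["bias"] + random.uniform(-0.05, 0.2)
        def correct(cycle, value):
            print(f"cycle {cycle}: upward drift detected at {value:.2f}, adjusting")
            state["bias"] -= 0.1
        print([round(v, 2) for v in continuous_improvement(observe, correct)])

As the quoted conditions suggest, a loop like this only helps when midcourse corrections can actually be made quickly and longer-term effects are not the priority.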

p.121 The fact that so much surprise showed up in later stages speaks to the need for more systematic efforts to anticipate surprise early, to scout for impending surprise as evaluation proceeds, and to use agile designs.

p.125 If we know how surprise is distributed across the social/organizational landscape, we will have some guidance as to where to direct our efforts when... monitoring programs for signs of change.

p.146-147 detecting problems early is a good idea. It is sobering, however, to see the actual differences between early and late detection in real evaluations that have been executed in the field.

p.147 Use of pilots and feasibility assessments early in an evaluation strike me as the essence of good practice.

p.149 As with any surprise, the more lead time the evaluator has to take corrective action, the better.

p.156 Pilot and feasibility tests seem to work as a technique for detecting surprise when the analysis works at the level of individual behavior and examines how people interact with an evaluation.

p.172 The previous sections were based on the notion that surprise might be anticipated if only the right methods were applied. For truly unforeseeable change, however, it is only in retrospect that one can identify the assumptions that, had they been anticipated, would have kept an evaluation out of trouble. For these kinds of situations, minimizing trouble requires early detection and quick response.

[JLJ - Minimizing trouble requires intelligently identifying the portfolio of events that can emerge from the predicament, and intelligently wielding an equally responsive portfolio of tools and techniques that can - in an effective, agile fashion - respond in a way that maintains or advances a position as time and interlocking forces unfold.]

p.186 If we are going to measure unintended outcomes, agility is necessary.

[JLJ - Maybe foresight as well. You can't measure what isn't there. You can make intelligent predictions, such as ones based on last year's outcomes, and then make practical estimates based on these.]

p.187 Predicting the need for agile evaluation is impossible. It is impossible because we operate in an uncertain environment where the degree of uncertainty is hard to estimate and where we can never be sure if the dynamics of complex systems are at play.

p.187 If causality is important, and if real-time observation alone won't do the trick, and if comparison data won't be available, then we better take steps to build agility into our evaluation.

p.188 By its essence (i.e., that it is unexpected), specifics about what will happen cannot be included in evaluation planning... Certainly knowing the details would be helpful, but the nature of "agility" is that the details do not have to be known.

p.189 With respect to the difficulty of achieving an adequately agile evaluation, the key issue is the degree to which it is possible to specify program outcomes in advance.

[JLJ - Why even try to specify exact program outcomes in advance...? Why not instead adopt a posture, or make certain investments of resources, and then monitor the critical success factors, the guides to action, and see what tomorrow brings?]

p.190 agile evaluation - a system in which methodology, context, and program combine to give the evaluation more (or less) potential to assess program outcomes that were not envisioned at the beginning of the evaluation exercise.

p.191 Looking at the design alone will not tell us whether we should invest in increasing the agility of our design.

[JLJ - We would need to create diagnostic tests involving developmental scenarios and would need to estimate the potential to absorb change and 'go on,' or alternately - become defeated by it...]

p.192 If we do need to design for agility, how difficult will it be to do so?

p.194 I came to appreciate just how much evaluation is a craft – an endeavor requiring special skills and long practice... our work still comes down to judgment about the application of specific tools to specific circumstances. We have guidelines for which tools to pick and how to apply them.

Program Evaluation in Social Research, Morell, 1979, Evaluation as Social Technology, Ch. 5, p.94-124:

p.95 Evaluators strive to make their work as useful for decision making as possible.

p.95 a good many of the problems which impede the relevance of social research will be lessened by reconceptualizing evaluation as a technological rather than a scientific process.

p.96 The basic goal of science is to discover what is true (Popper 1965, chap. 3).

[JLJ - What is true does not tell you what to do. E=mc2 does not tell you whether or not to have shoulder surgery. Science has been used to develop surgical tools which have a certain effect when wielded by a skilled surgeon. You might still awaken with nerve damage or an infection. A spark of innovation is still required to create the surgical tools, and a certain amount of experience is still required to perform the procedure. The basic goal of science should be to generate plausible, rational answers to questions - ideally supported by experiment, usually by 'proclamation' from technical journals - especially to questions previously answered with 'we do not know,' or ones that yesterday's generation of scientists answered poorly. We as scientists are under the impression that the answers we provide are true; we just cannot at present imagine experiments or interpretations which might tell us otherwise. Elsewhere I argue that the truth is, there are only truth claims.]

p.100 There are important differences between science and technology, and many problems of relevance will be solved or lessened if social research is based on the technological model.

p.101 When guides to action are being sought, practicality becomes a central, organizing principle of an entire research effort... The search for guides to action might actually be hampered by seeking too much truth.

[JLJ - Practically we have a repertoire of tricks that work, and we select among them, in the moment.]

p.103 The technologist who operates on the scientific model is likely to invest time, effort, and resources in pursuing levels of accuracy which are unnecessary and, perhaps, even misleading as guides to practical action.

p.104 According to Agassi (1968) these differences can lead to important practical problems if the wrong model is followed. In science, standards of criticism can be raised as high as possible in any given situation. The more stringent the criticism, the better, as it is truth which is at stake. In technology, however, it is possible to raise standards of criticism too high, and in so doing, impede the implementation of needed innovations.

p.104-105 Wiesner (1970, p. 85), in a discussion about making psychology more relevant to the problems of society, claims that:

Technologists apply science when they can, and in fact achieve their most elegant solutions when an adequate theoretical basis exists for their work, but normally are not halted by the lack of a theoretical base. They fill in gaps by drawing on experience, intuition, judgment and experimentally obtained information.

p.105 The difference between science and technology is best summed up by Jarvie (1972) who wrote: "The aim of technology is to be effective rather than true, and this makes it very different from science."

[JLJ - Yes, but to develop technology you need to use science, and to advance science, you need to develop new technology.]

p.105 Although science and technology share many surface similarities, an investigation of the logical structures which underlie each endeavor reveals quite clearly that they are not one and the same.

p.110 Too much precision is useless and distracting because it does not reflect the actual amount of influence which can be brought to bear on a problem. 

[JLJ - It would be better to say that precision often has diminishing returns...]

p.110 As Agassi (1968) has shown, standards of criticism in science should be raised as high as possible. If this is done in the technological field, however, it is unlikely that decision makers will ever have enough confirmatory evidence to be willing to risk a new solution to a persistent problem. Thus technologists must constantly consider which level of evidence will guard against the choice of poor solutions but encourage experimenting with solutions which are among the better alternatives to a problem.

p.110 search and decision strategies in science are aimed at developing theory which is true, at honing information to the highest possible level of accuracy, and at being as critical as possible of new ideas and of the tests of those ideas. All of these three objectives may be dysfunctional in a technological sense. In technology, accurate prediction is more important than truth, levels of accuracy are situationally determined, and the function of research has a much stronger practical confirmatory function than it does in science.

[JLJ - You can aim for the truth, but practically we will settle for what seems true, and for what has emerged from a caldron of debate. In the business world, there is truth and there is technology, but there is ultimately the business case for acting, for spending dollars, and this is an anticipated rate of return.]

p.111 there is a correspondence between the goals and intellectual tools of everyday decision making, and the technological approach to problem solving. This correspondence is lacking in science, which generates rewards not for how well solutions operate within practical constraints, but for how well those practical constraints are transcended in the pursuit of truth and theory development.

p.112 The aim of technology is... to seek novel solutions which are the best possible under the limitations imposed. Scientists are rewarded for transcending those limitations. Technologists are rewarded for working successfully within them.

p.116 Valid developmental research can be conducted if three conditions obtain. First, variables must be chosen for the express purpose of being powerful enough to make a discernible difference in the settings where they are destined to operate... Second, the research setting must include the most powerful or important factors which might mitigate against the proper action of experimental variables or programs... Finally, research settings must be simplified enough to allow observation of the relationships among important variables. If these conditions are met, there is a good chance of obtaining valid information which has a reasonable chance of directing meaningful action in real world settings.

p.118 [Ackoff, Gupta, and Minas] technologists have the luxury of setting limits on the amount of needed accuracy. This luxury is not shared by scientists who, in their search for truth, must try always to be as accurate as possible.

p.119 technology is more responsive than science to the task of developing practical innovations.

p.119 evaluation must generate information which will be useful to decision makers.

[JLJ - Evaluation often sits within a framework of 'what ought I do, now, at present'. Without a need for practical action, evaluation loses most meaning. An evaluation can be paired with an explanation of the methods, and ultimately should generate an argument that it is useful to decision makers, and that there are costs associated with ignoring it.]

p.120 Because of evaluation's highly specific and applied focus, the organizing theme of evaluation research must be technological.

p.121 Bunge... (1966, p. 128)... knowledge considerably improves the chances of correct doing, and doing may lead to knowing more (now that we have learned that knowledge pays), not because action is knowledge but because, in inquisitive minds, action may trigger questioning.

p.123 evaluation is not "science or technology," as both are indispensable aspects of efforts to achieve practical solutions and workable innovations.

[JLJ - Evaluation is a means to an end.]