p.2 Most observers of judgment and decision making accept the following two-part
argument: First, to make effective decisions, a judge should take into account every cue that is diagnostic or predictive of the outcome. In complex real-world environments, there are numerous sources of diagnostic information. It follows that experts should base their judgments on many cues.
Second, most decision makers use simplifying heuristics when making judgments (Tversky & Kahneman, 1974). This leads to reliance on suboptimal amounts of information and on inappropriate sources. As a result, decision makers generally base their judgments on a small number of cues, often used suboptimally.
p.2-3 Not only do experts make use of little information; the evidence also suggests
their judgments can be described by simple linear models. Despite extensive efforts to find nonlinear (configural)
cue utilization (e.g., Hoffman, Slovic, & Rorer, 1968), almost all the variance in judgments can be accounted
for by a linear combination of cues (Goldberg, 1968). "The judgments of even the most seemingly configural
clinicians can often be estimated with good precision by a linear model" (Wiggins & Hoffman, 1968, p. 77). As
Dawes and Corrigan (1974, p. 105) conclude, "the whole trick is to decide what variables to look at and then to know
how to add."
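In other words, the claim is that a judge's rating is well approximated by a weighted sum of the cues, J ≈ b0 + b1·x1 + ... + bk·xk. The following is a minimal sketch of fitting such a "paramorphic" linear model to a judge's holistic ratings; the data, cue count, and weights are all simulated for illustration and are not drawn from any cited study.

```python
# Sketch: regress a (simulated) judge's holistic ratings on the cues and
# check how much variance a simple weighted sum recovers.
import numpy as np

rng = np.random.default_rng(0)
n_cases = 200

# Three hypothetical cues shown to the judge (standardized, simulated).
cues = rng.standard_normal((n_cases, 3))

# Simulate a judge who weights the cues, plus some inconsistency (noise).
true_weights = np.array([0.6, 0.3, 0.1])
judgments = cues @ true_weights + 0.3 * rng.standard_normal(n_cases)

# Fit the linear model: judgments ~ intercept + cues.
X = np.column_stack([np.ones(n_cases), cues])
beta, *_ = np.linalg.lstsq(X, judgments, rcond=None)

# R^2: proportion of judgment variance captured by the linear model.
fitted = X @ beta
r_squared = 1 - np.var(judgments - fitted) / np.var(judgments)

print("estimated cue weights:", beta[1:].round(2))
print(f"R^2 of linear model: {r_squared:.2f}")
```

Even with the built-in inconsistency, the fit recovers most of the variance in the simulated ratings, which is the pattern Goldberg (1968) describes for real judges.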
Consequently, expert judgments often lack the complexity expected from superior decision makers, either in
the number of significant cues or in the model used to describe their judgments. These findings paint a picture of experts
making judgments in much the same way as naive subjects, with little evidence of any special abilities (Kahneman, 1991).
p.4 It appears that experts and novices differ in their ability to discriminate between relevant
and irrelevant information... Where experts differ from novices is in what information is used,
not in how much.
p.5-6 What separates the expert from the novice, in my view, is the
ability to discriminate what is diagnostic from what is not. Both experts and novices know how to recognize and make
use of multiple sources of information. What novices lack is the experience or ability to separate relevant from irrelevant
sources. Thus, it is the type of information used – relevant vs. irrelevant – that distinguishes between
experts and others.
The problem for novices is that information diagnosticity is context
dependent. What is relevant in one context may be irrelevant in another. Only a highly skilled judge
can determine what is relevant in a given situation – precisely the skill that distinguishes experts from non-experts.
Thus, it is the ability to evaluate task context that is central to expertise.
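To make the context dependence concrete, here is a toy simulation in which the same cue is strongly diagnostic in one context and worthless in another; the contexts, cue, and validity values are invented purely for illustration.

```python
# Sketch: the diagnosticity of a single cue depends on context.
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Validity = correlation between cue and outcome assumed in each context.
for context, validity in [("context A", 0.8), ("context B", 0.0)]:
    cue = rng.standard_normal(n)
    noise = rng.standard_normal(n)
    outcome = validity * cue + np.sqrt(1 - validity**2) * noise
    r = np.corrcoef(cue, outcome)[0, 1]
    print(f"{context}: cue-outcome correlation = {r:.2f}")
```

A novice who learned the cue's value in context A and carries it unchanged into context B ends up weighting an uninformative cue; the expert's skill, on this view, is knowing when the validity has changed.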
This view has three implications for research on judgment and decision making:
First, the assumption that experts should use more information than novices in making a decision is incorrect. The number of significant cues does not reflect degree of expertise. As the studies summarized here report, mid-level and even entry-level subjects often show as many significant cues as experts, or more. Indeed, there is evidence that novices may rely on too much information and that experts are better because they are more selective (Shanteau, 1991). Thus, the Information-Use Hypothesis is inappropriate.
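To make the cue-counting concrete: however a study obtains the count (ANOVA main effects on factorial cue designs, or t-tests on regression coefficients), the bookkeeping is similar. The regression-based sketch below uses simulated judges; the weights, cue count, and significance threshold are assumptions for illustration, not values from the studies cited.

```python
# Sketch: count "significant cues" for a selective vs. a diffuse judge.
import numpy as np
from scipy import stats

def count_significant_cues(cues, judgments, alpha=0.05):
    """Count regression coefficients significantly different from zero."""
    n = len(judgments)
    X = np.column_stack([np.ones(n), cues])
    beta, *_ = np.linalg.lstsq(X, judgments, rcond=None)
    resid = judgments - X @ beta
    dof = n - X.shape[1]
    sigma2 = resid @ resid / dof                 # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)        # coefficient covariance
    t = beta / np.sqrt(np.diag(cov))             # t-statistic per coefficient
    p = 2 * stats.t.sf(np.abs(t), df=dof)        # two-sided p-values
    return int(np.sum(p[1:] < alpha))            # skip the intercept

rng = np.random.default_rng(2)
cues = rng.standard_normal((100, 6))

# A selective "expert" who uses only two of the six cues...
expert = cues[:, :2] @ np.array([0.7, 0.5]) + 0.3 * rng.standard_normal(100)
# ...versus a "novice" who spreads weight thinly over all six.
novice = cues @ np.full(6, 0.2) + 0.3 * rng.standard_normal(100)

print("expert significant cues:", count_significant_cues(cues, expert))
print("novice significant cues:", count_significant_cues(cues, novice))
```

On this toy setup the diffuse novice shows more significant cues than the selective expert, which is exactly why the raw count is a poor index of expertise.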
Second, the crucial difference between mid-level and advanced experts is the ability to evaluate what information is relevant in a given context. The problem, however, is that analysis of context is difficult, even for experienced professionals (Howell & Dipboye, 1988). Nonetheless, top experts, through insights gained from experience, know which cues are relevant and which are not. An interesting question for future research is how this experience is translated into the ability to distinguish relevant from irrelevant information.
One possibility pointed out by Neale and Northcraft (1989) in an organizational setting is that experience leads experts to
develop a "strategic conceptualization" of how to make rational decisions.
Third, these arguments imply that efforts to analyze experts across domains
are fruitless. The Information-Use Hypothesis reflects an effort to evaluate expertise generically without reference to specific
decision contexts. As the studies cited here show, the hypothesis does not work. This illustrates that it is difficult, if
not impossible, for decision researchers to draw generalizations about experts without reference to specific problem domains.
In future discussions of experts, any conclusions should be verified in more than just one domain...
Instead, researchers should ask, "How do experts know what kind of information to use?"
p.6 In their paper on linear models, Dawes and Corrigan (1974, p. 105) concluded that "the whole trick is to decide what variables to look at and then to know how to add." Although this sounds simple, it can be quite difficult to accomplish... By concentrating on the number of significant cues, observers may have overlooked what makes experts special – their ability to evaluate what is relevant in specific contexts. It is the study of that skill, not the number of cues used, that should guide future research on experts.