p.4 According to a central tradition in Western philosophy, thinking
(intellection) is essentially rational manipulation of mental symbols (viz., ideas)... Computers... can manipulate
arbitrary "tokens" in any specified manner whatsoever; so apparently we need only arrange for those tokens
to be symbols, and the manipulations to be specified as rational, to get a machine that thinks... computers
actually do something very like what minds are supposed to do. Indeed, if that traditional theory is correct, then our imagined
computer ought to have "a mind of its own": a (genuine) artificial mind.
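A minimal sketch of this "rational manipulation of tokens" idea, in Python (my illustration, not Haugeland's): the tokens below are bare strings with no intrinsic meaning, and the one rule implemented, modus ponens, rearranges them purely by their form. All names here are hypothetical.

    # Tokens are uninterpreted strings; the rule fires on their shape alone.
    def derive(facts, rules):
        """Apply modus ponens until no new tokens appear:
        from token P and a rule (P, Q), add token Q."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in rules:
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
        return derived

    facts = {"socrates_is_a_man"}
    rules = [("socrates_is_a_man", "socrates_is_mortal")]
    print(sorted(derive(facts, rules)))
    # ['socrates_is_a_man', 'socrates_is_mortal']

If the tokens are read as symbols about the world and the rule is truth-preserving, the machine's blind shuffling starts to look like inference - which is exactly the traditional theory's wager.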
p.4-5 According to the symbol manipulation theory, intelligence
depends only on a system's organization and functioning as a symbol manipulator... if Artificial Intelligence really
has... much more to do with abstract principles of mental organization, then the distinctions among AI, psychology, and even
philosophy of mind seem to melt away. One can study those basic principles using tools and techniques from computer science,
or with the methods of experimental psychology, or in traditional philosophical terms - but it's the same subject in each
case... For [this] new "unified" field, [experts] have coined the name cognitive science.
p.11 given a system capable of knowing, how can we make it capable of acquiring?... AI has discovered that knowledge itself is extraordinarily complex and difficult to implement -
so much so that even the general structure of a system with common sense
is not clear. Accordingly, it's far from apparent what a learning
system needs to acquire; hence the project of acquiring some can't get
off the ground [5].
In other words, Artificial Intelligence must start
by trying to understand knowledge (and skills
and whatever else is acquired) and then, on that basis, tackle
learning... But it does not appear that learning is the most basic problem,
let alone a shortcut or a natural starting point.
p.12 What gets programmed directly is just a bunch of general information and principles, not unlike what teachers instill
in their pupils. What happens after that, what the system does with all
this input, is not predictable by the designer (or the teacher, or anyone else). The most striking
current examples are chess machines that outplay their programmers, coming
up with brilliant moves that the latter would never have found. Many people are amazed by this fact; but if you reflect that invention
is often just a rearrangement (more or less dramatic) of previously available
materials, then it shouldn't seem so surprising.
p.50 formal systems are self-contained; the "outside world" (anything not
included in the current position) is strictly irrelevant. For instance, it makes no difference to a chess game, as such, if
the chess set is stolen property [perhaps it was purchased at a pawn shop - JLJ] or if the building is on fire or if the fate
of nations hangs on the outcome - the same moves are legal in the same position, period.
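This self-containedness is easy to exhibit in code: a legality check is a pure function of the position and nothing else. A minimal sketch (my illustration, with captures and checks omitted for brevity):

    def rook_move_is_legal(position, src, dst):
        """position maps squares like 'a1' to piece names. Nothing outside
        this dictionary (ownership, fire, the fate of nations) can affect
        the answer."""
        if position.get(src) != "rook":
            return False
        sf, sr = src[0], int(src[1])
        df, dr = dst[0], int(dst[1])
        if sf != df and sr != dr:        # rooks move along ranks or files
            return False
        files = "abcdefgh"
        if sf == df:                     # squares strictly between, same file
            between = [f"{sf}{r}" for r in range(min(sr, dr) + 1, max(sr, dr))]
        else:                            # squares strictly between, same rank
            lo, hi = sorted((files.index(sf), files.index(df)))
            between = [f"{files[i]}{sr}" for i in range(lo + 1, hi)]
        return all(sq not in position for sq in between) and dst not in position

    print(rook_move_is_legal({"a1": "rook"}, "a1", "a8"))                 # True
    print(rook_move_is_legal({"a1": "rook", "a4": "pawn"}, "a1", "a8"))   # False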
p.57-58 Consider the difference between accidentally messing up a chess game and a billiards game. Chess
players with good memories could reconstruct the position perfectly (basically because displacing the pieces by fractions
of an inch wouldn't matter). A billiards position, by contrast, can never be reconstructed perfectly... The digitalness of
formal systems is profoundly relevant to Artificial Intelligence... Formal systems are independent of the medium in which
they are "embodied".
p.83 Obviously the chooser [of the move to play in a game of chess] needn't always find the best move; even
the greatest champions don't play perfect chess. The goal, rather, is a system that chooses relatively well most of
the time. In other words, an infallible test for the better move (i.e., an algorithm) is not really required; it
would be enough to have a fairly reliable test, by which the machine could usually eliminate the worst choices and settle
on a pretty good one. Such fallible but "fairly reliable" procedures are called heuristics (in the AI literature)...
There are many rules of thumb for better chess.
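One such rule of thumb, reduced to code (my simplification, not the book's): prefer moves that win material. The piece values are the conventional ones; the test is fallible - it knows nothing about position or safety - yet it usually eliminates the worst choices, which is all a heuristic promises.

    PIECE_VALUE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

    def heuristic_score(move):
        """move = (piece_moved, piece_captured_or_None); score material won."""
        _, captured = move
        return PIECE_VALUE.get(captured, 0)

    candidates = [("knight", None), ("pawn", "queen"), ("bishop", "pawn")]
    print(max(candidates, key=heuristic_score))
    # ('pawn', 'queen') - not infallible, but fairly reliable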
p.99-100 Formal systems can be interpreted; their tokens can be assigned meanings and taken as symbols
about the outside world. This may be less than shocking news by now; but a century ago the development of interpreted
formal systems was a major innovation, with revolutionary consequences throughout logic and mathematics. Moreover, if Artificial
Intelligence is right, the mind is a (special) interpreted formal system - and the consequences will be even more revolutionary
for psychology.
p.106 A computer is an interpreted automatic formal system - that is to say, a
symbol-manipulating machine.
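A classic way to make "interpretation" concrete is stroke arithmetic (my example, in the spirit of these two passages): the formal game only concatenates marks, but under the assignment "a string of n strokes denotes the number n," every legal move comes out true of numbers.

    def formal_add(a, b):
        """Purely formal rule: juxtapose two stroke-strings."""
        return a + b

    def interpret(tokens):
        """The assigned meaning: a stroke-string denotes its length."""
        return len(tokens)

    x, y = "|||", "||"                  # uninterpreted marks
    z = formal_add(x, y)                # a legal "move" in the game
    print(interpret(x) + interpret(y) == interpret(z))   # True: 3 + 2 == 5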
p.176-177 Artificial Intelligence... The proud parents were a prolific team
of three: Allen Newell, Cliff Shaw, and Herbert Simon... The essential difference between Newell, Shaw, and
Simon (hereafter NS&S) and earlier work in cybernetics and machine translation was their explicit focus on thinking.
More specifically, they conceived of intelligence as the ability to solve problems;
and they conceived of solving problems as finding solutions via heuristically guided search... it's easy
enough to cast ordinary activity, or even conversation, as a series of mental quests... Every search has two
basic aspects: its object (what is being looked for) and its scope (the region or set of things within which
the object is sought). For actual system design, each aspect must be made explicit, in terms of specific structures
and procedures. For instance, a system cannot seek an object that it couldn't recognize: it has to be able to "tell" when
it reaches its goal. Consequently, the design must include a practical (executable) test for success,
and that test then effectively defines what the system is really seeking.
The designer must also invent some procedure for working through the relevant
search space more or less efficiently... More generally, any well-designed searcher needs a practical generator
that comes up with prospective solutions by slogging methodically through relevant possibilities; and again, the generator
itself then defines the effective search space.
Given a concrete system with procedures for generating and testing potential solutions,
the basic structure of search becomes an alternating cycle: the generator proposes a candidate, and the tester checks
it out. If the test succeeds, the search is finished; if not, the system returns to the generator and goes around
again (at least until the search space is exhausted).
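The cycle described here maps directly onto code. A minimal sketch (the toy problem and names are mine): the generator defines the effective search space, the executable test defines what is really being sought, and the loop alternates between them until success or exhaustion.

    def generate(search_space):
        """Methodically propose candidates; this defines the effective space."""
        yield from search_space

    def test(candidate, goal):
        """A practical, executable test; this defines what is really sought."""
        return candidate == goal

    def generate_and_test(search_space, goal):
        for candidate in generate(search_space):
            if test(candidate, goal):
                return candidate        # test succeeded: search finished
        return None                     # search space exhausted

    print(generate_and_test(range(100), goal=42))   # 42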
p.178 Thus I boldly predict that no computer will ever play perfect chess by means of exhaustive
search.
This difficulty is called, picturesquely but aptly, the combinatorial explosion.
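The arithmetic behind the prediction is worth a glance (the figures are the usual rough estimates, not the book's): with roughly 35 legal moves per position and roughly 80 half-moves per game, the full game tree holds about 35^80, or around 10^123, lines of play.

    branching_factor = 35               # rough average legal moves per position
    plies = 80                          # rough length of a game in half-moves
    game_tree_size = branching_factor ** plies
    print(f"~10^{len(str(game_tree_size)) - 1} lines of play")   # ~10^123

No computer, present or projected, can enumerate a space of that size.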
p.178-179 In one way or another, controlling or circumventing combinatorial explosion has been a
central concern of Artificial Intelligence from its inception; the issue is broad and deep.
In general, therefore, search must be selective, that is, partial
and risky. The crucial insight, however, is that the selection need not be random. Newell, Shaw, and Simon propose
that problem-solving search always follows heuristic guidelines... thereby dramatically improving the odds;
they even suggest that the degree of improvement (over random chance) is one measure of a system's intelligence.
Applying such heuristics, then, is what it means to think about a hard problem, trying to find a solution. And the
challenge of designing an intelligent machine reduces to the chore of figuring out and implementing suitable "powerful" heuristics
for it to employ.
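The improvement over random chance can itself be demonstrated (my toy example, not NS&S's): the same space, searched blindly versus in the order a heuristic recommends. The "fairly reliable" estimator below is deliberately imperfect, yet it cuts the expected number of tests by roughly an order of magnitude.

    import random

    SPACE = list(range(1000))
    GOAL = 764

    def blind_search():
        """Test candidates in random order."""
        order = random.sample(SPACE, len(SPACE))
        for steps, cand in enumerate(order, 1):
            if cand == GOAL:
                return steps

    def heuristic_search(estimate):
        """Test candidates in order of the heuristic's estimate of promise."""
        for steps, cand in enumerate(sorted(SPACE, key=estimate), 1):
            if cand == GOAL:
                return steps

    hint = GOAL + random.randint(-25, 25)   # a noisy clue about the goal
    print(blind_search())                               # ~500 tests on average
    print(heuristic_search(lambda c: abs(c - hint)))    # at most 51 tests here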
p.184,186 The idea of using explicit selection heuristics to tame the combinatorial explosion is
a major intellectual milestone. It was perhaps the crucial element in actually launching the field of Artificial
Intelligence, and it has been a conceptual mainstay ever since... Genuine intelligence calls for a fuller, more versatile
familiarity with the objects and events within its ken [mental perception].
p.209 AI systems seem to lack the flexible, even graceful "horse sense" by which people adjust and adapt
to unexpected situations and surprising results.