Minds, Brains, and Programs (Searle, 1980)

JLJ - Searle presents his now-classic "Chinese room" thought experiment and argues from it that a machine cannot understand merely by running a program. Simply processing symbols according to formal rules, even if the results are interpreted from outside as understanding, cannot amount to actual understanding, because there is no intentionality behind the manipulation.
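A toy program makes the target of the argument concrete. The sketch below is my own illustration in Python, not anything from Searle's paper; the rule table and names are hypothetical. It answers Chinese questions by matching uninterpreted input strings against a rule book, so the replies can look fluent from the outside while the program itself traffics only in shapes.

    # A hypothetical "Chinese room" reduced to code: formal rules pair input
    # symbol strings with output symbol strings. The English glosses in the
    # comments exist only for us as outside observers, never for the program.
    RULE_BOOK = {
        "你好吗": "我很好",        # "How are you?" -> "I am fine"
        "你会说中文吗": "会一点",  # "Do you speak Chinese?" -> "A little"
    }

    def chinese_room(squiggles: str) -> str:
        # Pure syntax: look up the input shape, emit the paired output shape.
        return RULE_BOOK.get(squiggles, "请再说一遍")  # fallback: "Please repeat that"

    print(chinese_room("你好吗"))  # prints 我很好, which nothing in the program understands

No amount of additional rules changes the situation; the quotes from Searle below press exactly this point.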
 
I would propose instead that intelligence is the ability to generate a custom diagnostic test, or tests, that reduce a situation of complexity or of partial awareness, eventually allowing effective adaptive maneuvers toward a desired goal, or a substitute goal, of importance or even of vital importance. Such a custom diagnostic test should "work" in the presence of a planned trick, in regular day-to-day operation, after an unexpected blockage or setback, or against an opponent or agent also seeking to maneuver toward a desired goal, with the original entity now functioning as the obstacle.
 
The test should rely on the cues present in the environment, the power relations of the objects, and the ability to maneuver in the presence of uncertainty and resistance. The test should belong to a system or a strategy that holds a promise of effectiveness, perhaps derived from childhood play if not from actual experience. The test should allow one to adopt a position, or a sequential series of positions, in the environment which is sustainable, ideally with a margin that allows for oversight, inattention, misinterpretation, unexpected "perfect storm"-type events, laziness, attention to other matters, the cleverness of opponents performing similar maneuvers, or simple "duh" mistakes.
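One speculative way to cash out "custom diagnostic test" in code is to treat each candidate test as a probe of an uncertain situation and to prefer the probe whose expected outcome narrows the possibilities the most. The sketch below is my own hedged illustration of this idea in Python; the function names, the example situations, and the expected-entropy criterion are assumptions for illustration, not an established algorithm.

    import math

    def entropy(beliefs):
        # Shannon entropy (bits) of a {situation: probability} distribution.
        return -sum(p * math.log2(p) for p in beliefs.values() if p > 0)

    def expected_entropy_after(beliefs, probe):
        # Average uncertainty left after running `probe`, a function that maps
        # each possible situation to the observable result of the test.
        outcomes = {}
        for situation, p in beliefs.items():
            outcomes.setdefault(probe(situation), {})[situation] = p
        remaining = 0.0
        for group in outcomes.values():
            weight = sum(group.values())
            posterior = {s: p / weight for s, p in group.items()}
            remaining += weight * entropy(posterior)
        return remaining

    def best_probe(beliefs, probes):
        # Choose the diagnostic test expected to reduce uncertainty the most.
        return min(probes, key=lambda probe: expected_entropy_after(beliefs, probe))

    # Hypothetical situation: is the opposition a planned trick, an unexpected
    # setback, or an opponent actively maneuvering toward its own goal?
    beliefs = {"trick": 0.2, "setback": 0.3, "maneuver": 0.5}

    def probe_pressure(situation):
        # Apply pressure: only a genuine setback yields to it.
        return "yields" if situation == "setback" else "resists"

    def probe_wait(situation):
        # Wait and watch: only an active maneuver reveals itself by acting.
        return "acts" if situation == "maneuver" else "waits"

    print(best_probe(beliefs, [probe_pressure, probe_wait]).__name__)  # probe_wait

On these numbers, waiting and watching is the better test: it cleanly separates the 0.5-probability "maneuver" case from the rest, leaving less expected uncertainty than applying pressure does.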

p.5 My car and my adding machine, on the other hand, understand nothing: they are not in that line of business...  I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.
 
p.11 But the main point of the present argument is that no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running.
 
p.11 "But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"
 
This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
 
"Why not?"
 
Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
 
p.12 only something that has the same causal powers as brains can have intentionality
 
p.12 Why on earth would anyone suppose that a computer simulation of understanding actually understood anything?
 
p.14 the strong AI project hasn't got a chance. The project is to reproduce and explain the mental by designing programs, but unless the mind is not only conceptually but empirically independent of the brain you couldn't carry out the project, for the program is completely independent of any realization.
 
p.14 Of course the brain is a digital computer. Since everything is a digital computer, brains are too. The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.
