Copyright (c) 2013 John L. Jerz

Artificial Intelligence: A Modern Approach (Russell, Norvig, 2003)



Reviews of this Book:

The book is the most comprehensive and most insightful introduction to artificial intelligence that I have seen. It provides a unified view of the field organized around the rational decision making paradigm. It covers the traditional topics of search, logic, planning, and knowledge representation along with current research in reasoning under uncertainty, machine learning, robotics, and more. - Prof. Elisha Sacks (Purdue)

Russell and Norvig's book is terrific: well-written and well-organized, with comprehensive coverage of the material that every AI student should know. It includes pseudo-code versions of all the major AI algorithms, presented in a clear, uniform fashion. The authors have done an excellent job of relating work in AI to work in other fields, both in and out of computer science. It's a pleasure to teach from this book. - Prof. Martha Pollack (Michigan)

A remarkably comprehensive and incisive treatment of the field. By organizing the material around the task of building intelligent agents, Russell and Norvig present AI as a body of inter-related design principles, rather than a loose grab bag of techniques and tricks. Students hungry for meaty ideas will find ample nourishment from this text. ... A masterful pedagogic achievement. - Prof. Mike Wellman (Michigan)

Quotations from the second edition (Eastern Economy Edition, 2003)

p.3 If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds. There are two ways to do this: through introspection - trying to catch our own thoughts as they go by - and through psychological experiments. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program.
 
p.5 One important point to keep in mind: We will see before too long that achieving perfect rationality - always doing the right thing - is not feasible in complicated environments. The computational demands are too high. For most of the book, however, we will adopt the working hypothesis that perfect rationality is a good starting point for analysis. It simplifies the problem and provides the appropriate setting for most of the foundational material in the field. Chapters 6 and 17 deal explicitly with the issue of limited rationality - acting appropriately when there is not enough time to do all the computations one might like.
 
p.32 An agent is anything that can be viewed as perceiving its environment through sensors and acting on the environment through actuators... We use the term percept to refer to the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything the agent has ever perceived.
 
p.35 A performance measure embodies the criterion for success of an agent's behavior. ...an agent ...generates a sequence of actions according to the percepts it receives.
 
p.36 a definition of a rational agent: For each percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
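
This definition can be sketched directly in code: the agent selects the action with the highest expected performance given its percept sequence. This is a minimal illustration, not the book's code; the thermostat scenario and the `expected_performance` estimator are invented placeholders for the agent's built-in knowledge.

```python
# Sketch of the rational-agent definition: given the percept sequence
# so far, pick the action that maximizes the expected performance
# measure. The estimator embodies the agent's built-in knowledge.
def rational_agent(percept_sequence, actions, expected_performance):
    return max(actions, key=lambda a: expected_performance(percept_sequence, a))

# Toy usage: a thermostat-like agent that has perceived falling temperatures.
percepts = [21.0, 19.5, 18.0]
actions = ["heat", "off"]
est = lambda ps, a: (20.0 - ps[-1]) if a == "heat" else (ps[-1] - 20.0)
print(rational_agent(percepts, actions, est))  # -> heat
```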
 
p.95 Heuristic functions are the most common form in which additional knowledge of the problem is imparted to the search algorithm.
 
p.107 A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
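
The 8-puzzle, discussed on the surrounding pages, gives the classic illustration of this idea. If we relax "a tile can move only into the adjacent blank square" to "a tile can move to any square," the relaxed optimal cost is the misplaced-tile count; relaxing it to "a tile can move to any adjacent square" yields the Manhattan distance. A minimal sketch (state encoding assumed: a 9-tuple with 0 as the blank and goal `(0, 1, ..., 8)`):

```python
# Two admissible heuristics for the 8-puzzle, each the exact cost of an
# optimal solution to a relaxed version of the problem.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h_misplaced(state):
    # Relaxation: a tile may jump to any square -> count tiles out of place.
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h_manhattan(state):
    # Relaxation: a tile may move to any adjacent square -> sum of
    # row-plus-column distances of each tile from its goal square.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        total += abs(i // 3 - tile // 3) + abs(i % 3 - tile % 3)
    return total
```

Because every real solution is also a solution to the relaxed problem, neither heuristic can overestimate the true cost.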
 
p.108 A program called ABSOLVER can generate heuristics automatically from problem definitions, using the "relaxed problem" method and various other techniques (Prieditis, 1993). ABSOLVER generated a new heuristic for the 8-puzzle [a variation of the famous Sam Loyd 15-puzzle] better than any preexisting heuristic and found the first useful heuristic for the famous Rubik's cube puzzle. [JLJ - I wonder if ABSOLVER could be used to find an evaluation function for chess...]
 
p.162 Game playing was one of the first tasks undertaken in AI [Artificial Intelligence]. By 1950, almost as soon as computers became programmable, chess had been tackled by Konrad Zuse (the inventor of the first programmable computer and the first programming language), by Claude Shannon (the father of information theory), by Norbert Wiener (the creator of modern control theory), and by Alan Turing. Since then, there has been steady progress in the standard of play, to the point that machines have surpassed humans in checkers and Othello, have defeated human champions (although not every time) in chess and backgammon, and are competitive in many other games. The main exception is [the game] Go, in which computers perform at the amateur level.
 
p.171 Shannon's 1950 paper, Programming a computer for playing chess, proposed instead that programs should cut off the search earlier [not proceed to checkmate] and apply a heuristic evaluation function to states in the search, effectively turning nonterminal nodes into terminal leaves.  In other words, the suggestion is to alter minimax or alpha-beta in two ways: the utility function is replaced by a heuristic evaluation function EVAL, which gives an estimate of the position's utility, and the terminal test is replaced by a cutoff test that decides when to apply EVAL...
 
p.171 An evaluation function returns an estimate of the expected utility of the game from a given position... It should be clear that the performance of a game-playing program is dependent on the quality of its evaluation function. ...How exactly do we design good evaluation functions? First, the evaluation function should order the terminal states in the same way as the true utility function... the evaluation function should be correlated with the actual chances of winning.
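
Shannon's two alterations can be sketched in a few lines: a cutoff test replaces the terminal test, and a heuristic EVAL replaces the true utility. This is a minimal illustration, not the book's pseudocode; the game interface (`moves`, `result`, `eval`) and the trivial number-adding game are invented placeholders.

```python
# Depth-limited minimax: cut off the search at max_depth and apply a
# heuristic evaluation to the nonterminal leaf, per Shannon's proposal.
def h_minimax(state, depth, game, max_depth):
    if depth >= max_depth or not game["moves"](state):   # cutoff test
        return game["eval"](state)                       # heuristic EVAL
    values = [h_minimax(game["result"](state, m), depth + 1, game, max_depth)
              for m in game["moves"](state)]
    return max(values) if depth % 2 == 0 else min(values)

# Toy game: the state is a number, each player may add 1 or 2, and
# EVAL is simply the current number (MAX prefers it large).
toy = {
    "moves": lambda s: [1, 2] if s < 4 else [],
    "result": lambda s, m: s + m,
    "eval": lambda s: s,
}
print(h_minimax(0, 0, toy, max_depth=2))  # -> 3
```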
 
p.194-195 This chapter introduces knowledge-based agents... Our final reason for studying knowledge-based agents is their flexibility. They are able to accept new tasks in the form of explicitly described goals, they can achieve competence quickly by being told or learning new knowledge about their environment, and they can adapt to changes in the environment by updating the relevant knowledge... The central component of a knowledge-based agent is its knowledge base, or KB.
 
p.261 The knowledge engineering process
 
Knowledge engineering projects vary widely in content, scope, and difficulty, but all such projects include the following steps:
1. Identify the task...
2. Assemble the relevant knowledge...
3. Decide on a vocabulary of predicates, functions, and constants...
4. Encode general knowledge about the domain...
5. Encode a description of the specific problem instance...
6. Pose queries to the inference procedure and get answers...
7. Debug the knowledge base...
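
Steps 4 through 6 above can be sketched with a tiny propositional knowledge base: encode general rules, add the facts of a specific instance, and pose a query via forward chaining. The rules and predicate names below are invented for illustration, and the inference procedure is a bare-bones fixed-point loop, not the book's algorithms.

```python
# A toy knowledge base of Horn clauses: (premises, conclusion) means
# "if all premises are known, conclude the conclusion."
rules = [
    ({"has_wings", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "migrates"),
]
facts = {"has_wings", "lays_eggs", "can_fly"}  # the specific problem instance

def ask(query, rules, facts):
    # Forward-chain: repeatedly fire rules until no new facts appear.
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return query in known

print(ask("migrates", rules, facts))  # -> True
```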
 
p.386 It turns out that neither forward nor backward search is efficient without a good heuristic function. Recall from Chapter 4 that a heuristic function estimates the distance from a state to the goal... The basic idea is to look at the effects of the actions and at the goals that must be achieved and to guess how many actions are needed to achieve all of the goals. Finding the exact number is NP-hard, but it is possible to find reasonable estimates most of the time without too much computation.
 
p.408 Planning systems are problem-solving algorithms that operate on explicit propositional (or first order) representations of state and actions. These representations make possible the derivation of effective heuristics and the development of powerful and flexible algorithms for solving problems.
 
p.464 Probability provides a way of summarizing the uncertainty that comes from our laziness and ignorance.
 
p.466 The fundamental idea of decision theory is that an agent is rational if and only if it chooses the action that yields the highest expected utility, averaged over all the possible outcomes of the action. This is called the principle of Maximum Expected Utility (MEU)... the agent can make probabilistic predictions of action outcomes and hence select the action with highest expected utility [usefulness].
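
The MEU principle is easy to state as code: for each action, average the utility of its outcomes weighted by their probabilities, then pick the action with the highest average. The umbrella scenario and its numbers below are invented for illustration; this is a sketch of the principle, not the book's implementation.

```python
# Principle of Maximum Expected Utility: choose the action whose
# probability-weighted average utility is highest.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def meu_action(action_outcomes):
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

# Toy decision: take an umbrella given a 30% chance of rain.
action_outcomes = {
    "take_umbrella":  [(0.3, 70), (0.7, 80)],    # dry either way, mild hassle
    "leave_umbrella": [(0.3, 0), (0.7, 100)],    # soaked if it rains
}
print(meu_action(action_outcomes))  # -> take_umbrella
```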
 
p.585 If an agent maximizes a utility function that correctly reflects the performance measure by which its behavior is being judged, then it will achieve the highest possible performance score if we average over the environments in which the agent could be placed.
 
p.600 One of the most important parts of decision making is knowing what questions to ask.
 
p.601 The value of information derives from the fact that with the information, one's course of action can be changed to suit the actual situation. One can discriminate according to the situation, whereas without the information, one has to do what's best on average over the possible situations. In general, the value of a given piece of information is defined to be the difference in expected value between best actions before and after information is obtained.
 
p.602 In sum, information has value to the extent that it is likely to cause a change of plan and to the extent the new plan will be significantly better than the old plan.
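
The definition on p.601 can be computed directly: the value of information is the expected utility of acting *after* observing the true situation, minus the expected utility of the best single action chosen without it. The oil-drilling scenario and its payoffs below are invented for illustration.

```python
# Value of (perfect) information about the state, per the p.601 definition.
def best_eu(actions, utility, p_state):
    # Best expected utility of committing to one action without information.
    return max(sum(p * utility[a][s] for s, p in p_state.items()) for a in actions)

def value_of_information(actions, utility, p_state):
    eu_without = best_eu(actions, utility, p_state)
    # With information: learn the state first, then act optimally in it,
    # weighting each state by its prior probability.
    eu_with = sum(p * max(utility[a][s] for a in actions)
                  for s, p in p_state.items())
    return eu_with - eu_without

# Toy example: drill at site A or B; the unknown state says which has oil.
utility = {"drill_A": {"oil_A": 100, "oil_B": -20},
           "drill_B": {"oil_A": -20, "oil_B": 100}}
p_state = {"oil_A": 0.5, "oil_B": 0.5}
print(value_of_information(["drill_A", "drill_B"], utility, p_state))  # -> 60.0
```

Note that the information is valuable here precisely because it would change the plan: without it, either fixed choice averages 40, while observing the state first yields 100.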
 
p.603 Implementing an information-gathering agent: A sensible agent should ask questions of the user in a reasonable order, should avoid asking questions that are irrelevant, should take into account the importance of each piece of information in relation to its cost, and should stop asking questions when that is appropriate. All of these capabilities can be achieved by using the value of information as a guide.
 
p.863 Perception provides agents with information about the world they inhabit. Perception is initiated by sensors. A sensor is anything that can record some aspect of the environment and pass it as input to an agent program.
 
p.903 Sensors: Sensors are the perceptual interface between robots and their environments. Passive sensors, such as cameras, are true observers of the environment: they capture signals that are generated by other sources in the environment. Active sensors, such as sonar, send energy into the environment. They rely on the fact that this energy is reflected back to the sensor. Active sensors tend to provide more information than passive sensors, but at the expense of increased power consumption and with the danger of interference when multiple active sensors are used at the same time.
 
p.969 Keeping track of the state of the world: This is one of the core capabilities required of an intelligent agent. It requires both perception and updating of internal representations.
