4. A Refinement of the Search Process
This idea concerns the standard Nega-Max search algorithm, which
is explained well at this link:
http://chess.verhelst.org/1997/03/10/search/#more-4
The pseudo-code for AlphaBeta shows that each move is searched to the
same depth in the tree. Perhaps moves that score higher (and are
therefore more likely to be played) could be searched to a greater depth.
For analysis purposes, such as a correspondence chess game, a thorough
search that spends time on unpromising positions may be an advantage.
For over-the-board play, however, simply examining the likely moves in
more detail (extending the search depth for those particular moves)
seems to be a good strategic plan.
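The idea above can be sketched as a small Nega-Max routine with alpha-beta pruning, where the move ranked highest by a heuristic is searched one ply deeper than its siblings. This is only an illustrative sketch: the toy game tree, the evaluation function, and the one-ply extension rule are assumptions for the example, not the method of any particular engine.

```python
# Sketch: negamax with alpha-beta pruning plus a simple selective
# extension -- the top-ranked move at each node gets one extra ply.
# Tree shape, evaluation, and ordering heuristic are all illustrative.

INF = float("inf")

def negamax(node, depth, alpha, beta, evaluate, children, order_hint):
    """Return the negamax value of `node` from the side to move's view.

    evaluate(node)   -> static score for the side to move
    children(node)   -> list of successor positions (empty at terminals)
    order_hint(node) -> heuristic score used to order moves; the move
                        ranked first is extended by one extra ply
    """
    moves = children(node)
    if depth <= 0 or not moves:
        return evaluate(node)

    # Search the most promising move first (this ordering also helps
    # alpha-beta produce early cutoffs).
    moves.sort(key=order_hint, reverse=True)

    best = -INF
    for i, child in enumerate(moves):
        # Selective extension: the likeliest move is searched deeper.
        ext = 1 if i == 0 else 0
        score = -negamax(child, depth - 1 + ext, -beta, -alpha,
                         evaluate, children, order_hint)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff: the opponent will avoid this line
    return best

# Usage on a toy tree: each node is (static_score, [children]).
tree = (0, [
    (3, [(5, []), (-2, [])]),
    (1, [(4, []), (0, [])]),
])
val = negamax(tree, 2, -INF, INF,
              evaluate=lambda n: n[0],
              children=lambda n: list(n[1]),
              order_hint=lambda n: n[0])
```

In this toy run the child that statically scores 3 is searched a ply deeper than its sibling, which is exactly the imbalanced tree the quote below describes.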
"Selectivity means imbalancing your search tree - spending relatively more resources on lines you think are important,
and fewer resources on lines you don't think are important.
I'm pretty sure that selectivity works better at higher
depths (and longer time controls), although this is a complicated topic. At fast time controls, your worst-case behavior tends
to matter more than at longer time controls, and worst-case behavior is (by definition) where selectivity has trouble.
Vas"
(Rybka Forum posting, 10-15-2007)