JacobRoberts edited this page May 5, 2014 · 5 revisions

Here's an overview of how the AI searches a position:

The function AlphaBeta is called with a specified depth. The greater the depth, the more total boards are searched. This increases both accuracy and search time, so finding the right depth for your preferred environment is important.

AlphaBeta is modeled after the alpha-beta pruning algorithm, which improves upon the standard minimax algorithm by cutting off the search when a better line is known to have been searched. Take the following position as an example, with white to move:

[Diagram: white to move]

An intelligent move ordering function tells AlphaBeta to search Bxf8 first and Bxa5 second, and AlphaBeta is called with a depth of 2, or one move for each side. Bxf8 is searched along with all of black's replies, and the board evaluation function determines white to have an advantage because white's pawn is more advanced than black's, with even material. Next, Bxa5 is searched, and only one of black's replies is considered. The board evaluation function says that black has a major advantage, up a queen to a bishop. AlphaBeta reasons that, given Bxa5, black can force a line where he has a major advantage. The remainder of black's replies don't need to be searched, because white will not play Bxa5. At a depth of 2, this has already reduced the search space by about half. The savings scale well with a larger depth.
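The cutoff described above can be sketched as a minimal minimax with alpha-beta pruning. This is an illustrative toy in Python, not the project's actual code: leaves are static evaluations from white's perspective, and internal nodes are lists of children.

```python
def alpha_beta(node, depth, alpha, beta, maximizing):
    # Leaf or depth limit reached: return the static evaluation.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # beta cutoff: the minimizer avoids this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alpha_beta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:   # alpha cutoff: the maximizer avoids this line
                break
        return best

# Mirroring the example: the first move's replies evaluate near even,
# the second move's first reply loses a queen for a bishop.
tree = [[+0.5, +0.3], [-8.0, +1.0]]
print(alpha_beta(tree, 2, float("-inf"), float("inf"), True))   # 0.3
```

Because the first (better) move was searched first, the second move's remaining replies are never visited: its first reply already proves it worse than what white can get elsewhere.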

Move ordering in this AI is done relatively crudely. Checks are always searched first, followed by captures. No exceptions. This is done because it mimics how humans search a position, by examining the most forcing moves first. Next, a search with a depth of 1 is done on all the remaining moves, and they are ordered by the resulting board evaluation. This is done with the idea that moves that immediately increase positional value, such as advancing a pawn or moving a piece towards the center of the board, correlate to moves that increase positional value long-term.
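The ordering scheme above can be sketched as follows. This is a hedged Python illustration; the helpers `gives_check`, `is_capture`, and `evaluate_after` are hypothetical stand-ins for the engine's real board and move API.

```python
def order_moves(board, moves, gives_check, is_capture, evaluate_after):
    """Checks first, then captures, then quiet moves sorted by a
    depth-1 evaluation of the resulting position, best first."""
    checks   = [m for m in moves if gives_check(board, m)]
    captures = [m for m in moves if is_capture(board, m) and m not in checks]
    quiet    = [m for m in moves if m not in checks and m not in captures]
    quiet.sort(key=lambda m: evaluate_after(board, m), reverse=True)
    return checks + captures + quiet

# Toy usage: moves as algebraic strings, with made-up depth-1 evaluations.
moves = ["Qh5+", "Nxe5", "e4", "Nf3"]
evals = {"e4": 0.3, "Nf3": 0.5, "Qh5+": 0.0, "Nxe5": 1.0}
ordered = order_moves(
    None, moves,
    gives_check=lambda b, m: m.endswith("+"),
    is_capture=lambda b, m: "x" in m,
    evaluate_after=lambda b, m: evals[m],
)
print(ordered)   # ['Qh5+', 'Nxe5', 'Nf3', 'e4']
```

Good ordering matters because alpha-beta prunes the most when the best move happens to be searched first.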

In some cases, a search with a strictly fixed depth will produce inaccurate results. Most obviously, this can happen when material can be captured by one side, but doing so would allow a disadvantageous reply from the other side, such as a queen capturing a pawn only to be recaptured by another pawn. When the very last move evaluated is this capture, the board evaluation function will report that one side has won material, when in reality, a search of one extra move would have revealed that side to be losing. This is known as the horizon effect, and it can never be fully remedied, as seen in Ruslan Ponomariov vs Deep Fritz, where this problem is said to have led to Ponomariov's victory.

Quiescence search is the standard way of minimizing its presence. Ideally, once AlphaBeta reaches max depth, it will continue if the last move is a check or a capture. During this continuation, it will only evaluate checks and captures, allowing forcing lines to be resolved. Before each move, the side to move can choose to "hold position", returning the current evaluation of the board, instead of continuing to search through potentially disadvantageous checks and captures.

My implementation of quiescence search is not fully formed because of deadline issues. Instead, I simply extend the search by a maximum depth of 1 if the last move was a check or capture. This allows basic captures to be resolved, but it still has holes. For example, in a sequence of NxP, RxN, QxR, only the first two moves would be evaluated, leading to an incorrect assessment of the position.
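The one-ply extension and its hole can be shown with a toy model. This is illustrative Python, not the engine's API: a position is reduced to a white-perspective material score plus the chain of forced captures still available (pawn = 1, knight = 3, rook = 5).

```python
def search(score, captures, depth, extended=False):
    """Walk the forced capture chain; extend once past max depth if
    captures remain, mimicking the depth-1 extension described above."""
    if not captures:
        return score            # quiet position: return its evaluation
    if depth == 0:
        if not extended:
            # Last move was a capture: extend by one ply, exactly once.
            return search(captures[0], captures[1:], 0, extended=True)
        return score            # extension spent: assessment stops here
    return search(captures[0], captures[1:], depth - 1, extended)

# NxP, RxN, QxR: white's material balance after each capture.
chain = [+1, -2, +3]
print(search(0, chain, 1))   # -2: sees NxP and RxN, misses QxR
print(search(0, chain, 3))   #  3: deep enough to resolve the full chain
```

With the single extension, the search stops after RxN and reports white down a knight for a pawn, when one more capture (QxR) shows white winning the exchange. Full quiescence search would keep following checks and captures until the position is quiet.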
