Core Algorithms

  • Search Model

    In this project I adopted the Negamax algorithm, a variant of Shannon's Minimax algorithm, to emulate both sides' move selection. Suppose the black player generates several candidate moves; it naturally wants to pick the move that is most advantageous for itself. For each such move, the red player responds with the move that is most disadvantageous for black, and so on until the search depth is reached. In concrete terms, the search alternately picks the move with the maximum or minimum evaluation at each level.

    However, the full game tree is far too large to explore exhaustively. If we insisted on the entire tree, we would give up the opportunity for a deeper search, and therefore a smarter decision. Alpha-beta pruning greatly improves performance by cutting off branches that cannot affect the final choice (a simplified sketch appears at the end of this section).

    The figure below (source: Wikipedia) demonstrates alpha-beta pruning. [Figure: alpha-beta pruning example]

    Dynamic search depth. Based on multiple tests, the search depth starts at 3 and deepens to at most 7 as the number of pieces decreases. This dynamic scheme balances responsiveness (game experience) against playing strength.

    Multithreading. Several threads run at depth 2 to speed up evaluation. Code blocks that modify pieces are guarded with Java's 'synchronized' keyword so that different moves do not interfere with each other.
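
    Below is a minimal sketch of how such a negamax search with alpha-beta pruning and a dynamic depth can be written in Java. The Board and Move types and the generateMoves, makeMove, undoMove, isGameOver and evaluate methods are hypothetical names used for illustration; the project's actual classes, signatures and depth thresholds may differ.

    // Minimal negamax with alpha-beta pruning (illustrative sketch; not the
    // project's exact code). 'color' is +1 when red is to move, -1 for black,
    // and evaluate() is assumed to score the board from red's point of view.
    int negamax(Board board, int depth, int alpha, int beta, int color) {
        if (depth == 0 || board.isGameOver()) {
            return color * board.evaluate();   // score from the side to move's perspective
        }
        int best = Integer.MIN_VALUE + 1;
        for (Move move : board.generateMoves(color)) {
            board.makeMove(move);
            // The opponent's best reply, negated, is our score for this move.
            int score = -negamax(board, depth - 1, -beta, -alpha, -color);
            board.undoMove(move);
            if (score > best) best = score;
            if (best > alpha) alpha = best;
            if (alpha >= beta) break;          // cutoff: this branch cannot change the result
        }
        return best;
    }

    // Dynamic search depth: start at 3 and deepen to at most 7 as pieces are
    // captured, as described above. The piece-count thresholds here are
    // illustrative guesses, not the project's exact values.
    int chooseDepth(int pieceCount) {
        if (pieceCount > 24) return 3;
        if (pieceCount > 12) return 5;
        return 7;
    }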
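
    For the multithreaded part, one common arrangement is sketched below: each root move is scored on its own thread via an ExecutorService, and here every task searches a private copy of the board so no locking is needed inside the search. This structure is an assumption made for the sketch; the wiki only states that threads run at depth 2 and that piece-modifying blocks on a shared board are guarded with synchronized.

    import java.util.*;
    import java.util.concurrent.*;

    // Illustrative root-level parallel search (assumed structure, not the
    // project's exact scheme). Board.copy() is a hypothetical deep-copy helper.
    Move pickMove(Board board, int depth, int color) throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        Map<Move, Future<Integer>> scores = new HashMap<>();
        for (Move move : board.generateMoves(color)) {
            Board copy = board.copy();
            copy.makeMove(move);
            scores.put(move, pool.submit(() ->
                    -negamax(copy, depth - 1, Integer.MIN_VALUE + 1, Integer.MAX_VALUE, -color)));
        }
        Move best = null;
        int bestScore = Integer.MIN_VALUE;
        for (Map.Entry<Move, Future<Integer>> e : scores.entrySet()) {
            int s = e.getValue().get();        // wait for the subtree search to finish
            if (s > bestScore) { bestScore = s; best = e.getKey(); }
        }
        pool.shutdown();
        return best;
    }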

  • Evaluation Model

    Typically, the evaluation model of a chess engine covers these parts: piece value, piece position, piece control, piece flexibility, piece protection and threat, and piece features. In this project only piece value and piece position are implemented; the other parts are discarded because they contribute relatively little while being costly to compute.

  • Piece value

Jiang, Shi, Xiang, Ma, Ju, Pao, and Zu are assigned values of 1000000, 110, 110, 300, 600, 300, and 70 respectively. Each value reflects how important the piece is, which is why Jiang's value is set extremely high.
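
As a quick reference, these values could be kept as simple constants; the names below are illustrative and not necessarily the project's identifiers.

    // Piece values as listed above (illustrative constant names).
    static final int JIANG = 1000000;  // general; losing it loses the game
    static final int SHI   = 110;      // advisor
    static final int XIANG = 110;      // elephant
    static final int MA    = 300;      // horse
    static final int JU    = 600;      // chariot
    static final int PAO   = 300;      // cannon
    static final int ZU    = 70;       // soldier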

  • Piece position

A good piece-position evaluation, combined with a deep search, compensates for the missing parts of the evaluation. Here we consider four major pieces, whose tables are given below as pPosition, mPosition, jPosition, and zPosition. The tables assume the current player is red.

    int[][] pPosition = new int[][]{
            {6, 4, 0, -10, -12, -10, 0, 4, 6},
            {2, 2, 0, -4, -14, -4, 0, 2, 2},
            {2, 2, 0, -10, -8, -10, 0, 2, 2},
            {0, 0, -2, 4, 10, 4, -2, 0, 0},
            {0, 0, 0, 2, 8, 2, 0, 0, 0},
            {-2, 0, 4, 2, 6, 2, 4, 0, -2},
            {0, 0, 0, 2, 4, 2, 0, 0, 0},
            {4, 0, 8, 6, 10, 6, 8, 0, 4},
            {0, 2, 4, 6, 6, 6, 4, 2, 0},
            {0, 0, 2, 6, 6, 6, 2, 0, 0}
    };

    int[][] mPosition = new int[][]{
            {4, 8, 16, 12, 4, 12, 16, 8, 4},
            {4, 10, 28, 16, 8, 16, 28, 10, 4},
            {12, 14, 16, 20, 18, 20, 16, 14, 12},
            {8, 24, 18, 24, 20, 24, 18, 24, 8},
            {6, 16, 14, 18, 16, 18, 14, 16, 6},
            {4, 12, 16, 14, 12, 14, 16, 12, 4},
            {2, 6, 8, 6, 10, 6, 8, 6, 2},
            {4, 2, 8, 8, 4, 8, 8, 2, 4},
            {0, 2, 4, 4, -2, 4, 4, 2, 0},
            {0, -4, 0, 0, 0, 0, 0, -4, 0}
    };

    int[][] jPosition = new int[][]{
            {14, 14, 12, 18, 16, 18, 12, 14, 14},
            {16, 20, 18, 24, 26, 24, 18, 20, 16},
            {12, 12, 12, 18, 18, 18, 12, 12, 12},
            {12, 18, 16, 22, 22, 22, 16, 18, 12},
            {12, 14, 12, 18, 18, 18, 12, 14, 12},
            {12, 16, 14, 20, 20, 20, 14, 16, 12},
            {6, 10, 8, 14, 14, 14, 8, 10, 6},
            {4, 8, 6, 14, 12, 14, 6, 8, 4},
            {8, 4, 8, 16, 8, 16, 8, 4, 8},
            {-2, 10, 6, 14, 12, 14, 6, 10, -2}
    };

    int[][] zPosition = new int[][]{
            {0, 3, 6, 9, 12, 9, 6, 3, 0},
            {18, 36, 56, 80, 120, 80, 56, 36, 18},
            {14, 26, 42, 60, 80, 60, 42, 26, 14},
            {10, 20, 30, 34, 40, 34, 30, 20, 10},
            {6, 12, 18, 18, 20, 18, 18, 12, 6},
            {2, 0, 8, 0, 8, 0, 8, 0, 2},
            {0, 0, -2, 0, 4, 0, -2, 0, 0},
            {0, 0, 0, 0, 0, 0, 0, 0, 0},
            {0, 0, 0, 0, 0, 0, 0, 0, 0},
            {0, 0, 0, 0, 0, 0, 0, 0, 0}
    };
  • Weight of piece value and piece position

The relative weight matters. If piece value dominates, the AI becomes too aggressive; if piece position dominates, the AI becomes sluggish. After multiple tests, the weights of piece value and piece position were set to 1:8 (see the sketch below).
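
A minimal sketch of how these two parts might be combined is given below, assuming the weight of 1 applies to piece value and the weight of 8 to piece position, and that the score is taken from red's point of view. Piece, its accessors, and positionTableFor() are hypothetical names introduced only for illustration, and the mirroring of tables for black pieces is omitted.

    // Illustrative evaluation combining piece value and piece position with the
    // 1:8 weighting described above (value weighted 1, position weighted 8 --
    // an assumed reading of the ratio). Piece, its accessors and
    // positionTableFor() are hypothetical names; a real implementation would
    // also mirror the tables for black pieces, which is skipped here.
    int evaluate(Board board) {
        int score = 0;
        for (Piece piece : board.pieces()) {
            int[][] table = positionTableFor(piece);   // e.g. pPosition, mPosition, ...
            int positional = (table == null) ? 0 : table[piece.row()][piece.col()];
            int contribution = 1 * piece.value() + 8 * positional;
            score += piece.isRed() ? contribution : -contribution;
        }
        return score;   // positive favours red
    }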
