NinePebbles

Nine Pebbles (Togyzkumalak) - a nomads' board game

Get it on Google Play · Download on the App Store

Abstract

Background

About the game

Togyzkumalak is a traditional board game played in Central Asia, particularly in Kazakhstan and Kyrgyzstan. The name "Togyzkumalak" translates to "nine pebbles" in the Kazakh language. The game is part of the mancala family, a group of games played by sowing small stones or pebbles around rows of pits on a board.

The game is typically played on a special board with two rows of nine pits, one row for each player. Small stones, pebbles, or other playing pieces are placed in the pits, and players take turns sowing them around the board according to specific rules.

The objective of the game is to capture more pebbles than your opponent. The game involves strategic thinking, as players must decide when to sow pebbles and when to capture their opponent's pebbles. Togyzkumalak has a rich cultural history and is often played in social gatherings, providing entertainment and a way for people to connect.
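For concreteness, here is a minimal sketch (with illustrative names, not the repository's actual API) of how such a position might be represented. The fields mirror the inputs of the TensorFlow model described below: a board of 18 pits, two scores, two "special" entries (presumably each player's tuzdyk, a captured pit), and the player to move.

from dataclasses import dataclass, field
from typing import List

@dataclass
class GameState:
    """Minimal Togyzkumalak position (illustrative, not the project's API)."""
    board: List[int] = field(default_factory=lambda: [9] * 18)    # pebbles per pit; 9 each at the start
    score: List[int] = field(default_factory=lambda: [0, 0])      # pebbles captured by each player
    special: List[int] = field(default_factory=lambda: [-1, -1])  # tuzdyk pit index per player, or -1
    next_player: int = 0                                          # 0 or 1, whose turn it is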

Game theory

Game theory, a branch of mathematics and economics, provides a valuable framework for understanding strategic decision-making in various interactive scenarios, including mobile gaming. At its core, game theory analyzes the choices of rational actors within competitive situations, where the outcome of one player's decision depends on the decisions of others. In the context of mobile gaming, developers often employ game-theoretic concepts to design engaging gameplay mechanics, balance competition, and incentivize player interaction.

Moreover, the advancement of artificial intelligence (AI) in mobile gaming has significantly shaped gameplay experiences: many modern game engines build on sophisticated systems such as AlphaZero and Leela Zero. These techniques enable AI opponents to assess potential moves, predict outcomes, and make strategic decisions, enhancing the challenge and depth of gameplay. This article explores how game theory principles, coupled with AI techniques, can be leveraged in board games, shaping player experiences and strategic outcomes.

Evaluation function

In game theory, an evaluation function is a mathematical function used to assess the desirability of a particular game state or position. It assigns a numerical value to each possible state of the game, indicating how favorable or unfavorable it is for a player. The evaluation function typically takes into account various factors such as the current board configuration, the potential for future moves, the relative strength of pieces, and strategic considerations.

Evaluation functions are commonly employed in games with discrete states, such as chess, checkers, and Go, where it is not feasible to exhaustively analyze every possible move. Instead, an evaluation function guides the search process by providing a heuristic estimate of the value of a given position. In algorithms like Minimax and Monte Carlo Tree Search (MCTS), the evaluation function plays a crucial role in assessing the quality of game states and guiding decision-making.

The design of an effective evaluation function often requires domain-specific knowledge and expertise in the game being analyzed. It aims to capture important features of the game that influence the likelihood of winning or losing.
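A minimal sketch of such a heuristic for Togyzkumalak, building on the GameState above. The weights are illustrative, not tuned values from this project; capturing 82 of the 162 pebbles in play guarantees a win, which supplies the terminal cases.

def evaluate(state: GameState) -> float:
    """Hypothetical linear evaluation from player 0's perspective."""
    if state.score[0] > 81:   # 82 of the 162 pebbles guarantees a win
        return 1.0
    if state.score[1] > 81:
        return -1.0
    score_diff = state.score[0] - state.score[1]              # captured-pebble lead
    board_diff = sum(state.board[:9]) - sum(state.board[9:])  # material on own row
    return (score_diff + 0.1 * board_diff) / 162.0            # roughly in [-1, 1]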

Minimax Theorem

Minimax is a decision rule used in game theory for minimizing the possible loss in a worst-case scenario. It is commonly employed in two-player zero-sum games, where one player's gain is exactly the other player's loss (e.g., chess, tic-tac-toe). Under the minimax rule, each player selects moves that minimize the maximum loss the opponent can inflict; von Neumann's minimax theorem guarantees that such optimal strategies exist in these games. This concept is fundamental to creating AI agents that can play strategically against human players or other AI opponents.
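A compact, depth-limited sketch of the rule, scoring horizon positions with the evaluate() heuristic above. legal_moves() and apply_move() are assumed game-rule helpers (hypothetical names, not the repository's API).

def minimax(state: GameState, depth: int, maximizing: bool) -> float:
    """Depth-limited minimax over an assumed move-generation API."""
    moves = legal_moves(state)            # assumed helper
    if depth == 0 or not moves:           # horizon or terminal position
        return evaluate(state)
    values = (minimax(apply_move(state, m), depth - 1, not maximizing)
              for m in moves)             # apply_move is also assumed
    return max(values) if maximizing else min(values)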

Monte Carlo Tree Search (MCTS)

MCTS is a heuristic search algorithm used in decision processes, particularly in games with large branching factors and incomplete information. It builds a search tree by simulating random sequences of moves, evaluating the outcomes, and selecting the most promising moves based on the results of these simulations. Over multiple iterations, MCTS refines its search tree to focus on the most relevant parts of the game space, thereby improving decision-making efficiency.

While Minimax and MCTS are distinct techniques, they can be complementary: Minimax provides principles for strategic decision-making, while MCTS offers efficient exploration and exploitation of the game space. Integrating aspects of Minimax into MCTS enhances its performance, especially in complex games. MCTS typically uses random simulations to evaluate nodes in the search tree; substituting a Minimax-style heuristic evaluation function yields more accurate assessments of game states and can steer the search towards more promising branches of the tree.
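The sketch below illustrates this combination: a plain UCT-style MCTS in which step 3, normally a random rollout, is replaced by the evaluate() heuristic from above. It reuses the hypothetical legal_moves()/apply_move() helpers and is a simplification, not the project's implementation.

import math
import random

class Node:
    """One search-tree node: a game state plus visit statistics."""
    def __init__(self, state, move=None, parent=None):
        self.state, self.move, self.parent = state, move, parent
        self.children = []
        self.untried = legal_moves(state)   # moves not yet expanded (assumed helper)
        self.visits, self.value = 0, 0.0

def mcts(root_state, iterations=1000, c=1.4):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via the UCT score.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ch.value / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one previously unexplored child.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(apply_move(node.state, move), move, node))
            node = node.children[-1]
        # 3. Evaluation: score the leaf with the heuristic instead of a random rollout.
        reward = evaluate(node.state)
        # 4. Backpropagation: update statistics up to the root. (Kept from
        #    player 0's perspective for brevity; a full implementation negates
        #    the reward at alternating levels.)
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move   # most visited move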

Integration of Neural Networks and Monte Carlo Tree Search

In the realm of strategic board games, the synergy between deep neural networks and Monte Carlo Tree Search (MCTS) has revolutionized gameplay. AlphaGo, a pioneering example, showcases this integration brilliantly. Neural networks, particularly deep ones, are trained to predict optimal moves and evaluate game states. Meanwhile, MCTS efficiently explores the game tree, providing valuable data for network training.

The process involves iteratively refining the neural network's understanding of the game through self-play and reinforcement learning. With each iteration, the network becomes more adept at predicting moves and assessing game states accurately. By combining the insights from neural networks with the search capabilities of MCTS, these systems achieve remarkable gameplay performance, outmaneuvering even top human players.
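In AlphaGo-style systems, the network's policy output replaces the uniform exploration term of plain UCT with a per-move prior, giving the so-called PUCT selection score. A minimal sketch, with c_puct as a tunable constant:

import math

def puct(child_value, child_visits, parent_visits, prior, c_puct=1.5):
    """AlphaZero-style selection score: the mean backed-up value estimate
    (exploitation) plus the network's move prior scaled by relative visit
    counts (exploration)."""
    q = child_value / child_visits if child_visits else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u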

Implementation details

Monte Carlo Tree Search

Apache Beam pipeline

The Apache Beam pipeline below generates a Monte Carlo search tree (the most promising game states with their corresponding outcomes). See https://github.com/Parabul/game for details.

    ExplorationPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(ExplorationPipelineOptions.class);

    Pipeline pipeline = Pipeline.create(options);

    List<Integer> instances =
        IntStream.range(0, options.getNumpebbles()).boxed().collect(Collectors.toList());

    pipeline
        .apply("Create seeds", Create.of(instances))
        .apply(
            "Generate Random Game States",
            MapElements.into(BeamTypes.stateProtos).via(i -> GameSimulator.randomState().toProto()))
        .apply("Expand Monte Carlo Search Tree", ParDo.of(new ExpandFn()))
        .apply(
            "Enrich Less Visited Nodes",
            MapElements.into(BeamTypes.stateNodes)
                .via(MonteCarloTreeSearchExplorationPipeline::enrich))
        .apply(
            "Encode As TensorFlow Example",
            MapElements.into(BeamTypes.examples)
                .via(MonteCarloTreeSearchExplorationPipeline::encode))
        .apply(
            "Map To ByteArrays", MapElements.into(BeamTypes.byteArrays).via(Example::toByteArray))
        .apply(
            "Write TFRecords",
            TFRecordIO.write()
                .withNumShards(options.getNumOutputShards())
                .to(options.getOutputPath()));

    pipeline.run();

Tensorflow model

from tensorflow import keras
from tensorflow.keras import layers

# Inputs
score = layers.Input(shape=(2,), name='score')
special = layers.Input(shape=(2,), name='special')
board = layers.Input(shape=(18,), name='board')
next_move = layers.Input(shape=(1,), name='next')  # avoid shadowing the built-in next()

# Score sub-model
model_score = layers.Dense(2, activation="relu")(score)
model_score = keras.Model(name='model_score', inputs=score, outputs=model_score)

# Special sub-model
model_special = layers.CategoryEncoding(name="multi_hot", num_tokens=18, output_mode="multi_hot")(special)
model_special = keras.Model(name='model_special', inputs=special, outputs=model_special)

# Board sub-model
model_board = layers.Reshape((2, 9, 1))(board)
model_board = layers.Conv2D(16, (2, 3), padding='same', activation='relu')(model_board)
model_board = layers.Conv2D(32, (2, 3), padding='same', activation='relu')(model_board)
model_board = layers.Flatten()(model_board)
model_board = layers.Dense(64, activation="relu")(model_board)
model_board = keras.Model(name='model_board', inputs=board, outputs=model_board)

# Next move sub-model
model_next = layers.Dense(1, activation="relu")(next_move)
model_next = keras.Model(name='model_next', inputs=next_move, outputs=model_next)

# Combine the outputs of all branches
ensemble = layers.concatenate([model_score.output, model_special.output, model_board.output, model_next.output])
model_ensemble = layers.Dense(320, activation="relu")(ensemble)
model_ensemble = layers.Dense(3, name="output", activation='softmax')(model_ensemble)

model = keras.Model(name='model_ensemble', inputs=[model_score.input, model_special.input, model_board.input, model_next.input],
                    outputs=model_ensemble)

model.summary()

(Figure: model topology)
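The TFRecords produced by the Beam pipeline can then be fed directly into model.fit. The sketch below assumes a feature spec that matches the model inputs above and a one-hot three-class outcome label named 'output'; the real schema is defined by the pipeline's encode() step, so treat the names, dtypes, and file path as assumptions.

import tensorflow as tf

# Assumed feature spec; the actual schema comes from the pipeline's encode().
feature_spec = {
    'score': tf.io.FixedLenFeature([2], tf.float32),
    'special': tf.io.FixedLenFeature([2], tf.float32),
    'board': tf.io.FixedLenFeature([18], tf.float32),
    'next': tf.io.FixedLenFeature([1], tf.float32),
    'output': tf.io.FixedLenFeature([3], tf.float32),  # assumed one-hot outcome
}

def parse(record):
    example = tf.io.parse_single_example(record, feature_spec)
    label = example.pop('output')
    return example, label   # feature dict keys match the Input names above

dataset = (tf.data.TFRecordDataset(tf.io.gfile.glob('tfrecords/output-*'))  # hypothetical path
           .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
           .shuffle(10_000)
           .batch(256)
           .prefetch(tf.data.AUTOTUNE))

model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(dataset, epochs=10)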

Model evaluation

TBD
