This repository contains a Python-based chess engine built around the Monte Carlo Tree Search (MCTS) algorithm coupled with decoder-only Transformer models for reinforcement learning.
The goal of the project is a robust chess-playing program in which MCTS drives move selection and Transformer models handle state representation and reinforcement learning.
| Language | Deep Learning Framework | Algorithm | Models |
|---|---|---|---|
| Python | PyTorch | Monte Carlo Tree Search | Decoder-only Transformers |
- Piece Implementation:
  - Individual pieces coded:

    | Piece | Quantity (per side) |
    |---|---|
    | Pawn | 8 |
    | Knight | 2 |
    | Bishop | 2 |
    | Rook | 2 |
    | King | 1 |
    | Queen | 1 |

  - Piece representations: Black pieces as negative values, White pieces as positive values, empty squares as 0 (see the sketch below)
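For illustration, here is a minimal sketch of that signed-integer board encoding; the constant names and the `initial_board` helper are hypothetical, not code from this repository.

```python
import numpy as np

# Hypothetical piece codes: White is positive, Black is negative, empty squares are 0.
EMPTY, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING = 0, 1, 2, 3, 4, 5, 6

def initial_board() -> np.ndarray:
    """Return an 8x8 signed-integer array for the standard starting position."""
    back_rank = [ROOK, KNIGHT, BISHOP, QUEEN, KING, BISHOP, KNIGHT, ROOK]
    board = np.zeros((8, 8), dtype=np.int8)
    board[0, :] = back_rank                 # White back rank (positive codes)
    board[1, :] = PAWN                      # White pawns
    board[6, :] = -PAWN                     # Black pawns
    board[7, :] = [-p for p in back_rank]   # Black back rank (negative codes)
    return board

print(initial_board())
```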
- Class Structure:
  - Classes designed for pieces, display, and the chess board
  - Game rule-specific classes (e.g., pawn promotion, the fifty-move rule); see the sketch below
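A minimal sketch of how such classes could fit together; the names, fields, and methods (`Piece`, `Pawn`, `Board`, `legal_moves`) are illustrative assumptions rather than the repository's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Piece:
    color: int   # +1 for White, -1 for Black

    def legal_moves(self, board: "Board", square: tuple[int, int]) -> list[tuple[int, int]]:
        raise NotImplementedError

@dataclass
class Pawn(Piece):
    def legal_moves(self, board: "Board", square: tuple[int, int]) -> list[tuple[int, int]]:
        row, col = square
        step = (row + self.color, col)           # single push only; captures/en passant omitted
        if 0 <= step[0] < 8 and board.is_empty(step):
            return [step]
        return []

@dataclass
class Board:
    squares: dict = field(default_factory=dict)  # (row, col) -> Piece
    halfmove_clock: int = 0                      # counter backing the fifty-move rule

    def is_empty(self, square: tuple[int, int]) -> bool:
        return square not in self.squares
```

A display class and the remaining piece types would follow the same pattern.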
- User Interaction:
  - Input system for moves, or a potential GUI implementation (see the parsing sketch below)
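As one possible shape for the text-based input path, here is a small parser for coordinate-style input such as `e2e4`; the function name and the rank/file-to-index convention are assumptions for this sketch.

```python
def parse_move(text: str) -> tuple[tuple[int, int], tuple[int, int]]:
    """Parse coordinate notation such as 'e2e4' into ((from_row, from_col), (to_row, to_col))."""
    text = text.strip().lower()
    files, ranks = "abcdefgh", "12345678"
    if len(text) != 4 or text[0] not in files or text[1] not in ranks \
            or text[2] not in files or text[3] not in ranks:
        raise ValueError(f"unrecognized move {text!r}")
    return ((ranks.index(text[1]), files.index(text[0])),
            (ranks.index(text[3]), files.index(text[2])))

print(parse_move("e2e4"))   # ((1, 4), (3, 4))
```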
- MCTS Implementation:
  - End-game conditions checked
  - Neural network evaluates the position value and action (move) probabilities
  - Upper Confidence Bound used for move selection (see the sketch below)
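A minimal sketch of Upper-Confidence-Bound child selection, written in the PUCT form that combines a value estimate with the network's policy prior; the `Node` fields and the `c_puct` constant are assumptions for illustration.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float                                    # policy probability from the network
    visit_count: int = 0
    value_sum: float = 0.0
    children: dict = field(default_factory=dict)    # move -> Node

    def q(self) -> float:
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 1.5):
    """Pick the child maximizing Q + U (assumes the node has already been expanded)."""
    total_visits = sum(child.visit_count for child in node.children.values())
    best_move, best_score = None, -float("inf")
    for move, child in node.children.items():
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visit_count)
        score = child.q() + u
        if score > best_score:
            best_move, best_score = move, score
    return best_move, node.children[best_move]
```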
- Transformer Models:
  - Decoder-only architecture
  - Attention mechanism for feature learning
  - Tokenized piece representation with positional encoding (see the sketch below)
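A minimal PyTorch sketch of such a model: the 64 squares are tokenized, combined with a learned positional encoding, and passed through causally masked attention blocks. All sizes here (embedding width, layer count, the 13-token vocabulary, the 4672-move action space) are illustrative assumptions, not the repository's configuration.

```python
import torch
import torch.nn as nn

class BoardTransformer(nn.Module):
    """Decoder-only style Transformer over 64 board-square tokens (illustrative sizes)."""

    def __init__(self, vocab_size: int = 13, d_model: int = 128, n_heads: int = 4,
                 n_layers: int = 4, n_moves: int = 4672):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)           # tokenized pieces
        self.pos_emb = nn.Parameter(torch.zeros(1, 64, d_model))     # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.policy_head = nn.Linear(d_model, n_moves)               # action logits
        self.value_head = nn.Linear(d_model, 1)                      # outcome logit

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, 64) integer piece codes shifted to be non-negative
        x = self.token_emb(tokens) + self.pos_emb
        causal_mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.blocks(x, mask=causal_mask)                         # decoder-style causal attention
        summary = x[:, -1, :]                                        # last token summarizes the board
        return self.policy_head(summary), self.value_head(summary)   # raw logits for both heads

model = BoardTransformer()
policy_logits, value_logit = model(torch.randint(0, 13, (2, 64)))    # two dummy positions
print(policy_logits.shape, value_logit.shape)                        # (2, 4672) and (2, 1)
```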
- Output and Learning:
  - Model outputs: outcome probabilities and action probabilities
  - One-hot encoding of the selected action for classification targets
  - Reinforcement learning by minimizing a Binary Cross Entropy loss (see the training sketch below)
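A minimal sketch of one training step under these bullets, reusing the `BoardTransformer` sketched above; applying `BCEWithLogits`-style losses to both the one-hot action target and the game outcome is an interpretation of the description, not the repository's confirmed training code.

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, board_tokens, chosen_move_idx, outcome):
    """One update: BCE over a one-hot action target plus BCE over the game outcome.

    board_tokens:    (batch, 64) integer board encodings
    chosen_move_idx: (batch,)    index of the move selected by MCTS
    outcome:         (batch, 1)  game result mapped to [0, 1] (loss=0, draw=0.5, win=1)
    """
    policy_logits, value_logit = model(board_tokens)

    # One-hot encode the chosen action as the classification target.
    action_target = F.one_hot(chosen_move_idx, num_classes=policy_logits.size(-1)).float()

    policy_loss = F.binary_cross_entropy_with_logits(policy_logits, action_target)
    value_loss = F.binary_cross_entropy_with_logits(value_logit, outcome)
    loss = policy_loss + value_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with the BoardTransformer sketch above (all data here is dummy):
# model = BoardTransformer()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# training_step(model, optimizer,
#               torch.randint(0, 13, (4, 64)), torch.randint(0, 4672, (4,)),
#               torch.rand(4, 1))
```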
- Implement pieces: Pawns, Knights, Bishops, Rooks, King, Queen
- Define Class Structures: Piece classes, Display, Chess Board, Rule implementations
- Integrate MCTS: End-game checks, Neural Network integration
- Develop Transformer Model: Decoder architecture, Attention Mechanism, Tokenization
- Output and Learning Setup: Model output configuration, Reinforcement Learning implementation
- UCI Protocol Integration
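For the UCI item, here is a minimal sketch of the protocol handshake an engine needs to speak; the engine/author strings and the `search_best_move` hook into MCTS are hypothetical placeholders.

```python
import sys

def uci_loop():
    """Minimal UCI skeleton: respond to the standard handshake and search commands."""
    position = None
    for line in sys.stdin:
        command = line.strip()
        if command == "uci":
            print("id name MCTS-Transformer-Engine")   # placeholder engine name
            print("id author <author>")                # placeholder author
            print("uciok")
        elif command == "isready":
            print("readyok")
        elif command.startswith("position"):
            position = command                          # parse FEN / move list here
        elif command.startswith("go"):
            move = search_best_move(position)           # hypothetical MCTS entry point
            print(f"bestmove {move}")
        elif command == "quit":
            break
        sys.stdout.flush()

def search_best_move(position) -> str:
    return "e2e4"                                       # stub so the sketch runs standalone

if __name__ == "__main__":
    uci_loop()
```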