Open-Zero is a research project that aims to provide an open-source implementation of DeepMind's AlphaZero and MuZero methods, applied to the game of chess.
We use Deep Reinforcement Learning methods such as Asynchronous Advantage Actor-Critic (A3C).
Unlike synchronous methods, A3C uses multithreading to gather training data in parallel, which makes reaching promising results faster.
The AI instantiates as many workers as possible, each working on its own copy of the global network. Once a worker finishes a training episode, it pushes its update to the global network and starts a new episode from a copy of the latest global network. Besides speeding up training, this approach exposes the network to a wider variety of training data, which improves the quality of training and the final result.
The AI trains by playing against itself using A3C methods.
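To make the worker/global-network loop described above concrete, here is a minimal, hypothetical sketch of the A3C update pattern in PyTorch. The `PolicyValueNet` class, the fake training batch, the placeholder loss, and all hyperparameters are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch of the A3C update loop (illustrative, not Open-Zero's code).
import torch
import torch.nn as nn
import torch.multiprocessing as mp

class PolicyValueNet(nn.Module):
    """Tiny stand-in for the chess policy/value network (hypothetical)."""
    def __init__(self, n_inputs=64, n_actions=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_inputs, 128), nn.ReLU())
        self.policy = nn.Linear(128, n_actions)
        self.value = nn.Linear(128, 1)

    def forward(self, x):
        h = self.body(x)
        return self.policy(h), self.value(h)

def worker(global_net, optimizer, n_episodes=10):
    local_net = PolicyValueNet()
    for _ in range(n_episodes):
        # 1. Start each episode from a copy of the latest global weights.
        local_net.load_state_dict(global_net.state_dict())
        # 2. Play an episode (here: a fake batch instead of real self-play).
        states = torch.randn(32, 64)
        logits, values = local_net(states)
        # Placeholder loss standing in for the actor-critic objective.
        loss = logits.pow(2).mean() + values.pow(2).mean()
        # 3. Compute gradients locally, then apply them to the global network.
        local_net.zero_grad()
        loss.backward()
        for lp, gp in zip(local_net.parameters(), global_net.parameters()):
            gp.grad = lp.grad
        optimizer.step()

if __name__ == "__main__":
    global_net = PolicyValueNet()
    global_net.share_memory()  # share weights across worker processes
    optimizer = torch.optim.Adam(global_net.parameters(), lr=1e-4)
    procs = [mp.Process(target=worker, args=(global_net, optimizer))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```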
We can test the AI in multiple ways:
- Watch the AI play against itself
- Evaluate a game using Stockfish (a minimal sketch of this idea follows the list)
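As an illustration of the Stockfish evaluation idea, here is a minimal sketch using the python-chess library; this library and the example moves are assumptions for illustration, since the project itself is driven through its own tooling. It requires a Stockfish binary installed and on the PATH.

```python
# Minimal sketch: ask Stockfish to evaluate a position (illustrative only).
import chess
import chess.engine

board = chess.Board()
board.push_san("e4")  # example moves to reach a position worth evaluating
board.push_san("e5")

# Assumes a Stockfish binary is available on the PATH.
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    info = engine.analyse(board, chess.engine.Limit(depth=15))
    print("Stockfish evaluation (White's point of view):", info["score"].white())
```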
```sh
git clone https://github.com/PoCInnovation/Open-Zero.git
cd Open-Zero
docker build . -t openzero
```
The `launch-project.sh` script is the tool you use to do almost everything in this project.
Get the usage help by running:
```sh
docker run openzero
```
- Gino Ambigaipalan → Github
- Jean-Baptiste Debize → Github
- Nell Fauveau → Github
- Bogdan Guillemoles → Github