
TD-Gammon

Implementation of TD-Gammon in TensorFlow.

Before DeepMind tackled playing Atari games or built AlphaGo, there was TD-Gammon, the first algorithm to reach an expert level of play in backgammon. Gerald Tesauro published his paper in 1992 describing TD-Gammon as a neural network trained with reinforcement learning. It is referenced in both the Atari and AlphaGo research papers and helped lay the groundwork for many of the advances made in the last few years.

The code features eligibility traces on the gradients, which are an elegant way to assign credit to actions taken in the past.
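Concretely, a TD(λ) update keeps a per-weight trace that decays each step and accumulates the gradient of the value estimate, then applies the TD error through that trace rather than through the raw gradient. The sketch below illustrates the idea in modern TensorFlow; all names (value_net, lam, alpha, td_update) and values are illustrative assumptions, not this repo's actual code, which was written against the 2016 TensorFlow API.

    import tensorflow as tf

    # Illustrative sketch of TD(lambda) with eligibility traces on the gradients.
    # Names and hyperparameters here are hypothetical, not the repo's API.

    value_net = tf.keras.Sequential([
        tf.keras.layers.Dense(40, activation="sigmoid"),  # single hidden layer, as in Tesauro's network
        tf.keras.layers.Dense(1, activation="sigmoid"),   # scalar win-probability estimate
    ])
    value_net.build(input_shape=(None, 198))  # 198 inputs, the size of Tesauro's board encoding

    lam, alpha = 0.7, 0.1  # trace decay and learning rate (illustrative values)
    traces = [tf.zeros_like(w) for w in value_net.trainable_variables]

    def td_update(state, next_value):
        """One TD(lambda) step: decay the traces, add the current gradient, apply delta * trace."""
        global traces
        with tf.GradientTape() as tape:
            value = value_net(state[None, :])[0, 0]
        grads = tape.gradient(value, value_net.trainable_variables)
        delta = next_value - value  # TD error; gamma = 1 and the reward is folded into the final value
        traces = [lam * e + g for e, g in zip(traces, grads)]
        for w, e in zip(value_net.trainable_variables, traces):
            w.assign_add(alpha * delta * e)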

Training

  1. Install TensorFlow.
  2. Clone the repo: git clone https://github.com/fomorians/td-gammon.git && cd td-gammon
  3. Run training: python main.py

Play

To play against a trained model: python main.py --play --restore

Things to try

  • Compare with and without eligibility traces by replacing the trace with the unmodified gradient (see the sketch after this list).
  • Try different activation functions on the hidden layer.
  • Expand the board representation. Currently it uses the "compact" representation from the paper. A full board representation should remove some ambiguity between board states.
  • Increase the number of turns the agent looks ahead before making a move. The paper used 2-ply and 3-ply search, while this implementation only uses 1-ply.
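For the first item, only the trace update needs to change: with traces disabled, each step uses just the current gradient, which reduces TD(λ) to TD(0). A hypothetical toggle, continuing the earlier sketch:

    # Hypothetical flag for the first item above, reusing the names from the earlier sketch.
    use_traces = True

    if use_traces:
        traces = [lam * e + g for e, g in zip(traces, grads)]  # decayed sum of past gradients, TD(lambda)
    else:
        traces = list(grads)                                   # unmodified current gradient, TD(0)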