Commit

Update README
jimfleming committed Mar 27, 2016
1 parent 029f9d3 commit 4d10d8e
Showing 2 changed files with 11 additions and 4 deletions.
13 changes: 10 additions & 3 deletions README.md
@@ -2,9 +2,9 @@

Implementation of [TD-Gammon](http://www.bkgm.com/articles/tesauro/tdl.html) in TensorFlow.

-Before DeepMind + Atari there was TD-Gammon, an algorithm that combined reinforcement learning and neural networks to play backgammon at an intermediate level with raw features and expert level with hand-engineered features. This is an implementation using raw features: one-hot encoding of each point on the board.
+Before DeepMind tackled playing Atari games or built AlphaGo there was TD-Gammon, an algorithm that combined reinforcement learning with neural networks to play backgammon. While not as famous, it shares a lot of history with AlphaGo. Originally published in 1992 by Gerald Tesauro, it was the first algorithm to reach an expert level of play in backgammon. It is referenced in both the Atari and AlphaGo research papers and helped lay the groundwork for many of the advancements made in the last few years, including the notion of self-play seen in AlphaGo.

-The code also features [eligibility traces](https://webdocs.cs.ualberta.ca/~sutton/book/ebook/node87.html#fig:GDTDl) on the gradients which are an elegant way to assign credit to actions made in the past.
+The code features [eligibility traces](https://webdocs.cs.ualberta.ca/~sutton/book/ebook/node87.html#fig:GDTDl) on the gradients, which are an elegant way to assign credit to actions made in the past.
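The eligibility-trace update described above can be sketched as follows. This is a minimal NumPy sketch of the general TD(λ) rule, not code from this repository; all names and constants (`alpha`, `lam`, `td_step`, etc.) are illustrative:

```python
import numpy as np

# Sketch of a TD(lambda) eligibility-trace update on the gradients.
# Names and constants are illustrative, not taken from this repository.
alpha, lam = 0.1, 0.7        # step size and trace-decay rate

def td_step(w, trace, grad, delta):
    # Decay the trace, fold in the current gradient, then move the weights
    # along the trace scaled by the TD error: gradients from past moves
    # keep receiving exponentially less credit instead of none at all.
    trace = lam * trace + grad
    w = w + alpha * delta * trace
    return w, trace

w, trace = np.zeros(4), np.zeros(4)
grad = np.ones(4)            # gradient of the value estimate w.r.t. w
w, trace = td_step(w, trace, grad, delta=0.5)
print(w)                     # each weight moves by alpha * delta = 0.05
```

With λ = 0 this reduces to a plain one-step TD update; larger λ spreads each TD error further back over the moves that led to it.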

## Training

@@ -24,4 +24,11 @@ The code also features [eligibility traces](https://webdocs.cs.ualberta.ca/~sutt

# Play

-To play against a trained model: `python main.py --play`
+To play against a trained model: `python main.py --play --restore`

+## Things to try
+
+- Compare with and without eligibility traces by replacing the trace with the unmodified gradient.
+- Try different activation functions on the hidden layer.
+- Expand the board representation. Currently it uses the "compact" representation from the paper. A full board representation should remove some ambiguity between board states.
+- Increase the number of turns the agent will look at before making a move. The paper used a 2-ply and 3-ply search while this implementation only uses 1-ply.
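The first suggestion can be sketched in a few lines: with a trace-decay rate of zero, the eligibility trace collapses to the unmodified current gradient, so one update rule covers both variants. This is an illustrative NumPy sketch, not code from this repository:

```python
import numpy as np

# Compare a TD update with eligibility traces (lam > 0) against one using
# the unmodified gradient (lam = 0). Names are illustrative, not from this
# repository.
def td_update(w, trace, grad, delta, alpha=0.1, lam=0.7):
    trace = lam * trace + grad           # decayed sum of past gradients
    return w + alpha * delta * trace, trace

grad = np.array([1.0, 2.0, 3.0])
results = {}
for variant, lam in (("traces", 0.7), ("no traces", 0.0)):
    w, trace = np.zeros(3), np.zeros(3)
    for delta in (1.0, 0.5):             # TD errors from two successive moves
        w, trace = td_update(w, trace, grad, delta, lam=lam)
    results[variant] = w

print(results["traces"])     # [0.185, 0.37, 0.555]: the earlier move is still credited
print(results["no traces"])  # [0.15, 0.3, 0.45]: only the current gradient counts
```

The two variants agree on the first step (the trace starts at zero) and diverge from the second step on, which is what the comparison would measure during training.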
2 changes: 1 addition & 1 deletion model.py
@@ -200,7 +200,7 @@ def train(self):
players = [TDAgent(Game.TOKENS[0], self), TDAgent(Game.TOKENS[1], self)]

validation_interval = 1000
-episodes = 10000
+episodes = 5000

for episode in range(episodes):
if episode != 0 and episode % validation_interval == 0:
