From 4d10d8ef05242a1ad9f8231e2ef35d17e780e793 Mon Sep 17 00:00:00 2001
From: Jim Fleming
Date: Sun, 27 Mar 2016 11:43:34 -0700
Subject: [PATCH] Update README

---
 README.md | 13 ++++++++++---
 model.py  |  2 +-
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 56dbfbf..0d2fb04 100644
--- a/README.md
+++ b/README.md
@@ -2,9 +2,9 @@
 
 Implementation of [TD-Gammon](http://www.bkgm.com/articles/tesauro/tdl.html) in TensorFlow.
 
-Before DeepMind + Atari there was TD-Gammon, an algorithm that combined reinforcement learning and neural networks to play backgammon at an intermediate level with raw features and expert level with hand-engineered features. This is an implementation using raw features: one-hot encoding of each point on the board.
+Before DeepMind tackled Atari games or built AlphaGo, there was TD-Gammon, an algorithm that combined reinforcement learning with neural networks to play backgammon. While not as famous, it shares a lot of history with AlphaGo. Originally published in 1992 by Gerald Tesauro, it was the first algorithm to reach an expert level of play in backgammon. It is referenced in both the Atari and AlphaGo research papers and helped lay the groundwork for many of the advancements of the last few years, including the notion of self-play seen in AlphaGo.
 
-The code also features [eligibility traces](https://webdocs.cs.ualberta.ca/~sutton/book/ebook/node87.html#fig:GDTDl) on the gradients which are an elegant way to assign credit to actions made in the past.
+The code features [eligibility traces](https://webdocs.cs.ualberta.ca/~sutton/book/ebook/node87.html#fig:GDTDl) on the gradients, which are an elegant way to assign credit to actions made in the past.
 
 ## Training
 
@@ -24,4 +24,11 @@ The code also features [eligibility traces](https://webdocs.cs.ualberta.ca/~sutt
 
 # Play
 
-To play against a trained model: `python main.py --play`
+To play against a trained model: `python main.py --play --restore`
+
+## Things to try
+
+- Compare with and without eligibility traces by replacing the trace with the unmodified gradient.
+- Try different activation functions on the hidden layer.
+- Expand the board representation. Currently it uses the "compact" representation from the paper. A full board representation should remove some ambiguity between board states.
+- Increase the number of turns the agent will look at before making a move. The paper used a 2-ply and 3-ply search while this implementation only uses 1-ply.
diff --git a/model.py b/model.py
index e93dec5..ff65ab7 100644
--- a/model.py
+++ b/model.py
@@ -200,7 +200,7 @@ def train(self):
         players = [TDAgent(Game.TOKENS[0], self), TDAgent(Game.TOKENS[1], self)]
 
         validation_interval = 1000
-        episodes = 10000
+        episodes = 5000
 
         for episode in range(episodes):
             if episode != 0 and episode % validation_interval == 0:
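
For readers unfamiliar with the eligibility traces mentioned in the README changes above, here is a minimal, framework-agnostic NumPy sketch of a TD(lambda) update applied to gradients. It is not code from this repository or from the patch; every name in it (td_lambda_step, theta, traces, alpha, lam, gamma) is illustrative only.

```python
import numpy as np

# Illustrative sketch of TD(lambda) eligibility traces on gradients.
# Not code from this repo; all names are made up for the example.

def td_lambda_step(theta, traces, grad_v, reward, v_next, v_curr,
                   alpha=0.1, gamma=1.0, lam=0.7):
    """One TD(lambda) update of the value-function parameters.

    theta  -- parameter vector of the value estimator
    traces -- eligibility traces, same shape as theta
    grad_v -- gradient of the value estimate w.r.t. theta at the current state
    """
    # Decay the old credit and accumulate the gradient of the current prediction.
    traces = lam * traces + grad_v
    # TD error: how much the new estimate (plus reward) differs from the old one.
    delta = reward + gamma * v_next - v_curr
    # Nudge each parameter in proportion to its accumulated past influence.
    theta = theta + alpha * delta * traces
    return theta, traces

# Toy usage with a linear value function v(s) = theta . s,
# where the gradient of v w.r.t. theta is simply the state vector.
theta = np.zeros(4)
traces = np.zeros(4)
state = np.array([1.0, 0.0, 0.0, 1.0])
next_state = np.array([0.0, 1.0, 1.0, 0.0])
v_curr, v_next = theta @ state, theta @ next_state
theta, traces = td_lambda_step(theta, traces, grad_v=state,
                               reward=0.0, v_next=v_next, v_curr=v_curr)
```

The decayed trace is what lets a single TD error credit moves made many turns earlier: setting lam to 0 reduces the update to plain one-step TD on the unmodified gradient, which is exactly the comparison suggested in the patch's "Things to try" list.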