This package provides a Lasagne/Theano-based implementation of the deep Q-learning algorithm described in:
Playing Atari with Deep Reinforcement Learning.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis
Antonoglou, Daan Wierstra, Martin Riedmiller.
NIPS Deep Learning Workshop, 2013.

and in:

Mnih, Volodymyr, et al. "Human-level control through deep reinforcement
learning." Nature 518.7540 (2015): 529-533.
Here is a video showing a trained network playing breakout (using an earlier version of the code):
Dependencies:

- A reasonably modern NVIDIA GPU
- Theano (https://github.com/Theano/Theano)
- Lasagne (https://github.com/Lasagne/Lasagne)
- Pylearn2 (https://github.com/lisa-lab/pylearn2)
- Arcade Learning Environment (https://github.com/mgbellemare/Arcade-Learning-Environment)
dep_script.sh can be used to install all dependencies under Ubuntu.
Use the scripts run_nips.py or run_nature.py to start all the
necessary processes:
$ ./run_nips.py --rom breakout
$ ./run_nature.py --rom breakout
The run_nips.py script uses parameters consistent with the original
NIPS workshop paper. Training should take 2-4 days to complete. The
run_nature.py script uses parameters consistent with the Nature
paper. The final policies should be better, but training will take
6-10 days to finish.
Either script will store output files in a folder prefixed with the
name of the ROM. Pickled versions of the network objects are stored
after every epoch. The file results.csv will contain the testing
output. You can plot the progress by executing:
$ python plot_results.py breakout_05-28-17-09_0p00025_0p99/results.csv
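If you want to work with the numbers directly instead of (or before) plotting, results.csv can be parsed with a few lines of Python. This is only a sketch: the column names used below (`epoch`, `mean_score`) are illustrative assumptions, so check the header of your results.csv and adjust accordingly.

```python
import csv
import io

def read_results(fileobj):
    """Parse a results CSV into (epoch, mean_score) tuples.

    NOTE: the column names 'epoch' and 'mean_score' are assumptions;
    they may differ from the header actually written by this package.
    """
    return [(int(row["epoch"]), float(row["mean_score"]))
            for row in csv.DictReader(fileobj)]

# Demonstration with an in-memory CSV; in practice you would open the
# results.csv inside the ROM-prefixed output folder instead.
sample = io.StringIO("epoch,mean_score\n1,1.5\n2,3.2\n3,7.8\n")
print(read_results(sample))  # [(1, 1.5), (2, 3.2), (3, 7.8)]
```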
After training completes, you can watch the network play using the
ale_run_watch.py script:
$ python ale_run_watch.py breakout_05-28-17-09_0p00025_0p99/network_file_99.pkl
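The per-epoch checkpoints are ordinary pickle files, so they can also be loaded directly in Python for inspection. The snippet below uses a stand-in object to show the round trip; unpickling a real network_file_NN.pkl additionally requires the deep_q_rl classes to be importable (i.e., run it from the source directory).

```python
import pickle

# Stand-in for a saved checkpoint; a real run would instead read the
# network_file_NN.pkl written into the ROM-prefixed output folder.
checkpoint = {'epoch': 99, 'params': [0.1, 0.2]}
blob = pickle.dumps(checkpoint)

# Loading works the same way for the files on disk:
# with open('.../network_file_99.pkl', 'rb') as f:
#     network = pickle.load(f)
restored = pickle.loads(blob)
print(restored['epoch'])  # 99
```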
Performance note: setting allow_gc=False in THEANO_FLAGS or in the
.theanorc file significantly improves performance at the expense of a
slight increase in memory usage on the GPU.
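For example, the flag can be set in a .theanorc file in your home directory. This is a minimal sketch showing only the garbage-collection setting; other entries (device, floatX, etc.) depend on your setup and are omitted here.

```ini
[global]
allow_gc = False
```

The same effect can be had per-invocation by prefixing the command with THEANO_FLAGS='allow_gc=False'.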
The deep Q-learning web forum can be used for discussion and advice related to deep Q-learning in general and this package in particular.
See also:

- This is the code DeepMind used for the Nature paper. The license only
  permits the code to be used for "evaluating and reviewing" the claims
  made in the paper.
- A working Caffe-based implementation. (I haven't tried it, but there
  is a video of the agent playing Pong successfully.)
- Defunct? As far as I know, this package was never fully functional.
  The project is described here:
  http://robohub.org/artificial-general-intelligence-that-plays-atari-video-games-how-did-deepmind-do-it/
- An almost-working implementation developed during Spring 2014 by my
  student Brian Brown. I haven't reused his code, but Brian and I worked
  together to puzzle through some of the blank areas of the original
  paper.