Reproduce the performance of the following reinforcement learning methods:
+ Nature-DQN in: Human-level Control Through Deep Reinforcement Learning
+ Double-DQN in: Deep Reinforcement Learning with Double Q-learning
+ Dueling-DQN in: Dueling Network Architectures for Deep Reinforcement Learning (the Double- and Dueling-DQN ideas are sketched right after this list)
+ A3C in: Asynchronous Methods for Deep Reinforcement Learning. (I used a modified version where each batch contains transitions from different simulators, which I call "Batch-A3C".)
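For illustration, here is a minimal NumPy sketch of the two ideas those variants add on top of Nature-DQN. This is not the code in `DQN.py`; the function names and shapes are assumptions made for this example.

```python
import numpy as np

def dueling_q(value, advantage):
    """Dueling-DQN head: combine a state-value stream V(s) of shape
    (batch, 1) and an advantage stream A(s, a) of shape (batch, n_actions)
    into Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantage - advantage.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double-DQN target: the online network selects the argmax action,
    while the target network evaluates it, reducing overestimation bias."""
    best_action = q_online_next.argmax(axis=-1)
    batch_idx = np.arange(len(best_action))
    q_eval = q_target_next[batch_idx, best_action]
    return reward + gamma * (1.0 - done) * q_eval
```

Nature-DQN would instead bootstrap from `q_target_next.max(axis=-1)`; the Double-DQN change is only the decoupled action selection.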
Install dependencies:

```
pip install 'gym[atari]'
```
With ALE (paper's setting):

Download an Atari ROM (e.g. breakout.bin), then start training:

```
./DQN.py --env breakout.bin
# use --algo to select other DQN algorithms; see -h for more options
```
Watch the agent play:

```
# Download a pretrained model, or use one you trained yourself:
wget http://models.tensorpack.com/DeepQNetwork/DoubleDQN-breakout.bin.npz
./DQN.py --env breakout.bin --task play --load DoubleDQN-breakout.bin.npz
```
Evaluate a trained model over 50 episodes:

```
./DQN.py --env breakout.bin --task eval --load DoubleDQN-breakout.bin.npz
```
With gym's Atari:

Install gym and atari_py. Use `--env BreakoutDeterministic-v4` instead of the ROM file.
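As a quick sanity check of the gym setup, the snippet below runs a random agent in the same env that env id refers to. This is illustrative only, not `DQN.py`, and it assumes the older gym API where `step` returns four values.

```python
import gym

# Create the env that --env BreakoutDeterministic-v4 refers to.
# The "Deterministic-v4" id uses a fixed frame skip of 4.
env = gym.make("BreakoutDeterministic-v4")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    # Random actions, just to confirm the Atari backend works.
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward
print("random-agent episode reward:", total_reward)
env.close()
```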
The performance claimed in the papers can be reproduced on the several games I've tested.
On one GTX 1080Ti, the ALE version took ~2 hours of training to reach the maximum score of 21 on Pong, and ~10 hours to reach a score of 400 on Breakout. It runs at 100 batches (6.4k trained frames, 400 seen frames, 1.6k game frames) per second on a GTX 1080Ti. This is likely the fastest open-source TF implementation of DQN.
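The three frame rates in that sentence are consistent under standard DQN hyper-parameters; the batch size and frame skip below are assumptions made for the arithmetic, not values read out of `DQN.py`:

```python
batches_per_sec = 100
batch_size = 64                                # assumed training batch size
frame_skip = 4                                 # assumed ALE frame skip
trained_frames = batches_per_sec * batch_size  # 6400 frames/s sampled from replay
seen_frames = 400                              # frames/s the agent observes
game_frames = seen_frames * frame_skip         # 1600 emulator frames/s
assert (trained_frames, game_frames) == (6400, 1600)
```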
A3C code and models for Atari games in OpenAI Gym are released in `examples/A3C-Gym`.