A3C code and models for Atari games in gym
Multi-GPU version of the A3C algorithm from Asynchronous Methods for Deep Reinforcement Learning.
Results of the same code trained on 47 different Atari games were uploaded to OpenAI Gym. Most of them were the best reproducible results on gym at the time. However, OpenAI later removed the leaderboard from their site entirely.
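For reference, the A3C update combines a policy-gradient term weighted by the advantage, a value-regression term, and an entropy bonus. Below is a minimal NumPy sketch of that per-batch loss in its generic form from the paper; it is not the code used in train-atari.py, and the shapes and coefficient values are illustrative assumptions.

```python
import numpy as np

def a3c_loss(logits, values, actions, returns,
             entropy_beta=0.01, value_coef=0.5):
    """logits: (B, num_actions) policy logits, values: (B,) V(s) estimates,
    actions: (B,) int actions taken, returns: (B,) n-step discounted returns."""
    # Softmax policy and its log-probabilities.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    log_probs = np.log(probs + 1e-8)

    advantage = returns - values                        # A(s,a) ~= R - V(s)
    chosen = log_probs[np.arange(len(actions)), actions]
    policy_loss = -(chosen * advantage).mean()          # policy-gradient term
    entropy = -(probs * log_probs).sum(axis=1).mean()   # encourages exploration
    value_loss = ((returns - values) ** 2).mean()       # critic regression

    return policy_loss - entropy_beta * entropy + value_coef * value_loss
```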
To train on an Atari game:
./train-atari.py --env Breakout-v0 --gpu 0
In each iteration it trains on a batch of 128 new states.
The speed is about 6~10 iterations/s on 1 GPU plus 12+ CPU cores.
With 2 TitanX GPUs and 20+ CPU cores, setting SIMULATOR_PROC=240, PREDICT_BATCH_SIZE=30, and PREDICTOR_THREAD_PER_GPU=6 improves the speed to about 16 it/s (2K images/s).
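These names appear to be module-level constants inside train-atari.py; the sketch below shows the kind of edit meant above (verify the exact names in your copy of the script).

```python
# Hypothetical edit inside train-atari.py -- names taken from the text above,
# values tuned for 2 GPUs + 20+ CPU cores. Verify the actual constant names.
SIMULATOR_PROC = 240           # number of parallel game-simulator processes
PREDICT_BATCH_SIZE = 30        # states batched per inference call
PREDICTOR_THREAD_PER_GPU = 6   # prediction threads per GPU
```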
Note that the network architecture is larger than what's used in the original paper.
The pretrained models were all trained with 4 GPUs for about 2 days. On simple games such as Breakout, however, you can get good performance within several hours. Also note that multiple GPUs don't give an obvious speedup here, because the bottleneck in this implementation is simulation rather than computation.
Some practical notes:
- Prefer Python 3; Windows is not supported.
- Training at a significantly slower speed (e.g. on CPU) will result in a very bad score, probably because of the slightly off-policy implementation.
- Occasionally, processes may not get terminated completely. If you're using Linux, install python-prctl to prevent this.
To test a model:
Download models from the model zoo.
Watch the agent play:
./train-atari.py --task play --env Breakout-v0 --load Breakout-v0.npz
Dump some videos:
./train-atari.py --task dump_video --load Breakout-v0.npz --env Breakout-v0 --output output_dir --episode 3
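If you only want to sanity-check that video recording works in your gym installation, here is a minimal sketch using gym.wrappers.Monitor with a random agent standing in for the pretrained model (requires ffmpeg; the API shown is from older gym releases).

```python
import gym
from gym.wrappers import Monitor  # available in older gym releases

env = Monitor(gym.make('Breakout-v0'), 'output_dir', force=True)
for _ in range(3):                      # record 3 episodes, as with --episode 3
    obs, done = env.reset(), False
    while not done:
        # Random actions only; the real script feeds observations
        # through the trained policy loaded from the .npz file.
        obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```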
The table below lists the available pretrained models and their scores (averaged over 100 episodes), with submission links. The old submission site is no longer maintained, so the links may become invalid at any time.
All models above are trained with the -v0 variant of Atari games.
Note that this variant is quite different from the settings in DeepMind papers, so the scores are not directly comparable.
The most notable differences are:
- Each action is randomly repeated 2~4 times.
- Inputs are RGB instead of greyscale.
- An episode is limited to 60000 steps.
- Losing a life is not the end of an episode.
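A quick way to confirm a couple of these properties directly from gym (assuming an older gym release where Breakout-v0 is still registered):

```python
import gym

env = gym.make('Breakout-v0')
print(env.observation_space.shape)   # (210, 160, 3): RGB frames, not greyscale
print(env.unwrapped.frameskip)       # (2, 5): each action repeated 2~4 times
```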
Also see the DQN implementation here.