A3C code and models for Atari games in gym
Multi-GPU version of the A3C algorithm in Asynchronous Methods for Deep Reinforcement Learning.
Results of the code trained on 47 different Atari games were uploaded to OpenAI Gym and are available for download. Most of them were the best reproducible results on Gym at the time. However, OpenAI has since removed the leaderboard from their site.
To train on an Atari game:
./train-atari.py --env Breakout-v0 --gpu 0
In each iteration it trains on a batch of 128 new states. The speed is about 20 iterations/s (2.5k images/s) on 1 V100 GPU plus 12+ CPU cores. Note that the network architecture is larger than what's used in the original paper.
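The quoted throughput numbers are consistent with each other, as a quick back-of-the-envelope check shows:

```python
# Sanity-check the claimed throughput: each iteration consumes a batch of
# 128 new states, at roughly 20 iterations per second.
batch_size = 128
iterations_per_sec = 20
images_per_sec = batch_size * iterations_per_sec
print(images_per_sec)  # 2560, i.e. the quoted ~2.5k images/s
```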
The pretrained models were all trained with 4 GPUs for about 2 days. However, on simple games like Breakout you can get decent performance within several hours; for example, it takes only 2 hours on a V100 to reach a 400 average score on Breakout.
Some practical notes:
- Prefer Python 3; Windows not supported.
- Training at a significantly slower speed (e.g. on CPU) will result in a very bad score, probably because of the slightly off-policy implementation.
- Occasionally, processes may not get terminated completely. If you're using Linux, install python-prctl to prevent this.
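One common way python-prctl helps here is via `prctl.set_pdeathsig`, which asks the Linux kernel to signal a child process when its parent dies, so stray simulator workers cannot outlive the trainer. A minimal sketch (the `die_with_parent` helper is illustrative, not part of this repo):

```python
import signal

try:
    import prctl  # provided by the python-prctl package (Linux only)
except ImportError:
    prctl = None  # fall back to doing nothing on other platforms

def die_with_parent():
    """Ask Linux to SIGKILL this process when its parent exits.

    Call this near the start of each worker process so that
    orphaned processes get cleaned up automatically.
    """
    if prctl is not None:
        prctl.set_pdeathsig(signal.SIGKILL)
```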
To test a model:
Download models from model zoo.
Watch the agent play:
./train-atari.py --task play --env Breakout-v0 --load Breakout-v0.npz
Dump some videos:
./train-atari.py --task dump_video --load Breakout-v0.npz --env Breakout-v0 --output output_dir --episode 3
This table lists available pretrained models and scores (averaged over 100 episodes), with their submission links. The old submission site is no longer maintained, so the links may become invalid at any time.
All models above are trained on the -v0 variant of Atari games.
Note that this variant is quite different from the setup used in DeepMind papers, so the scores are not directly comparable.
The most notable differences are:
- Each action is randomly repeated 2~4 times.
- Inputs are RGB instead of greyscale.
- An episode is limited to 60000 steps.
- Losing a life does not end an episode.
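The first difference, random action repeat, can be sketched as a tiny wrapper. This is an illustrative stand-in, not the real gym API: `env` here is assumed to be any object with a `step(action) -> (obs, reward, done)` method.

```python
import random

class RandomActionRepeat:
    """Sketch of the -v0 behaviour: each action chosen by the agent is
    repeated a random 2-4 times, so the effective control frequency
    that the agent experiences is stochastic."""

    def __init__(self, env, repeat_range=(2, 4)):
        self.env = env
        self.lo, self.hi = repeat_range

    def step(self, action):
        total_reward = 0.0
        obs, done = None, False
        # randint is inclusive on both ends, so this repeats 2, 3 or 4 times
        for _ in range(random.randint(self.lo, self.hi)):
            obs, reward, done = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done
```

This stochastic frame skip (versus DeepMind's fixed skip of 4) is one reason scores on -v0 are not comparable to published numbers.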
Also see the DQN implementation in tensorpack.