![breakout](breakout.jpg)

video demo

Reproduce the performance of DQN and its variants on Atari games.
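These methods all train from past transitions stored in an experience-replay buffer (implemented in this directory's `expreplay.py`). A minimal sketch of the idea, not this repo's actual implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size FIFO buffer of (state, action, reward, next_state, done) tuples."""

    def __init__(self, capacity):
        # deque with maxlen silently evicts the oldest transition when full
        self.buf = deque(maxlen=capacity)

    def add(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        # uniform random minibatch, as in the original DQN paper
        return random.sample(self.buf, batch_size)
```

Sampling uniformly from a large buffer decorrelates consecutive frames, which is what makes Q-learning with a neural network stable enough to train.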

## Usage

Install dependencies with `pip install 'gym[atari]'`.

With ALE (paper's setting):

Download an Atari ROM, e.g.:

```
wget https://github.com/openai/atari-py/raw/gdb/atari_py/atari_roms/breakout.bin
```

Start training:

```
./DQN.py --env breakout.bin
# use `--algo` to select other DQN algorithms. See `-h` for more options.
```
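The `--algo` flag selects among DQN variants; the key difference between vanilla DQN and Double DQN is how the bootstrap target is computed. A NumPy sketch of that difference (function names are illustrative, not this repo's API):

```python
import numpy as np

def dqn_target(reward, done, q_next_target, gamma=0.99):
    # Vanilla DQN: the target network both selects and evaluates the
    # next action, which tends to over-estimate Q-values.
    return reward + (1.0 - done) * gamma * np.max(q_next_target)

def double_dqn_target(reward, done, q_next_target, q_next_online, gamma=0.99):
    # Double DQN: the online network selects the action,
    # the target network evaluates it.
    a = int(np.argmax(q_next_online))
    return reward + (1.0 - done) * gamma * q_next_target[a]
```

When the two networks disagree on the best action, the Double DQN target is lower, which counteracts the over-estimation bias of the max operator.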

Watch the agent play:

```
# Download pretrained models or use one you trained:
wget http://models.tensorpack.com/DeepQNetwork/DoubleDQN-breakout.bin.npz
./DQN.py --env breakout.bin --task play --load DoubleDQN-breakout.bin.npz
```

Evaluate over 50 episodes:

```
./DQN.py --env breakout.bin --task eval --load DoubleDQN-breakout.bin.npz
```

With gym's Atari:

Install gym and atari_py, then use `--env BreakoutDeterministic-v4` instead of the ROM file.

## Performance

The performance claimed in the paper can be reproduced on the several games I have tested.

### DQN

| Environment | Avg Score | Download |
|:--|--:|:-:|
| breakout.bin | 465 | ⬇️ |
| seaquest.bin | 8686 | ⬇️ |
| ms_pacman.bin | 3323 | ⬇️ |
| beam_rider.bin | 15835 | ⬇️ |

## Speed

On one GTX 1080Ti, the ALE version took ~2 hours of training to reach the maximum score of 21 on Pong, and ~10 hours to reach a score of 400 on Breakout. It runs at 100 batches (6.4k trained frames, 400 seen frames, 1.6k game frames) per second. This is likely the fastest open-source TensorFlow implementation of DQN.
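The three frame counts above are consistent with each other, assuming a batch size of 64 and ALE frame-skip of 4 (both are assumptions here; they are the standard DQN settings, not stated in this paragraph):

```python
batches_per_sec = 100
batch_size = 64   # assumed: standard DQN minibatch size
frame_skip = 4    # assumed: standard ALE frame skip

# frames sampled from the replay buffer for gradient updates
trained_frames = batches_per_sec * batch_size   # 6.4k per second
# new frames the agent observes, from the text above
seen_frames = 400
# raw emulator frames, since each observed frame repeats an action frame_skip times
game_frames = seen_frames * frame_skip          # 1.6k per second

print(trained_frames, game_frames)
```

So every observed frame is replayed for training about 16 times (6400 / 400), a typical replay ratio for DQN.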

A3C code and models for Atari games in OpenAI Gym are released in `examples/A3C-Gym`.
