Atari 2600: Pong with DQN & Prioritized Experience Replay
=========================================================

In this notebook we solve the Pong environment using a version of a :doc:`DQN </examples/stubs/dqn>` agent, trained with a :class:`PrioritizedReplayBuffer <coax.experience_replay.PrioritizedReplayBuffer>` instead of the standard :class:`SimpleReplayBuffer <coax.experience_replay.SimpleReplayBuffer>`.
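To make the difference concrete, here is a minimal sketch of proportional prioritized experience replay. This is an illustrative toy implementation, not coax's actual ``PrioritizedReplayBuffer``: transitions are sampled with probability proportional to their TD error raised to a power ``alpha``, and importance-sampling weights (controlled by ``beta``) correct the resulting bias. All names and parameters below are illustrative assumptions.

```python
import numpy as np


class ToyPrioritizedReplayBuffer:
    """Toy proportional prioritized replay (sketch; not coax's implementation)."""

    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity = capacity
        self.alpha = alpha  # how strongly priorities skew the sampling
        self.beta = beta    # strength of the importance-sampling correction
        self.storage = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition, td_error=1.0):
        # New transitions get priority |td_error|**alpha so surprising
        # transitions are replayed more often.
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition
        self.priorities[self.pos] = (abs(td_error) + 1e-6) ** self.alpha
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, rng=np.random):
        p = self.priorities[:len(self.storage)]
        probs = p / p.sum()
        idx = rng.choice(len(self.storage), size=batch_size, p=probs)
        # Importance-sampling weights undo the bias of non-uniform sampling;
        # normalizing by the max keeps them in (0, 1].
        weights = (len(self.storage) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        return idx, [self.storage[i] for i in idx], weights

    def update(self, idx, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        self.priorities[idx] = (np.abs(td_errors) + 1e-6) ** self.alpha
```

In a DQN training loop you would ``add`` each transition with its TD error, ``sample`` minibatches (scaling the loss by the returned weights), and ``update`` the sampled priorities after each gradient step.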

This notebook periodically generates GIFs, so that we can inspect how the training is progressing.

After a few hundred episodes, this is what you can expect:

(Animated GIF: beating Atari 2600 Pong after a few hundred episodes.)


dqn_per.py

Open in Google Colab