If you have any questions or want to report a bug, please open an issue instead of emailing me directly.

Modularized implementation of popular deep RL algorithms in PyTorch.
Easy switch between toy tasks and challenging games.

Implemented algorithms:

  • (Double/Dueling/Prioritized) Deep Q-Learning (DQN)
  • Categorical DQN (C51)
  • Quantile Regression DQN (QR-DQN)
  • (Continuous/Discrete) Synchronous Advantage Actor Critic (A2C)
  • Synchronous N-Step Q-Learning (N-Step DQN)
  • Deep Deterministic Policy Gradient (DDPG)
  • Proximal Policy Optimization (PPO)
  • The Option-Critic Architecture (OC)
  • Twin Delayed DDPG (TD3)
  • ReverseRL/COF-PAC/GradientDICE/Bi-Res-DDPG/DAC/Geoff-PAC/QUOTA/ACE
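To illustrate one of the listed algorithms, here is a minimal sketch of the Double DQN target computation (the online network selects the greedy action, the target network evaluates it). The function name and NumPy-based signature are illustrative, not this codebase's API:

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN targets: select the next action with the online network,
    evaluate it with the target network to reduce overestimation bias."""
    greedy_actions = np.argmax(q_online_next, axis=1)
    next_values = q_target_next[np.arange(len(greedy_actions)), greedy_actions]
    # Bootstrapped target: r + gamma * Q_target(s', argmax_a Q_online(s', a)),
    # masked to zero at terminal states.
    return rewards + gamma * (1.0 - dones) * next_values
```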

The DQN agent, as well as C51 and QR-DQN, has an asynchronous actor for data generation and an asynchronous replay buffer for transferring data to the GPU. Using 1 RTX 2080 Ti and 3 threads, the DQN agent completes 10M steps (40M frames, 2.5M gradient updates) on Breakout within 6 hours.
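The asynchronous actor/replay pattern above can be sketched as a producer-consumer pipeline; this is a simplified, generic illustration (the names and the dummy transition are hypothetical, not this repo's implementation):

```python
import queue
import threading

def run_async_pipeline(num_steps=100):
    """One thread generates transitions while the main thread consumes them,
    so environment stepping and gradient updates can overlap in time."""
    transitions = queue.Queue(maxsize=32)

    def actor():
        for _ in range(num_steps):
            # A real actor would step the environment and enqueue
            # (state, action, reward, next_state) tuples.
            transitions.put(('state', 'action', 1.0, 'next_state'))
        transitions.put(None)  # sentinel: actor finished

    threading.Thread(target=actor, daemon=True).start()

    consumed = 0
    while True:
        item = transitions.get()
        if item is None:
            break
        consumed += 1  # a real learner would sample a batch and update here
    return consumed
```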

Dependencies:

  • PyTorch v1.5.1
  • See Dockerfile and requirements.txt for more details

Usage contains examples for all the implemented algorithms.
Dockerfile contains the environment for generating the curves below.
Please use this bibtex if you want to cite this repo:

    @misc{deeprl,
      author = {Zhang, Shangtong},
      title = {Modularized Implementation of Deep RL Algorithms in PyTorch},
      year = {2018},
      publisher = {GitHub},
      journal = {GitHub Repository},
      howpublished = {\url{}},
    }
Curves (commit 9e811e)

BreakoutNoFrameskip-v4 (1 run)



  • DDPG/TD3 evaluation performance (5 runs, mean + standard error)

  • PPO online performance (5 runs, mean + standard error, smoothed by a window of size 10)
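The aggregation used for these curves (per-step mean across runs, standard error, and a moving-average smoothing window) can be sketched as follows; the function names are illustrative, not part of this codebase:

```python
import numpy as np

def mean_and_stderr(runs):
    """Aggregate several training runs: per-step mean and standard error
    (sample std / sqrt(number of runs))."""
    runs = np.asarray(runs, dtype=float)
    mean = runs.mean(axis=0)
    stderr = runs.std(axis=0, ddof=1) / np.sqrt(runs.shape[0])
    return mean, stderr

def smooth(curve, window=10):
    """Moving average over the given window (e.g. size 10 for the PPO curve)."""
    kernel = np.ones(window) / window
    return np.convolve(curve, kernel, mode='valid')
```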


Code of My Papers

They are located in other branches of this repo and serve as good examples of how to use this codebase.
