
Deep-RL-Toolkit


Overview

Deep RL Toolkit is a flexible and highly efficient reinforcement learning framework. It is developed for practitioners and offers the following advantages:

  • Reproducible: provides algorithms that reliably reproduce the results of many influential reinforcement learning algorithms.

  • Extensible: build new algorithms quickly by inheriting the abstract classes in the framework (see the sketch after this list).

  • Reusable: algorithms provided in the repository can be adapted directly to a new task by defining a forward network; the training mechanism is built automatically.

  • Elastic: computing resources on the cloud can be allocated elastically and automatically.

  • Lightweight: the core code is under 1,000 lines (see the Demo).

  • Stable: more stable than Stable Baselines 3, thanks to various ensemble methods.
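The pattern behind the Extensible and Reusable points is sketched below. This is purely illustrative: the class names (QNetwork, MyDQNAgent) are hypothetical placeholders, not the toolkit's actual API; the real abstract classes live in the repository.

# Illustrative sketch only: hypothetical class names, not the actual RLToolkit API.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """User-defined forward network for a discrete-action task."""

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class MyDQNAgent:
    """Hypothetical agent wrapping the user-defined network; replay buffers,
    target updates, and the training loop would be supplied by the framework."""

    def __init__(self, obs_dim: int, act_dim: int, lr: float = 1e-3):
        self.q_net = QNetwork(obs_dim, act_dim)
        self.optimizer = torch.optim.Adam(self.q_net.parameters(), lr=lr)

    def predict(self, obs: torch.Tensor) -> torch.Tensor:
        # Greedy action from the current Q-values.
        return self.q_net(obs).argmax(dim=-1)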

Table of Contents

  • Supported Algorithms
  • Supported Envs
  • Examples
  • Quick Start
  • References

Supported Algorithms

RLToolkit implements the following model-free deep reinforcement learning (DRL) algorithms:

[Figure: taxonomy of the supported model-free RL algorithms]

Supported Envs

  • OpenAI Gym
  • Atari
  • MuJoCo
  • PyBullet

For the details of DRL algorithms, please check out the educational webpage OpenAI Spinning Up.
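All of the environments above expose the standard Gym interface that the algorithms train against. As a minimal, toolkit-independent sketch using the classic gym package (pre-0.26 API) and a random policy:

# Minimal Gym interaction loop (classic gym API); not specific to RLToolkit.
import gym

env = gym.make("CartPole-v0")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy as a stand-in for a trained agent
    obs, reward, done, info = env.step(action)
env.close()

Newer gymnasium releases instead return (obs, info) from reset() and a five-element tuple from step().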

Examples

[Demos: Breakout and NeurIPS 2018 Half-Cheetah]

If you want to learn more about deep reinforcement learning, please read the deep-rl-class course and run the examples.

Quick Start

# Clone the repository and enter it
git clone https://github.com/jianzhnie/deep-rl-toolkit.git
cd deep-rl-toolkit

# Run DQN and its variants (Double DQN, Dueling DQN, Dueling Double DQN) on the CartPole-v0 environment
python examples/cleanrl/cleanrl_runner.py --env CartPole-v0 --algo dqn
python examples/cleanrl/cleanrl_runner.py --env CartPole-v0 --algo ddqn
python examples/cleanrl/cleanrl_runner.py --env CartPole-v0 --algo dueling_dqn
python examples/cleanrl/cleanrl_runner.py --env CartPole-v0 --algo dueling_ddqn

# Run the C51 algorithm on the CartPole-v0 environment
python examples/cleanrl/cleanrl_runner.py --env CartPole-v0 --algo c51

# Run the DDPG algorithm on the Pendulum-v0 environment
python examples/cleanrl/cleanrl_runner.py --env Pendulum-v0 --algo ddpg

# Run the PPO algorithm on the CartPole-v0 environment
python examples/cleanrl/cleanrl_runner.py --env CartPole-v0 --algo ppo

References

Reference Papers

  1. Deep Q-Network (DQN) (V. Mnih et al., 2015)
  2. Double DQN (DDQN) (H. van Hasselt et al., 2015)
  3. Advantage Actor-Critic (A2C)
  4. Vanilla Policy Gradient (VPG)
  5. Natural Policy Gradient (NPG) (S. Kakade et al., 2002)
  6. Trust Region Policy Optimization (TRPO) (J. Schulman et al., 2015)
  7. Proximal Policy Optimization (PPO) (J. Schulman et al., 2017)
  8. Deep Deterministic Policy Gradient (DDPG) (T. Lillicrap et al., 2015)
  9. Twin Delayed DDPG (TD3) (S. Fujimoto et al., 2018)
  10. Soft Actor-Critic (SAC) (T. Haarnoja et al., 2018)
  11. SAC with automatic entropy adjustment (SAC-AEA) (T. Haarnoja et al., 2018)

Reference Code
