PyTorch Reinforcement Learning Framework for Researchers

Cherry is a reinforcement learning framework for researchers built on top of PyTorch.

Unlike other reinforcement learning implementations, cherry doesn't implement a single monolithic interface to existing algorithms. Instead, it provides you with low-level, common tools to write your own algorithms. Drawing from the UNIX philosophy, each tool strives to be as independent from the rest of the framework as possible. So if you don't like a specific tool, you don’t need to use it.


  • Pythonic and low-level interface à la PyTorch.
  • Support for tabular (!) and function approximation algorithms.
  • Various OpenAI Gym environment wrappers.
  • Helper functions for popular algorithms (e.g. A2C, DDPG, TRPO, PPO, SAC).
  • Logging, visualization, and debugging tools.
  • Painless and efficient distributed training on CPUs and GPUs.
  • Unit, integration, and regression tested, continuously integrated.
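As an example of how low-level these tools are, reward normalization (standardizing a batch of values to zero mean and unit variance, a common trick for stabilizing policy-gradient updates) can be understood in a few lines. The sketch below is illustrative plain Python, not cherry's implementation:

```python
import math

def normalize(values, eps=1e-8):
    """Standardize values to zero mean and unit (sample) variance."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return [(v - mean) / (math.sqrt(var) + eps) for v in values]

print(normalize([1.0, 2.0, 3.0]))  # roughly [-1.0, 0.0, 1.0]
```

cherry's own `ch.normalize` performs the equivalent operation on tensors.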

To learn more about the tools and philosophy behind cherry, check out our Getting Started tutorial.


The following snippet showcases some of the tools offered by cherry.

import gym
import torch as th
import torch.optim as optim
from torch.distributions import Categorical

import cherry as ch

# Wrap environments
env = gym.make('CartPole-v0')
env = ch.envs.Logger(env, interval=1000)
env = ch.envs.Torch(env)

policy = PolicyNet()  # your own torch.nn.Module mapping states to action scores
optimizer = optim.Adam(policy.parameters(), lr=1e-2)
replay = ch.ExperienceReplay()  # Manage transitions

for step in range(1000):
    state = env.reset()
    while True:
        mass = Categorical(policy(state))
        action = mass.sample()
        log_prob = mass.log_prob(action)
        next_state, reward, done, _ = env.step(action)

        # Build the ExperienceReplay
        replay.append(state, action, reward, next_state, done, log_prob=log_prob)
        if done:
            break
        else:
            state = next_state

    # Discounting and normalizing rewards
    rewards = ch.td.discount(0.99, replay.reward(), replay.done())
    rewards = ch.normalize(rewards)

    # REINFORCE-style policy-gradient update
    loss = -th.sum(replay.log_prob() * rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    replay.empty()

Many more high-quality examples are available in the examples/ folder.
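Because each tool is independent, utilities like discounting can be understood (and swapped out) in isolation. Here is a hedged sketch of what episodic reward discounting computes, written in plain Python rather than cherry's tensor-based API:

```python
def discount(gamma, rewards, dones):
    """Compute discounted returns, resetting at episode boundaries."""
    returns = [0.0] * len(rewards)
    running = 0.0
    # Walk backward so each return accumulates future rewards.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running * (1.0 - dones[t])
        returns[t] = running
    return returns

# One 3-step episode with reward 1 per step:
print(discount(0.99, [1.0, 1.0, 1.0], [0.0, 0.0, 1.0]))
# approximately [2.9701, 1.99, 1.0]
```

The `(1.0 - dones[t])` factor zeroes the running return at episode boundaries, so returns never leak across episodes stored in the same replay.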


Note: Cherry is in early alpha release. Stuff might break.

pip install cherry-rl


Documentation and tutorials are available on cherry’s website.


First, thanks for your consideration in contributing to cherry. Here are a couple of guidelines we strive to follow.

  • It's always a good idea to open an issue first, where we can discuss how to best proceed.
  • If you want to contribute a new example using cherry, it would preferably stand in a single file.
  • If you would like to contribute a new feature to the core library, we suggest to first implement an example showcasing your new functionality. Doing so is quite useful:
    • it allows for automatic testing,
    • it ensures that the functionality is correctly implemented,
    • it shows users how to use your functionality, and
    • it gives a concrete example when discussing the best way to merge your implementation.

We don't have forums, but are happy to discuss with you on Slack. Send us an email to get an invite.


Cherry draws inspiration from many existing reinforcement learning implementations.

Why 'cherry' ?

Because it's the sweetest part of the cake.
