A toolkit for reproducible reinforcement learning research


garage is a toolkit for developing and evaluating reinforcement learning algorithms, and an accompanying library of state-of-the-art implementations built using that toolkit.

The toolkit provides a wide range of modular tools for implementing RL algorithms, including:

  • Composable neural network models
  • Replay buffers
  • High-performance samplers
  • An expressive experiment definition interface
  • Tools for reproducibility (e.g. set a global random seed which all components respect)
  • Logging to many outputs, including TensorBoard
  • Reliable experiment checkpointing and resuming
  • Environment interfaces for many popular benchmark suites
  • Support for running garage in diverse environments, including always up-to-date Docker containers
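The reproducibility bullet above can be sketched with a stdlib-only illustration (this is an assumed helper for illustration, not garage's own API, which also seeds numpy and the ML framework):

```python
import random

def set_global_seed(seed):
    # Hypothetical helper: seed every RNG your experiment touches,
    # so repeated runs produce identical sampling decisions.
    random.seed(seed)

set_global_seed(42)
run_a = [random.random() for _ in range(3)]

set_global_seed(42)
run_b = [random.random() for _ in range(3)]

assert run_a == run_b  # identical seeds give identical draws
```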

See the latest documentation for getting started instructions and detailed APIs.


Installation

pip install garage


Algorithms

The table below summarizes the algorithms available in garage.

Algorithm Framework(s)
CEM numpy
CMA-ES numpy
REINFORCE (a.k.a. VPG) PyTorch, TensorFlow
DDPG PyTorch, TensorFlow
DQN TensorFlow
DDQN TensorFlow
ERWR TensorFlow
NPO TensorFlow
PPO PyTorch, TensorFlow
REPS TensorFlow
TD3 TensorFlow
TNPG TensorFlow
TRPO PyTorch, TensorFlow
MAML PyTorch
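REINFORCE (VPG) in the table follows the classic score-function gradient estimate ∇J(θ) ≈ ∇ log π_θ(a) · R. A toy, stdlib-only sketch of that update on a two-armed bandit (an illustration of the idea, not garage's implementation):

```python
import math
import random

random.seed(0)

# Two-armed bandit: arm 1 pays more on average.
MEAN_REWARD = {0: 0.2, 1: 0.8}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.0, 0.0]  # policy parameters: one logit per arm
lr = 0.1

for _ in range(2000):
    probs = softmax(logits)
    a = random.choices([0, 1], weights=probs)[0]
    r = MEAN_REWARD[a] + random.gauss(0, 0.1)  # noisy reward
    # REINFORCE: d(log pi(a)) / d(logit_i) = 1{i == a} - pi(i)
    for i in range(2):
        grad_log_pi = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += lr * r * grad_log_pi

probs = softmax(logits)
# The policy should now strongly prefer the better arm (arm 1).
```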

Supported Tools and Frameworks

garage supports Python 3.5+.

We currently support PyTorch and TensorFlow for implementing the neural network portions of RL algorithms, and additions of new framework support are always welcome. PyTorch modules can be found in the package garage.torch, and TensorFlow modules can be found in the package garage.tf. Algorithms which do not require neural networks are found in the package garage.np.

The package is available for download on PyPI, and we ensure that it installs successfully into environments defined using conda, Pipenv, and virtualenv.

All components use the popular gym.Env interface for RL environments.
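A minimal environment exposing the classic gym.Env protocol (reset() returning an observation, step(action) returning observation, reward, done, and info) can be sketched as below; the class is hypothetical and avoids importing gym so the example stays self-contained:

```python
class CountdownEnv:
    """Toy environment with the classic gym.Env-style interface:
    reset() -> obs, step(action) -> (obs, reward, done, info)."""

    def __init__(self, start=3):
        self._start = start
        self._state = start

    def reset(self):
        self._state = self._start
        return self._state

    def step(self, action):
        # Action 1 decrements the counter; the episode ends at zero.
        if action == 1:
            self._state -= 1
        done = self._state <= 0
        reward = 1.0 if done else 0.0
        return self._state, reward, done, {}

# Standard rollout loop against the interface:
env = CountdownEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(1)
```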


Testing

The most important feature of garage is its comprehensive automated unit test and benchmarking suite, which helps ensure that the algorithms and modules in garage maintain state-of-the-art performance as the software changes.

Our testing strategy has three pillars:

  • Automation: We use continuous integration to test all modules and algorithms in garage before any change is merged. The full installation and test suite also runs nightly to detect regressions.
  • Acceptance Testing: Any commit which might change the performance of an algorithm is subjected to comprehensive benchmarks on the relevant algorithms before it is merged.
  • Benchmarks and Monitoring: We regularly benchmark the full suite of algorithms against relevant benchmarks and widely-used implementations, to detect regressions and improvements we may have missed.

Supported Releases

Release Build Status Last date of support
v2019.10 Build Status June 30th, 2020

Garage releases a new stable version approximately every 4 months, in February, June, and October. Maintenance releases have a stable API and dependency tree, and receive bug fixes and critical improvements but not new features. We currently support each release for a window of 8 months.

Citing garage

If you use garage for academic research, please cite the repository using the following BibTeX entry. You should update the commit field with the commit or release tag your publication uses.

@misc{garage,
 author = {The garage contributors},
 title = {Garage: A toolkit for reproducible reinforcement learning research},
 year = {2019},
 publisher = {GitHub},
 journal = {GitHub repository},
 howpublished = {\url{}},
 commit = {be070842071f736eb24f28e4b902a9f144f5c97b}
}


The original code for garage was adapted from the predecessor project rllab. The garage project is grateful for the contributions of the original rllab authors, and hopes to continue advancing the state of reproducibility in RL research in the same spirit.

rllab was developed by Rocky Duan (UC Berkeley/OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley/OpenAI), John Schulman (UC Berkeley/OpenAI), and Pieter Abbeel (UC Berkeley/OpenAI).
