A Reinforcement Learning Library for Research and Education
Writing reinforcement learning algorithms is fun! But after the fun, we have lots of boring things to implement: running agents in parallel, averaging and plotting results, optimizing hyperparameters, comparing to baselines, creating tricky environments, and so on!
rlberry is a Python library that makes your life easier by doing all these things with a few lines of code, so that you can spend most of your time developing agents. rlberry also provides implementations of several RL agents, benchmark environments, and many other useful tools.
Install the latest stable release with:

pip install rlberry
The documentation includes further installation instructions, in particular for users working with JAX.
In our documentation, you will find quick starts to the library and a user guide with a few tutorials on using rlberry. We also provide a handful of notebooks on Google Colab as examples showing how to use rlberry.
See the changelog for a history of the changes made to rlberry.
If you use rlberry in scientific publications, we would appreciate citations using the following BibTeX entry:
@misc{rlberry,
  author = {Domingues, Omar Darwiche and Flet-Berliac, Yannis and Leurent, Edouard and M{\'e}nard, Pierre and Shang, Xuedong and Valko, Michal},
  doi = {10.5281/zenodo.5544540},
  month = {10},
  title = {{rlberry - A Reinforcement Learning Library for Research and Education}},
  url = {https://github.com/rlberry-py/rlberry},
  year = {2021}
}
The modules listed below are currently experimental: they are not thoroughly tested and their interfaces may change.
- rlberry.network: allows communication between a server and a client via sockets, and can be used to run agents remotely.
- rlberry.agents.experimental: experimental agents that are not thoroughly tested.
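To illustrate the mechanism that rlberry.network builds on (not its actual API — all function names below are hypothetical, for illustration only), here is a minimal sketch of client/server messaging over sockets using only the Python standard library:

```python
import socket
import threading

def serve_once(server_sock):
    """Accept a single connection and echo back the client's message."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ack: " + data)

def request(host, port, message):
    """Open a connection, send a message, and return the server's reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message)
        return sock.recv(1024)

# Bind to port 0 so the OS picks a free port, then serve in a thread.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

thread = threading.Thread(target=serve_once, args=(server,))
thread.start()
reply = request("127.0.0.1", port, b"run agent")
thread.join()
server.close()

print(reply)  # b'ack: run agent'
```

In rlberry.network, the same pattern lets a client dispatch work (e.g. training an agent) to a remote server instead of a simple echo.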
This project was initiated and is actively maintained by the Inria SCOOL team. More information here.
Want to contribute to rlberry? Please check our contribution guidelines. If you want to add a new agent or environment, do not hesitate to open an issue!