
yarllib


Yet Another Reinforcement Learning Library.

Status: development.

Why?

I needed an RL library/framework that:

  • was clearly and simply implemented, with good enough performance;
  • was highly focused on modularity, customizability and extensibility;
  • wasn't oriented merely toward Deep Reinforcement Learning.

I couldn't find an existing library that satisfied my needs; hence I decided to implement yet another RL library.

For me it is also an opportunity to gain a better understanding of RL algorithms and to appreciate the nuances that you can't find in a book.

If you find this repo useful for your research or your project, I'd be very glad :-) Don't hesitate to reach out to me!

What

The package is both:

  • a library, because it provides off-the-shelf functionalities to set up an RL experiment;
  • a framework, because you can compose your custom model by implementing the interfaces, overriding the default behaviours, or using the existing components as-is.

You can find more details in the documentation.
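To make the "framework" idea concrete, here is a minimal, self-contained sketch of the pattern it describes: an abstract agent interface that you implement, plus a tabular Q-learning agent built on it. All names here (`Agent`, `choose_action`, `observe`) are illustrative assumptions for this sketch, not yarllib's actual API; refer to the documentation for the real interfaces.

```python
import random
from abc import ABC, abstractmethod
from collections import defaultdict


class Agent(ABC):
    """Hypothetical agent interface (names are illustrative, not yarllib's API)."""

    @abstractmethod
    def choose_action(self, state):
        """Pick an action for the current state."""

    @abstractmethod
    def observe(self, state, action, reward, next_state):
        """Update internal estimates from one transition."""


class QLearningAgent(Agent):
    """Tabular Q-learning with epsilon-greedy exploration."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)  # maps (state, action) -> value estimate

    def choose_action(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # explore
        return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit

    def observe(self, state, action, reward, next_state):
        # Standard Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


# Toy one-step task: from state 0, action 1 pays +1, action 0 pays nothing.
random.seed(0)
agent = QLearningAgent(actions=[0, 1])
for _ in range(200):
    state = 0
    action = agent.choose_action(state)
    reward = 1.0 if action == 1 else 0.0
    agent.observe(state, action, reward, next_state=action)

assert agent.q[(0, 1)] > agent.q[(0, 0)]  # the rewarding action is preferred
```

Swapping in a different algorithm would mean writing another `Agent` subclass, while the surrounding experiment loop stays unchanged; that is the kind of modularity the bullet points above refer to.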

Tests

To run all tests: `tox`

To run only the code tests: `tox -e py3.7`

To run only the linters:

  • `tox -e flake8`
  • `tox -e mypy`
  • `tox -e black-check`
  • `tox -e isort-check`

Please look at the `tox.ini` file for the full list of supported commands.

Docs

To build the docs: `mkdocs build`

To view the documentation in a browser: `mkdocs serve`, then go to http://localhost:8000

License

yarllib is released under the GNU Lesser General Public License v3.0 or later (LGPLv3+).

Copyright 2020 Marco Favorito

Authors

Cite

If you use this library for your research, please consider citing this repository:

@misc{favorito2020,
  Author = {Marco Favorito},
  Title = {yarllib: Yet Another Reinforcement Learning Library},
  Year = {2020},
}

An e-print will come soon :-)