
This project features a set of tools/applications, written in either C++ or C#, designed to run experiments with Reinforcement Learning algorithms on control tasks with continuous state and action spaces. The main goal is to provide an easy-to-use environment in which end users (no programming skills required) can design, run, and monitor experiments, and then analyze the results. The most prominent features are:

  • Experiment parameters can be given a set of values to perform a parameter sweep
  • All the different combinations can be run in parallel using the built-in distributed execution mode
  • The results of an experiment can be analyzed with customizable plots
  • The behavior of the system can also be viewed live or after an experiment has finished
  • It supports Windows (x86 and x64) and Linux (x64)

Getting started

  • End-users who want to run the binaries should read this guide.
  • Developers who want to compile/adapt the sources should read this guide.

Contribute

Feel free to contribute to this project by submitting pull requests, or send us an email if you want to become part of the team behind it.

Referencing the project

If you use our software in your research, we kindly ask you to cite the project using its DOI.

Acknowledgements

The code features contributions from:

  • Unai Tercero (Badger and Herd Agent)
  • Asier Rodríguez (Bullet worlds)
  • Alejandro Guerra (Badger and Herd Agent)
  • Roland Zimmermann (Badger, OffPAC, INAC, Tile Coding, ... and all about CNTK and Deep RL)
  • Borja Fernandez-Gauna (corresponding author: borja.fernandez'at'ehu.eus)