Metrics of algorithm performance #19

Closed
KeAWang opened this issue Jan 23, 2020 · 7 comments
@KeAWang

KeAWang commented Jan 23, 2020

Is your feature request related to a problem? Please describe.
The performance of RL algorithms is very sensitive to the implementation (including tricks not mentioned in their papers). A good RL package should provide benchmarks of how well its implementations perform, as quantitative checks.

Describe the solution you'd like
Would it be possible to maintain a set of benchmarks for each of the proposed algorithms on standard tasks, so that users can be sure that the implementations are faithful to the source code released by the original authors?

Even if the implementations here can't fully reproduce the results in papers, it would be good to benchmark the level of performance one can expect from the implementations in this package.

Describe alternatives you've considered
None

@boris-il-forte
Collaborator

Thanks for your interest in MushroomRL.

We are aware that the major drawback of MushroomRL is the lack of benchmarks for experimental results. We are currently working to solve this issue and to provide more than the benchmark you are asking for: we are implementing an extensive (and extensible) benchmarking system with statistically significant plots in different environments.
Unfortunately, we need some time to do scientifically solid work, but we expect to have some results in the next few months.

We are also working to implement most of the common "tricks" for Deep RL algorithms.
Some of them are already in the dev branch (e.g. state normalization). More will come.
Our major concern is to maintain a clean and understandable codebase: the main objective of MushroomRL is to be modular and easy to use, to allow fast development of new RL algorithms and to boost Deep RL research.
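For readers unfamiliar with that particular trick, a minimal sketch of running state normalization looks roughly like the following (a generic illustration; the class name and details are hypothetical and do not reflect MushroomRL's implementation):

```python
import numpy as np

class RunningStateNormalizer:
    # Tracks a running mean/variance of observed states (Welford-style)
    # and rescales new observations to roughly zero mean and unit variance.
    def __init__(self, state_dim, eps=1e-8):
        self.mean = np.zeros(state_dim)
        self.var = np.ones(state_dim)
        self.count = eps            # avoids division by zero before the first update

    def update(self, state):
        self.count += 1.0
        delta = state - self.mean
        self.mean += delta / self.count
        self.var += (delta * (state - self.mean) - self.var) / self.count

    def __call__(self, state):
        return (state - self.mean) / np.sqrt(self.var + 1e-8)
```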

However, there will always be some major differences between MushroomRL and other frameworks.
We will always treat absorbing states and truncated trajectories separately (something that does not happen in raw OpenAI Gym environments); see the sketch after this paragraph.
Furthermore, at least in our baseline algorithm implementations, for readability, we will never support shared network architectures (between the actor and critic networks). However, if needed, these architectures can be easily implemented by the user.
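As a generic illustration of that distinction (a minimal sketch, not MushroomRL's actual code; the function name and flags are hypothetical), a one-step TD target should only drop the bootstrap term when the state is truly absorbing, not when the episode was merely cut by a time limit:

```python
def td_target(reward, next_value, done, time_limit_truncated, gamma=0.99):
    # If the environment's `done` flag was set only because a time limit was
    # hit, the state is not really absorbing: we must still bootstrap.
    absorbing = done and not time_limit_truncated
    if absorbing:
        return reward                       # true terminal state: no future return
    return reward + gamma * next_value      # keep bootstrapping from the next state
```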

Maybe in the future, we will add a tutorial on how to implement a shared network version of PPO.
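As a rough idea of what such a user-side implementation could look like (a hypothetical PyTorch sketch, not part of MushroomRL; all names are made up), a shared torso feeding separate policy and value heads is usually enough:

```python
import torch.nn as nn

class SharedActorCritic(nn.Module):
    # Hypothetical example: a torso shared by the actor and the critic,
    # with separate heads for the action logits and the state value.
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state-value estimate

    def forward(self, obs):
        features = self.torso(obs)
        return self.policy_head(features), self.value_head(features)
```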

We need help from the community! We will be glad to fix issues reported by users, to improve the algorithms by adding "tricks" (as long as they don't break the theoretical properties of the algorithms or the code structure), and to add other quality-of-life improvements.

@KeAWang
Author

KeAWang commented Jan 23, 2020

Amazing! Thank you so much for this great package!

@angel-ayala

Hi!
First of all, congratulations on this package; it seems simple and modular enough to understand how an RL algorithm works and to run one.
I'm working on a project right now and I intend to use it, so I'll need to run some tests and modify some aspects, such as the policy; but above all I'm interested in the metrics to benchmark the algorithms. Have you posted anywhere the tasks that need to be done to accomplish that?
I would like to help you, but I need some guidelines to make a contribution here.

Regards,

@boris-il-forte
Collaborator

boris-il-forte commented Aug 10, 2020

dear @angel-ayala,
Unfortunately, due to the coronavirus pandemic, we are a bit delayed on the benchmarking task.
We are currently finishing the benchmark and running the experimental campaign to provide basic metrics on many state-of-the-art benchmarking environments.
We hope that we will have some results soon. We will publish the benchmarking suite as soon as it's ready.
This will take a bit of time, but we expect it to be ready by next month.

@boris-il-forte
Collaborator

boris-il-forte commented Sep 29, 2020

Ok, the work took much longer than expected, but we are finally ready to open-source MushroomRL Benchmark!
https://github.com/MushroomRL/mushroom-rl-benchmark

Many thanks to our student @benvoe for the effort and the dedication to this work.
Unfortunately, it is still a work in progress, and we have to check that everything is working, tune some algorithms, and fix some computational issues we have found.

Also, I'll soon publish the results of the benchmarks on Read the Docs, together with the parameters used. Stay tuned for more updates and news!

@boris-il-forte
Collaborator

boris-il-forte commented Apr 7, 2021

After a long time, a lot of work, and many trials, we are finally proud to announce our first stable version of the MushroomRL Benchmark:
https://github.com/MushroomRL/mushroom-rl-benchmark/tree/1.0.0

You can use this library along with Mushroom-1.6.1 to reproduce the benchmarks.
Results, the parameters used, and documentation can be found on our Read the Docs page:
https://mushroom-rl-benchmark.readthedocs.io/en/latest/?badge=latest

Thank you very much for your patience and for the help!
Thanks also to @benvoe for creating the first draft of this package.

PS: @KeAWang I hope this solves your issue.

@KeAWang
Author

KeAWang commented Apr 7, 2021

Thank you!
