Metrics of algorithm performance #19
Comments
Thanks for your interest in MushroomRL. We are aware that the major drawback of MushroomRL is the lack of benchmarks for experimental results. We are currently working to solve this issue and to provide more than the benchmark you are asking for: we are implementing an extensive (and extensible) benchmarking system with statistically significant plots across different environments. We are also working to implement most of the common "tricks" used in Deep RL algorithms. However, there will always be some major differences between MushroomRL and other frameworks. Maybe in the future we will add a tutorial on how to implement a shared-network version of PPO.

We need help from the community! We will be glad to fix issues reported by users, improve the algorithms by adding "tricks" (as long as they don't break the theoretical properties of the algorithm or the code structure), and add other quality-of-life improvements.
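The "statistically significant plots" mentioned above typically aggregate learning curves over several independent seeds and report a mean with a confidence band. The helper below is a hypothetical sketch of that aggregation step using only NumPy (it is not part of MushroomRL or its benchmark suite); it uses a normal-approximation 95% interval on the mean across seeds.

```python
import numpy as np

def aggregate_runs(returns_per_seed):
    """Aggregate learning curves from independent seeds.

    returns_per_seed: array-like of shape (n_seeds, n_steps), where each
    row is the episode return of one run at each evaluation step.

    Returns (mean, lower, upper): the per-step mean across seeds and a
    normal-approximation 95% confidence band around it.
    """
    runs = np.asarray(returns_per_seed, dtype=float)
    n_seeds = runs.shape[0]
    mean = runs.mean(axis=0)
    # Standard error of the mean across seeds (sample std, ddof=1).
    sem = runs.std(axis=0, ddof=1) / np.sqrt(n_seeds)
    # 1.96 is the z-score for a two-sided 95% normal interval.
    half_width = 1.96 * sem
    return mean, mean - half_width, mean + half_width
```

A plotting layer would then draw `mean` as a line and shade the region between `lower` and `upper`; with few seeds, a t-distribution or bootstrap interval would be more appropriate than the normal approximation used here.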
Amazing! Thank you so much for this great package!
Hi! Regards,
Dear @angel-ayala,
Ok, the work took much longer than expected, but we are finally ready to open source MushroomRL Benchmark! Many thanks to our student @benvoe for the effort and dedication he put into this work. I'll also soon publish the results of the benchmarks on ReadTheDocs, together with the parameters used. Stay tuned for more updates and news!
After a long time, a lot of work, and many trials, we are proud to announce the first stable version of MushroomRL Benchmark. You can use this library along with MushroomRL 1.6.1 to reproduce the benchmarks. Thank you very much for your patience and for the help! P.S.: @KeAWang I hope that this solves your issues.
Thank you! |
Is your feature request related to a problem? Please describe.
The performance of RL algorithms is very sensitive to the implementation (including tricks not mentioned in their papers). A good RL package should have benchmarks of how well its implementations perform as quantitative checks.
Describe the solution you'd like
Would it be possible to maintain a set of benchmarks for each of the provided algorithms on standard tasks, so that users can be sure the implementations are faithful to the source code released by the original authors?
Even if the implementations here can't fully reproduce the results in papers, it would be good to benchmark the level of performance one can expect from the implementations in this package.
Describe alternatives you've considered
None