
New integration to EvoJax #13

Open
dietmarwo opened this issue Sep 8, 2023 · 4 comments

@dietmarwo

See https://github.com/google/evojax/tree/main/evojax/algo

I added two versions of the algorithm to EvoJax:

  • a wrapper of the fcmaes C++ implementation
  • a new JAX-based implementation

The algorithm performs exceptionally well on the EvoJax benchmarks.

Additionally, there are quite interesting results when using it as part of a QD algorithm,
see google/evojax#52 (not yet merged).

It has also been added as a QD emitter in fcmaes, see
https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Diversity.adoc

Finally, it was applied at
https://www.esa.int/gsp/ACT/projects/spoc-2023/ (Surface Exploration with Morphing Rovers), where Team fcmaes ranked 3rd.
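
For context, a minimal sketch of how such an EvoJax-style solver is driven through an ask/tell loop. The `CRFMNES` class name, its constructor arguments, and the toy sphere fitness below are illustrative assumptions, not the exact API of the merged solvers; check evojax/algo for the actual signatures.

```python
# Minimal sketch of an ask/tell loop for an EvoJax-style solver.
# Class name and constructor below are illustrative only; see evojax/algo
# for the actual signature of the CR-FM-NES solvers referenced above.
import numpy as np


def sphere_fitness(population: np.ndarray) -> np.ndarray:
    # Toy objective: EvoJax maximizes fitness, so return the negated sphere.
    return -np.sum(population ** 2, axis=1)


def run(solver, num_iters: int = 200):
    for _ in range(num_iters):
        params = np.asarray(solver.ask())   # (pop_size, param_size) candidates
        fitness = sphere_fitness(params)
        solver.tell(fitness)                # update the search distribution
    return solver


# Hypothetical instantiation (argument names may differ in the merged solver):
# from evojax.algo import CRFMNES
# solver = run(CRFMNES(param_size=10, pop_size=16, init_stdev=0.1, seed=0))
```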

@nomuramasahir0
Owner

nomuramasahir0 commented Nov 16, 2023

Thank you for adding it to EvoJax and sharing your interesting results!
I am particularly pleased to see that CR-FM-NES is performing well across a range of problems, as I have been focused on its practical performance.
Please let me know if there are any improvements needed in the algorithm 😎

@dietmarwo
Author

dietmarwo commented Nov 17, 2023 via email

@nomuramasahir0
Owner

It may not be easy, but tuning the learning rates in CR-FM-NES may be advantageous if the problem is multimodal and/or noisy.
We recently developed learning rate adaptation for CMA-ES (https://arxiv.org/abs/2304.03473; nominated for the GECCO '23 best paper award). Although applying this method to CR-FM-NES may be a bit tedious, I believe simply changing the learning rates (i.e., decreasing them by a scalar factor) is effective if the problem is difficult.
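
A rough sketch of that suggestion: construct the optimizer as usual and shrink its learning rates by a constant factor. The attribute names below (`eta_m`, `eta_sigma`, `eta_B`) are assumptions for illustration; adapt them to the actual hyperparameter names of the CR-FM-NES implementation you use.

```python
# Sketch of "decrease the learning rates by a scalar factor" for a hard
# (multimodal and/or noisy) problem. The attribute names are assumptions;
# look up the real hyperparameter names in your CR-FM-NES implementation.

def scale_learning_rates(optimizer, factor: float = 0.1):
    """Multiply the optimizer's learning-rate attributes by `factor` in place."""
    for name in ("eta_m", "eta_sigma", "eta_B"):  # hypothetical attribute names
        if hasattr(optimizer, name):
            setattr(optimizer, name, factor * getattr(optimizer, name))
    return optimizer

# Usage (hypothetical): opt = scale_learning_rates(CRFMNES(...), factor=0.1)
```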

@nomuramasahir0
Owner

Note, however, that a small learning rate (and learning rate adaptation) may require a sufficient evaluation budget. So if evaluating the objective function is not cheap, this is not a very attractive option.
