Add optimizers from nevergrad #591

Merged

Conversation

@gauravmanmode (Collaborator) commented Apr 23, 2025

PR Description

This PR adds support for the following optimizers from the nevergrad optimization library.

  • PSO
  • CMAES
  • ONEPLUSONE
  • RANDOMSEARCH
  • SAMPLINGSEARCH
  • DE
  • BO
  • EDA
  • TBPSA
  • EMNA
  • NGOPT OPTIMIZERS
  • META OPTIMIZERS

Two optimizers from nevergrad, SPSA and AXP, are not wrapped, as they are either slow or imprecise.

Features:

  • Parallelization
  • Support for nonlinear constraints

Helper functions:

  • _nevergrad_internal:
    Handles the optimization loop and returns an InternalOptimizeResult.
  • [x] _process_nonlinear_constraints:
    Flattens a vector constraint into a list of scalar constraints for use with nevergrad (see the sketch after this list).
  • [x] _get_constraint_evaluations:
    Returns a list of constraint evaluations at x.
  • [x] _batch_constraint_evaluations:
    Batched version of _get_constraint_evaluations.
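A minimal sketch of the flattening idea (the signature and the dict-based constraint format are assumptions for illustration, not the merged code):

```python
import numpy as np

def _process_nonlinear_constraints(constraints, x0):
    """Flatten vector-valued constraints into scalar ones (sketch).

    Nevergrad registers cheap constraints one scalar at a time, so a
    constraint whose function returns an array of length k is expanded
    into k scalar constraints.
    """
    scalar_constraints = []
    for constraint in constraints:
        dim = np.atleast_1d(constraint["fun"](x0)).size
        for i in range(dim):
            scalar_constraints.append(
                {
                    **constraint,
                    # Default arguments bind the current constraint and
                    # index, avoiding Python's late-binding closure pitfall.
                    "fun": lambda x, c=constraint, i=i: np.atleast_1d(c["fun"](x))[i],
                }
            )
    return scalar_constraints
```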

Test suite:

  • [x] test_process_nonlinear_constraints
  • [x] test_get_constraint_evaluations
  • [x] test_batch_constraint_evaluations
  • [ ] test_meta_optimizers_are_valid
  • [ ] test_ngopt_optimizers_are_valid

Note:
Nonlinear constraint support is on hold until handling is improved.

Changes to optimize.py:
Currently, None bounds are transformed into arrays of np.inf. This case is handled for optimizers that do not support infinite bounds.
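A rough sketch of that handling (helper name and signature are hypothetical):

```python
import numpy as np

def _replace_infinite_bounds(lower, upper):
    # optimagic represents missing bounds as arrays filled with +/- np.inf.
    # Optimizers that cannot handle infinite bounds receive None instead.
    if lower is not None and np.all(np.isneginf(lower)):
        lower = None
    if upper is not None and np.all(np.isposinf(upper)):
        upper = None
    return lower, upper
```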

Added test file test_infinite_and_incomplete_bounds.py with test_no_bounds_with_nevergrad, which should pass when no bounds are provided to nevergrad optimizers.
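A minimal version of such a test could look like this (a sketch using optimagic's public minimize interface; the actual test may differ):

```python
import numpy as np
import optimagic as om

def test_no_bounds_with_nevergrad():
    # The optimizer should run even though no bounds are supplied.
    res = om.minimize(
        fun=lambda x: x @ x,
        params=np.arange(3, dtype=float),
        algorithm="nevergrad_pso",
    )
    np.testing.assert_array_almost_equal(res.params, np.zeros(3), decimal=2)
```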

@janosg (Member) commented Apr 28, 2025

Hi @gauravmanmode, thanks for the PR.

I definitely like the idea of your nevergrad_internal function. We currently have several independent nevergrad PRs open, and a function like this helps avoid code duplication.

Regarding the Executor: @r3kste argued that it would be better to use the low-level ask-and-tell interface if we want to support parallelism. While I still think the solution with the custom Executor can be made to work, the ask-and-tell interface is simpler and more readable for this.
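For reference, nevergrad's ask-and-tell pattern looks roughly like this (sequential for brevity; in the PR the batch of candidates would be evaluated in parallel):

```python
import nevergrad as ng

# A stand-in objective; the real code would call the user's criterion.
def sphere(x):
    return float(sum(x**2))

optimizer = ng.optimizers.CMA(parametrization=2, budget=100, num_workers=4)

while optimizer.num_ask < optimizer.budget:
    # Ask for a batch of candidates; the batch could be evaluated in parallel.
    candidates = [optimizer.ask() for _ in range(optimizer.num_workers)]
    losses = [sphere(c.value) for c in candidates]
    for candidate, loss in zip(candidates, losses):
        optimizer.tell(candidate, loss)

recommendation = optimizer.provide_recommendation()
```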

@janosg (Member) commented Apr 28, 2025

Currently your tests fail because nevergrad is not compatible with numpy 2.0 and higher. You can pin numpy in the environment file for now.

@janosg (Member) commented Apr 28, 2025

Or better: Install nevergrad via pip instead of conda. The conda version is outdated. Then you don't need to pin any numpy versions.

codecov bot commented Apr 30, 2025

Codecov Report

Attention: Patch coverage is 87.21109% with 83 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/optimagic/optimizers/nevergrad_optimizers.py | 65.69% | 82 Missing ⚠️ |
| src/optimagic/optimization/optimize.py | 87.50% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| src/optimagic/algorithms.py | 87.70% <100.00%> (+1.75%) ⬆️ |
| src/optimagic/optimization/optimize.py | 92.17% <87.50%> (+0.35%) ⬆️ |
| src/optimagic/optimizers/nevergrad_optimizers.py | 69.68% <65.69%> (+9.42%) ⬆️ |

... and 1 file with indirect coverage changes


@gauravmanmode (Collaborator, Author) commented May 5, 2025

Hi @janosg,
Installing nevergrad with pip solved the failing tests.

Here is the list of parameter names I have referred to:

nevergrad_cmaes

| Old name | Proposed name | From optimizer in optimagic |
|---|---|---|
| tolx | xtol | scipy |
| tolfun | ftol | scipy |
| budget | stopping_maxfun | scipy |
| CMA_rankmu | learning_rate_rank_mu_update | pygmo_cmaes |
| CMA_rankone | learning_rate_rank_one_update | pygmo_cmaes |
| popsize | population_size | pygmo_cmaes |
| fcmaes | use_fast_implementation | needs review |
| diagonal | diagonal | needs review |
| elitist | elitist | needs review |
| seed | seed | |
| scale | scale | needs review |
| num_workers | n_cores | optimagic |
| high_speed | high_speed | needs review |
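As an illustration, the renamed options could appear on an optimagic algorithm dataclass roughly like this (field names follow the proposal above; the class name, defaults, and types are placeholders):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NevergradCMAES:
    convergence_xtol: float | None = None               # nevergrad: tolx
    convergence_ftol: float | None = None               # nevergrad: tolfun
    stopping_maxfun: int = 100_000                      # nevergrad: budget
    learning_rate_rank_mu_update: float | None = None   # nevergrad: CMA_rankmu
    learning_rate_rank_one_update: float | None = None  # nevergrad: CMA_rankone
    population_size: int | None = None                  # nevergrad: popsize
    seed: int | None = None                             # nevergrad: seed
    n_cores: int = 1                                    # nevergrad: num_workers
```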

What kind of tests should I have for the internal helper function? Should I have tests for ftol and stopping_maxfun?
Also, in nevergrad, recommendation.loss returns None for some optimizers like CMA. Is this a nevergrad issue, or am I missing something?
[screenshot]
For reference, I have attached a notebook I used while exploring here.

@gauravmanmode (Collaborator, Author)

Hi @janosg,
I am thinking of refactoring the code for the already added nevergrad_pso optimizer and nevergrad_cmaes in this PR. Does this sound good?
Also, I would like your thoughts on the following:

  1. Currently, I am passing the optimizer object to the helper function _nevergrad_internal.
    [screenshot]
  2. Another approach is to pass the optimizer name as a string, as in pygmo.
    [screenshots]

Which would be the better choice?

@janosg (Member) commented May 10, 2025

Hi @gauravmanmode, yes please go ahead and refactor the code for pso as well.

I would stick to approach one, i.e. passing the configured optimizer object to the internal function. It is more in line with the design philosophy shown here.
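Approach one could look roughly like this (a sketch assuming nevergrad's ParametrizedCMA family; in the PR, the configured object would be handed to _nevergrad_internal rather than run directly):

```python
import nevergrad as ng

# The algorithm class configures the optimizer family up front ...
configured_optimizer = ng.families.ParametrizedCMA(popsize=8, elitist=True)

# ... and the internal helper only has to instantiate and run it,
# without knowing which concrete optimizer it received.
optimizer = configured_optimizer(parametrization=3, budget=200)
recommendation = optimizer.minimize(lambda x: float(sum(x**2)))
print(recommendation.value)
```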

@janosg (Member) commented May 10, 2025

> Hi @janosg, installing nevergrad with pip solved the failing tests. Here is the list of parameter names I have referred to [...] What kind of tests should I have for the internal helper function? Should I have tests for ftol and stopping_maxfun? Also, in nevergrad, recommendation.loss returns None for some optimizers like CMA. Is this a nevergrad issue, or am I missing something?

About the names:

  • xtol and ftol are convergence criteria, so the name would be convergence_xtol. Ideally, you would also find out whether this is an absolute or relative tolerance and add the corresponding abbreviation (e.g. convergence_xtol_rel). You can find examples of the naming scheme here.
  • The other names are good.

I would mainly add a test for stopping_maxfun. Other convergence criteria are super hard to test.

If you cannot get a loss out of nevergrad for some optimizers, you can evaluate problem.fun at the solution for now and create an issue with a minimal example at nevergrad to get feedback. I wouldn't frame it as a bug report (unless you are absolutely sure) but rather as a question about whether you are using the library correctly.
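The fallback could be as simple as this sketch (the helper name is illustrative):

```python
def _extract_solution(optimizer, fun):
    # Some optimizers (e.g. CMA) report recommendation.loss as None,
    # so fall back to evaluating the objective at the recommended point.
    recommendation = optimizer.provide_recommendation()
    best_x = recommendation.value
    best_loss = recommendation.loss
    if best_loss is None:
        best_loss = fun(best_x)
    return best_x, best_loss
```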

@gauravmanmode gauravmanmode changed the title Add CMAES optimizer from nevergrad Add CMAES optimizer from nevergrad and refactor existing code May 22, 2025

@janosg (Member) commented Jul 16, 2025

Can you quickly explain why you removed SPSA?

@gauravmanmode (Collaborator, Author)

SPSA was not accurate and was failing the tests.

@janosg (Member) commented Jul 16, 2025

This is something we always need to discuss before we decide to drop an algorithm. Often it is possible to tune the parameters to make algorithms more precise; in extreme cases, we can also relax the required precision for algorithms before we drop them.

I merged main into your branch. Now tests are failing due to the changes in #610 but this will be a quick fix.

@gauravmanmode (Collaborator, Author)

Sorry, I missed having a discussion on this.
But the implementation of SPSA in nevergrad was a WIP (many TODOs listed) with no tuning parameters exposed, so I decided to skip it.

@janosg (Member) left a comment

Hi @gauravmanmode, thanks for the great PR! I have two small comments about tests you wrote, but I already approve the PR.

@gauravmanmode (Collaborator, Author)

I think the tests are failing because of some package conflicts.
nevergrad>=1.0.4 requires bayesian-optimization==1.4.0, but in our case we have pinned bayesian-optimization>=2.4.0.
Still, nevergrad 1.0.3 is installed in the environment, so I don't get why the tests should fail.
It would be nice if @timmens could help me here.

@timmens (Member) commented Jul 19, 2025

Hey @gauravmanmode. I built the environment locally. You are right that nevergrad=1.0.3 is installed, which expects bayesian-optimization>=1.2.0. I suspect that nevergrad can't work with the breaking change of bayesian-optimization (from 1.x.y to 2.x.y). If you try to import nevergrad in a local environment you will see that this fails because of bayesian-optimization. This is why the tests fail.

I think there are two routes we can take:

  1. We support and use bayesian-optimization < 2 in our own development environment. In this case we would need to make sure that everything works with version<2 and version>=2 (I am not sure if this has been checked; maybe @spline2hg can say something about this?).
  2. We remove nevergrad from the dev environment and test it in a separate environment where we can install bayesian-optimization=1.4. This is a bit annoying right now, but will be a very easy solution once we switch to a modern package manager.

@janosg (Member) commented Jul 19, 2025

@timmens let's go for option 2. Could you make those changes, since you have done similar changes many times before?

@gauravmanmode could you open an issue at nevergrad to see if there are fundamental reasons why the version needs to be pinned so strictly? Otherwise we could offer to help them make nevergrad compatible with newer versions of bayesian optimization.

@gauravmanmode (Collaborator, Author)

Sure, I'll open an issue on nevergrad. As @timmens said, the version is pinned so strictly because nevergrad uses some deprecated functions and has not been updated since.

@timmens (Member) commented Jul 19, 2025

I'll try to explain what you need to do for "Route 2" here, @gauravmanmode (this seems like the easiest solution, as I believe I don't have push access to your branch).

  1. You remove nevergrad from our development environments. I'd suggest commenting it out and leaving a comment saying that it is currently incompatible with bayesian-optimization.
  2. You edit .tools/update_envs.py:
    1. You create a new section for "nevergrad".
    2. Here you remove the bayesian-optimization line (much like we remove the numpy or pandas lines for the numpy < 2 test environments).
    3. You insert new lines for "nevergrad" and "bayesian-optimization==1.4" in the index for the pip versions.
    4. You add this new environment to the "write environments" loop under the name "nevergrad".
  3. In main.yml you add a section run-nevergrad-tests. Make sure that the pytest call only runs tests that are related to nevergrad.

And then you should be done!

@gauravmanmode gauravmanmode merged commit 3bf4f05 into optimagic-dev:main Jul 19, 2025
26 checks passed
@spline2hg (Collaborator)

I see the issue is resolved. Just to clarify: the currently implemented Bayesian optimizer is not compatible with bayesian-optimization==1.4. Specifically, the acquisition function we rely on wasn't properly introduced until version 2.0.0, so the current approach (option 2) seems appropriate. Thanks!

@gauravmanmode (Collaborator, Author)

A related issue on nevergrad: facebookresearch/nevergrad#1701.

@gauravmanmode gauravmanmode deleted the add_optimizer_from_nevergrad branch August 12, 2025 10:50