Add optimizers from nevergrad #591
Conversation
Hi @gauravmanmode, thanks for the PR. I definitely like the idea. Regarding the Executor: there was an argument brought forward by @r3kste suggesting it would be better to use the low-level ask-and-tell interface if we want to support parallelism. While I still think the solution with the custom Executor can be made to work, the ask-and-tell interface is simpler and more readable for this.
Currently your tests fail because nevergrad is not compatible with numpy 2.0 and higher. You can pin numpy in the environment file for now.
Or better: install nevergrad via pip instead of conda. The conda version is outdated, so then you don't need to pin any numpy versions.
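A hypothetical excerpt of what that environment file change could look like (the file name, environment name, and version choices here are assumptions, not from the PR):

```yaml
# environment.yml (illustrative excerpt)
name: optimagic-dev
channels:
  - conda-forge
dependencies:
  - python=3.10
  - numpy        # no pin needed when nevergrad is installed via pip
  - pip
  - pip:
      - nevergrad
```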
Hi @janosg, here is the list of parameter names I have referred to for nevergrad_cmaes.
What kind of tests should I write for the internal helper function?
Hi @janosg,
Hi @gauravmanmode, yes please go ahead and refactor the code for pso as well. I would stick to approach one, i.e. passing the configured optimizer object to the internal function. It is more in line with the design philosophy shown here. |
About the names:
I would mainly add a name for stopping_maxfun. Other convergence criteria are super hard to test. If you cannot get a loss out of nevergrad for some optimizers, you can evaluate problem.fun at the solution for now and create an issue with a minimal example at nevergrad to get feedback. I wouldn't frame it as a bug report (unless you are absolutely sure) but rather as a question about whether you are using the library correctly.
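The fallback suggested above (re-evaluating the criterion at the returned solution when the optimizer exposes no loss) can be sketched as follows; `fun` and `solution_x` are stand-ins for the internal problem's criterion and the optimizer's recommendation, not names from the PR:

```python
import numpy as np


def fun(x):
    """Stand-in for the internal problem's criterion function."""
    return float(np.sum(x**2))


# Stand-in for the point recommended by the nevergrad optimizer.
solution_x = np.array([0.1, -0.2])

# When nevergrad does not report a best loss, recompute it at the solution.
solution_loss = fun(solution_x)
```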
Can you quickly explain why you removed SPSA?
SPSA was not accurate and was failing the tests.
This is something we always need to discuss before we decide to drop an algorithm. Often it is possible to tune the parameters to make algorithms more precise; in extreme cases we can also relax the required precision for algorithms before we drop them. I merged main into your branch. Now tests are failing due to the changes in #610, but this will be a quick fix.
Sorry, I missed having a discussion on this.
Hi @gauravmanmode, thanks for the great PR! I have two small comments about the tests you wrote, but I am already approving the PR.
I think the tests are failing because of some package conflicts.
Hey @gauravmanmode. I built the environment locally. You are right that nevergrad=1.0.3 is installed, which expects bayesian-optimization>=1.2.0. I suspect that nevergrad can't work with the breaking change of bayesian-optimization (from 1.x.y to 2.x.y). If you try to import nevergrad in a local environment you will see that this fails because of bayesian-optimization. This is why the tests fail. I think there are two routes we can take:
@timmens lets go for option 2. Could you do those changes since you have done similar changes many times before? @gauravmanmode could you open an issue at nevergrad to see if there are fundamental reasons why the version needs to be pinned so strictly? Otherwise we could offer to help them make nevergrad compatible with newer versions of bayesian optimization. |
Sure, I'll open an issue on nevergrad. As @timmens said, the reason the version is pinned so strictly is that nevergrad uses some deprecated functions and has not been updated since.
I'll try to explain what you need to do for "Route 2" here, @gauravmanmode (this seems like the easiest solution, as I don't believe I have push access to your branch).
And then you should be done!
I see the issue is resolved. Just to clarify, the currently implemented Bayesian optimizer is not compatible with
A related issue on nevergrad facebookresearch/nevergrad#1701. |
PR Description
This PR adds support for the following optimizers from the nevergrad optimization library.
Two optimizers from nevergrad, namely SPSA and AXP, are not wrapped, as they are either slow or imprecise.
Features:
Helper functions:
- `_nevergrad_internal`: handles the optimization loop and returns an `InternalOptimizeResult`.
- `_process_nonlinear_constraints`: flattens a vector constraint into a list of scalar constraints for use with nevergrad.
- `_get_constraint_evaluations`: returns a list of constraint evaluations at `x`.
- `_batch_constraint_evaluations`: batched version of `_get_constraint_evaluations`.
Test suite:
Note: nonlinear constraint support is on hold until the handling is improved.
Changes to `optimize.py`: currently, `None` bounds are transformed to arrays of `np.inf`. Handle this case if the optimizer does not support infinite bounds.
Added test `test_infinite_and_incomplete_bounds.py`:
- `test_no_bounds_with_nevergrad`: this test should pass when no bounds are provided to nevergrad optimizers.
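The bounds handling described above can be sketched as a simple finiteness check; this is a hypothetical helper, not code from the PR, assuming bounds arrive as arrays filled with `np.inf` when the user provides none:

```python
import numpy as np


def has_finite_bounds(lower, upper):
    """Return True only if every bound entry is finite.

    Illustrative check: when optimagic substitutes +/- np.inf for missing
    bounds, an optimizer without infinite-bound support must detect this
    and fall back to an unbounded parametrization.
    """
    return bool(np.all(np.isfinite(lower)) and np.all(np.isfinite(upper)))


# Bounds as they would arrive when the user provided none.
lower = np.full(3, -np.inf)
upper = np.full(3, np.inf)
```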