WIP: Basin hopping improvements #7819
Conversation
Ok, so here's a nice graphic of how the algorithm works, from doi:10.1155/2012/674832. SciPy's `basinhopping` behaves the same way, always accepting hops to lower energies (C2) and sometimes (randomly) accepting hops to higher energies (C3), in the hope that there will be another, better minimum further along in that direction. Otherwise it rejects the step (C4) and goes back to the previous location (C3) to make the next step. This is unrelated to the failed-minimization issue. It's conceivable that someone could change …, so … should become …?
That graphic is quite nice for understanding. I think C3 should be colored differently, because it's not always accepted. But other than that, I'd be +1 on adding an illustration like that to the docstring.
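For readers unfamiliar with the acceptance rule being illustrated: lower-energy hops are always accepted, and higher-energy hops are accepted with a temperature-dependent probability. A minimal sketch of that Metropolis criterion (illustrative only, not SciPy's internal code; the function and parameter names are mine):

```python
import math
import random

def metropolis_accept(f_new, f_old, T=1.0, rng=random.random):
    """Decide whether to accept a hop from energy f_old to f_new.

    Hops to lower energy (C2 in the graphic) are always accepted;
    hops to higher energy (C3) are accepted with probability
    exp(-(f_new - f_old) / T), and otherwise rejected (C4).
    """
    if f_new <= f_old:
        return True
    return rng() < math.exp(-(f_new - f_old) / T)

# A downhill hop is always accepted:
assert metropolis_accept(1.0, 2.0)
# A large uphill hop at very low temperature is essentially never accepted:
assert not metropolis_accept(10.0, 0.0, T=1e-9, rng=lambda: 0.5)
```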
If one of the customized `accept_test` functions returns `None`, should that raise an exception? Currently it's treated as `False`, but it probably means "the accept test is broken" rather than "the accept test rejected the step". (I did this by accident while using the test to plot points.)
In Python, `bool(None) is False`.
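A quick standalone demonstration of the behaviour being pointed out, independent of SciPy: a test function that falls off the end returns `None`, which a plain truthiness check silently treats as a rejection.

```python
def broken_accept_test(**kwargs):
    # A bug: no return statement, so this implicitly returns None.
    pass

result = broken_accept_test(f_new=1.0, f_old=2.0)

assert result is None
assert bool(result) is False   # treated as "reject the step"
assert result is not False     # ...yet it never literally returned False
```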
@andyfaff Yes, and that's why it's happening, but I think if …
(force-pushed from 51c36b3 to 691aa12)
Ok, so I made some preliminary changes, described in the commit messages. Basically:
There are several variables with similar names and it's kind of confusing which is which, so I made a list:
Please check if I'm getting this right. Originally I thought that the initial minimization should be part of the main loop and identical to the others, but I guess that's not right. If we think of the "step" as the random jump in starting point, the algorithm works like this:
So … So I changed … Anyway, it seems to work ok with constraints now (eggholder function; blue is the starting guess, green are random jump locations, yellow are local minima, red is the global minimum; red regions are out of bounds).
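For context, bounds reach the local minimizer through the public API via `minimizer_kwargs`, and with this PR's changes the global candidate should then respect them. A small self-contained example (a toy quadratic of my own, not the eggholder):

```python
import numpy as np
from scipy.optimize import basinhopping

# Unconstrained minimum is at (3, 3), but bounds cap each variable at 2,
# so the best feasible point is (2, 2).
def f(x):
    return (x[0] - 3) ** 2 + (x[1] - 3) ** 2

minimizer_kwargs = {
    "method": "L-BFGS-B",          # a local minimizer that supports bounds
    "bounds": [(-2, 2), (-2, 2)],
}
res = basinhopping(f, x0=[0.0, 0.0], minimizer_kwargs=minimizer_kwargs,
                   niter=10)

print(res.x)   # ~ [2. 2.], respecting the bounds
```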
scipy/optimize/_basinhopping.py (outdated)

```diff
@@ -25,7 +25,7 @@ def _add(self, minres):
         self.minres.x = np.copy(minres.x)

     def update(self, minres):
-        if minres.fun < self.minres.fun:
+        if minres.success and minres.fun < self.minres.fun:
```
Why this change? If the function value, for any reason, is smaller than the current global minimum, it's a better minimum, isn't it? It might not be the global minimum if the minimisation failed, but it's certainly better than a local minimum with a higher function value.
Especially if a minimiser is being used with a small `nfev`, many minimisations will fail, but basinhopping can still be a smart choice for finding approximations to the global minimum. I would argue that it should at least be an option to save the result as the global minimum even if the minimisation failed.
I agree that this shouldn't be stored as a local minimum, but it's still the best bet for the global minimum, right?
@juliusbierk The reason I started making these changes was that it was keeping results even when minimization failed, leading to results that violated the given constraints: #7799
For other types of minimization failure, can we even trust that the value is really correct for the function?
@endolith Ah! I see. Yes, with bounds I understand why this is a problem. I see other use cases, however, where the former behaviour is preferred (for my use cases, that's typically the case). Could this simply be made an option? Or perhaps bounds should be passed directly to basinhopping? I'm not sure at all what the best approach is, and maybe I'm fairly alone in my preference for the former behaviour.
The values that minimizers return will always be the result of a function evaluation, won't they? At least with the minimizers I know of.
@juliusbierk Well, I thought of passing bounds or constraints to basinhopping itself, but it still needs to pass them to the local minimizer, and it would also need some kind of tolerance argument, or it will reject a lot of results because of floating-point inequality: the local minimizer will find 49.99999998 as meeting a constraint of >= 50, which basinhopping would then reject.
> The values that minimizers return will always be the result of a function evaluation, won't it?
I don't know. When it fails with things like "singular matrix", it just means it can't find where to go next, but the result given is from an actual call to the supplied function, so it's still a valid value, just not guaranteed to be a real minimum?
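The tolerance problem described above (a local minimizer returning 49.99999998 against a >= 50 constraint) amounts to needing a fuzzy feasibility check. A sketch, with an illustrative `tol` value of my own choosing:

```python
def satisfies_lower_bound(value, lower_bound, tol=1e-6):
    """Treat values within `tol` below the bound as feasible, so a
    local minimizer's 49.99999998 still passes a >= 50 constraint."""
    return value >= lower_bound - tol

assert satisfies_lower_bound(49.99999998, 50)   # floating-point near-miss: OK
assert not satisfies_lower_bound(49.9, 50)      # genuine violation: rejected
```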
I see. Yes, that would make it a lot more tedious.
I agree with you. It won't be a real minimum, but it is the "lowest value found", and I would always expect the lowest value found to be returned rather than "the lowest minimum found".
Yes I agree that it should keep "the lowest value found" as long as it meets the bounds and constraints, and is the result of an evaluation of the original function, even if the minimization fails. But how do we do that, since going out of constraints causes a minimization failure, too?
Originally I thought this could easily be adapted to multistart by rejecting all steps and setting the stepsize wide enough to cover the desired area (e.g. to uniformly sample -5 to +15, set the starting guess at +5 and the stepsize to 10). But because it does a local minimization even before the first step, this won't work: it moves the starting point to that minimum first.
So we need to:
Any ideas how to do this?
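For reference, the "reject every step" configuration described above can be written with the public `accept_test` hook; whether it behaves as true multistart is exactly the open question, since the initial minimization still moves the starting point first. A sketch with a toy objective of my own:

```python
import numpy as np
from scipy.optimize import basinhopping

def f(x):
    return np.cos(x[0]) + 0.1 * x[0] ** 2

def reject_everything(**kwargs):
    # Rejecting every step means each random displacement is taken from
    # the same center, i.e. uniform sampling of [center - stepsize,
    # center + stepsize] -- except that the initial minimization has
    # already moved the center away from x0, which is the problem noted above.
    return False

# To sample roughly -5 to +15: center the walk at +5 with stepsize 10.
res = basinhopping(f, x0=[5.0], accept_test=reject_everything,
                   stepsize=10, niter=50)
print(res.fun)  # lowest local-minimum value found among the samples
```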
@endolith I don't know how consistent the …
Yes, I thought of using that, but I'm not sure it's reliable enough. Doesn't each solver have a different set of values? And those could change, especially if new solvers are implemented? Maybe the real solution is to change …
Or it could return something more explicit, like … Actually, it probably has to be the latter, since …
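The falsy-sentinel idea under discussion could look something like this (a sketch of the concept; the name `InvalidResult` follows the PR, but this is not SciPy code):

```python
class InvalidResult:
    """Sentinel for OptimizeResult.success meaning: the minimization
    failed *and* the reported point is invalid (e.g. it violates the
    bounds or constraints), so global optimizers should discard it.
    """

    def __bool__(self):
        # Falsy, so legacy `if result.success:` checks still see a
        # plain failure -- backwards compatible.
        return False

INVALID = InvalidResult()

# Old-style check: still reads as a failure.
assert not INVALID
# New-style check can distinguish "failed but usable" (False) from
# "failed and invalid" (the sentinel):
assert INVALID is not False
assert isinstance(INVALID, InvalidResult)
```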
@endolith I personally like this approach of changing …
@juliusbierk Do you have an opinion on the question marks in my table?
Good point.
@endolith Sorry, I definitely don't know off-hand. I had a quick look at the TNC code. There are also …
I don't use these constrained minimizers much, so I'm really no expert. Hopefully someone else can pitch in.
Do you have any simple examples of your use case, where the local minimizer fails to converge but the result is still useful?
(force-pushed from ec91c1c to b2e742e)
Ok, I started over, implementing the InvalidResult flag. Please comment on whether I should keep going with this.
(force-pushed from 06d5eb9 to 2a9912e)
(force-pushed from 2a9912e to eafdc5b)
It works now; bounds and constraints are both respected, fixing #7842.
(force-pushed from eafdc5b to d37d953)
(force-pushed from d37d953 to 7f97eac)
Has a consensus been found on this? Any chance this could be merged soon?
@chernals I'm waiting for comments from others on whether this is a good idea.
The PR here really helped bring my optimization a lot closer to the result. However, I ended up checking only for `minres.success`. Doing this led my basinhopping optimization to much more reasonable results compared to what the current 1.4.x branch generates. Before this PR, basinhopping would just get further and further away from satisfying the constraints, and focused its attention solely on minimizing.
Following up on this, because I'm sick of my Docker instances taking forever to build scipy instead of using a wheel 😂 Anyway, I guess I fail to see why we store the energy and x, and update the "global_min", on an optimization trial that did not result in success. This line in particular: https://github.com/scipy/scipy/blob/maintenance/1.5.x/scipy/optimize/_basinhopping.py#L154 Thanks in advance.
So it's been 3 years since I was working on this, and I barely remember what I was doing. It needs rebasing, since changes have been made since. Has anyone read through it lately who can comment on whether this is a good approach or not, such as the … Can anyone help fill in the missing values in the table of return values? It primarily needs technical input from other people.
(force-pushed from 7f97eac to d213e58)
Basinhopping was not working with bounds or constraints, since it would keep failed local minimization results that violated them. However, it's also harmful to reject *all* results from failed minimizations, since some are still valid function values and may improve the estimate of the global minimum (if the local minimizer's maxiter is set low, for instance). So:

- Created a sentinel value to indicate that minimization has failed *and* the result is invalid and should not be kept by global optimizers. InvalidResult has a boolean False value for backwards compatibility with functions that only care whether minimization failed or not.
- Modified basinhopping to reject InvalidResult as a global candidate.
- Modified tnc, slsqp, cobyla, lbfgsb to return InvalidResult when the minimum is out of bounds or violates constraints. (TODO: Are there more conditions that need to be flagged?)
Initial minimization should check for InvalidResult, too. Clarify "initial" minimization failed message.
(force-pushed from f127d97 to dd7f394)
If you're interested in finishing this up @endolith, I can review it. Please let me know.
"Performance" = "likelihood of finding the correct answer", right? Or "computational speed"? I don't know if it would improve either. It's been 5 years since I made this, but if I remember correctly, it would only improve the likelihood of finding the solution when there are bounds or constraints. If there aren't any, it should have no effect. Do the benchmarks use those?
```diff
@@ -152,7 +152,7 @@ def one_cycle(self):

         accept, minres = self._monte_carlo_step()

-        if accept:
+        if accept and minres.success is not scipy.optimize.InvalidResult:
```
This will still print `accept 1` when it hits `self.print_report(minres.fun, accept)`, even though the step is not "accepted". Is that good or bad?
Adding the `success` criterion to the `accept_tests` isn't possible, because those are based only on the new and old function values.
Performance = % of successes and number of objective function evaluations. All four other global optimizers tested did better in both categories. I remember adding some problems for global optimizers with constraints... but those may have been tests, not benchmarks. I think that few, if any, of the benchmarks have constraints.
Reading through all this, the original problem could be solved simply by adding something like

```python
elif not minres.success:
    accept = False
    break
```

to the … But seeing how the global optimizers add different keys to their output, I think adding more keys to … This algorithm is similar to Dual Annealing, which seems to reject out-of-bounds values after the local minimization step: https://github.com/scipy/scipy/blob/v1.9.1/scipy/optimize/_dual_annealing.py#L425 (scipy/optimize/_dual_annealing.py, lines 423 to 430 at 2e5883e)
The `minres.success` check was the exact approach I had used when we were utilizing a basinhopping minimization with constraints. Before that modification, the procedure would take drastically longer and would fail to find a solution that satisfied the constraints.
Yeah, there was no consensus, so I broke off #7954 from this, for example, and other things can be broken off into smaller PRs, too.
OK @endolith, but please note that …
OK, based on #7819 (comment) I'll close this.
Based on #7799

- … Metropolis …, but could happen with a custom `accept_test`
- minimization fails / result is invalid
- `stepsize` of the same shape as the starting guess (when variables have different units/scales)
- Support Multistart? (or just explain how to configure it: reject all steps and make stepsize wide enough to cover the area of interest)
- `brute()` (separate PR)