scipy.optimize.minimize SLSQP leads to out of bounds solution #3056
Could you give an actually runnable test script that shows this behavior?

This works correctly:

What sort of initial values, etc., are you using?
Good day! Thank you for the quick reply. Unfortunately, I cannot recover the state now. Thank you!
Thanks! SLSQP is from ACM TOMS 733 (used in Scipy with permission from ACM), and issues like this are probably issues in the algorithm itself, and difficult to debug without a test case.
Hey @pv, I ran into this as well; however, it is mostly due to numerical roundoff errors (for example, one of my parameters has a lower bound of 0, but SLSQP returns -1e120 or something silly). There are a few cases where it goes wildly off track, but that typically happens when the initial condition fails to respect the bounds -- I've had this happen using SLSQP as the local optimizer driving basinhopping.
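For what it's worth, one way to keep basinhopping from handing SLSQP an infeasible starting point is a custom `take_step` that clips each proposal to the box. A minimal sketch, with a made-up objective `f` standing in for the real one:

```python
import numpy as np
from scipy.optimize import basinhopping

# Hypothetical objective standing in for the poster's real problem.
def f(x):
    return (x[0] - 1.0)**2 + (x[1] - 0.5)**2

lb, ub = np.array([0.0, 0.0]), np.array([2.0, 2.0])

class ClippedStep:
    """Random displacement clipped to the box, so the SLSQP local solve
    never starts from a point that violates the bounds."""
    def __init__(self, lb, ub, stepsize=0.5):
        self.lb, self.ub, self.stepsize = lb, ub, stepsize
    def __call__(self, x):
        x = x + np.random.uniform(-self.stepsize, self.stepsize, x.shape)
        return np.clip(x, self.lb, self.ub)

minimizer_kwargs = {'method': 'SLSQP', 'bounds': list(zip(lb, ub))}
res = basinhopping(f, x0=[1.0, 1.0], niter=25,
                   minimizer_kwargs=minimizer_kwargs,
                   take_step=ClippedStep(lb, ub))
print(res.x)
```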
Happened to me too; here's a minimal working example:

```python
import scipy.optimize as optimize
import numpy as np

X = np.array([[1.020626, 1.013055], [0.989094, 1.059343]])
freq = 13.574380165289256
x_0 = [1., 1.]

def objective(b):
    def foo(r_log, freq):
        mu, sd = r_log.mean(), r_log.std()
        sd += 0.5 / freq
        return mu / sd * np.sqrt(freq)
    print(b)  # log every point at which the solver evaluates the objective
    return -foo(np.log(np.maximum(np.dot(X - 1, b) + 1, 0.2)), freq=freq)

cons = ({'type': 'ineq', 'fun': lambda b: 2. - sum(b)},)
res = optimize.minimize(objective, x_0, bounds=[(0., 2.)] * len(x_0),
                        constraints=cons, method='SLSQP')
print(res)
```

The algorithm always gets to the correct result ([1.18818046, 0.81181953]) and then suddenly jumps out of bounds.
It appears that I've encountered a similar issue. The following code did not produce the results that I expected. I don't know whether it's a bug or not, but I suspect that it is. Problematically (for me at least), the behavior is very different when undefined is set to None, 0.0, 1.0, etc. I expected that the bounds parameter would ensure that f(x) was not invoked with an invalid value of x. SLSQP is not the only minimizer that exhibits this problem. Sample code:
@pv yep, working like a charm now. Thanks.
I actually still see the same issue (also with L-BFGS-B). If the solution is at the upper bound, the gradient is computed by stepping beyond the upper bound instead of being computed inside the bound.
@davidpasquale Can you submit an example? |
@mdhaber Here is a basic example (the solution is also wrong!):
@mdhaber Here is another example where the result is also wrong:
@davidpasquale I think that is the same sort of issue reported by @cfcohen (numerical differentiation not respecting bounds).

It can be argued that this is not a defect, as we typically consider constraints to be satisfied even when they are violated, as long as the violation is within a certain tolerance. This is obvious for equality constraints, but we even allow a tolerance on inequality constraints: inequality constraints are often active at an optimal solution, "exact" equality is generally impossible to achieve, and it would require special logic (often not part of the original algorithm) to ensure that the inequality constraint is strictly respected. By the same logic, a tolerance on bounds might be allowed.

Then again, it's unlikely that this behavior is desirable, and it seems possible to change, so we might want to change it anyway. @pv @antonior92 Is this something you think should be fixed?
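To illustrate the "within tolerance" view with a made-up solution vector: a violation at round-off level can be accepted and then projected back onto the box after the solve. This is a workaround sketch, not current scipy behavior:

```python
import numpy as np

lb, ub = np.array([0.0, 0.0]), np.array([2.0, 2.0])   # example bounds
x = np.array([-1e-12, 0.81181953])                    # round-off-level violation

tol = 1e-8
if np.all(x >= lb - tol) and np.all(x <= ub + tol):
    x = np.clip(x, lb, ub)   # project back onto the feasible box
print(x)                     # [0.         0.81181953]
```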
@davidpasquale In the meantime you might try:

But @antonior92, should I open a separate issue?
BTW @davidpasquale, if you can, provide analytical derivatives; all these examples solve without violating bounds when analytical derivatives are provided.
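A minimal sketch of what that looks like, with a toy objective and its exact gradient (the point is only that passing a callable via `jac` means the objective is never finite-differenced, so it is never evaluated outside the bounds for that purpose):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.sum((x - 0.5)**2)

def grad_f(x):
    # Exact gradient of f, so no finite-difference steps are needed.
    return 2.0 * (x - 0.5)

res = minimize(f, x0=[1.5, 1.5], jac=grad_f,
               bounds=[(0.0, 2.0)] * 2, method='SLSQP')
print(res.x)
```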
Dear @mdhaber, thanks for your reply.
I understand, and it seems you're not alone. Yes, one-sided derivative approximation is one thing we'd need to do. There has been talk recently of overhauling the way the optimizers handle derivative approximation, so I'll add this to the conversation. |
|
@andyfaff In your example, how far outside the bounds is L-BFGS-B asking for the gradient to be evaluated? Considerably so, or is it within some sort of reasonable tolerance? Can you post that example? Do you know of any other such examples?

I've been looking for an example like that for #4916. I wrote a test to try to observe this sort of behavior, but even with an initial guess outside the bounds, every solver (with one exception) kept its function evaluations within the bounds.

You wrote in #10112:

So if possible, would you be in favor of trying to add that?
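One way to measure this is to wrap the objective and log any evaluation that falls outside the box. A sketch with a toy objective; the solver, bounds, and starting point are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize

lb, ub = np.zeros(2), 2.0 * np.ones(2)

def f(x):
    return np.sum((x - 1.0)**2)

def logged(x):
    # Record by how much, if at all, the solver steps outside the box.
    excess = np.maximum(lb - x, 0.0) + np.maximum(x - ub, 0.0)
    if excess.any():
        print("out of bounds by", excess, "at x =", x)
    return f(x)

minimize(logged, x0=[1.9, 1.9], bounds=list(zip(lb, ub)), method='L-BFGS-B')
```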
In #10673 I thought I had come across such an example. It turns out that I was wrong. The code I've introduced involves creation of a ScalarFunction object, and as part of the construction it evaluates the function and gradient at the initial guess. If the initial guess is outside the bounds then an error is raised. This can be avoided by clipping the initial guess, as sketched below. I thought that this was the fault of the Fortran optimizer loop, but it wasn't. Thankfully this issue made me look more closely and realise the mistake I was making.

The changes I'm making as part of #10673 (I've done LBFGSB, TNC, CG, BFGS) will prevent the finite difference calculation going outside the bounds, so long as the guess obeys the bounds. It probably wouldn't work if the lower and upper bound are equal.
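A minimal sketch of that clipping step; `x0`, `lb`, and `ub` are placeholder values:

```python
import numpy as np

lb, ub = np.array([0.0, 0.0]), np.array([2.0, 2.0])
x0 = np.array([2.5, -0.3])   # infeasible starting point
x0 = np.clip(x0, lb, ub)     # now safe to hand to the bounded solver
```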
Fortunately, it seems that LBFGSB, TNC, and SLSQP already clip the guess to stay within bounds, and trust-constr raises an error if the initial guess is out of bounds. You are adding bounds support for CG and BFGS? If so, maybe their guess should be clipped to match the rest.
I'm not adding bounds support. I'm just converting those to use approx_derivative. |
25d7d56 converts SLSQP to use approx_derivative, which should respect parameter bounds. However, approx_derivative is only used for the gradient of the objective with respect to the parameters; it's not used for calculating the gradient of the constraints.
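For reference, approx_derivative lives in scipy.optimize._numdiff (a private module, so it may change without notice), and its bounds handling can be exercised directly:

```python
import numpy as np
from scipy.optimize._numdiff import approx_derivative  # private API, may change

def f(x):
    return np.sum(np.sin(x))

lb, ub = np.zeros(2), np.array([2.0, 2.0])
x = np.array([0.5, 2.0])   # second component sits exactly at its upper bound

# With bounds supplied, the step for x[1] is taken backwards rather than
# past the upper limit:
g = approx_derivative(f, x, method='2-point', bounds=(lb, ub))
print(g)   # close to cos(x)
```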
I've had a lot of trouble because of this bug. I typically use SLSQP linked to old external Fortran programs like XFOIL and AVL. These codes provide output as a text file with 6-8 digits of precision. My solution was to update the analysis code to return 10-12 digits in exponential form (%.12e instead of %.6f). Also, I always normalize the bounds to [-1, 1] and set the epsilon value to a relatively large number, from 5e-4 to 1e-3. It works in most cases, but sometimes I suspect that the solution is not a true optimum and some improvement is possible.
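A sketch of that tuning, with a stand-in objective whose rounding mimics reading values back from a text file with limited precision (the real XFOIL/AVL-driven function is not shown in the thread):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Round to 6 digits to model the limited precision of the file output.
    return round(float(np.sum((x - 0.3)**2)), 6)

x0 = np.zeros(2)
res = minimize(f, x0, method='SLSQP',
               bounds=[(-1.0, 1.0)] * len(x0),
               options={'eps': 5e-4,   # FD step sized well above the output noise
                        'ftol': 1e-7})
print(res.x)
```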
In #10673 the numerical differentiation code used by SLSQP, L-BFGS-B (and a few others) respects bounds: e.g., if a value is at an upper limit, a reverse difference is used instead of a forward difference. This is implemented in approx_derivative.
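As a rough hand-rolled illustration of that rule (not the actual scipy implementation): at an upper limit, step backwards so f is never evaluated outside [lb, ub].

```python
import numpy as np

def bounded_diff(f, x, lb, ub, h=1e-8):
    g = np.zeros_like(x, dtype=float)
    f0 = f(x)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        if x[i] + h <= ub[i]:            # room for a forward step
            g[i] = (f(x + e) - f0) / h
        else:                            # at the upper limit: step backwards
            g[i] = (f0 - f(x - e)) / h
    return g
```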
Just ran into this one for L-BFGS-B (SciPy 1.6.2):
In my case I don't have a short repro, and even if I did, it wouldn't be helpful, since I have been unable to reproduce this even after quite a few attempts (the RNG seems friendly to me).
@fuglede all the minimize methods are deterministic, so if you do find an MWE that demonstrates the issue please post it in a new issue (closed issues are not likely to be curated). When an LBFGSB minimization is started the initial vector is clipped to the bounds, so the initial guess should be within the bounds. The numerical differentiation strictly obeys bounds now. This causes issues where an upper bound is equal to a lower bound, but that doesn't seem to be the case here.
Ah, both of my
I'm not. (But why would I expect that to make a difference here -- are there function calls taking place between the result being returned from one call and the start of the next?)

But again, I'm still unable to reproduce the issue, so I probably wouldn't pay more attention to it myself, unless something like this happens to show up elsewhere.
The SLSQP algorithm goes off to infinity without accounting for the specified bounds if the local gradient in one of the directions is close to zero. This issue shows up in the 2D and 7D bounded, constrained problems I'm running now. The objective function is:

```python
import numpy
from numpy import sin

def objective(x):
    _obj = lambda _x: (5.*_x - 2)**2 * sin(12.*_x - 4)
    return _obj(numpy.linalg.norm(x))

bounds = ((-1, 1), (-1, 1))
```
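The comment doesn't show the actual call; presumably something along these lines triggers the behavior, with the starting point an assumption:

```python
from scipy.optimize import minimize

res = minimize(objective, x0=[0.5, 0.5], bounds=bounds, method='SLSQP')
print(res.x)
```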