Feature suggestion: post-solver feasibility tolerance checks #434
Comments
I totally agree. Sidenote:
@ipa-lth there absolutely is a feature in cvxpy for checking the constraints against the valued variables. In cvxpy 1.0, you can use `constraint.violation()`: just iterate over all constraints and verify that the violations are within your tolerances. I actually do this in one (rather messy) line: `max([max(c.violation().flatten().tolist()) for c in p.constraints])`. My proposal is essentially that this feasibility checking be done automatically, rather than being left to the user when debugging.
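Unpacking that one-liner, the pattern might look like the sketch below. The helper name `max_constraint_violation` is made up for illustration; it only assumes each constraint object exposes a `violation()` method returning a NumPy array, as cvxpy 1.0 `Constraint` objects do.

```python
import numpy as np

def max_constraint_violation(constraints):
    """Largest elementwise violation across all constraints.

    Assumes each constraint has a ``violation()`` method that returns a
    NumPy array of residuals, as cvxpy 1.0 ``Constraint`` objects do.
    """
    return max(float(np.max(c.violation())) for c in constraints)
```

After `problem.solve()`, comparing `max_constraint_violation(problem.constraints)` against your own tolerance is the manual version of the check this issue proposes automating.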
@akshayka the importance of judging feasibility in the context of the solver became very clear during the Stanford convex optimization class this year; it was a common complaint among the students. But to properly implement this in the current cvxpy, I feel the check should happen in the reductions (at least some of them). Right now, reductions don't consider how well a problem was solved, only the status. What do you guys think?
I'd like to start making progress on this problem. Here are some thoughts.

We should be careful about changing the status code returned by a solver. People often write code that checks whether cvxpy's reported status has a particular value, so this might be a breaking change for some users. A less intrusive option is to just raise a warning; the warning could state the intention to report a different status in later versions of cvxpy. Pinging @akshayka @SteveDiamond -- what are your thoughts on this?

We might also consider post-solver optimality checks. It's pretty easy to check that dual variables satisfy their conic constraints (although this isn't universally the case). Checking complementarity is also easy. The hard part is checking for a stationary Lagrangian. @SteveDiamond can you remind me why you introduced
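For the "checking complementarity is also easy" part, a minimal numerical sketch might look like this. The function name and interface are hypothetical; it just assumes you can extract the constraint slacks and the corresponding dual multipliers from a candidate solution.

```python
import numpy as np

def complementarity_gap(slacks, duals):
    # slacks: nonnegative residuals s_i of the inequality constraints at the
    #         candidate solution
    # duals:  corresponding nonnegative dual multipliers y_i
    # At an exact KKT point, s_i * y_i == 0 for every i; the returned value
    # measures how far the pair is from complementary slackness.
    s = np.asarray(slacks, dtype=float)
    y = np.asarray(duals, dtype=float)
    return float(np.max(np.abs(s * y)))
```

A gap far above the solver's tolerance would be one more post-solve signal that "optimal" should be downgraded or warned about.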
I'm fine with starting with a warning. We can always build on that later, whether by adding an additional status as you suggested or something else.
Sometimes when I solve a problem, cvxpy's status code is "optimal", but the resulting cvxpy Constraint objects have very large violations (on the order of 1e0 or even 1e1 on an absolute scale).
Why might this happen?
Interface languages like cvxpy (or YALMIP) need to translate a low-level solver's status code into something in their own parlance. For example, if MOSEK has status codes at the level of "optimal / near optimal / inaccurate / infeasible", how should cvxpy map these to its status codes of "optimal / inaccurate / infeasible"? This is a hard question, and there is no perfect answer.
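As an illustration of why this mapping is lossy, consider the hypothetical table below. The status names here are placeholders, not the actual MOSEK or cvxpy identifiers; the point is only that several distinct solver-level outcomes can collapse into one interface-level status.

```python
# Hypothetical solver-status -> interface-status mapping. Two different
# solver outcomes ("near_optimal" and "inaccurate") collapse into the same
# interface status, so information about solution quality is lost.
STATUS_MAP = {
    "optimal": "optimal",
    "near_optimal": "optimal_inaccurate",
    "inaccurate": "optimal_inaccurate",
    "infeasible": "infeasible",
}
```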
The user specifies some precision with their formulation in mind, but it may be that much higher precision is needed to account for the transformations between the user's formulation and what the solver actually sees. Moreover, I imagine it is very difficult to algorithmically determine the increased precision requirements implied by those problem transformations.
Why is this a problem?
It is a reasonable expectation for the user that if a solver is called with a given tolerance (say, 1e-6), they should not be returned a solution that simultaneously has status code "optimal" and an inequality constraint violated by many orders of magnitude more than that tolerance (say, an absolute violation of ~1.5, a full seven orders of magnitude above the specified tolerance).
Experts in optimization can and should know better than to trust a status code, but cvxpy is targeted at a broad audience of scientists and engineers. By lowering the barrier to using advanced optimization techniques, cvxpy should also raise the safeguards against using them incorrectly. Both CVX and cvxpy already do this by issuing warnings or flat-out refusing to solve problems that are easily verified to be non-convex.
What steps might be taken to address this problem?
In all honesty, I think it could be a very interesting area of research in optimization to consider how numerical issues in canonicalization or standardization should factor into selecting precision criteria. While it is certainly beyond the scope of cvxpy's developers to single-handedly take on this issue, it would be nice to take some early steps in that direction. Here are some bare-bones changes to cvxpy that I think would meaningfully improve the situation:
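The warning-based approach discussed earlier in the thread might be sketched roughly as follows. This is not cvxpy's actual implementation; the function name, tolerance default, and warning text are all illustrative, and the only assumed API is `Constraint.violation()`.

```python
import warnings

import numpy as np

def warn_if_infeasible(constraints, tol=1e-6):
    """Hypothetical post-solve check: warn when a problem reported as solved
    still violates its constraints well beyond the requested tolerance."""
    worst = max((float(np.max(c.violation())) for c in constraints),
                default=0.0)
    if worst > tol:
        warnings.warn(
            "Solution violates constraints by up to %.2e, exceeding the "
            "solver tolerance %.0e." % (worst, tol))
    return worst
```

A check like this, run automatically after every solve, would surface the "optimal status with ~1.5 violation" scenario described above without breaking code that matches on status strings.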
What are your thoughts?