Feature suggestion: post-solver feasibility tolerance checks #434

Open
rileyjmurray opened this issue Feb 7, 2018 · 5 comments

@rileyjmurray
Collaborator

Sometimes when I solve a problem, cvxpy's status code is "optimal", but the resulting cvxpy Constraint objects have very large violations (on the order of 1e0 or even 1e1 on an absolute scale).

Why might this happen?

  1. Interface languages like cvxpy (or yalmip) need to translate a low-level solver's status codes into their own parlance. For example, if MOSEK has status codes at the level of "optimal / near optimal / inaccurate / infeasible", how should cvxpy map these to its status codes of "optimal / inaccurate / infeasible"? This is a hard question, and there is no perfect answer.

  2. The user specifies some precision with their formulation in mind, but it may be that much higher precision is needed to account for the transformations between the user's formulation and what the solver sees. Moreover, I imagine it is very difficult to algorithmically determine increased precision requirements given the problem transformations.

Why is this a problem?

  1. It is reasonable for the user to expect that if a solver is called with a given tolerance (say, 1e-6), they will not be returned a solution that simultaneously has status code "optimal" and has an inequality constraint violated by many orders of magnitude more than that tolerance (say, an absolute violation of ~1.5, a full 7 orders of magnitude above the specified tolerance).

  2. Experts in optimization can and should know better than to trust a status code, but cvxpy is targeted at a broad audience of scientists and engineers. By lowering the barrier to using advanced optimization techniques, cvxpy should also raise the safeguards against using them incorrectly. Both cvx and cvxpy already do this by issuing warnings or flat-out refusing to solve problems that are easily verified to be non-convex.

What steps might be taken to address this problem?

In all honesty, I think it could be a very interesting area of research in optimization to consider how numerical issues in canonicalization or standardization should factor into selecting precision criteria. While it is certainly beyond the scope of cvxpy's developers to single-handedly take on this issue, it would be nice to take some early steps in that direction. Here are some bare-bones changes to cvxpy that I think would meaningfully improve the situation:

  1. Create a global "feasibility override tolerance" parameter with some large default value (say 1e-4) that the user can change to their liking.
  2. Before a call to ".solve()" returns with status code "optimal", check that max(c.violation()) < TOL for every constraint c in the problem. If this condition fails, then override the status code to "post-solver infeasible" (or something to that effect). A sketch of both steps follows this list.
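
For concreteness, here is a minimal sketch of what steps 1 and 2 might look like, written as a wrapper rather than a change to cvxpy itself. The parameter name FEASIBILITY_OVERRIDE_TOL, the function name, and the "post-solver infeasible" status string are placeholders from this proposal, not existing cvxpy API:

import cvxpy as cp

FEASIBILITY_OVERRIDE_TOL = 1e-4  # hypothetical global parameter from step 1

def solve_with_feasibility_check(problem, tol=FEASIBILITY_OVERRIDE_TOL, **solve_kwargs):
    # Solve as usual, then second-guess an "optimal" status by measuring
    # the actual constraint violations (step 2 of the proposal).
    problem.solve(**solve_kwargs)
    status = problem.status
    if status == cp.OPTIMAL and problem.constraints:
        worst = max(float(c.violation().max()) for c in problem.constraints)
        if worst >= tol:
            status = "post-solver infeasible"  # placeholder status string
    return status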

What are your thoughts?

@ipa-lth

ipa-lth commented Feb 8, 2018

I totally agree.
I ran into some trouble using SCS within a bisection. Depending on the iteration, "optimal_inaccurate" flipped into "infeasible_inaccurate". For the bisection, an inaccurate solution would be good enough, as long as it does not violate the constraints. I am still working around this issue, but the initial confusion over why the bisection failed would not have happened with the proposed check.
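
For what it's worth, the acceptance test I have in mind looks roughly like this; a sketch assuming the check proposed above, with the helper name is_usable being my own:

import cvxpy as cp

def is_usable(problem, tol=1e-6):
    # Accept accurate and inaccurate solutions alike, as long as no
    # constraint is violated by more than tol.
    if problem.status not in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
        return False
    return all(float(c.violation().max()) < tol for c in problem.constraints)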

Sidenote:
As far as I understand, picos allows one to individually check/evaluate the constraints with the valued variables. Is there something comparable in cvxpy.Problem or cvxpy.Constraint?

@rileyjmurray
Collaborator Author

rileyjmurray commented Feb 8, 2018

@ipa-lth there absolutely is a feature in cvxpy for checking the constraints with the valued variables.

In cvxpy 1.0, you can use:
v = c.violation() # returns a numpy array; usually of the same dimensions as the Constraint expression
vlist = v.flatten().tolist() # gets those values as a list
c_viol = max(vlist) # ... does what it says it does.

Just iterate over all constraints and verify that these violations are within your tolerances. I actually do this in one (rather messy) line:

max([max(c.violation().flatten().tolist()) for c in p.constraints])
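
For completeness, here is a minimal end-to-end sketch of that check on a toy problem (using np.max instead of flatten/tolist, which also handles scalar constraints):

import numpy as np
import cvxpy as cp

x = cp.Variable(3)
p = cp.Problem(cp.Minimize(cp.sum_squares(x)),
               [x >= 0, cp.sum(x) == 1])
p.solve()

# Worst violation across all constraints; near machine precision here.
worst = max(float(np.max(c.violation())) for c in p.constraints)
print(p.status, worst)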

My proposal is essentially that this feasibility checking be done automatically, rather than being left to a user when debugging.

@SteveDiamond
Collaborator

@akshayka the importance of judging feasibility from the solver's context became very clear during the Stanford convex optimization class this year. It was a common complaint among the students. But to properly implement it in the current cvxpy, I feel the check should happen in reductions (at least some reductions). Right now, reductions don't consider how well a problem was solved, only the status.

What do you guys think?

@rileyjmurray
Collaborator Author

I'd like to start making progress on this problem. Here are some thoughts.

We should be careful about changing the status code returned by a solver. People often write code that checks whether cvxpy's reported status has a particular value, so this might be considered a breaking change. A less intrusive option is just to raise a warning. The warning could state intentions for reporting a different status in later versions of cvxpy. Pinging @akshayka @SteveDiamond -- what are your thoughts on this?
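
As a sketch of the warning option (the message text and the helper name warn_on_violation are just illustrative):

import warnings
import cvxpy as cp

def warn_on_violation(problem, tol=1e-4):
    # Leave problem.status untouched; just warn when "optimal" looks suspect.
    if problem.status == cp.OPTIMAL and problem.constraints:
        worst = max(float(c.violation().max()) for c in problem.constraints)
        if worst >= tol:
            warnings.warn("Status is 'optimal' but the worst constraint "
                          "violation is %.1e; a later cvxpy version may "
                          "report a different status here." % worst)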

We might also consider post-solver optimality checks. It's pretty easy to check that dual variables satisfy their conic constraints (although this isn't universally the case). Checking complementarity is also easy. The hard part is checking for a stationary Lagrangian. @SteveDiamond can you remind me why you introduced .grad to Expressions? I think it's been there since long before DPP.
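
To make the easy parts concrete, here is a sketch of the dual-feasibility and complementarity checks on a tiny LP, using only the public dual_value attribute; the hard stationarity check is deliberately omitted:

import numpy as np
import cvxpy as cp

x = cp.Variable(2)
ineq = x >= 0                       # keep handles to the constraints
eq = cp.sum(x) == 1
p = cp.Problem(cp.Minimize(x[0] + 2 * x[1]), [ineq, eq])
p.solve()

lam = ineq.dual_value               # duals of x >= 0 live in the nonnegative cone
print(np.all(lam >= -1e-8))         # dual conic feasibility
print(np.abs(lam * x.value).max())  # complementarity: lam_i * x_i ~ 0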

@SteveDiamond
Collaborator

.grad was added to support the DCCP package.

I'm fine with starting with a warning. We can always build on that: add an additional status as you suggested, or something else.
