How to hint so that polish has a higher success chance without reducing eps?
#243
Another idea is to change the basis of my problem. In the above case my constraints, excluding the bounds, are basically C^T x = 0, so I can find a basis of the feasible set (the null space of C^T). Hence I can re-parametrise the problem in terms of the coefficients in that basis:

```python
import numpy as np
import scipy.sparse
import osqp

P = scipy.sparse.csc_matrix((7, 7))  # empty quadratic term, so this is an LP
A = scipy.sparse.csc_matrix(basis)   # basis of the null space, computed elsewhere
q = np.array([-0.0012773 , -0.01149745, -0.01883068, 0.0068729 ,
              -0.0044216 , 0.00198029, -0.0025005 ])  # original_q.dot(basis)
# Same bounds as original
lb = np.array([0., 0., 0., 0., 0., 0., 0., 0., 0.])
ub = np.array([50, 50, 50, 47.996, 9.8307, 50, 119, 50, 50])
m = osqp.OSQP()
m.setup(P=P, q=q, A=A, l=lb, u=ub, polish=True, eps_abs=1e-4, eps_rel=1e-4)
results = m.solve()
```
This problem has 2 fewer variables and 2 fewer constraints, but the constraint matrix is no longer sparse. What do you think of this? Would it typically speed up convergence or not?
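For concreteness, a minimal sketch of the change of basis described above, using `scipy.linalg.null_space`; the matrix `C` and linear term `q_orig` here are made-up stand-ins for the real data:

```python
import numpy as np
from scipy.linalg import null_space

# Stand-in data: 9 variables, 2 equality constraints C^T x = 0.
rng = np.random.default_rng(1)
C = rng.standard_normal((9, 2))
q_orig = rng.standard_normal(9)

# Columns of `basis` span the null space of C^T, so every feasible
# x can be written as x = basis @ z for some coefficient vector z.
basis = null_space(C.T)          # shape (9, 7)

# Reparametrised linear term, matching q = original_q.dot(basis) above.
q = basis.T @ q_orig             # shape (7,)

# The original bounds 0 <= x <= ub become 0 <= basis @ z <= ub,
# i.e. the new constraint matrix is `basis` itself (generally dense).
```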
I have many small problems of various sizes (number of variables & constraints both between 5 and 200).
The quadratic term in the objective is typically quite small.
I want to find a good balance between the number of iterations and the quality of solutions. (I have to use OSQP throughout instead of switching to other optimizers.)
The following is one example of the data; for simplicity I removed the quadratic term, so it is basically an LP.
As you can see, under these settings the solution has low accuracy. In fact the solutions sometimes lie outside the bounds.
I know that if I reduce eps, I have a higher chance of successful polishing.
My problems all have a structure where every variable is bounded between 0 and ub (that's the identity block in the constraints). The number of other constraints is small, perhaps just 1 or 2, and they are equalities to 0. I wonder if there is any smart way to exploit this structure?
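The structure described above can be assembled like this, since OSQP takes a single constraint matrix with two-sided bounds l <= Ax <= u; the matrix `C` and the box bound are made-up stand-ins:

```python
import numpy as np
import scipy.sparse as sp

# Stand-in data: n box-bounded variables, two equality constraints C^T x = 0.
n = 7
rng = np.random.default_rng(0)
C = rng.standard_normal((n, 2))
ub_box = np.full(n, 50.0)

# Stack the identity block (the bounds 0 <= x <= ub) on top of the
# equality rows; an equality is encoded by setting l == u == 0.
A = sp.vstack([sp.identity(n, format="csc"),
               sp.csc_matrix(C.T)], format="csc")
l = np.concatenate([np.zeros(n), np.zeros(2)])
u = np.concatenate([ub_box, np.zeros(2)])
```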
I observe that in many cases, when the polishing is unsuccessful, the solutions are quite close to optimal. E.g. if a variable ends up at 1e-2, then the exact solution is very likely 0.
If a variable ends up at 49.99 and its upper bound is 50, then the exact solution is very likely 50.
Therefore, I would like to somehow manually guess a better solution based on the unsuccessful polished one, and warm start OSQP so that it has a higher chance of polishing.
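The guessing step could be a simple snap-to-bounds heuristic like the sketch below (my own helper, not an OSQP feature); the tolerance is an assumption:

```python
import numpy as np

def snap_to_bounds(x, lb, ub, tol=1e-2):
    """Move entries that sit within `tol` of a bound onto that bound.

    Heuristic: variables ending near 0 or near ub in the unpolished
    solution are assumed to be active at that bound at the optimum.
    """
    x = np.asarray(x, dtype=float).copy()
    lo = np.abs(x - lb) < tol
    hi = np.abs(ub - x) < tol
    x[lo] = lb[lo]
    x[hi] = ub[hi]
    return x

lb = np.zeros(3)
ub = np.array([50.0, 50.0, 50.0])
x_approx = np.array([1e-3, 49.995, 20.0])   # low-accuracy solver output
x_guess = snap_to_bounds(x_approx, lb, ub)  # entries near a bound get snapped
```

The guess can then be passed back with OSQP's `m.warm_start(x=x_guess)` before calling `m.solve()` again.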
Unfortunately, it doesn't quite work.
Any suggestions how I can achieve high quality solutions in a small number of iterations here?
(I also tried hinting with `y`, but never really understood how to guess a sensible value for the dual.)