
Segmentation fault on HUESTIS problem #62

Closed
stephane-caron opened this issue Oct 28, 2022 · 10 comments

Comments

@stephane-caron
Contributor

While reproducing the Maros-Meszaros test set in qpsolvers_benchmark, I ran into a segmentation fault with ProxQP 0.2.2 on the HUESTIS problem.

Reproduction steps

Clone qpsolvers_benchmark, then run:

$ python run_benchmark.py --problem HUESTIS --solver proxqp

Outcome on my machine:

Running problem HUESTIS with proxqp...
[1]    89986 segmentation fault (core dumped)  python run_benchmark.py --problem HUESTIS --solver proxqp

Details

The .mat files are the ones from proxqp_benchmark.

The problems are separated into constraint $l \leq C x \leq u$ and box $lb \leq x \leq ub$ inequalities in from_mat_file; equality constraints are then extracted in from_double_sided_ineq. These are finally converted back to the ProxQP format $l \leq C x \leq u$ by qpsolvers in proxqp_combine_inequalities. There may be something wrong in these back-and-forth conversions, but in any case, the result is a reproducible segfault for ProxQP.
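For context, the equality-extraction step can be sketched as follows. This is a minimal illustration with made-up data and hypothetical variable names, not the actual qpsolvers implementation: rows of $l \leq C x \leq u$ where $l = u$ are really equalities $A x = b$, and the remaining rows stay double-sided inequalities.

```python
import numpy as np
import scipy.sparse as spa

# Double-sided inequality data: l <= C x <= u.
C = spa.csr_matrix(np.array([[1.0, 0.0],
                             [0.0, 1.0],
                             [1.0, 1.0]]))
l = np.array([0.0, 2.0, -1.0])
u = np.array([1.0, 2.0, 3.0])

eq_rows = np.flatnonzero(l == u)   # rows that are actually equalities
in_rows = np.flatnonzero(l != u)   # genuinely double-sided rows

A, b = C[eq_rows], l[eq_rows]               # equality part: A x = b
C_in, l_in, u_in = C[in_rows], l[in_rows], u[in_rows]
```

Recombining for a solver that takes the single form $l \leq C x \leq u$ then amounts to stacking $A$ with $l = u = b$ back on top of the inequality rows, which is where a conversion bug could slip in.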

I haven't checked whether HUESTIS is solved fine in proxqp_benchmark (can do, just some extra work to get the benchmark working without Mosek or qpOASES installed).

@jcarpent
Member

@stephane-caron Thanks for reporting this issue.
Could you provide a minimal reproducing example independent from your framework?

@jcarpent
Member

@Bambade Did you try HUESTIS problem?

@jcarpent
Member

My first hint is that the problem is too large in memory if you use the dense back-end.
Then malloc fails, and on our side we are not checking for that during our pre-allocation phase ...

Antoine only checked problems of size <= 1000 if I remember correctly. @Bambade Could you confirm?
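A back-of-the-envelope estimate supports this hint: a single dense 10000 × 10000 matrix in double precision already takes roughly 0.75 GiB, and a dense solver's workspace typically holds several objects of that order (Hessian, KKT matrix, factorization), so an unchecked allocation failure is plausible at this scale. A quick sketch of the arithmetic:

```python
# Memory cost of one dense n x n matrix in float64 (8 bytes per entry).
n = 10_000
gib = n * n * 8 / 2**30  # bytes -> GiB
print(f"{gib:.2f} GiB per dense {n} x {n} matrix")
```

This prints about 0.75 GiB per matrix, which is why problems of this size are only practical with the sparse back-end.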

@Bambade
Collaborator

Bambade commented Oct 28, 2022

@stephane-caron thanks for reporting this issue.
@jcarpent yes indeed. I have not tested it, as H is of size ~10000 x 10000 (and roughly the same for the full constraint matrix A: 10002 x 10000), and the Maros benchmarks were restricted to matrices of dimension <= 1000.

@stephane-caron
Contributor Author

Could you provide a minimal reproducing example independent from your framework?

Sure. This archive HUESTIS.zip decompresses to HUESTIS.mat, and here is a reproduction script:

import proxsuite
import scipy.io as spio

# Load the HUESTIS problem data (sparse matrices and dense vectors).
m = spio.loadmat("HUESTIS.mat", squeeze_me=True)
P = m["P"].astype(float).tocsc()  # quadratic cost matrix
q = m["q"].astype(float)          # linear cost vector
A = m["A"].astype(float).tocsc()  # equality constraint matrix: A x = b
b = m["b"].astype(float)
C = m["C"].astype(float).tocsc()  # inequality constraint matrix
l = m["l"].astype(float)          # lower bounds on C x
u = m["u"].astype(float)          # upper bounds on C x
proxsuite.proxqp.sparse.solve(P, q, A, b, C, l, u)
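One thing that helps when debugging segfaults like this one is to sanity-check the problem dimensions before handing the data to the solver, so that an inconsistency in the conversion pipeline fails with a clear Python error instead of a crash. A minimal sketch (the helper name `check_qp_dims` is hypothetical, not part of any library):

```python
import numpy as np
import scipy.sparse as spa

def check_qp_dims(P, q, A, b, C, l, u):
    """Sanity-check QP data dimensions; return (n, n_eq, n_in) or raise."""
    n = P.shape[0]
    assert P.shape == (n, n), "P must be square"
    assert q.shape == (n,), "q must match P"
    assert A.shape[1] == n and b.shape == (A.shape[0],), "A, b inconsistent"
    assert C.shape[1] == n and l.shape == u.shape == (C.shape[0],), \
        "C, l, u inconsistent"
    return n, A.shape[0], C.shape[0]

# Tiny example: n = 2 variables, 1 equality row, 1 inequality row.
P = spa.eye(2, format="csc")
q = np.zeros(2)
A = spa.csc_matrix(np.array([[1.0, 1.0]]))
b = np.array([1.0])
C = spa.csc_matrix(np.array([[1.0, 0.0]]))
l = np.array([0.0])
u = np.array([1.0])
dims = check_qp_dims(P, q, A, b, C, l, u)  # (2, 1, 1)
```

For HUESTIS this would report n = 10000, which points at the size-related allocation issue discussed above.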

@jcarpent
Member

Super. Very nice. Thanks @stephane-caron for the quick example.

@fabinsch
Collaborator

Hello @stephane-caron , thanks for your example. Good news, after #73, the output for your problem is

-------------------------------------------------------------------------------------------------

                              ProxQP  -  Primal Dual Proximal QP Solver
     (c) Antoine Bambade, Sarah El Kazdadi, Fabian Schramm, Adrien Taylor, Justin Carpentier
                                         Inria Paris 2022        

-------------------------------------------------------------------------------------------------

problem:  
          variables n = 10000, equality constraints n_eq = 2,
          inequality constraints n_in = 10000, nnz = 40000,

settings: 
          backend = sparse,
          eps_abs = 1e-05 eps_rel = 0
          eps_prim_inf = 0.0001, eps_dual_inf = 0.0001,
          rho = 1e-06, mu_eq = 0.001, mu_in = 0.1,
          max_iter = 10000, max_iter_in = 1500,
          scaling: on, 
          timings: on, 
          initial guess: equality constrained initial guess. 

[outer iteration 1]
| primal residual=3.45e+03| dual residual=1.88e-02 | mu_in=1.00e-01 | rho=1.00e-06
[inner iteration 1]
| inner residual=1.33e+03 | alpha=9.63e-01
[inner iteration 2]
| inner residual=1.45e+02 | alpha=9.98e-01
[inner iteration 3]
| inner residual=1.36e-12 | alpha=1.00e+00
[outer iteration 2]
| primal residual=4.41e+02| dual residual=6.02e+03 | mu_in=1.00e-02 | rho=1.00e-06
[inner iteration 1]
| inner residual=2.55e+02 | alpha=9.87e-01
[inner iteration 2]
| inner residual=4.83e-13 | alpha=1.00e+00
[outer iteration 3]
| primal residual=5.07e+01| dual residual=6.80e+03 | mu_in=1.00e-03 | rho=1.00e-06
[inner iteration 1]
| inner residual=3.12e+01 | alpha=9.98e-01
[inner iteration 2]
| inner residual=5.33e-14 | alpha=1.00e+00
[outer iteration 4]
| primal residual=5.15e+00| dual residual=6.89e+03 | mu_in=1.00e-04 | rho=1.00e-06
[inner iteration 1]
| inner residual=4.61e-01 | alpha=1.00e+00
[inner iteration 2]
| inner residual=1.56e-15 | alpha=1.00e+00
[outer iteration 5]
| primal residual=5.16e-01| dual residual=6.90e+03 | mu_in=1.00e-05 | rho=1.00e-06
[inner iteration 1]
| inner residual=5.46e-12 | alpha=1.00e+00
[outer iteration 6]
| primal residual=5.16e-02| dual residual=9.28e-07 | mu_in=1.00e-05 | rho=1.00e-06
[inner iteration 1]
| inner residual=2.51e-15 | alpha=1.00e+00
-------------------SOLVER STATISTICS-------------------
outer iter:   6
total iter:   11
mu updates:   4
rho updates:  0
objective:    3.48e+11
status:       Solved
run time:     1.35e+05
--------------------------------------------------------

@jcarpent
Member

Solved by #73.

@stephane-caron
Contributor Author

Just chiming in to report that this issue was affecting some 30 problems in the Maros-Meszaros test set. With all of them gone, the success rate of ProxQP on sparse problems has jumped from 72% to 85% (with default settings). Good job! 🚀

(Diff of the update: qpsolvers/qpbenchmark@89859f8.)

@jcarpent
Member

jcarpent commented Nov 8, 2022

This is a nice report, @stephane-caron.
It is also encouraging as we move toward a truly robust solver.
Great job @Bambade @fabinsch ;)
