User configurable linear solvers #65
Conversation
Fixes #22 (implements Pardiso indirect solver support) |
Force-pushed from ca3e252 to 5f00831.
I rebased the branch onto the latest commit on master (with faster projections). Testing against the latest commit on master on the Closest Correlation Matrix problem with N = 600 from the COSMO benchmark repo, I get:

Old code (using the CHOLMOD solver):
PR code with
PR code with

Update: It seems like the problem is |
Yes, I agree that
I tried making the diagonal matrix into

TL;DR: go back to assembling the KKT matrix the old way, I think. |
Assembling the KKT matrix like before helps with the CholmodKKTSolver; it is now as fast as it used to be. (See lines 244 to 260 in f299102, which should be really quick.) |
Two ideas, though admittedly I don't think either will make a difference in the time:
|
99% of the time was spent on reordering, which was caused by the fact that we provide Julia sparse matrices in CSC format while Pardiso expects CSR. This can be overcome by telling Pardiso to solve the transposed problem. I therefore added:

```julia
# set to allow non-default iparms
set_iparm!(ps, 1, 1)
# set the transpose flag (Pardiso: CSR, Julia: CSC)
set_iparm!(ps, 12, 1)
```

For the problem mentioned above I now get the following solve times:

Code with
Code with
Code with
Code with
It will be interesting to see how this changes when the Pardiso solvers use multiple threads. Edit: Fix typo |
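Why the transpose flag avoids the reordering can be checked without Pardiso at all: Julia's CSC buffers for a matrix are exactly the CSR buffers of its transpose, so a CSR solver handed the CSC data effectively sees the transposed system. A minimal check using only the stdlib `SparseArrays` (the matrix here is a generic stand-in, not the actual KKT system):

```julia
using SparseArrays, LinearAlgebra

# Build a generic sparse matrix as a stand-in for the KKT system.
A = sprand(8, 8, 0.3) + 5.0I

# CSR row pointers of A, computed by counting nonzeros per row.
rowcounts = zeros(Int, size(A, 1))
for r in rowvals(A)
    rowcounts[r] += 1
end
rowptr = cumsum([1; rowcounts])

# The CSC column pointers of transpose(A) coincide with the CSR row
# pointers of A, so a CSR solver fed A's CSC buffers actually factors A'.
# Setting Pardiso's transpose flag (iparm 12) then recovers Ax = b.
At = sparse(transpose(A))
@assert At.colptr == rowptr
```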
Did you mean for the last two sets of numerical results to be the same solver, or is one of them Cholmod? I agree it will be good to see the results for multi-threaded Pardiso. For the non-MKL version it can (must?) be done via some environment variable, e.g. |
Codecov Report

```diff
@@            Coverage Diff             @@
##           master      #65      +/-   ##
==========================================
- Coverage   93.93%   90.17%    -3.76%
==========================================
  Files          20       22       +2
  Lines        1500     1608     +108
==========================================
+ Hits         1409     1450      +41
- Misses         91      158      +67
==========================================
```

Continue to review the full report at Codecov. |
@migarstka regarding a discussion we had about this branch: I think custom arguments for the linear solvers can be passed in the same way as

Apart from that, what other things do you have in mind as necessary for this to be merged? |
@nrontsis Yes, it would be great if you would take a look at this. In my opinion, the implementation should fulfil the following requirements:

```julia
ps = MKLPardisoSolver()
set_nprocs!(ps, i)
```

So we probably have to make sure that the

```julia
COSMO.Model(with_KKT_solver(MKLPardisoSolver, num_threads = 3))
JuMP.Model(with_optimizer(COSMO.Optimizer(with_KKT_solver(Pardiso, num_threads = 3), with_accelerator(Anderson, memory = 5), with_many_more_modules(...))))
```

Maybe @blegat can say something about how JuMP intends the solver packages to handle this? |
@migarstka thanks for the detailed comment. I have pushed an initial implementation that I think satisfies the above requirements. It works as follows: the user creates a settings object

```julia
settings = COSMO.Settings(...,
    kkt_constructor = with_options(QdldlKKTSolver, args...; kwargs...)
)
```

and then creates/optimises the model as usual (via ). For convenience, in the case that no custom options need to be passed by the user, the settings object can also be created by simply doing:

```julia
settings = COSMO.Settings(..., kkt_constructor = QdldlKKTSolver)
```

I believe that the implementation is general and could be used e.g. for:

```julia
settings = COSMO.Settings(...,
    kkt_constructor = with_options(QdldlKKTSolver, args...; kwargs...),
    accelerator_constructor = with_options(MyAccelerator, args...; kwargs...)
)
```
|
@nrontsis I think this is a good way to do it. I fixed a few things to make it work with MOI / JuMP as well. I also made OptionsFactory parametric to prevent users from doing something like:

```julia
COSMO.Settings(kkt_solver = with_options(MyAccelerator, mem = 5))
```
|
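A minimal re-creation of that parametric pattern (illustrative only; the actual COSMO implementation may differ in names and details). The type parameter records what the factory constructs, so a settings field typed `OptionsFactory{<:AbstractKKTSolver}` rejects an accelerator factory at dispatch time:

```julia
abstract type AbstractKKTSolver end
abstract type AbstractAccelerator end

# Illustrative stand-ins for the solver and accelerator types in the PR.
struct QdldlKKTSolver <: AbstractKKTSolver end
struct MyAccelerator  <: AbstractAccelerator end

# The factory captures the constructor plus user-supplied options;
# the type parameter T records what it will construct.
struct OptionsFactory{T}
    constructor::Type{T}
    args::Tuple
    kwargs
end

with_options(T::Type, args...; kwargs...) = OptionsFactory(T, args, kwargs)

# The model later instantiates the solver, prepending its own arguments.
(f::OptionsFactory)(model_args...) = f.constructor(model_args..., f.args...; f.kwargs...)

# A settings field constrained to KKT-solver factories:
struct Settings
    kkt_solver::OptionsFactory{<:AbstractKKTSolver}
end

Settings(with_options(QdldlKKTSolver))            # fine
# Settings(with_options(MyAccelerator, mem = 5))  # MethodError: not a KKT solver
```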
Great, thanks. On a different note, I think we should make some types in this PR more concrete via parametrisation. For example, on the

Should I proceed with changing this? |
Yeah. Go ahead! |
@migarstka I fixed the "ambiguities" I found in the structs. Do you think this is ready to be merged? |
- Allows user defined linear system solvers
- Add support for CHOLMOD, QDLDL, Pardiso, MKLPardiso
- Make all Pardiso related code conditional
- Add free memory step for Pardiso solvers at end of main routine
- Print information about which solver is used if verbose = true
- Add OptionsFactory pattern to allow the user to provide custom options for each solver
- Add unit tests (Pardiso related ones are optional)
I added some documentation, squashed all the commits and rebased them onto the latest master commit. Will merge tomorrow |
This adds support for configurable linear solvers for the KKT systems.
It implements initial support for QDLDL (default), CHOLMOD and Pardiso (both direct and indirect).
Pardiso requires an extra license, so it may be best to switch to the MKL version instead, or to support both.
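To illustrate what "user defined" means here, a custom solver essentially wraps a factorisation behind a small interface: factor the KKT matrix once, then solve in place. The names below (`AbstractKKTSolver`, `solve!`, `DenseLDLSolver`) are a hypothetical sketch, not COSMO's actual API:

```julia
using SparseArrays, LinearAlgebra

# Hypothetical interface sketch for a user-defined KKT solver; the
# names are illustrative, not the package's actual API.
abstract type AbstractKKTSolver end

struct DenseLDLSolver <: AbstractKKTSolver
    fact::Any   # cached factorisation of the KKT matrix
end

# Factor once. The KKT matrix is symmetric but indefinite (quasi-definite),
# so a Bunch-Kaufman factorisation of the dense symmetric form works here.
DenseLDLSolver(K::SparseMatrixCSC) = DenseLDLSolver(bunchkaufman(Symmetric(Matrix(K))))

# Solve K x = b in place, reusing the cached factorisation.
function solve!(s::DenseLDLSolver, x::AbstractVector, b::AbstractVector)
    x .= s.fact \ b
    return x
end

# Usage on a tiny quasi-definite KKT-like matrix:
K = sparse([4.0 1.0; 1.0 -3.0])
s = DenseLDLSolver(K)
x = zeros(2)
solve!(s, x, [1.0, 2.0])
@assert K * x ≈ [1.0, 2.0]
```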