distribute with MKL? #6

Closed
mlubin opened this issue Apr 18, 2014 · 13 comments · Fixed by #327
Labels
LINALG Codes: relates to linear algebra codes (HSL etc.)

Comments

@mlubin (Member) commented Apr 18, 2014

@ViralBShah mentioned in JuliaLang/julia#4272 that Julia has a license to redistribute MKL. Could we use this for the Ipopt binaries? This should give much better default performance than with MUMPS.

@ViralBShah commented

It is probably not worth the hassle: we'd need to maintain the licenses when they expire, deal with OS issues, etc. What kind of performance difference are we talking about?

@mlubin (Member, Author) commented Apr 18, 2014

Right now we're statically linking the reference BLAS/LAPACK and using MUMPS for sparse linear algebra, so I wouldn't be surprised if there's a factor-of-5 speedup possible without even considering multiple threads, but I should actually benchmark it.
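A minimal sketch of the kind of benchmark meant here, assuming a Julia session linked against whichever BLAS is under test; the matrix size and the `lu` timing are illustrative, not from this thread:

```julia
using LinearAlgebra

# Sketch only: time a BLAS-3-bound dense factorization under the BLAS this
# Julia session is linked against. Run once per build (reference BLAS,
# OpenBLAS, MKL) and compare wall times.
BLAS.set_num_threads(1)   # single-threaded, to match the claim above
A = randn(2000, 2000)
lu(copy(A))               # warm up, so compilation time isn't measured
@time lu(A)               # dominated by dgetrf, i.e. by BLAS-3 kernels
```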

@ViralBShah commented

Any idea why MUMPS is so much better?

@mlubin (Member, Author) commented Apr 18, 2014

Worse, you mean?

@tkelman (Contributor) commented Apr 19, 2014

I was a bit surprised the last time I compared MUMPS to Pardiso (this was using Basel Pardiso, before we got MKL Pardiso working in Ipopt, https://projects.coin-or.org/Ipopt/ticket/216; I wouldn't expect MKL Pardiso to be much different).

[Attached figure: parallel_results, a solve-time comparison across the eight linear solvers at varying thread counts]

These were all using MKL for BLAS, allocating threads to the linear solver for the last 4 solvers, or to BLAS for the first 4. I have some data somewhere comparing different BLAS implementations, but IIRC it wasn't much more than a 20% difference. These conclusions are very problem-dependent, though; it all comes down to how large the dense sub-blocks get during the multifrontal sparse solve.
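For context, a hedged sketch of how one would pick among these linear solvers from Julia, using JuMP/Ipopt.jl syntax from long after this thread. "linear_solver" is a real Ipopt option, but which values actually work depends on which libraries the binary was linked against:

```julia
using JuMP, Ipopt

# Sketch only: "mumps" is the default in the shipped binaries; values like
# "pardiso" or "ma57" require a build that links those libraries.
model = Model(Ipopt.Optimizer)
set_optimizer_attribute(model, "linear_solver", "mumps")

@variable(model, x >= 0)
@objective(model, Min, (x - 2)^2)
optimize!(model)
```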

@tkelman (Contributor) commented Apr 19, 2014

Anyone know what the license looks like on the Matlab Compiler Runtime? They ship MA57 for sparse LDLᵀ; perhaps we could borrow just the one file. In the Matlab interface for Ipopt I wound up just using Matlab's own MA57 directly.

@ViralBShah commented

I meant how does MUMPS compare to UMFPACK for Ipopt, or does Ipopt not support calling UMFPACK?

@tkelman (Contributor) commented Apr 19, 2014

UMFPACK doesn't work for Ipopt because Ipopt needs to check the inertia of the symmetric indefinite KKT matrix to ensure descent properties (it adds a regularization and re-factorizes if it doesn't get the expected inertia). LU and Cholesky won't give you that; only Bunch-Kaufman LDLᵀ will. Those 8 linear solvers above are pretty much an exhaustive list of usable candidates for what Ipopt needs, with the exception of TAUCS. Several years ago they looked at a TAUCS interface, but my understanding is the performance wasn't good enough to be worth keeping around.

If you're solving a convex problem you can do block-wise Cholesky, so optimization codes for QP/SOCP/SDP have more choices of linear algebra libraries, but Ipopt is designed for general, possibly non-convex problems.
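A minimal sketch of the inertia check described above (not Ipopt's actual code): factor a small symmetric indefinite matrix with Bunch-Kaufman and read the inertia off the signs of D's eigenvalues.

```julia
using LinearAlgebra

K = Symmetric([4.0  1.0  2.0;
               1.0 -3.0  0.0;
               2.0  0.0  1.0])

F = bunchkaufman(K)          # K = P' * U * D * U' * P, D block diagonal
d = eigvals(Matrix(F.D))     # D has 1x1 and 2x2 blocks; its eigenvalue
                             # signs give the inertia of K (Sylvester's law)
inertia = (count(>(0), d), count(<(0), d), count(iszero, d))

# For an (n + m)-dimensional KKT system with n primal variables and m
# constraints, Ipopt expects inertia (n, m, 0); if it sees anything else
# it adds a regularization term and re-factorizes.
@show inertia
```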

@ViralBShah commented

Got it. Thanks for the explanation.

@mlubin (Member, Author) commented Apr 19, 2014

Interesting results. Using the Matlab Compiler Runtime sounds sketchy. What about compiling and statically linking our own 32-bit-integer version of OpenBLAS?

@tkelman (Contributor) commented Apr 19, 2014

We could also pay for a binary redistribution license, assuming Julia as an organization has some resources. The HSL folks write really good code and it's worth supporting them. Might be able to get a better deal than whatever they charged Mathworks.

Statically linked LP64 OpenBLAS should work (just don't forget -fPIC). I suspect that might make the binaries significantly larger, depending on how clever the linker is about only pulling in what it needs. If you want to try it soon, go for it. I doubt the performance difference will be all that big, but I could be wrong here.

I think getting JuliaLang/julia#4923 sorted would be preferable in the long run: an ILP64 OpenBLAS with prefixes on all the functions, plus a shared LP64 build without prefixes that is built by Julia but only used by packages.
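As it happens, what Julia eventually shipped uses a suffix rather than a prefix: the bundled ILP64 OpenBLAS appends `64_` to every symbol so an unsuffixed LP64 library can coexist in the same process. A hedged sketch, with the library names as assumptions about a typical modern install:

```julia
# ILP64: Julia's bundled OpenBLAS, 64-bit integers, suffixed symbols.
function nrm2_ilp64(x::Vector{Float64})
    ccall((:dnrm2_64_, "libopenblas64_"), Float64,
          (Ref{Int64}, Ptr{Float64}, Ref{Int64}),
          length(x), x, 1)
end

# LP64: a plain 32-bit-integer OpenBLAS (e.g. from OpenBLAS_jll), with
# unsuffixed symbols; this is what prebuilt solver binaries expect.
function nrm2_lp64(x::Vector{Float64})
    ccall((:dnrm2_, "libopenblas"), Float64,
          (Ref{Int32}, Ptr{Float64}, Ref{Int32}),
          Int32(length(x)), x, Int32(1))
end
```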

@ViralBShah commented

We certainly can check with the HSL folks. If the licensing is possible at a reasonable cost, I am sure we can find a way to make this work.

@odow added the LINALG Codes label on May 26, 2020
@odow (Member) commented Nov 23, 2020

A potential route forward is IpoptMKL_jll (JuliaPackaging/Yggdrasil#1031), but it's still a WIP.
