default to 32-bit MKL in Julia MKL builds #30828
I also ran into a similar issue while using the Gmsh Julia API. Changing to LP64 makes it work!
I thought the preferred way to use MKL now was MKL.jl, but perhaps that still has the same issue? Given that we mangle names for 64-bit BLAS calls, could we ship both a 32-bit and a 64-bit BLAS?
Based on further conversation with @stevengj, it seems that many 64-bit BLAS libraries are adopting the mangled names. We will try to contact Intel about this, and in the meantime we should default to LP64 instead of ILP64 with MKL.
I'm on board with this. We might want to consider whether there is a nice way to make this configurable when building the package. However, we can probably just change the default right away. We'd just have to change the value of
We should change the default right away for MKL. We are also now in touch with Intel and are getting connected to the right person.
Explanation from @stevengj that I am capturing here on the need to make LP64 the default for MKL:

Julia users can install the MKL.jl package, which downloads MKL (currently v2019.0.117) and loads it dynamically with dlopen at runtime. Alternatively, there are Julia builds that link with MKL directly when Julia is compiled. In either case, MKL can then be used for linear algebra etcetera by Julia users, replacing the OpenBLAS calls that Julia uses by default. Currently, these use the ILP64 version of MKL so that we can support big arrays.

The problems arise when other packages link external libraries that also load MKL dynamically but expect the LP64 interface. For example, Julia programs can call Python packages by loading libpython dynamically via the PyCall.jl package, and Python packages often use NumPy, which is often linked to MKL for BLAS operations (e.g. Anaconda Python ships with MKL), but NumPy supports the LP64 interface exclusively. These then crash because an ILP64 version of MKL is already loaded.

In general, for a very dynamic environment like Julia, where many external libraries and their dependencies are often loaded at runtime with dlopen, it is a severe problem to have two versions of MKL that export the same symbols but are ABI-incompatible (LP64 vs. ILP64). It is extremely difficult to ensure that every dependency in a large ecosystem wants the same MKL interface, and it is especially difficult to use ILP64 for newer software (Julia) while continuing to support older software (NumPy etc.) that was designed for the LP64 interface.

The cleanest solution is to offer an ILP64 build of MKL in which all exported symbols are modified in some way, e.g. by a “64_” suffix; this is what was adopted in OpenBLAS, following a convention set by the SunPerf BLAS.
(The specific spelling of the suffix doesn’t matter too much, because most software will append it automatically using macros or similar, but it would be most convenient if Intel adopted the same ILP64 suffix “64_” as OpenBLAS, SunPerf, and Fedora EPEL.)
If we do this, it will unfortunately break existing user code that depends on the BLAS being 64-bit.
It's fairly hard to depend on 32-bit BLAS on a 32-bit system, isn't it?
I am pretty sure we ship a 32-bit BLAS on 32-bit systems, but those folks would not be running large problems anyway. Also, Yggdrasil now ships OpenBLAS32_jll to support packages that explicitly need a 32-bit BLAS, and we have it all co-existing together. The major issue is the interaction between Python packages and MKL, as in JuliaLinearAlgebra/MKL.jl#46. It would be easily solved by Intel supporting suffixes for the 64-bit BLAS. @stevengj has pinged them several times with detailed explanations, but they emailed us yesterday that it is not a high priority for them. Similar emails have been sent to AMD as well, with little response.
Is there any chance of using something like
In principle we could do that in Yggdrasil, but the last time we touched the MKL libraries we wreaked havoc on macOS (JuliaPackaging/Yggdrasil#915), and then it turned out they don't even set the soname in the Linux libraries (JuliaSparse/Pardiso.jl#69).
Considerable work has now been done to address this issue, and as a result I think we no longer need to do this.
As I suggested in #4923, the number of people who need a 64-bit BLAS is probably dwarfed by the number of people who will see library conflicts and crashes if they link any other libraries using MKL. Most notably, Anaconda nowadays defaults to MKL (in numpy etcetera), so Conda-packaged libraries and Python code become unusable from any Julia MKL build.