Intend to package mkl-dnn for Debian (and Ubuntu) #206
@cdluminate, thanks for the update. Indeed, the library built from source is fully functional. We are also working on closing the performance gap between the JIT GEMM implementation available in MKL-DNN and MKL. Could you please elaborate on what issues you see with using libmklml in the package?
I'll deal with the packaging soon. I'll post any issues discovered during the process here. And I'm glad to hear that the open-source MKL-DNN is catching up with MKL in some respects.
But I didn't find the source of mklml on GitHub. Could you please point me to the source, if it's available?
Well, I found this: https://software.intel.com/en-us/mkl/license-faq
Here is the discussion about MKL: https://lists.debian.org/debian-science/2018/03/msg00070.html
Intel MKL is available under the Intel Simplified Software License, which allows redistribution. Downloading the Intel MKL package from the Intel Registration Center requires registration; however, once downloaded, there are no restrictions on redistribution. The package is also available without registration via the yum and apt repositories. The source code for Intel MKL is not available. To simplify enabling deep learning applications, we also provide a subset of Intel MKL functionality, including BLAS and selected LAPACK and Vector Math functions, in a special distribution called Intel MKL small libraries, or libmklml. This package is available for all supported platforms and allows redistribution. The source code for this package is not available either. This library, Intel MKL-DNN, includes the latest and greatest optimizations for deep learning functionality, and all of its source code is available on GitHub. There is an option to use the SGEMM function from either Intel MKL or libmklml, which in some cases has better performance than the open-source implementation of this function available in Intel MKL-DNN.
@vpirogov Thank you for the elaborate explanations. I also noticed that the mkl core libraries are available via
The mkl-dnn packaging (WIP) is hosted here, and the link won't change.
There are a few levels of granularity available in Intel MKL when it comes to extracting a subset. First, you can choose specific libraries based on the usage model. The complete Intel MKL package includes several variants of each library, including static and dynamic versions, IA32 and Intel64 variants, and variants necessary to accommodate ABI differences between supported compilers. There are also separate libraries for MPI support. A description of all the libraries included in the Linux package is available here.
Another option the library provides is building a shared object that includes a subset of the functionality. This is how we build libmklml.
All the Intel-provided packages for Intel MKL are updated at the same time and provide the same functionality, with the exception of the Conda package, which does not include examples or Fortran functionality. So feel free to choose the channel that works best for you. Carefully separating the library into packages might be tricky; I hope the link to the documentation I provided will help. You can also examine the yum or apt packages to see how these are structured into components. Feel free to ping me if you need help understanding what the individual libraries do and how to package them in the structure you need.
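For concreteness, here is a sketch of how a trimmed shared object in the spirit of libmklml can be produced with MKL's custom-library builder, which ships under tools/builder in the MKL install tree. The file name and the two exported functions below are hypothetical, and the actual make invocation is shown only as a comment, since it requires an MKL install:

```shell
# Hypothetical export list: the entry points to keep in the trimmed library.
cat > exported_functions.lst <<'EOF'
cblas_sgemm
cblas_dgemm
EOF

# The actual build step (requires an MKL install, so only shown here):
#   make -C "$MKLROOT/tools/builder" libintel64 \
#        export=exported_functions.lst name=libmkl_small

# Sanity check: two entry points listed.
wc -l < exported_functions.lst
```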
The mkl-dnn package is basically ready (built without MKL). However, a test failure was encountered:
Some more questions about MKL-DNN for finalizing the package:
Here is a list of Debian's build machines. We hope that the uploaded packages work on as many architectures as possible.
MKL-DNN has only been tested on intel64 machines, but I know there are some forks that do work on non-x86 CPUs. The library was not designed with non-x86 machines in mind, and the way it checks for CPU features is not portable. So I would package MKL-DNN only for amd64.
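As a rough illustration of why the feature check is x86-only: on Linux, the kernel decodes the CPUID feature bits into the flags line of /proc/cpuinfo, and neither those flags nor the CPUID instruction itself exists on other architectures. A minimal sketch, assuming a Linux system:

```shell
# Look for the avx2 feature flag the way a human would; MKL-DNN itself
# executes the CPUID instruction directly, which only exists on x86.
if grep -qw avx2 /proc/cpuinfo 2>/dev/null; then
    echo "avx2: yes"
else
    echo "avx2: no"
fi
```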
Maybe this is the last question before upload:
I see no special requirement for this code in CMakeLists.txt; is this a bug?
The source is the latest master.
It seems that the issue above is about a non-informative error message (#214) when AVX512-BW is not available. This doesn't block packaging, and I'll continue. Let's keep this issue open until mkl-dnn enters Ubuntu devel; I'll keep posting updates here.
The package is already waiting for review: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=895729 . We have to wait for some time until it enters the archive. |
@vpirogov Hi, I'm trying to split the ~1GB version of MKL into small packages. The first question is: can the cluster libraries work without any component from the interface layer, thread layer, and computational layer? (If so, we can provide an individual cluster meta-package.) I have a bunch of other questions; I'll list and post them here later. Here is how I split them, in Debian-specific format:
Hi @cdluminate,
No. Intel MKL has 3 mandatory layers: the interface layer, the threading layer, and the computational layer.
Whenever you want to work with Intel MKL you have to link all three layers (one library per layer, depending on your choice).
The BLACS interface is mandatory for the cluster libraries because it contains all the MPI bindings.
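To make the three-layer rule concrete, here is an illustrative GNU-toolchain link line with one library per layer, using the library names from Intel's Link Line Advisor. MKLROOT and app.o are placeholders for this sketch, so the command is only echoed rather than executed:

```shell
MKLROOT=${MKLROOT:-/opt/intel/mkl}   # assumed install prefix for the sketch

# interface layer:     libmkl_intel_lp64 (32-bit integer interface)
# threading layer:     libmkl_gnu_thread (GNU OpenMP; libmkl_sequential also exists)
# computational layer: libmkl_core
echo gcc app.o -L"$MKLROOT/lib/intel64" \
    -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core \
    -lgomp -lpthread -lm -ldl
```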
@vpirogov Hi, a license problem, e.g.:
Under what license are these files distributed? Are they distributed under BSD-2-Clause, like the nearby header files, or under the ISSL?
I've almost finished the packaging. Only a few small issues remain to be fixed.
I manually mirrored the packaging repo to here
@cdluminate, Intel MKL, including the header you are referring to, is distributed under the Intel Simplified Software License. This license allows redistribution.
@vpirogov Thanks for the confirmation. The package is now waiting for review.
@cdluminate Any news?
@rsdubtso MKL is still waiting to be manually checked by Debian's ftp masters; see https://ftp-master.debian.org/new/intel-mkl_2018.3.222-1.html . mkl-dnn is still waiting for a sponsor to upload it to ftp-master. Should I hurry it up a bit? I've recently been updating the Julia package.
I'm not trying to expedite this or anything; I was just wondering whether I'm reading the packaging status correctly. Please take your time. And thanks for this!
FYI: MKL has landed on Debian unstable.
One more question: can AMD CPUs gain any performance boost from Intel's mkl-dnn library? I'm trying to package tensorflow for Debian, and I need an answer to this question before linking the tensorflow package against mkl-dnn by default. I've never owned an AMD CPU, so I cannot figure it out myself.
Yes. AMD processors are x86-compatible and, with a few exceptions, use the same Instruction Set Architecture (ISA) as Intel processors.
Hi, I already made packages for mkl-dnn and tensorflow, if you're interested: http://packages.le-vert.net/machinelearning/debian/pool-stretch/ Adam.
@eLvErDe Nice work! Debian's mkl-dnn packaging is available here: https://salsa.debian.org/science-team/mkl-dnn . After reading your packaging scripts I realized there is a giant gap between our scripts... As much as I'd love to compile packages with SIMD instruction sets enabled, or linked against MKL, such packages won't be allowed into Debian's official archive...
Exactly. Tensorflow is hopeless: bazel, CUDA... Have you seen the insane shit I have to do to populate a folder full of symlinks to make Debian's CUDA look like its upstream tarball?
BTW, I think there is at least boinc in the archive that comes as two separate packages, boinc and boinc-contrib; one of them is in contrib and is linked against CUDA. You may use the same workaround to provide an MKL-enabled version of mkl-dnn...
I wrote a very hacky and experimental build system for tensorflow (https://salsa.debian.org/science-team/tensorflow), which requires only python3 and ninja. The build system reads bazel's query results and decides how to compile the specified targets. Well, to make this packaging reasonably useful, more work is required...
I'm sorry to hear that, since I'm also one of the maintainers of Debian's CUDA toolkit package. We have to mangle the installation paths to make things compliant with the FHS and Debian Policy...
Feel free to send me patches if some portion of your packaging work can be merged into the upstream packaging to reduce your maintenance burden a bit.
I'm also the maintainer of Debian's caffe package... there is already
Regarding CUDA, it's definitely not your fault but bazel's stupid behavior... Anyway, would you be interested in having a chat about this somewhere?
@eLvErDe If you don't mind using a public mailing list, https://lists.debian.org/debian-science/ is a better place for the discussion. I'm subscribed to the list, but feel free to CC me if you want.
Hi @cdluminate and @eLvErDe! Thanks for doing the packaging work! Feel free to CC me on discussions related to packaging MKL and MKL-DNN as well. If there is anything we can do to make packaging easier, please let us know.
Finally! Debian's ftp team has accepted mkl-dnn into experimental. I'm closing this issue now, since intel-mkl and mkl-dnn are both in the archive. I'll soon upload mkl-dnn to Debian unstable, and it will eventually enter testing and stable. https://tracker.debian.org/pkg/mkl-dnn
Thank you for the great work, @cdluminate!
@cdluminate, pay attention that, as far as I know, Intel MKL does discriminate against non-GenuineIntel CPUs.
Does that mean MKL-DNN checks only for ISA features and not the manufacturer? Thank you.
@RoyiAvital, Intel MKL-DNN dispatches the code based on ISA features only. Here's the relevant piece of code.
@vpirogov, what happens if one links it to MKL?
@RoyiAvital, since v1.0 Intel MKL-DNN has a fully optimized JIT GEMM and no longer supports linking with Intel MKL, so starting from Intel MKL-DNN v1.0 the question is moot. Intel MKL may or may not optimize to the same degree for non-Intel microprocessors; for more complete information about optimizations for non-Intel processors, see the Optimization Notice.
The problem is that the Optimization Notice says we should look into the product documentation, yet nowhere in the MKL documentation does it say when the code path is chosen based on CPU features and when based on the manufacturer. I really hope the MKL team will embrace your policy and choose the code path based only on CPU features, not the manufacturer. Thank you.
@RoyiAvital, thank you for the input. I will bring it to the attention of the Intel MKL team.
FYI: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=894411
It seems that the Apache-2.0 licensed mkl-dnn can be built and used without MKL, despite suboptimal performance. In that case we can make packages.