Note
Version 1.0 introduces changes that are incompatible with version 0.20. Please read the Version 1.0 Transition Guide.
Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an open-source performance library for deep learning applications. The library includes basic building blocks for neural networks optimized for Intel Architecture Processors and Intel Processor Graphics.
Note Intel MKL-DNN is distinct from Intel MKL, which is a general-purpose math performance library.
Intel MKL-DNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. Deep learning practitioners should use one of the applications enabled with Intel MKL-DNN:
- Apache* MXNet
- BigDL
- Caffe* Optimized for Intel Architecture
- Chainer*
- DeepLearning4J*
- Intel Nervana Graph
- MATLAB* Deep Learning Toolbox
- Menoh*
- Microsoft* Cognitive Toolkit (CNTK)
- ONNX Runtime
- OpenVINO(TM) toolkit
- PaddlePaddle*
- PyTorch*
- TensorFlow*
Intel MKL-DNN is licensed under Apache License Version 2.0. This software includes the following third-party components:
- Xbyak distributed under 3-clause BSD license
- gtest distributed under 3-clause BSD license
- ittnotify distributed under 3-clause BSD license
- The Developer Guide explains the programming model, supported functionality, and implementation details of primitives, and includes annotated examples.
- The API Reference provides a comprehensive reference of the library API.
Please submit your questions, feature requests, and bug reports on the GitHub issues page.
WARNING The following functionality has preview status and might change without prior notification in future releases.
- Threading Building Blocks (TBB) support
We welcome community contributions to Intel MKL-DNN. If you have an idea on how to improve the library:
- For changes impacting the public API, submit an RFC pull request.
- Ensure that the changes are consistent with the code contribution guidelines and coding style.
- Ensure that you can build the product and run all the examples with your patch.
- Submit a pull request.
For additional details, see contribution guidelines.
Intel MKL-DNN supports systems meeting the following requirements:
- Intel 64 architecture or compatible
- C++ compiler with C++11 standard support
- CMake 2.8.11 or later
- Doxygen 1.8.5 or later
Configurations of CPU and GPU engines may introduce additional build time dependencies.
The Intel MKL-DNN CPU engine supports Intel Architecture Processors and compatible devices. The CPU engine is built by default and cannot be disabled at build time. The engine can be configured to use either the OpenMP or the TBB threading runtime; a minimal engine-creation sketch follows the list below. The following additional requirements apply:
- The OpenMP runtime requires a C++ compiler with support for the OpenMP 2.0 standard or later.
- The TBB runtime requires Threading Building Blocks (TBB) 2017 or later.
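As an illustration of how an application targets the CPU engine, here is a minimal sketch using the v1.0 C++ API. It assumes the library headers and a linked libmkldnn are available; it only creates an engine and an execution stream and does not run any primitives.

```cpp
#include "mkldnn.hpp"

int main() {
    // Create an engine bound to the first (index 0) CPU device.
    // The CPU engine is always available because it is built by default.
    mkldnn::engine cpu_engine(mkldnn::engine::kind::cpu, 0);

    // Create an execution stream on the engine; primitives are submitted
    // to a stream for execution. The threading runtime selected at build
    // time (OpenMP or TBB) is used under the hood.
    mkldnn::stream cpu_stream(cpu_engine);

    return 0;
}
```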
The library is optimized for systems based on
- Intel Atom processor with Intel SSE4.1 support
- 4th, 5th, 6th, 7th, and 8th generation Intel Core(TM) processor
- Intel Xeon(R) processor E3, E5, and E7 family (formerly Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
- Intel Xeon Phi(TM) processor (formerly Knights Landing and Knights Mill)
- Intel Xeon Scalable processor (formerly Skylake and Cascade Lake)
- future Intel Xeon Scalable processor (code name Cooper Lake)
and compatible processors.
Intel MKL-DNN detects the instruction set architecture (ISA) at run time and uses just-in-time (JIT) code generation to deploy code optimized for the latest supported ISA. Some implementations rely on OpenMP 4.0 SIMD extensions; we recommend using the Intel C++ Compiler for the best performance results.
Warning In the default build configuration, Intel MKL-DNN uses the ISA of the build system as the minimal supported ISA for the resulting library. To make sure that the build is portable to older systems, you might need to override MKLDNN_ARCH_OPT_FLAGS.
The CPU engine was validated on Red Hat* Enterprise Linux 7 with
- GNU Compiler Collection 4.8, 5.4, 6.1, 7.2, and 8.1
- Clang* 3.8.0
- Intel C/C++ Compiler 17.0, 18.0, and 19.0
on Windows Server* 2012 R2 with
- Microsoft Visual C++ 14.0 (Visual Studio 2015 Update 3)
- Intel C/C++ Compiler 17.0 and 19.0
on macOS* 10.13 (High Sierra) with
- Apple LLVM version 9.2 (XCode 9.2)
- Intel C/C++ Compiler 18.0 and 19.0
The Intel MKL-DNN GPU engine supports Intel Processor Graphics. The GPU engine is disabled in the default build configuration; a minimal sketch showing how an application can detect a usable GPU device follows the requirements below. The following additional requirements apply when the GPU engine is enabled:
- OpenCL* runtime library (OpenCL* version 1.2 or later)
- OpenCL* driver (with kernel language support for OpenCL* C 2.0 or later) with Intel subgroups extension support
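Because the GPU engine is an optional build-time feature, an application may want to check for a usable GPU device before creating a GPU engine. The following is a minimal sketch using the v1.0 C++ API; it assumes the library was built with the GPU engine enabled and that an OpenCL runtime is installed.

```cpp
#include <iostream>

#include "mkldnn.hpp"

int main() {
    // Query how many GPU devices the library can use.
    if (mkldnn::engine::get_count(mkldnn::engine::kind::gpu) == 0) {
        std::cout << "No supported GPU device found\n";
        return 0;
    }

    // Create an engine for the first GPU device and an execution stream on it.
    mkldnn::engine gpu_engine(mkldnn::engine::kind::gpu, 0);
    mkldnn::stream gpu_stream(gpu_engine);

    return 0;
}
```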
The library is optimized for systems based on
- Intel HD Graphics
- Intel UHD Graphics
- Intel Iris Plus Graphics
The GPU engine was validated on Ubuntu* 18.04 with
- GNU Compiler Collection 5.4 and 8.1
- Clang* 3.8.1
- Intel C/C++ Compiler 19.0
- Intel SDK for OpenCL* applications 2019 Update 3
- Intel Graphics Compute Runtime for OpenCL* 19.15.12831
on Windows Server* 2019 with
- Microsoft Visual C++ 14.0 (Visual Studio 2015 Update 3)
- Intel C/C++ Compiler 19.0
- Intel SDK for OpenCL* applications 2019 Update 3
- Intel Graphics - Windows* 10 DCH Drivers 26.20.100.6709