v0.14

@tprimak released this 27 Apr 17:44

Performance optimizations

  • Improved fp32 Winograd convolution performance on Intel Xeon processors with Intel(R) AVX512 instruction set support.
  • Improved performance of depthwise separable convolutions on processors with Intel(R) SSE4.2, Intel(R) AVX, and Intel(R) AVX512 instruction set support.
  • Improved performance of backward propagation in GEMM-based convolutions.
  • Improved performance of auxiliary primitives for NHWC and NCHW data layouts.

New functionality

  • Feature preview: Introduced recurrent neural network (RNN) support. This release includes training and inference support for uni- and bi-directional vanilla RNN and Long Short-Term Memory (LSTM) cells. Use of the new API is demonstrated with an example featuring LSTM model inference with attention, based on the Google Neural Machine Translation (GNMT) topology.
  • Added a Winograd convolution implementation for the int8 data type, optimized for Intel Xeon processors with Intel AVX512 instruction set support. The implementation includes initial optimizations for future Intel Xeon processors with AVX512_VNNI instruction group support.
  • Introduced the deconvolution (transposed convolution) primitive.
  • Introduced support for 3D spatial data in convolution and auxiliary primitives (see the sketch after this list). The following primitives are optimized for 3D tensors:
    • reorders
    • convolution
    • deconvolution
    • batch normalization
    • pooling
    • eltwise
    • concat
    • inner product
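
The sketch below is a minimal illustration, not part of the release itself, of how the new 3D spatial support could be exercised through the v0.x C++ API: it builds a forward convolution primitive descriptor over 5D (NCDHW) tensors. The tensor sizes, the forward_inference propagation kind, and the use of memory::format::any are illustrative assumptions; memory creation, reorders, and stream submission would follow the library's bundled examples.

    // Minimal sketch (assumption, not from the release notes): building a
    // 3D forward convolution primitive descriptor with the v0.x C++ API.
    #include <mkldnn.hpp>
    using namespace mkldnn;

    int main() {
        engine cpu_engine(engine::cpu, 0);

        // 5D activations {N, C, D, H, W} and weights {O, I, KD, KH, KW};
        // the sizes are arbitrary illustrative values.
        memory::dims src_dims = {1, 16, 8, 32, 32};
        memory::dims wei_dims = {32, 16, 3, 3, 3};
        memory::dims dst_dims = {1, 32, 8, 32, 32};
        memory::dims strides  = {1, 1, 1};
        memory::dims padding  = {1, 1, 1};

        // format::any lets the library pick an optimized layout for the primitive.
        memory::desc src_md(src_dims, memory::data_type::f32, memory::format::any);
        memory::desc wei_md(wei_dims, memory::data_type::f32, memory::format::any);
        memory::desc dst_md(dst_dims, memory::data_type::f32, memory::format::any);

        convolution_forward::desc conv_d(prop_kind::forward_inference,
                convolution_direct, src_md, wei_md, dst_md,
                strides, padding, padding, padding_kind::zero);
        convolution_forward::primitive_desc conv_pd(conv_d, cpu_engine);

        // User data in ncdhw/oidhw layouts would be attached via memory
        // primitives (with reorders if needed) and submitted to a stream,
        // as in the library's bundled examples.
        (void)conv_pd;
        return 0;
    }

The new deconvolution primitive is expected to follow the same descriptor and primitive-descriptor pattern.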

Usability improvements

  • Added the build system flags -DWITH_TEST=OFF and -DWITH_EXAMPLE=OFF that disable building tests and examples.
  • Added the -DLIB_SUFFIX flag that allows adding a suffix to the lib directory name.
  • Added the prepare_mkl.bat script that automates downloading the Intel MKL small libraries on Windows.

Thanks to the contributors

This release contains contributions from many Intel(R) Performance Libraries developers as well as Zhong Cao @4pao, Dmitriy Gorokhov, Jian Tang @tensor-tang, Daniel M. Weeks @doctaweeks, Tony Wang @tonywang1990, Tao Lv @TaoLv and Xinyu Chen @xinyu-intel. We would also like to thank everyone who asked questions and reported issues.

*Other names and brands may be claimed as the property of others.