A library containing both highly optimized building blocks and an execution engine for data pre-processing in deep learning applications

Today’s deep learning applications include complex, multi-stage data pre-processing pipelines with compute-intensive steps that are mainly carried out on the CPU. For instance, steps such as loading data from disk, decoding, cropping, random resizing, color and spatial augmentations, and format conversions are carried out on the CPU, limiting the performance and scalability of training and inference tasks. In addition, today’s deep learning frameworks each ship their own data pre-processing implementation, which hurts the portability of training and inference workflows and makes code harder to maintain.

NVIDIA Data Loading Library (DALI) is a collection of highly optimized building blocks and an execution engine that accelerates input data pre-processing for deep learning applications. As a single library, DALI provides both the performance and the flexibility to accelerate different data pipelines, and it can be easily integrated into different deep learning training and inference applications.
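The building-blocks-plus-execution-engine idea can be illustrated with a small pure-Python sketch. This is not the DALI API; all class names and operators below are hypothetical stand-ins showing how a pipeline chains configurable pre-processing operators over a batch.

```python
# Illustrative sketch only (not the DALI API): a tiny pipeline of
# pre-processing "operators" executed over a batch of samples.

class Operator:
    """A named pre-processing step that transforms every sample in a batch."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def __call__(self, batch):
        return [self.fn(sample) for sample in batch]

class Pipeline:
    """Chains operators and executes them in order over a batch."""
    def __init__(self, operators):
        self.operators = operators

    def run(self, batch):
        for op in self.operators:
            batch = op(batch)
        return batch

# Stand-ins for decode / crop / normalize stages.
pipe = Pipeline([
    Operator("decode", lambda s: [float(x) for x in s]),      # bytes -> floats
    Operator("crop", lambda s: s[:2]),                        # keep first 2 values
    Operator("normalize", lambda s: [x / 255.0 for x in s]),  # scale to [0, 1]
])

print(pipe.run([[0, 128, 255], [255, 255, 0]]))
```

In DALI itself, the equivalent stages (decoding, cropping, normalization) run as optimized operators, many of them on the GPU, rather than as Python lambdas.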

Key highlights of DALI include:

  • Full data pipeline accelerated from reading from disk to getting ready for training/inference
  • Flexibility through configurable graphs and custom operators
  • Support for image classification and segmentation workloads
  • Ease of integration through direct framework plugins and open source bindings
  • Portable training workflows with multiple input formats - JPEG, LMDB, RecordIO, TFRecord
  • Extensible for user specific needs through open source license


DALI is preinstalled in the NVIDIA GPU Cloud TensorFlow, PyTorch, and MXNet containers in versions 18.07 and later.

Installing prebuilt DALI packages



pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali

Compiling DALI from source


  • Linux x64
  • GCC 4.9.2 or later
  • NVIDIA CUDA 9.0 (CUDA 8.0 compatibility is provided unofficially)
  • nvJPEG library (can be unofficially disabled; see below)
  • protobuf version 2 or later (version 3 or later is required for TensorFlow TFRecord file format support)
  • CMake 3.5 or later
  • libjpeg-turbo 1.5.x or later (can be unofficially disabled; see below)
  • OpenCV 3 or later (we recommend version 3.4+, though earlier versions are also compatible; OpenCV 2.x compatibility is provided unofficially)
  • (Optional) liblmdb 0.9.x or later
  • One or more of the following deep learning frameworks: MXNet, PyTorch, or TensorFlow

A TensorFlow installation is required to build the TensorFlow plugin for DALI.

Items marked "unofficial" are community contributions that are believed to work but are not officially tested or maintained by NVIDIA.

Get the DALI source

git clone --recursive https://github.com/NVIDIA/dali
cd dali

Make the build directory

mkdir build
cd build

Compile DALI

To build DALI without LMDB support:

cmake ..
make -j"$(nproc)"

To build DALI with LMDB support:

cmake -DBUILD_LMDB=ON ..
make -j"$(nproc)"

To build DALI using Clang (experimental):

This build is experimental. It is not maintained and tested like the default configuration, and it is not guaranteed to work. We recommend using GCC for production builds.

CC=clang CXX=clang++ cmake ..
make -j"$(nproc)"

Optional CMake build parameters:

  • BUILD_PYTHON - build Python bindings (default: ON)
  • BUILD_TEST - include building test suite (default: ON)
  • BUILD_BENCHMARK - include building benchmarks (default: ON)
  • BUILD_LMDB - build with support for LMDB (default: OFF)
  • BUILD_NVTX - build with NVTX profiling enabled (default: OFF)
  • BUILD_TENSORFLOW - build TensorFlow plugin (default: OFF)
  • (Unofficial) BUILD_JPEG_TURBO - build with libjpeg-turbo (default: ON)
  • (Unofficial) BUILD_NVJPEG - build with nvJPEG (default: ON)
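These parameters can be combined in a single invocation. For instance, a build with LMDB support and the TensorFlow plugin enabled, but benchmarks disabled, might look like this (illustrative flag combination; run from the build directory):

```shell
cmake -DBUILD_LMDB=ON -DBUILD_TENSORFLOW=ON -DBUILD_BENCHMARK=OFF ..
make -j"$(nproc)"
```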

Install Python bindings

pip install dali/python

Getting started

The docs/examples directory contains a series of examples (in the form of Jupyter notebooks) highlighting different features of DALI. It also contains examples of how to use DALI to interface with deep learning frameworks.

Documentation for the latest stable release is available here. A nightly version of the documentation that stays in sync with the master branch is available here.

Additional resources

  • GPU Technology Conference 2018 presentation about DALI, T. Gale, S. Layton and P. Tredak: slides, recording.

Contributing to DALI

Contributions to DALI are more than welcome. To contribute to DALI and make pull requests, follow the guidelines outlined in the Contributing document.

Reporting problems, asking questions

We appreciate any feedback, questions, or bug reports regarding this project. When help with code is needed, follow the process outlined in the Stack Overflow document (https://stackoverflow.com/help/mcve). Ensure posted examples are:

  • minimal – use as little code as possible that still produces the same problem
  • complete – provide all the parts needed to reproduce the problem. Check whether you can strip out external dependencies and still show the problem. The less time we spend on reproducing problems, the more time we have to fix them.
  • verifiable – test the code you're about to provide to make sure it reproduces the problem. Remove all other problems that are not related to your request/question.
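As a rough guide, a report following these three rules can be as small as the skeleton below (everything here is illustrative and not tied to any real bug):

```python
# Hypothetical skeleton of a minimal, complete, verifiable bug report.
import sys

print(sys.version)         # state your environment up front

data = [3, 1, 2]           # the smallest input that shows the problem
result = sorted(data)      # the single call being reported
expected = [1, 2, 3]       # what you expected to happen

# A failing assertion here demonstrates the problem with no extra noise.
assert result == expected, (result, expected)
```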


DALI was built with major contributions from Trevor Gale, Przemek Tredak, Simon Layton, Andrei Ivanov, and Serge Panev.