DL4AGX


This repository contains applications and tools to help understand and develop Deep Learning applications for NVIDIA AGX platforms (DRIVE, Jetson and CLARA). The AGX family is based around the Xavier SoC, a high-performance, automotive-safety-grade aarch64 processor. On board are a number of accelerators for Deep Learning workloads, including a Volta-based integrated GPU, multiple Deep Learning Accelerators (DLA), multiple Programmable Vision Accelerators (PVA), as well as ISPs and video processors. For more information on Xavier, see https://developer.nvidia.com/drive/drive-agx.

Getting Started

This repo uses bazel, via a tool called dazel (https://github.com/nadirizr/dazel), to manage builds and cross-compilation inside a Docker container.

Installing Dependencies

  1. Install Docker

  2. Install NVIDIA-Docker

  3. Install Dazel

    • pip3 install dazel
  4. Build the relevant docker container using one of the Dockerfiles provided in //docker

    • More precise instructions can be found in that directory's README.md
  5. Modify Dockerfile.dazel to be based on the image you just built

    • e.g. FROM nvidia/drive_pdk:5.1.3.0
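Putting steps 4 and 5 together, Dockerfile.dazel might look like the following minimal sketch. The image tag is the example from the source; substitute the tag of whichever build container you actually built in //docker.

```dockerfile
# Dockerfile.dazel: base dazel's build environment on the container
# built in step 4. The tag below is illustrative -- use your own.
FROM nvidia/drive_pdk:5.1.3.0
```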

Compiling Applications

Dazel behaves like bazel but runs the compilation in a specified Docker container, so traditional bazel commands work as usual:

dazel build //plugins/dali/TensorRTInferOp:libtensorrtinferop.so

You will find the associated binaries in //bazel-out/k8-fastbuild/plugins/dali/TensorRTInferOp/libtensorrtinferop.so

Cross-Compiling Applications

The AGX platforms are aarch64-based, so applications need to be cross-compiled for them.

There are two supported toolchains:

aarch64-linux

Applicable to DRIVE AGX Platforms flashed with the Linux PDK and Jetson AGX Platforms

In order to use this toolchain you must have built a container that supports aarch64-linux (Dockerfiles will have names that contain aarch64-linux or both)

To cross-compile targets for aarch64-linux append the following flag to your build command: --config=D5L-toolchain

  • e.g. dazel build //plugins/dali/TensorRTInferOp:libtensorrtinferop.so --config=D5L-toolchain

You will find the associated binaries in //bazel-out/aarch64-fastbuild/plugins/dali/TensorRTInferOp/libtensorrtinferop.so

Note: D5L-toolchain is aliased to L4T-toolchain for Jetson users' convenience

aarch64-qnx

Applicable to DRIVE AGX Platforms flashed with the QNX PDK

In order to use this toolchain you must obtain the QNX Toolchain and have built a container that supports QNX (Dockerfiles will have names that contain aarch64-qnx or both)

To cross-compile targets for aarch64-qnx append the following flag to your build command: --config=D5Q-toolchain

  • e.g. dazel build //plugins/dali/TensorRTInferOp:libtensorrtinferop.so --config=D5Q-toolchain

You will find the associated binaries in //bazel-out/aarch64-fastbuild/plugins/dali/TensorRTInferOp/libtensorrtinferop.so

Building for older PDKs

By default --config=[D5L/D5Q/L4T]-toolchain will target the latest supported PDK version. Since different versions may use slightly different dependencies, to build with an older build container you also need to specify the exact PDK version you are targeting.

  • e.g. dazel build //plugins/dali/TensorRTInferOp:libtensorrtinferop.so --config=D5L-toolchain --define platforms=drive_pdk_5.1.6.0+linux

Running Compiled Targets in a Container

If you want to run a target in a container, use a command similar to the following:

docker run --runtime=nvidia -v $(realpath bazel-bin):/DL4AGX -it <NAME OF ENV DOCKER IMAGE> /DL4AGX/<PATH TO YOUR SAMPLE IN bazel-bin>
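As a concrete sketch of the template above, the following is a hypothetical invocation; the image tag and the sample path under bazel-bin are placeholders and will differ on your system.

```shell
# Hypothetical example: the image tag and binary path are placeholders.
# bazel-bin is mounted into the container at /DL4AGX, and the sample
# is invoked by its path relative to that mount point.
docker run --runtime=nvidia \
  -v $(realpath bazel-bin):/DL4AGX \
  -it nvidia/drive_pdk:5.1.3.0 \
  /DL4AGX/path/to/your/sample
```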

Applications

Multi-Device Inference Pipelines

This application demonstrates how to use DALI (https://github.com/NVIDIA/DALI) and TensorRT (https://developer.nvidia.com/tensorrt) in order to create accelerated inference pipelines that leverage more than one accelerator on the Xavier SoC.

Troubleshooting Steps

Refreshing the Build container

If you rebuild a container but have not changed its name, dazel may not detect that the environment has changed. To trigger a manual rebuild of the environment, run:

touch Dockerfile.dazel
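The refresh and the rebuild can be run back to back; touching Dockerfile.dazel invalidates dazel's cached environment image, and the next build recreates it. The build target here is the plugin example used earlier in this README.

```shell
# Force dazel to rebuild its build-environment image, then
# re-run a build inside the freshly built container.
touch Dockerfile.dazel
dazel build //plugins/dali/TensorRTInferOp:libtensorrtinferop.so
```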