
TensorRTInferOp

A plugin for DALI (https://github.com/NVIDIA/DALI/) that allows users to include TensorRT engines in DALI pipelines. This lets the same GPU-accelerated DALI data preprocessing pipelines used during training be reused for inference.

Compiling

To compile this library, run the applicable command for your platform:

x86_64-linux

dazel run //plugins/dali/TensorRTInferOp:libtensorrtinferop.so

aarch64-linux

dazel run //plugins/dali/TensorRTInferOp:libtensorrtinferop.so --config=[D5L/L4T]-toolchain

aarch64-qnx

dazel run //plugins/dali/TensorRTInferOp:libtensorrtinferop.so --config=D5Q-toolchain
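
Once built, the resulting shared library can be loaded into DALI at runtime through DALI's plugin manager. Below is a minimal sketch; the path to the .so is an assumption and should point at your actual build output.

from nvidia.dali import plugin_manager

# Load the compiled plugin so the TensorRTInfer op is registered with DALI.
# The path is a placeholder; point it at wherever your build places the library.
plugin_manager.load_library("./libtensorrtinferop.so")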

Usage

Op Name: TensorRTInfer

Performs inference with a TensorRT engine inside a DALI pipeline (see the usage sketch after the argument list below).

Arguments:

Required:
  • input_nodes Vec<string>: Input nodes in the engine

  • output_nodes Vec<string>: Output nodes in the engine

  • engine string: Path to the TensorRT engine file to run inference with

Optional:
  • log_severity int (nvinfer::Severity): Logging severity for TensorRT

  • plugins Vec<string>: Plugin libraries to load

  • num_outputs int: Number of outputs

  • inference_batch_size int: Batch size to run inference with

  • use_dla_core int: DLA core to run inference on
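
The sketch below shows one way to use the op from Python, assuming the plugin has been loaded as shown in the Compiling section. The op's keyword arguments follow the table above; the engine path, binding names, and input shape are hypothetical placeholders that must match your serialized engine, and ExternalSource is used here only to supply dummy data.

import numpy as np
from nvidia.dali import ops
from nvidia.dali.pipeline import Pipeline

class TRTInferPipeline(Pipeline):
    def __init__(self, batch_size, num_threads, device_id):
        super(TRTInferPipeline, self).__init__(batch_size, num_threads, device_id)
        self.source = ops.ExternalSource()
        # Argument names come from the table above; the engine path and
        # node names are hypothetical and must match your engine.
        self.infer = ops.TensorRTInfer(
            device="gpu",
            engine="/path/to/model.engine",
            input_nodes=["input_0"],
            output_nodes=["output_0"],
            num_outputs=1,
            inference_batch_size=batch_size)

    def define_graph(self):
        self.images = self.source()
        # The op runs on the GPU, so move the input there first.
        return self.infer(self.images.gpu())

    def iter_setup(self):
        # Feed dummy CHW float32 tensors; the shape must match the
        # engine's input binding.
        batch = [np.random.rand(3, 224, 224).astype(np.float32)
                 for _ in range(self.batch_size)]
        self.feed_input(self.images, batch)

pipe = TRTInferPipeline(batch_size=8, num_threads=2, device_id=0)
pipe.build()
outputs = pipe.run()

pipe.run() returns the engine's outputs as DALI TensorLists on the GPU, one per entry in output_nodes.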
