# Table Of Contents
- Description
- How does this sample work?
- Preparing sample data
- Running the sample
- Additional resources
- License
- Changelog
- Known issues
## Description

This sample, sampleIOFormats, uses an ONNX model that was trained on the MNIST dataset and performs engine building and inference using TensorRT. The correctness of the outputs is then compared against a golden reference.

## How does this sample work?

Specifically, this sample shows how to use the API to explicitly specify I/O formats: `TensorFormat::kLINEAR` for Float32, and additionally `TensorFormat::kCHW2` and `TensorFormat::kHWC8` for Float16 and INT8 precision.

`ITensor::setAllowedFormats` is invoked to specify which format is expected to be supported:
```
bool SampleIOFormats::build(int dataWidth)
{
    ...
    // TensorFormats is a bitmask: set the single bit corresponding to the desired TensorFormat.
    network->getInput(0)->setAllowedFormats(static_cast<TensorFormats>(1 << static_cast<int>(mTensorFormat)));
    ...
}
```
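
For context, here is a minimal sketch (illustrative only, not verbatim from the sample; the `network` and `input` names are assumptions) of how an input tensor could be configured for Float16 with the `kHWC8` layout:

```
// Illustrative sketch; assumes `network` is an nvinfer1::INetworkDefinition*
// with at least one input tensor.
nvinfer1::ITensor* input = network->getInput(0);

// The tensor type must match the chosen format; kHWC8 is bound to Float16.
input->setType(nvinfer1::DataType::kHALF);

// Allow exactly one format by setting the corresponding bit in the mask.
input->setAllowedFormats(
    static_cast<nvinfer1::TensorFormats>(1U << static_cast<int>(nvinfer1::TensorFormat::kHWC8)));
```

Because only one bit is set in the mask, the builder has no choice of layout for this tensor; ORing several bits together would instead let it choose among multiple allowed layouts.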
## Preparing sample data

- Download the sample data from the TensorRT release tarball, if not already mounted under `/usr/src/tensorrt/data` (NVIDIA NGC containers), and set it to `$TRT_DATADIR`:

  ```
  export TRT_DATADIR=/usr/src/tensorrt/data
  pushd $TRT_DATADIR/mnist
  pip3 install Pillow
  popd
  ```
## Running the sample

- Compile the sample by following the build instructions in the TensorRT README.

- Run inference on each digit, looping from 0 to 9:

  ```
  ./sample_io_formats --datadir=<path/to/data> --useDLACore=N
  ```

  For example:

  ```
  ./sample_io_formats --datadir $TRT_DATADIR/mnist
  ```
- Verify that all 10 digits match correctly. If the sample runs successfully, you should see output similar to the following:

  ```
  &&&& RUNNING TensorRT.sample_io_formats # ./sample_io_formats
  [I] The test chooses MNIST as the network and recognizes a randomly generated digit
  [I] Firstly it runs the FP32 as the golden data, then INT8/FP16 with different formats will be tested
  [I]
  [I] Building and running a FP32 GPU inference to get golden input/output
  [I] [TRT] Detected 1 input and 1 output network tensors.
  [I] Input:
  ... (omitted message)
  &&&& PASSED TensorRT.sample_io_formats
  ```
  This output shows that the sample ran successfully; `PASSED`.

To see the full list of available options and their descriptions, use the `-h` or `--help` command line option.
## Additional resources

The following resources provide a deeper understanding about this sample:

**Models**

**Documentation**
- Introduction To NVIDIA’s TensorRT Samples
- Working With TensorRT Using The C++ API
- NVIDIA’s TensorRT Documentation Library
## License

For terms and conditions for use, reproduction, and distribution, see the TensorRT Software License Agreement documentation.
## Changelog

August 2022
- Migrated code from parsing a `caffe` model to an `onnx` model.
October 2021
- Changed names and topic from "reformat-free" to "I/O formats", because `BuilderFlag::kSTRICT_TYPES` is deprecated. "Reformat-free I/O" (see `BuilderFlag::kDIRECT_IO`) is generally counterproductive and fragile, since it constrains the optimizer from choosing the fastest implementation and depends on which kernels are available on a particular target.
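
  As an aside, a minimal sketch of how reformat-free I/O would be requested (illustrative only, not part of this sample; assumes `config` is an `nvinfer1::IBuilderConfig*`):

  ```
  // Illustrative sketch: request that TensorRT avoid inserting reformatting
  // layers at network I/O boundaries. As noted above, this is discouraged.
  config->setFlag(nvinfer1::BuilderFlag::kDIRECT_IO);
  ```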
June 2019
- This is the first release of the
README.md
file and sample.
## Known issues

There are no known issues in this sample.