Ahead of Time (AOT) compiling for PyTorch JIT
- Libtorch 1.4.0
- CUDA 10.1
- cuDNN 7.6
- TensorRT 6.0.1.5
Install TensorRT, CUDA and cuDNN on the system before starting to compile.
To build a release version of the library:
bazel build //:libtrtorch --cxxopt="-DNDEBUG"
To build with debug symbols:
bazel build //:libtrtorch --compilation_mode=dbg
A tarball with the include files and library can then be found in bazel-bin.
Make sure to add LibTorch's copy of the CUDA 10.1 libraries to your LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TRTorch/external/libtorch/lib
To run a TorchScript graph through TRTorch:
bazel run //cpp/trtorchexec -- $(realpath <PATH TO GRAPH>) <input-size>
Thanks for wanting to contribute! There are two main ways to add support for a new op: either write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if the op can be mapped to a set of ops that already have converters, write a graph rewrite pass that replaces the new op with an equivalent subgraph of supported ops. Graph rewriting is preferred because it avoids maintaining a large library of op converters; a sketch of such a pass is shown below.
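As a rough illustration of the graph rewrite approach, a lowering-style pass can be written with `torch::jit::SubgraphRewriter` from LibTorch. The sketch below is illustrative only (the pass name and where it is registered are assumptions, not part of TRTorch's current API); it replaces `aten::dropout` nodes with their input, which is valid for inference graphs:

```cpp
#include "torch/csrc/jit/ir.h"
#include "torch/csrc/jit/passes/subgraph_rewrite.h"

// Illustrative lowering pass: rewrite aten::dropout(x, p, train) -> x,
// which is safe for inference-only graphs.
void RemoveDropout(std::shared_ptr<torch::jit::Graph>& graph) {
  std::string dropout_pattern = R"IR(
    graph(%input, %p, %train):
        %out = aten::dropout(%input, %p, %train)
        return (%out))IR";
  std::string no_dropout_pattern = R"IR(
    graph(%input, %p, %train):
        return (%input))IR";

  torch::jit::SubgraphRewriter rewriter;
  rewriter.RegisterRewritePattern(dropout_pattern, no_dropout_pattern);
  rewriter.runOnGraph(graph);
}
```

Because a rewrite like this runs before conversion, no converter for the original op is ever needed.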
The NodeConverterRegistry is not currently exposed in the public API, but you can try using the internal headers shipped with the tarball to register a converter for your op from inside your application, as sketched below.
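A very rough sketch of what such a registration might look like, assuming the internal header core/conversion/converters/converters.h and its RegisterNodeConversionPatterns() helper are reachable from your include path; the namespaces, helper names, and signatures are internal details and may differ in your version:

```cpp
// Sketch only: header paths, namespaces and helper names are internal to
// TRTorch and may differ between versions.
#include "core/conversion/converters/converters.h"
#include "core/util/prelude.h"

namespace {
using namespace trtorch::core;
using namespace trtorch::core::conversion;

// Illustrative converter for aten::relu: grab the input ITensor, add a
// TensorRT ReLU activation layer, and record the layer output as the value
// produced by this node so later converters can consume it.
auto relu_registration = converters::RegisterNodeConversionPatterns()
  .pattern({
    "aten::relu(Tensor input) -> (Tensor)",
    [](ConversionCtx* ctx, const torch::jit::Node* n, converters::args& args) -> bool {
      auto in = args[0].ITensor();
      auto relu_layer = ctx->net->addActivation(*in, nvinfer1::ActivationType::kRELU);
      if (!relu_layer) {
        return false;  // could not create the TensorRT layer
      }
      relu_layer->setName(util::node_info(n).c_str());
      // Associate the node's output Value with the TensorRT output tensor
      // (the exact helper or member used for this mapping depends on the version).
      ctx->AssociateValueAndTensor(n->outputs()[0], relu_layer->getOutput(0));
      return true;
    }});
} // namespace
```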
| Component | Description |
| --- | --- |
| core | Main JIT ingest, lowering, conversion and execution implementations |
| cpp | C++ API for TRTorch (see the sketch below this table) |
| tests | Unit tests for TRTorch |
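As a sketch of what the cpp component's C++ API looks like in use (the file name and input sizes below are placeholders, and the ExtraInfo/CompileGraph spelling should be checked against the trtorch/trtorch.h shipped in your build):

```cpp
#include "torch/script.h"
#include "trtorch/trtorch.h"

int main() {
  // Load a TorchScript module exported from Python (file name is a placeholder).
  auto mod = torch::jit::load("model_scripted.jit");

  // Input shape the TensorRT engine should be built for (placeholder sizes).
  auto spec = trtorch::ExtraInfo({{1, 3, 224, 224}});

  // Compile the TorchScript graph into a module backed by a TensorRT engine.
  auto trt_mod = trtorch::CompileGraph(mod, spec);

  // Run it like a normal TorchScript module.
  auto in = torch::randn({1, 3, 224, 224}, torch::kCUDA);
  auto out = trt_mod.forward({in});
  return 0;
}
```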
The TRTorch license can be found in the LICENSE file.