
[TensorRT EP] Update doc to TRT10 (#20580)
### Description
* Update TRT version to 10.0


### Motivation and Context
yf711 committed May 7, 2024
1 parent 08407ea commit f131eb9
Showing 2 changed files with 17 additions and 16 deletions.
docs/build/eps.md: 2 additions, 2 deletions

@@ -110,7 +110,7 @@ See more information on the TensorRT Execution Provider [here](../execution-prov

* Follow [instructions for CUDA execution provider](#cuda) to install CUDA and cuDNN, and set up environment variables.
* Follow [instructions for installing TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html)
- * The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 8.6.
+ * The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 10.0.
* The path to the TensorRT installation must be provided via the `--tensorrt_home` parameter.
* ONNX Runtime uses the TensorRT built-in parser from `tensorrt_home` by default.
* To use the open-source [onnx-tensorrt](https://github.com/onnx/onnx-tensorrt/tree/main) parser instead, add the `--use_tensorrt_oss_parser` parameter in the build commands below.
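For reference, a minimal sketch of how these parameters might be combined in a Linux build command. The install paths below are placeholder assumptions, not documented defaults; adjust them to your local CUDA, cuDNN, and TensorRT installs.

```bash
# Sketch only: the paths here are hypothetical examples.
./build.sh --config Release --parallel \
  --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu \
  --use_tensorrt --tensorrt_home /usr/local/TensorRT \
  --use_tensorrt_oss_parser   # optional: build with the open-source onnx-tensorrt parser
```

Dropping the `--use_tensorrt_oss_parser` flag keeps the default behavior of using the TensorRT built-in parser from `tensorrt_home`.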
@@ -146,7 +146,7 @@ Dockerfile instructions are available [here](https://github.com/microsoft/onnxru

---

- ## NVIDIA Jetson TX1/TX2/Nano/Xavier
+ ## NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin

### Build Instructions
{: .no_toc }
docs/execution-providers/TensorRT-ExecutionProvider.md: 15 additions, 14 deletions

@@ -27,20 +27,21 @@ See [Build instructions](../build/eps.md#tensorrt).

## Requirements

- | ONNX Runtime | TensorRT | CUDA |
- |:-------------|:---------|:-------|
- | 1.17-main | 8.6 | 11.8, 12.2 |
- | 1.16 | 8.6 | 11.8 |
- | 1.15 | 8.6 | 11.8 |
- | 1.14 | 8.5 | 11.6 |
- | 1.12-1.13 | 8.4 | 11.4 |
- | 1.11 | 8.2 | 11.4 |
- | 1.10 | 8.0 | 11.4 |
- | 1.9 | 8.0 | 11.4 |
- | 1.7-1.8 | 7.2 | 11.0.3 |
- | 1.5-1.6 | 7.1 | 10.2 |
- | 1.2-1.4 | 7.0 | 10.1 |
- | 1.0-1.1 | 6.0 | 10.0 |
+ | ONNX Runtime | TensorRT | CUDA |
+ | :----------- | :------- | :--------- |
+ | 1.18-main | 10.0 | 11.8, 12.2 |
+ | 1.17 | 8.6 | 11.8, 12.2 |
+ | 1.16 | 8.6 | 11.8 |
+ | 1.15 | 8.6 | 11.8 |
+ | 1.14 | 8.5 | 11.6 |
+ | 1.12-1.13 | 8.4 | 11.4 |
+ | 1.11 | 8.2 | 11.4 |
+ | 1.10 | 8.0 | 11.4 |
+ | 1.9 | 8.0 | 11.4 |
+ | 1.7-1.8 | 7.2 | 11.0.3 |
+ | 1.5-1.6 | 7.1 | 10.2 |
+ | 1.2-1.4 | 7.0 | 10.1 |
+ | 1.0-1.1 | 6.0 | 10.0 |

For more details on CUDA/cuDNN versions, please see [CUDA EP requirements](./CUDA-ExecutionProvider.md#requirements).

