Description
Hi, guys.
After converting the ONNX model (linked below under Relevant Files) to a TensorRT engine,
the inference time is 5 ms with TensorRT 8.2.3 but 80 ms with TensorRT 8.4.3 — a ~16x slowdown on the same model and GPU.
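The report does not include the benchmarking script, so as a point of reference, here is a minimal sketch of how a per-inference latency like "5 ms" is typically measured: warm-up iterations followed by an averaged wall-clock loop. The `infer` callable is a hypothetical stand-in for the actual TensorRT execution call (e.g. `context.execute_v2(bindings)`), which is not shown in the report:

```python
import time

def benchmark(infer, warmup=10, iters=100):
    """Return the average latency of `infer()` in milliseconds."""
    # Warm-up runs so lazy initialization (CUDA context, kernel
    # autotuning, etc.) does not skew the measurement.
    for _ in range(warmup):
        infer()
    start = time.perf_counter()
    for _ in range(iters):
        infer()
    return (time.perf_counter() - start) / iters * 1e3

# In the real setup `infer` would wrap the TensorRT call;
# here a 5 ms sleep stands in for the engine execution.
print(f"{benchmark(lambda: time.sleep(0.005)):.1f} ms")
```

Comparing the two TensorRT versions with the same measurement loop rules out differences caused by the timing method itself.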
Environment
TensorRT Version: 8.4.3.1
NVIDIA GPU: 3080Ti
NVIDIA Driver Version: 515
CUDA Version: 11.6
CUDNN Version: 8.4
Operating System: Ubuntu 18.04
Python Version (if applicable): 3.8.8
Tensorflow Version (if applicable):
PyTorch Version (if applicable): 1.11
Baremetal or Container (if so, version):
Relevant Files
https://github.com/ywfwyht/onnx_model/blob/main/0903_p28_t3_seg_simp.onnx
Steps To Reproduce
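The exact build command is not given in the report; a plausible reproduction with `trtexec` (the model filename is taken from the link above, the engine filename is an assumption) would be:

```shell
# Build an engine from the ONNX model and save it for reuse.
# (Default flags assumed; the report does not state the actual build options.)
trtexec --onnx=0903_p28_t3_seg_simp.onnx --saveEngine=seg.plan

# Benchmark the built engine; trtexec prints latency statistics by default.
trtexec --loadEngine=seg.plan
```

Running the same two commands under TensorRT 8.2.3 and 8.4.3 should reproduce the 5 ms vs. 80 ms gap described above.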