Description
I tried to run the attached model with the trtexec tool on a V100 GPU using TensorRT 8.6 and CUDA 12.1, but it fails with the Segmentation fault (core dumped) error shown below. The same model loads fine with TensorRT 8.4 and CUDA 11.6 on a GTX 1080. Note: this is possibly related to #3631; it is the same model, but with a dynamic batch size.
```
./trtexec --onnx=trtexec_segfault.onnx --verbose
...output omitted, see attached log...
Segmentation fault (core dumped)
```
Environment
TensorRT Version: 8.6.1.6
NVIDIA GPU: Tesla V100
NVIDIA Driver Version: 545.23.08
CUDA Version: 12.1
CUDNN Version: 8.9.0.131-1+cuda12.1
Operating System: Ubuntu 20.04
Python Version (if applicable): N/A
Tensorflow Version (if applicable): N/A
PyTorch Version (if applicable): N/A
Baremetal or Container (if so, version): N/A
Relevant Files
Model link: https://drive.google.com/file/d/10old1P-M5gafvWjjLVI3khkiGnlVVB9L/view?usp=sharing
Output log: trtexec_segfault.txt
Steps To Reproduce
Commands or scripts: `./trtexec --onnx=trtexec_segfault.onnx --verbose`
Have you tried the latest release?: Yes
Can this model run on other frameworks? For example, run the ONNX model with ONNXRuntime (`polygraphy run <model.onnx> --onnxrt`): The model runs fine with TensorRT 8.4, CUDA 11.6, on a GTX 1080.