Issues: NVIDIA/TensorRT
#4490: LSTM model converted to TensorRT is slower than PyTorch on RTX 4090
Opened Jun 16, 2025 by jds250

#4487: How to use custom ops (load xx.so for ONNX Runtime) with polygraphy run --onnxrt?
Labels: Module:Polygraphy (Issues with Polygraphy), triaged (Issue has been triaged by maintainers)
Opened Jun 12, 2025 by lzcchl

#4486: RFE: Support Ceil or Round operator on DLA
Labels: Module:Embedded (Issues when using TensorRT on embedded platforms), triaged
Opened Jun 11, 2025 by JoeCleary

#4483: Failure of TensorRT 10.3: inference takes longer than the previous version when running a TRT model on GPU Orin
Labels: internal-bug-tracked (Tracked internally, will be fixed in a future release), Investigating (Issue is under investigation by TensorRT devs), Module:Performance (General performance issues), triaged
Opened Jun 10, 2025 by JamesWang007

#4482: Export failure of TensorRT 10.11 when running scaled dot product on GPU A6000
Labels: Module:ONNX (Issues relating to ONNX usage and import), triaged, waiting for feedback (Requires more information from the author to make progress on the issue)
Opened Jun 5, 2025 by evolvingai

#4477: HuggingFace DETR model export fails
Labels: Module:Accuracy (Output mismatch between TensorRT and other frameworks), triaged
Opened Jun 3, 2025 by geiche735

#4475: Failed to build the serialized network due to wrong shape inference for the LayerNormalization operator
Labels: Module:ONNX, triaged, waiting for feedback
Opened Jun 3, 2025 by coffezhou

#4474: IUnaryLayer cannot be used to compute a shape tensor
Labels: Module:ONNX
Opened May 30, 2025 by zhangzk0416

#4473: TensorRT produces wrong results when running a valid ONNX model on GPU 3080
Labels: Module:Accuracy, triaged
Opened May 29, 2025 by coffezhou

#4472: Fails to parse a valid ONNX model: API Usage Error (node_of_reduce_min_output: at least 1 dimensions are required for input.)
Labels: Module:ONNX
Opened May 29, 2025 by coffezhou

#4471: TensorRT fails to infer the shape of the output for a valid ONNX model
Labels: Module:ONNX, triaged
Opened May 29, 2025 by coffezhou

#4470: The ONNX parser failed to parse a valid model: Slice (importSlice): INVALID_NODE: Assertion failed: (starts.size() == axes.size())
Labels: Module:ONNX
Opened May 29, 2025 by coffezhou

#4469: Failed to build the serialized network when running a valid ONNX model on GPU 3080: dimensions not compatible for Gather with GatherMode = kND
Labels: Module:ONNX, triaged
Opened May 28, 2025 by coffezhou

#4468: "Internal Error: MyelinCheckException: gvn.cpp:318: CHECK(graph().ssa_validation()) failed." when building engine
Labels: Module:Engine Build (Issues with building TensorRT engines), triaged
Opened May 28, 2025 by xjy1995

#4467: How can I adjust the position of quantization nodes to reduce data conversion?
Labels: Module:Engine Build, triaged
Opened May 27, 2025 by lzcchl

#4466: Detectron2 Faster R-CNN to TensorRT
Labels: Module:Engine Build, waiting for feedback
Opened May 26, 2025 by Kolkhoznyk

#4463: Why does the img2img diffusion task not have quantization support, and how can it be made to work with quantization?
Labels: Module:Demo (Issues regarding demos under the demo/ directory: Diffusion, DeBERTa, Bert), triaged
Opened May 23, 2025 by varshith15

#4461: ConvNet FP8 support
Labels: Module:Performance, triaged
Opened May 23, 2025 by AnnaTrainingG

#4460: An inference error occurred after converting ONNX to TensorRT
Labels: Module:Accuracy, triaged
Opened May 22, 2025 by wangbiao0

#4459: Using multiple processes to execute two engine models in one program
Labels: Module:Accuracy, triaged
Opened May 21, 2025 by A-cvprogrammer

#4448: trtexec fails at the end if --saveEngine path is invalid or unwritable
Labels: Feature Request
Opened May 18, 2025 by PierreMarieCurie

#4440: TensorRT Plugin gets incorrect input data when integrated into full model, but works fine in isolation
Labels: Module:Plugins (Issues when using TensorRT plugins), triaged
Opened May 13, 2025 by niubiplus2