Closed as not planned
Labels
ep:TensorRT (issues related to TensorRT execution provider), stale (issues that have not been addressed in a while; categorized by a bot)
Description
Describe the issue
I'm using ONNX Runtime with the TensorRT execution provider.
When I enable the trt_cuda_graph_enable like this:
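(The original snippet was attached as an image and is not preserved. As a minimal sketch, assuming the standard TensorRT EP option keys from the C API, enabling `trt_cuda_graph_enable` might look like the following; everything apart from the option key itself is an illustrative assumption, not the reporter's actual code.)

```cpp
#include <onnxruntime_cxx_api.h>

// Sketch only: one common way to enable trt_cuda_graph_enable via the
// OrtTensorRTProviderOptionsV2 API. Names and structure are assumptions.
Ort::SessionOptions MakeSessionOptions() {
  const OrtApi& api = Ort::GetApi();

  OrtTensorRTProviderOptionsV2* trt_options = nullptr;
  Ort::ThrowOnError(api.CreateTensorRTProviderOptions(&trt_options));

  // Turn on CUDA graph capture/replay in the TensorRT EP.
  const char* keys[] = {"trt_cuda_graph_enable"};
  const char* values[] = {"1"};
  Ort::ThrowOnError(
      api.UpdateTensorRTProviderOptions(trt_options, keys, values, 1));

  Ort::SessionOptions session_options;
  Ort::ThrowOnError(api.SessionOptionsAppendExecutionProvider_TensorRT_V2(
      session_options, trt_options));

  api.ReleaseTensorRTProviderOptions(trt_options);
  return session_options;
}
```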

Subsequently, no matter how many images I pass for inference, the result I get is always that of the first image.


The following is my inference code; the "input" and "output temp" buffers are reused across runs.
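(The inference code was also attached as an image that is not preserved. A hypothetical sketch of such a loop, assuming `Ort::IoBinding` with preallocated, reused device buffers — which CUDA graph replay requires, since the captured graph reads and writes fixed addresses — could look like this; all names, shapes, and helpers are assumptions.)

```cpp
#include <onnxruntime_cxx_api.h>

// Sketch only: inference loop that reuses the same input/output buffers,
// as the report describes. Not the reporter's actual code.
void RunAll(Ort::Session& session, int num_images) {
  // Bind preallocated CUDA tensors once; with CUDA graphs enabled their
  // device addresses must stay fixed across Run() calls.
  Ort::IoBinding binding(session);
  // binding.BindInput("input", input_tensor);    // reused device tensor
  // binding.BindOutput("output", output_tensor); // reused device tensor

  for (int i = 0; i < num_images; ++i) {
    // Copy the i-th image into the *same* bound input buffer
    // (e.g. cudaMemcpy to its device pointer), then run:
    session.Run(Ort::RunOptions{}, binding);
    binding.SynchronizeOutputs();
    // Read results back from the bound output buffer.
  }
}
```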
To reproduce
- Use ONNX Runtime with the TensorRT execution provider, version 1.16.3 / 1.17.0 / 1.17.1, with trt_cuda_graph_enable set.
- Run inference on different images and compare the outputs.
Urgency
No response
Platform
Windows
OS Version
WIN10
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.17.0
ONNX Runtime API
C++
Architecture
X64
Execution Provider
TensorRT
Execution Provider Library Version
No response