Poor performance on trt engine compared to ONNX model #2296

@omri-cavnue

Description

Environment
• Hardware Platform (Jetson / GPU) : Jetson Xavier NX
• DeepStream Version : 6.0
• JetPack Version : Jetpack 4.6.1 & L4T 32.6.1
• TensorRT Version : 8.0.1.6-1+cuda10.2

I convert my ONNX model to a TensorRT engine using DeepStream and save the engine file. However, the results are much worse than those from the ONNX model. This is an object detection task, and the mAP on the Jetson is significantly lower than when I run the ONNX model on a desktop GPU.
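A common first debugging step for this kind of accuracy gap is to dump the raw network outputs from both runtimes on the same input and compare them numerically, which separates a conversion/precision problem (e.g. DeepStream building the engine in FP16 or INT8) from a preprocessing or postprocessing mismatch. Below is a minimal, hedged sketch of such a comparison helper; `compare_outputs` is a hypothetical name, and obtaining the two tensors (from ONNX Runtime and from the saved TensorRT engine) is assumed to happen elsewhere:

```python
import numpy as np

def compare_outputs(onnx_out, trt_out, atol=1e-3):
    """Compare raw output tensors dumped from ONNX Runtime and a TensorRT
    engine on the same input. A max-abs difference far above `atol` points
    at a precision or preprocessing mismatch rather than a broken engine."""
    a = np.asarray(onnx_out, dtype=np.float32).ravel()
    b = np.asarray(trt_out, dtype=np.float32).ravel()
    assert a.shape == b.shape, "output shapes differ between runtimes"
    max_abs = float(np.max(np.abs(a - b)))
    # Cosine similarity close to 1.0 with a large max_abs suggests a
    # scale/precision issue; low cosine suggests wrong preprocessing.
    cos = float(np.dot(a, b) /
                (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return {"max_abs_diff": max_abs,
            "cosine": cos,
            "within_tol": max_abs <= atol}
```

If the raw outputs match closely, the gap is likely in DeepStream's preprocessing (mean/scale, color format) or postprocessing rather than in the engine itself; if they diverge, forcing FP32 engine builds (network-mode=0 in the nvinfer config) is worth testing, since Xavier NX engines are often built in FP16 by default.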

Labels

triaged: Issue has been triaged by maintainers
