Description
Environment
• Hardware Platform (Jetson / GPU) : Jetson Xavier NX
• DeepStream Version : 6.0
• JetPack Version : Jetpack 4.6.1 & L4T 32.6.1
• TensorRT Version : 8.0.1.6-1+cuda10.2
I convert my ONNX model to a TensorRT engine using DeepStream and save the engine file. However, the results are much worse than those of the original ONNX model. This is an object detection task, and the mAP on the Jetson is significantly lower than when I run the ONNX model on my desktop.
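For context, a minimal sketch of the nvinfer settings involved (filenames and values here are placeholders, not my exact config). By default the engine may be built in reduced precision, so forcing FP32 with `network-mode=0` is a common first check when an engine's mAP drops relative to the ONNX model; mismatched preprocessing (e.g. `net-scale-factor`) is another usual suspect:

```ini
[property]
# Placeholder paths - substitute your actual model files
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp32.engine
# Precision of the generated engine: 0=FP32, 1=INT8, 2=FP16.
# network-mode=0 rules out FP16/INT8 precision loss as the cause.
network-mode=0
batch-size=1
# Must match the preprocessing used when evaluating the ONNX model
# on the desktop (1/255 shown here as an example).
net-scale-factor=0.0039215697906911373
```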