Hello! Very nice work — it has been a big inspiration for my own project. I'm working on a Jetson AGX Xavier, converting my custom yolov5s.onnx model to a .trt engine with the trtexec command. In my experiments, the optimized inference takes approximately 10 ms.
However, post-processing (NMS, decoding the output with the formulas given in the ultralytics repo, etc.) takes 20 ms. Did you try implementing your approach in C++ to get faster post-processing?
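For reference, the NMS step I'm timing on the CPU looks roughly like the following greedy NumPy sketch (the IoU threshold and the (x1, y1, x2, y2) box layout are just illustrative, not necessarily what your code uses):

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.45):
    """Greedy NMS on (x1, y1, x2, y2) boxes; returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with every remaining box.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes overlapping the kept box above the threshold.
        order = order[1:][iou <= iou_thr]
    return keep
```

This Python loop is exactly the part I suspect would be much faster in C++ (or via TensorRT's batchedNMSPlugin).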
Another question is about modifying the output array. Why is this modification needed at all? Doesn't trtexec take this part of the model into account when converting from ONNX to TensorRT?
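To be concrete, by "modification" I mean the decode formulas from the ultralytics repo, which for YOLOv5 v4+ are roughly the following (NumPy sketch; function and argument names are mine, and shapes are simplified to a single prediction):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolov5(raw, grid_xy, anchor_wh, stride):
    """Decode one raw YOLOv5 head prediction into pixel-space box + scores.

    raw:       raw network output (tx, ty, tw, th, obj, cls...) for one cell
    grid_xy:   (cx, cy) index of the grid cell
    anchor_wh: anchor (width, height) in pixels for this head
    stride:    downsampling factor of this head (8, 16, or 32)
    """
    p = sigmoid(raw)
    # Box centre: (sigmoid(t) * 2 - 0.5 + grid) * stride
    xy = (p[0:2] * 2.0 - 0.5 + grid_xy) * stride
    # Box size: (sigmoid(t) * 2)^2 * anchor
    wh = (p[2:4] * 2.0) ** 2 * anchor_wh
    return np.concatenate([xy, wh, p[4:]])
```

My naive understanding is that the exported ONNX graph may end at the raw head outputs, so this decode never makes it into the engine — is that why it has to run on the CPU afterwards?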
Thanks!