Describe the issue
I designed and trained a 6D pose estimation model in PyTorch, then used torch.onnx.export to convert the .pth checkpoint into an ONNX inference file. On comparison, I found that for some inputs (for example, when the target in the image is small and the background is pure black), the onnxruntime and PyTorch inference results are clearly inconsistent, and in these cases both results show a large error against the ground truth.
How can I reduce, or completely avoid, the inference differences between onnxruntime and PyTorch?
To reproduce
Our algorithm can be found at this link: https://github.com/YangHai-1218/PseudoFlow/blob/69e8e7ad11a2a58f06532cc5b89b76300d83613b/models/estimator/wdr_pose.py
Urgency
No response
Platform
Linux
OS Version
86~20.04.2-Ubuntu
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.15.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 11.3