
TensorRT doesn't provide the same output as Torch model #102

Closed
4 tasks done
duchieuphan2k1 opened this issue May 5, 2023 · 2 comments
Labels
question Further information is requested

Comments

@duchieuphan2k1

duchieuphan2k1 commented May 5, 2023

Before Asking

  • I have read the README carefully.

  • I want to train on my custom dataset, and I have read the tutorial for fine-tuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code from the main branch and run it again, and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

I converted the torch model to a TensorRT model using the end2end converter:
python tools/converter.py -f configs/damoyolo_tinynasL45_L.py -c best.pth --batch_size 1 --img_size 1024 --trt --end2end --trt_eval

The command ran normally, but the evaluation accuracy is 0%, compared to 90% when I use the torch model.
I also ran the demo command to predict some images; the output bounding boxes appear random.

I also tried converting to ONNX with this command:
python tools/converter.py -f configs/damoyolo_tinynasL45_L.py -c best.pth --batch_size 1 --img_size 1024
The ONNX model produces exactly the same output as the torch model.

So am I missing any configuration for TensorRT?
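(For reference, a numerical parity check between two model outputs, e.g. torch vs. ONNX Runtime or TensorRT, can be sketched as below; the helper name and tolerance are illustrative, not from the DAMO-YOLO codebase.)

```python
import numpy as np

def outputs_match(a: np.ndarray, b: np.ndarray, atol: float = 1e-4) -> bool:
    """True if two output tensors have the same shape and agree elementwise
    within an absolute tolerance (small float drift between backends is normal)."""
    return a.shape == b.shape and bool(np.allclose(a, b, atol=atol))
```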

Additional

No response

@duchieuphan2k1 duchieuphan2k1 added the question Further information is requested label May 5, 2023
@jyqi
Collaborator

jyqi commented May 5, 2023

Hello, currently the End2End NMS module is only compatible with TensorRT 7.2.1.4. Please verify that the TensorRT version you are using matches. If it does not, you can either switch to that TensorRT version, or export a non-End2End TensorRT engine and implement NMS post-processing in Python.
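(The Python NMS post-processing suggested above can be sketched as a greedy NumPy implementation like the one below. It assumes `boxes` is an (N, 4) array in [x1, y1, x2, y2] format and `scores` is (N,); the function name and default threshold are illustrative, not from DAMO-YOLO.)

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list:
    """Return indices of boxes kept after greedy non-maximum suppression."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the current top box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # keep only boxes that do not overlap the current box too much
        order = order[1:][iou <= iou_thresh]
    return keep
```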

@jyqi jyqi closed this as completed May 10, 2023
ZlodeiBaal added a commit to ZlodeiBaal/DAMO-YOLO that referenced this issue Jun 14, 2023
box_coding should be 0 to match the network output format [x1, y1, x2, y2]; the same box coding is used for TRT7.

After this fix, TRT8 export works successfully.

The same problem is probably here - tinyvision#102
@ategen3rt

I have run into this issue as well. The problem is that the TRT8 export passed the wrong value for box_coding to the TRT::EfficientNMS_TRT plugin. I've confirmed that PR #113 fixes the issue: it changes box_coding from 1 (BoxCenterSize) to 0 (BoxCorner).
See https://github.com/NVIDIA/TensorRT/tree/release/8.6/plugin/efficientNMSPlugin for more information on the parameters.
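(To make the two encodings concrete: box_coding=0 (BoxCorner) expects [x1, y1, x2, y2] and box_coding=1 (BoxCenterSize) expects [cx, cy, w, h]. The conversion between them can be sketched as below; the function names are hypothetical, not part of DAMO-YOLO or TensorRT.)

```python
import numpy as np

def corner_to_center(boxes: np.ndarray) -> np.ndarray:
    """[x1, y1, x2, y2] -> [cx, cy, w, h] for an (N, 4) array."""
    x1, y1, x2, y2 = boxes.T
    return np.stack([(x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1], axis=1)

def center_to_corner(boxes: np.ndarray) -> np.ndarray:
    """[cx, cy, w, h] -> [x1, y1, x2, y2] for an (N, 4) array."""
    cx, cy, w, h = boxes.T
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
```

Passing boxes in one encoding while the plugin is configured for the other produces arbitrary-looking detections, which matches the random bounding boxes reported above.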

@ategen3rt ategen3rt mentioned this issue Sep 18, 2023