Windows TensorRT Python interface compatibility #390
Hi @IamNaQi , because we use PyTorch to do the data binding in the TensorRT Python interface, this involves pointer manipulation, and that approach may have some limitations across platforms. We have verified the accuracy of the C++ example on Windows in #389; we should add more tests and more docs for this.
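The pointer-based data binding described above roughly follows this pattern: the framework exposes a raw memory address, and the inference engine reads and writes through it directly, which is why it is sensitive to platform differences. A minimal host-side illustration using numpy and ctypes (TensorRT itself is not invoked here; this is only a sketch of the raw-pointer pattern, not yolort's actual code):

```python
import ctypes

import numpy as np

# Allocate a host buffer and obtain its raw address, analogous to how a
# framework tensor's data pointer would be handed to an engine binding.
buf = np.zeros(4, dtype=np.float32)
ptr = buf.ctypes.data  # integer address of the underlying memory

# Write through the raw pointer, mimicking an engine filling an output binding.
c_arr = (ctypes.c_float * 4).from_address(ptr)
for i in range(4):
    c_arr[i] = float(i)

# The writes are visible through the original numpy view, because both
# names refer to the same memory.
print(buf.tolist())
```

Because nothing here goes through a type-checked API, any mismatch in memory layout, lifetime, or address width between the two sides silently corrupts data, which is the kind of failure mode that can differ between Linux and Windows.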
Thank you very much for your kind response. The C++ example on Windows is working very smoothly: I tested it without copying the DLLs into the debug folder, built it with the new CMake lists, and the result is awesome.

Environment

Result of the C++ example on Windows

Python inference is still giving errors because of an environment issue on my side. I am working on it and will update this thread when it is solved.
The C++ inference results are perfect! It seems that you are using a TensorRT EA version. EA stands for early access (a build released before the actual release); GA stands for general availability. TensorRT GA is the stable version, fully tested by the NVIDIA team. So could you try the latest GA release, TensorRT 8.2 GA Update 3 for the x86_64 architecture?
Where is the ppl.nn forward?
@IamNaQi , since the C++ TensorRT inference can be reproducibly verified, I guess TensorRT's Python interface does not support Windows well, so I think this issue has been solved and I'll close this thread for now. @xinsuinizhuan , thanks for your interest here; we don't support ppl.nn yet. We did have a pplnn branch before, but we found that the ONNX exported by yolort did not work properly on pplnn (#147). I'm not sure how well pplnn supports yolov5 (or yolort) now. I will create a new ticket for pplnn support later to keep this thread cleaner, or you can create one yourself if that is more convenient.
As described in NVIDIA/TensorRT#1945 (comment), TensorRT's Windows Python interface has a compatibility issue with PyTorch. Reopening this ticket because we should make
🐛 Describe the bug
Hi, I ran your notebook for Python inference on Windows 10:
https://github.com/zhiqwang/yolov5-rt-stack/blob/main/notebooks/onnx-graphsurgeon-inference-tensorrt.ipynb
but I could not get a good result. Here is the code sample I used from your notebook.
I have tried different thresholds, but I did not try other precisions since only fp32 is supported for now.
Output: the model is saved and the input shape can be shown.
**While predicting**
Error: it seems that it detects, but it returns empty tensors of size (0, 4).
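An empty `(0, 4)` box tensor usually just means that no candidate detection survived score filtering, which can happen when the engine's raw outputs are wrong (all scores near zero) rather than when the image truly contains nothing. A minimal numpy sketch of that filtering step (the names and threshold are illustrative, not yolort's actual API):

```python
import numpy as np

# Hypothetical raw detections: N x 4 boxes with one confidence score each.
boxes = np.array([[10, 10, 50, 50], [20, 20, 80, 80]], dtype=np.float32)
scores = np.array([0.01, 0.02], dtype=np.float32)

# Keep only boxes whose score reaches the confidence threshold.
conf_thresh = 0.25
keep = scores >= conf_thresh
filtered = boxes[keep]

# If every score falls below the threshold, the result is an empty (0, 4)
# tensor, matching the symptom described above.
print(filtered.shape)
```

So when debugging, it is worth printing the raw scores before filtering: uniformly tiny scores point at a broken engine output (for example, the pointer-binding issue on Windows), while a few near-threshold scores point at the threshold itself.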
Here is the output image:
![output](https://user-images.githubusercontent.com/76849182/163387164-c68232c7-0989-49d6-9dab-fae1082aae55.png)
Please help me out; I hope I have explained my issue well.
Versions
PyTorch version: 1.8.2+cu111
CUDA used to build PyTorch: 11.1
TensorRT version: 8.4.0.6 (CUDA device: 0)
OS: Microsoft Windows 10 Home
CMake version: version 3.23.0