Help with use of onnx model and TensorRT #1
Comments
Thank you for your question! For now, I don't add the bbox_head to the ONNX export because it contains other ops that aren't supported in TensorRT.
Thanks for your answer. So I guess you have not yet used your ONNX model for inference / prediction?
Have you actually run your model with onnxruntime? Could you answer, please, @CarkusL?
I uploaded the scatterNDPlugin code for TensorRT, so you can try running inference with the ONNX model in TensorRT.
You can fix the input dimensions to [1, 10, 60000, 20] or [1, 10, 30000, 20] for example["voxels"]. If you don't have 30000 or 60000 pillars, you can zero-pad example["voxels"] and the coordinates.
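The padding step described above can be sketched as follows. This is a minimal illustration, not code from the repository: the function name `pad_pillars` and the example array shapes are assumptions, and the fixed pillar count is one of the values mentioned (30000).

```python
import numpy as np

def pad_pillars(voxels, coords, max_pillars=30000):
    """Zero-pad the pillar axis of voxels and coordinates up to a
    fixed count, so the ONNX/TensorRT graph sees a static input shape.

    voxels: (P, ...) array of pillar features
    coords: (P, ...) array of pillar coordinates
    Returns arrays whose first dimension is exactly max_pillars.
    """
    # Keep at most max_pillars pillars; pad the remainder with zeros.
    k = min(voxels.shape[0], max_pillars)
    padded_voxels = np.zeros((max_pillars,) + voxels.shape[1:], dtype=voxels.dtype)
    padded_coords = np.zeros((max_pillars,) + coords.shape[1:], dtype=coords.dtype)
    padded_voxels[:k] = voxels[:k]
    padded_coords[:k] = coords[:k]
    return padded_voxels, padded_coords
```

With a static shape like this, the padded tensors can be fed directly to the fixed-size ONNX input; the zero pillars contribute nothing after the scatter step.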
Hello @CarkusL, I haven't tested your code for exporting to ONNX yet, but congratulations. I tried to implement the same export to ONNX over the last few days, until I realized that exporting PointPillars as a whole model is difficult because of the PillarsScatter backbone.
Have you tried using your ONNX model in TensorRT, or what is the purpose of converting the model to ONNX in your case?
In my attempts the "ScatterND" operation was not supported in TensorRT, which is why I gave up.
Do you maybe have an idea how to do the same operation without ScatterND, some alternative?
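One commonly used alternative to ScatterND, sketched below, is to express the pillar scatter as a one-hot matrix multiplication, since MatMul is well supported in TensorRT. This is an illustrative sketch only, not code from CenterPoint; the function name `scatter_pillars_matmul` and the tensor layout (features as (C, P), coordinates as flattened canvas indices) are assumptions.

```python
import numpy as np

def scatter_pillars_matmul(features, flat_coords, ny, nx):
    """Scatter pillar features onto a BEV canvas without ScatterND.

    features:    (C, P) pillar features
    flat_coords: (P,)   flattened canvas index of each pillar, in [0, ny*nx)
    Returns a (C, ny, nx) canvas.

    Builds a (P, ny*nx) one-hot matrix and uses a single MatMul,
    which most inference backends (including TensorRT) support natively.
    """
    num_pillars = flat_coords.shape[0]
    one_hot = np.zeros((num_pillars, ny * nx), dtype=features.dtype)
    one_hot[np.arange(num_pillars), flat_coords] = 1.0
    canvas = features @ one_hot  # (C, ny*nx)
    return canvas.reshape(features.shape[0], ny, nx)
```

The trade-off is memory: the one-hot matrix is (P, ny*nx), which can be large for big canvases, so this works best with a modest fixed pillar count or a custom plugin as a fallback.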
I noticed that, in order to get the final results for training or inference, the functions here
CenterPoint/det3d/models/detectors/point_pillars.py
Line 56 in 4f2fa6d