tensorrt #24
Hi, after converting the original PyTorch model to the ONNX model, you don't need to further convert it to a TensorRT model.
Thanks for the amazing work. I was interested in converting the original stark_st2 model to ONNX, and then preferably to TensorRT. I couldn't find instructions for those steps. Thanks in advance.
@trathpai Hi, thanks for the appreciation of our work. ONNXRuntime has a TensorRT execution provider. This link (https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html) may give you some help. But in my experience, the TensorRT execution provider does not bring much speed improvement compared with the CUDA execution provider.
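The advice above can be sketched in code: ONNX Runtime lets you pass a provider preference list to `InferenceSession`, so TensorRT is used when available and CUDA/CPU serve as fallbacks. This is a minimal sketch, assuming an `onnxruntime-gpu` build with TensorRT support; the model filename `stark_st2.onnx` and the `trt_fp16_enable` option value are illustrative assumptions, not from this thread.

```python
# Preference order for execution providers: TensorRT first, then CUDA, then
# CPU. Entries may be bare provider names or (name, options) pairs, as
# accepted by onnxruntime.InferenceSession.
preferred_providers = [
    ("TensorrtExecutionProvider", {"trt_fp16_enable": True}),  # option value is an assumption
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]


def make_session(model_path, providers=preferred_providers):
    """Create an InferenceSession with the given provider preference list.

    Raises ImportError if onnxruntime is not installed; onnxruntime itself
    silently falls back to the next provider in the list if one is missing.
    """
    import onnxruntime as ort  # requires onnxruntime-gpu for TensorRT/CUDA

    return ort.InferenceSession(model_path, providers=providers)


if __name__ == "__main__":
    try:
        # "stark_st2.onnx" is a hypothetical path to the exported model.
        session = make_session("stark_st2.onnx")
        print("active providers:", session.get_providers())
    except Exception as exc:
        print("could not create session:", exc)
```

`session.get_providers()` reports which providers were actually activated, which is a quick way to confirm TensorRT was picked up rather than silently skipped.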
Hello,
I have followed your steps to convert the baseline stark_st2 model to ONNX successfully.
Now I want to convert the ONNX model to a TensorRT model; can you give me some advice?
If possible, please list your relevant environment (TensorRT/CUDA/etc.).
Thank you sincerely!
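For the standalone ONNX-to-TensorRT conversion asked about above, one common route is the TensorRT Python API (NVIDIA's `trtexec` command-line tool does the same job). The sketch below targets the TensorRT 8.x API; the filenames are placeholders and a TensorRT installation matching your CUDA version is assumed, so treat it as a starting point rather than tested instructions for this repo.

```python
def build_engine(onnx_path, engine_path, fp16=True):
    """Parse an ONNX model and serialize a TensorRT engine to disk."""
    import tensorrt as trt  # ships with the TensorRT installation

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Networks parsed from ONNX must use the explicit-batch flag.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("failed to parse ONNX file: %s" % onnx_path)

    config = builder.create_builder_config()
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # optional half-precision build

    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)


if __name__ == "__main__":
    try:
        # Hypothetical filenames for the exported stark_st2 model.
        build_engine("stark_st2.onnx", "stark_st2.engine")
    except Exception as exc:
        print("engine build skipped:", exc)
```

The serialized engine is hardware-specific: it must be rebuilt for each GPU model and TensorRT version, which is one reason the maintainers suggest staying with ONNX Runtime's TensorRT execution provider instead.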