This issue was moved to a discussion. You can continue the conversation there.
How can I rename the input and output layer names of the TorchScript model? #154
Comments
Hi @CPFelix, as far as I can recall,
@zhiqwang Can I modify the names when exporting the model? The layer names seem to be generated automatically.
Seems a bit tricky; I need to check the documentation for exporting
Besides, it doesn't care about the names of the input and output layers if you are using
For Triton, model input and output names must follow the pattern "name__index"; otherwise it will raise an error:
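For reference, here is a minimal sketch of what a config.pbtxt following that convention could look like for a TorchScript model served by Triton's PyTorch (libtorch) backend. The model name, dims, and data types below are assumptions for illustration; only the "name__index" pattern (e.g. INPUT__0, OUTPUT__0) is the fixed naming requirement.

```
name: "yolov5s"
platform: "pytorch_libtorch"
max_batch_size: 1
input [
  {
    name: "INPUT__0"        # "name__index" pattern; index is the positional
    data_type: TYPE_FP32    # argument index of the TorchScript forward()
    dims: [ 3, 640, 640 ]   # assumed input shape for illustration
  }
]
output [
  {
    name: "OUTPUT__0"       # same convention for outputs
    data_type: TYPE_FP32
    dims: [ -1, 85 ]        # assumed output shape for illustration
  }
]
```

With this backend the tensors are resolved by the trailing index, so the names the exporter baked into the TorchScript graph do not need to change.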
@zhiqwang Also, I find that the ONNX model is slower than the .pt one. In my test, yolov5s.pt gets 76.18 FPS, while yolov5s.onnx gets 24.78 FPS.
I think this is possible. Judging model inference speed requires more information, such as
I used batch_size=1 and the same GPU to test both models.
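For a fair comparison, throughput should be measured with warm-up iterations, the same batch size, and (on GPU) explicit synchronization before reading the clock. Below is a minimal stdlib-only sketch of such a harness; `run_inference` is a placeholder standing in for one forward pass of the real model (e.g. a lambda wrapping `model(input)`), and the function names and defaults are my own, not from either project.

```python
import time

def measure_fps(run_inference, n_warmup=10, n_iters=100, batch_size=1):
    """Measure throughput (frames/s) of a zero-argument inference callable.

    On GPU you would also need to synchronize (e.g. torch.cuda.synchronize())
    before reading the clock, otherwise the timing only covers kernel launch.
    """
    for _ in range(n_warmup):      # warm-up: JIT compilation, caches, etc.
        run_inference()
    start = time.perf_counter()
    for _ in range(n_iters):
        run_inference()
    elapsed = time.perf_counter() - start
    return n_iters * batch_size / elapsed

# usage with a dummy "model" that just burns a little CPU
fps = measure_fps(lambda: sum(range(10_000)))
print(f"{fps:.1f} FPS")
```

Comparing the two formats with the same harness, batch size, and input resolution rules out measurement artifacts before blaming the runtime.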
❓ Questions and Help
I used export.py to get the TorchScript model for yolov5s, but the input layer name is "x" and the model can't be loaded by Triton. So I want to modify the name of the input layer. Thanks!