
How can I rename the input and output layer names of the TorchScript model? #154

Closed
CPFelix opened this issue Sep 1, 2021 · 8 comments
Labels: question (Further information is requested)

CPFelix commented Sep 1, 2021

❓ Questions and Help

I used export.py to get the TorchScript model for yolov5s, but the input layer name is "x" and it can't be loaded by Triton. So I want to modify the name of the input layer. Thanks!

CPFelix added the question label on Sep 1, 2021
zhiqwang (Owner) commented Sep 1, 2021

Hi @CPFelix,

As far as I can recall, TorchScript does not provide an interface to rename the input and output layers; their names are fixed once you export the TorchScript model. You can check the names via https://netron.app/.
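For reference, the same names can also be inspected directly in Python; a minimal sketch, assuming the exported file is called yolov5s.torchscript.pt (the path is just an example):

```python
import torch

# Path is just an example; use the file produced by export.py.
model = torch.jit.load("yolov5s.torchscript.pt")

# The first graph input is the module itself (`self`); the rest are the
# actual tensor inputs, e.g. `x` for the exported yolov5s model.
print([inp.debugName() for inp in model.graph.inputs()])
print([out.debugName() for out in model.graph.outputs()])
```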

CPFelix (Author) commented Sep 1, 2021

@zhiqwang Can I modify the name when building the model? The layer names seem to be auto-generated.

zhiqwang (Owner) commented Sep 1, 2021

That seems a bit tricky; I need to check the documentation on exporting TorchScript more carefully.

zhiqwang (Owner) commented Sep 1, 2021

Besides, the names of the input and output layers don't matter if you are using libtorch. (I don't have much experience with Triton.)

https://github.com/zhiqwang/yolov5-rt-stack/blob/76096edb32b71981552cc68e61b1e0026d2c74ff/deployment/libtorch/main.cpp#L197

CPFelix (Author) commented Sep 1, 2021

For Triton, the model input and output names must follow the pattern "name__index"; otherwise it reports an error:

    UNAVAILABLE: Internal: input 'x' does not follow naming convention i.e. name__index.

Thank you for your earnest reply.
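For reference, one possible workaround (a sketch under assumptions, not something confirmed in this thread): when tracing, the graph input names are taken from the forward() parameter names, so wrapping the model in a module whose forward argument is called input__0 should yield a Triton-friendly name. The wrapper class, the model variable, and the dummy input shape below are all hypothetical:

```python
import torch
from torch import nn


class TritonWrapper(nn.Module):
    """Thin wrapper so the traced graph input is named `input__0`."""

    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, input__0: torch.Tensor):
        # Pass straight through; only the argument name matters here.
        return self.model(input__0)


# `model` is assumed to be the yolov5s nn.Module you already load in export.py;
# the dummy input shape is also an assumption.
wrapped = TritonWrapper(model).eval()
traced = torch.jit.trace(wrapped, torch.rand(1, 3, 640, 640))
traced.save("model.pt")  # e.g. model_repository/yolov5s/1/model.pt for Triton
```

Alternatively, if I remember correctly, Triton's PyTorch backend maps inputs by the index parsed from the names in config.pbtxt (e.g. INPUT__0), so following the convention in the config alone may already be enough.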

CPFelix (Author) commented Sep 1, 2021

@zhiqwang Also, I find that the ONNX model is slower than the .pt one. In my test, yolov5s.pt gets 76.18 fps, while yolov5s.onnx gets 24.78 fps.
Is that normal?

zhiqwang (Owner) commented Sep 1, 2021

> In my test, yolov5s.pt gets 76.18 fps, while yolov5s.onnx gets 24.78 fps. Is that normal?

I think that is possible. Judging model inference speed requires more information, such as the batch size, whether the runtime is actually using an optimized backend for your device, and so on.
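For example, one common cause of such a gap is onnxruntime silently falling back to CPU; a quick check (a sketch, assuming onnxruntime-gpu is installed and the file is named yolov5s.onnx):

```python
import onnxruntime as ort

# Request the CUDA provider first, falling back to CPU if it is unavailable.
session = ort.InferenceSession(
    "yolov5s.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# If only CPUExecutionProvider is listed here, the ONNX benchmark ran on CPU.
print(session.get_providers())
```

A timing comparison is only meaningful once both runs are confirmed to be on the same device.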

CPFelix (Author) commented Sep 1, 2021

I use batch_size=1 and the same GPU to test the two models.

Repository owner locked and limited conversation to collaborators Sep 1, 2021
zhiqwang closed this as completed Sep 1, 2021

This issue was moved to a discussion.

You can continue the conversation there.
