
[Feature] Export to ONNX SAM image encoder #51

Closed
WaterKnight1998 opened this issue Jul 27, 2023 · 1 comment
@WaterKnight1998

Thank you very much for this incredible model.

I was looking at your guide for exporting the model to ONNX, and I didn't understand why you don't export the SAM image encoder to ONNX as well. I think it is because you are executing the ONNX graph with onnxruntime on the CPU.

However, it would be nice to have the encoder in ONNX for serving with Triton Inference Server on the CUDA backend.
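For context, this is roughly what the provider choice looks like in onnxruntime; a minimal sketch, assuming a hypothetical exported encoder file sam_image_encoder.onnx and the onnxruntime-gpu package:

```python
import numpy as np
import onnxruntime as ort

# Request the CUDA execution provider first, falling back to CPU if it is
# unavailable (a plain InferenceSession otherwise runs on CPU).
session = ort.InferenceSession(
    "sam_image_encoder.onnx",  # hypothetical exported encoder
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# SAM's encoder expects a preprocessed 1024x1024 RGB image; the input
# name "image" is an assumption and must match whatever the export used.
image = np.random.randn(1, 3, 1024, 1024).astype(np.float32)
embeddings = session.run(None, {"image": image})[0]
print(embeddings.shape)
```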

@ymq2017 (Collaborator) commented Jul 27, 2023

Hi, thanks for your interest in our work.
We follow the original SAM's approach, which only exports the decoder. I think it is for the CPU reason you mentioned. If you want to export the encoder to ONNX, this pull request may help.
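For reference, a minimal sketch of such an export, assuming the standard segment_anything API (sam_model_registry), a local ViT-H checkpoint, and a recent opset; depending on the PyTorch and opset versions, some encoder ops may not export cleanly without patches:

```python
import torch
from segment_anything import sam_model_registry

# Hypothetical local checkpoint path; adjust to your setup.
checkpoint = "sam_vit_h_4b8939.pth"
sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
sam.eval()

# SAM's image encoder takes a preprocessed 1024x1024 RGB image and
# returns dense image embeddings.
dummy = torch.randn(1, 3, 1024, 1024)

torch.onnx.export(
    sam.image_encoder,            # export only the ViT image encoder
    dummy,
    "sam_image_encoder.onnx",
    input_names=["image"],
    output_names=["image_embeddings"],
    opset_version=17,
)
```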

@lkeab closed this as completed Aug 15, 2023