I was looking at your guide for exporting the model to ONNX, and I didn't understand why you don't export the SAM image encoder to ONNX as well. I think it's because you execute the ONNX graph with onnxruntime on CPU.
However, it would be nice to have the encoder exported too, so it could be served from Triton Inference Server with the CUDA backend.
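For what it's worth, a quick way to confirm which execution provider onnxruntime actually picks is to ask the session directly. A minimal sketch, where the file name is just a placeholder for whatever decoder graph the guide produces:

```python
import onnxruntime as ort

# "sam_decoder.onnx" is a placeholder for whatever graph the export guide produces.
session = ort.InferenceSession(
    "sam_decoder.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# With the CPU-only onnxruntime package this prints ["CPUExecutionProvider"];
# the onnxruntime-gpu package is needed for CUDAExecutionProvider to be active.
print(session.get_providers())
```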
Hi, thanks for your interest in our work.
We follow the original SAM's approach and export only the decoder; I think it's for the CPU reason you mentioned. If you want to export the encoder to ONNX, this pull request may help.
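In case it helps in the meantime, here is a minimal sketch of exporting the encoder yourself with plain torch.onnx.export. It assumes the original segment_anything package; the model variant, checkpoint path, and opset version are placeholders you would adjust for this repo:

```python
import torch
from segment_anything import sam_model_registry

# "vit_h" and the checkpoint file are placeholders; substitute this repo's weights.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.eval()

# SAM's image encoder expects a preprocessed 1024x1024 RGB batch.
dummy_image = torch.randn(1, 3, 1024, 1024)

with torch.no_grad():
    torch.onnx.export(
        sam.image_encoder,
        dummy_image,
        "sam_image_encoder.onnx",
        input_names=["image"],
        output_names=["image_embeddings"],
        opset_version=17,
    )
```

The resulting sam_image_encoder.onnx should then be servable from Triton's ONNX Runtime backend with the CUDA execution provider.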
Thank you very much for this incredible model.