SparseInst export to ONNX input data format #75
Comments
It was renamed. An ONNX file was already generated, if I guess right. Just ignore the error.
I actually want to run the ONNX conversion on my custom-trained model as well, which is why I'm looking to get the error fixed :/
The ONNX model is int8, which does not seem to work at the moment.
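If you want to check the dtype claim yourself, here is a minimal sketch using the onnx package; the file name is a placeholder for whatever export.py actually wrote out:

import onnx
from onnx import TensorProto

# Load the exported model (path is an assumption; point it at your file).
model = onnx.load("sparse_inst_r50_giam_aug.onnx")

# Print the declared element type of each graph input, so you can confirm
# whether the export really produced int8 inputs or something else.
for inp in model.graph.input:
    elem_type = inp.type.tensor_type.elem_type
    print(inp.name, TensorProto.DataType.Name(elem_type))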
Okay, thanks for the reply. I'll try to look for a fix myself then; I'll let you know if I manage to find anything.
It looks like the ONNX export works fine; it's the check that happens afterwards that fails. This happens because the forward pass through the SparseInst network expects different inputs in training than in evaluation.
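A common workaround for this kind of train/eval mismatch (not specific to this repo; the names below are assumptions) is to export a small wrapper module whose forward only contains the inference path, with the model switched to eval() so tracing never touches the training-only branches:

import torch

class InferenceWrapper(torch.nn.Module):
    """Wrap a model so ONNX tracing only sees the evaluation forward path."""

    def __init__(self, model):
        super().__init__()
        self.model = model.eval()  # eval() disables the training-only branches

    @torch.no_grad()
    def forward(self, images):
        # Call the model's inference entry point; this plain call is a
        # placeholder for whatever eval-time forward the repo actually uses.
        return self.model(images)

# Hypothetical usage: export the wrapper instead of the raw model.
# dummy = torch.randn(1, 3, 512, 512)
# torch.onnx.export(InferenceWrapper(model), dummy, "sparseinst.onnx", opset_version=11)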
The ONNX export already finished in
Then how can I test the ONNX inference logic without the rest of the code?
I am asking the same question as mauricewells.
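For what it's worth, once you have the .onnx file you can run it standalone with onnxruntime, with no detectron2 or SparseInst code at all. A minimal sketch; the file name, input shape, and dtype are assumptions, so adjust them to whatever the export actually produced (see the dtype check above):

import numpy as np
import onnxruntime as ort

# Open a session on the exported graph; CPU is enough for a smoke test.
sess = ort.InferenceSession("sparseinst.onnx", providers=["CPUExecutionProvider"])

# Feed a dummy image tensor matching the export's input name and shape.
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 512, 512).astype(np.float32)

# Run the full graph and inspect the raw outputs.
outputs = sess.run(None, {input_name: dummy})
for out_meta, out in zip(sess.get_outputs(), outputs):
    print(out_meta.name, out.shape, out.dtype)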
Hi,
I tried exporting the SparseInst (GIAM) weights from this repository to ONNX format using export.py with the following command (I assume this is the right command? The documentation says to use export_onnx.py, but there is no export_onnx.py in the current branch of this repository).
python export.py --config-file configs/coco/sparseinst/sparse_inst_r50_giam_aug.yaml --opts MODEL.WEIGHTS weights/base_giam.pth INPUT.MIN_SIZE_TEST 512
This leads to the following issue:
Any ideas on how to fix this?
Thank you.