SpaceToBatchND error: only support same blocksize at different dims #790
Comments
This operator is being updated, so the error message should disappear very shortly. ONNX does not support SpaceToBatchND directly, so it has to be composed from a combination of other primitive operators. The current implementation only supports an [n, n] block_shape, but it will be updated to support arbitrarily sized block_shapes and >4-D tensors.
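The composition the maintainer describes can be sketched in NumPy: a pad, a reshape, and a transpose reproduce `SpaceToBatchND` for a 4-D NHWC input, and the same recipe handles non-square block shapes (the case the fix adds). This is an illustrative sketch, not tf2onnx's actual implementation:

```python
import numpy as np

def space_to_batch_nd(x, block_shape, paddings):
    # Compose SpaceToBatchND from ONNX-style primitives (Pad, Reshape,
    # Transpose) for a 4-D NHWC tensor and a length-2 block_shape.
    n, h, w, c = x.shape
    bh, bw = block_shape
    (ph0, ph1), (pw0, pw1) = paddings
    # 1. Zero-pad the spatial dimensions.
    x = np.pad(x, [(0, 0), (ph0, ph1), (pw0, pw1), (0, 0)])
    hp, wp = h + ph0 + ph1, w + pw0 + pw1
    # 2. Split each spatial dim into (outer, block) factors.
    x = x.reshape(n, hp // bh, bh, wp // bw, bw, c)
    # 3. Bring the block factors in front of the batch dimension.
    x = x.transpose(2, 4, 0, 1, 3, 5)
    # 4. Fold the block factors into the batch dimension.
    return x.reshape(n * bh * bw, hp // bh, wp // bw, c)
```

On the example from the TensorFlow docs, a [1, 2, 2, 1] input with block_shape [2, 2] and zero padding yields a [4, 1, 1, 1] output containing 1, 2, 3, 4, matching `tf.space_to_batch_nd`.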
This should be fixed by #797. Can you check if you're still seeing a conversion failure?
Hi @jignparm! Thanks a lot for the fix. It seems to work, since I now get a different error in later layers. I can open a different bug for those.
Thanks for the verification. I'll close this out after merging. For the remaining errors, one option would be to share the model (assuming privacy is not a concern); otherwise another bug works as well.
Great! Unfortunately I'm not able to share the model so I'll put up another ticket for the other issue. Thanks a lot for the quick help :) |
Hi,

We are trying to convert our frozen network (a protobuf) to ONNX. The `tf2onnx` tool is called like this:

```
python -m tf2onnx.convert --input model.pb --inputs image:0 --outputs output1:0,output2:0 --opset 11 --verbose
```

However, the tool complains with the following message:

```
SpaceToBatchND error: only support same blocksize at different dims
```

This stems from creating a `Conv2D` layer. Is this expected?

Thanks!
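For background on why an ordinary `Conv2D` can trigger this operator at all: TensorFlow commonly lowers a dilated convolution into a `SpaceToBatchND` → stride-1 `Conv2D` → `BatchToSpaceND` sandwich, so the op appears in the frozen graph even though the model never calls it directly. A minimal NumPy sketch of the inverse `BatchToSpaceND` half (an illustration under that assumption, not the tf2onnx code):

```python
import numpy as np

def batch_to_space_nd(y, block_shape, crops):
    # Inverse of SpaceToBatchND for a 4-D NHWC tensor: unfold the block
    # factors out of the batch dim, interleave them back into H and W,
    # then crop the borders.
    bh, bw = block_shape
    nb, ho, wo, c = y.shape
    n = nb // (bh * bw)
    y = y.reshape(bh, bw, n, ho, wo, c)
    y = y.transpose(2, 3, 0, 4, 1, 5)  # -> [n, ho, bh, wo, bw, c]
    y = y.reshape(n, ho * bh, wo * bw, c)
    (ch0, ch1), (cw0, cw1) = crops
    return y[:, ch0:ho * bh - ch1, cw0:wo * bw - cw1, :]
```

Applied to the [4, 1, 1, 1] tensor containing 1, 2, 3, 4 with block_shape [2, 2] and zero crops, this recovers the original [1, 2, 2, 1] spatial layout [[1, 2], [3, 4]].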