BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented. #997
Hello, I am also getting the same error while converting a YOLOv5 ONNX model to TensorFlow format. Can you please suggest how I can fix this error? Thank you
Unfortunately the spec change for Unsqueeze version 13 has not been implemented. Contribution is certainly welcome!
Do you mean that I first have to export my model to ONNX format with opset 12 and then use the current version of onnx-tensorflow?
Yes.
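For anyone following along, the suggested workflow looks roughly like this. This is a minimal sketch, not a tested recipe: `MyModel`, the input shape, and the file names are placeholders for your own model.

```python
# Export from PyTorch at opset 12 (where Unsqueeze still matches the
# implemented spec), then convert to TensorFlow with onnx-tensorflow.
import torch
import onnx
from onnx_tf.backend import prepare

model = MyModel().eval()            # placeholder: your PyTorch model
dummy = torch.randn(1, 3, 640, 640) # placeholder: a valid input shape

# Pin the opset to 12 so the exported graph avoids opset-13 Unsqueeze
torch.onnx.export(model, dummy, "model.onnx", opset_version=12)

onnx_model = onnx.load("model.onnx")
tf_rep = prepare(onnx_model)        # onnx-tf backend representation
tf_rep.export_graph("model_tf")     # writes a TensorFlow SavedModel
```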
I can't export my PyTorch model except with opset_version=14. And when I try export_graph I get the same error: "BackendIsNotSupposedToImplementIt: Squeeze version 13 is not implemented." Currently I'm on onnx-tf 1.9.0, onnx 1.10.2, tensorflow 2.7.0, so I won't be able to run via ONNX opset 12. Is there any workaround possible for me?
I also have the same issue with a yolov3.onnx model (from https://github.com/ultralytics/yolov3)
I have the same issue with a fairseq transformer translation model.
When I try to export a PyTorch translation model to ONNX (opset_version=12), an error is raised:
I experienced the same issue. The model I'm trying to convert to TF must use opset 13 when converted from PT -> ONNX.
I think you can close this issue @chinhuang007 😊
Hello, I am facing the same issue. Originally I had an ONNX model with opset 14; I then downgraded it to both 12 and 13, but I keep getting the same error. What do I have to do to add Squeeze/Unsqueeze to the opsets?
You need to add the changes from this branch: #1022
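For the downgrading step mentioned above, the onnx package ships a version converter. A minimal sketch, assuming the file names are placeholders for your own paths:

```python
# Downgrade an existing ONNX model's opset using onnx.version_converter.
import onnx
from onnx import version_converter

model = onnx.load("model_opset14.onnx")   # placeholder path

# Target opset 12, where Unsqueeze/Squeeze still take axes as an attribute
converted = version_converter.convert_version(model, 12)

onnx.checker.check_model(converted)
onnx.save(converted, "model_opset12.onnx")
```

Note that the converter does not have adapters for every op, so for some models this step itself fails with an adapter_lookup assertion.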
I was in a similar situation. I had to use opset 16 for the pt -> onnx conversion because the "grid_sample" node is only supported in opset 16. How did you solve the problem?
Currently, I'm on onnx-tf 1.10.0, onnx 1.13.0, tensorflow 2.6.0.
What should we do to solve this issue? I can see a comment in the link below saying it was fixed on the ONNX developer side. I'm running in Google Colab, but this issue still persists. I don't think I can build onnx-tf from source in Colab instead of using 'pip install onnx_tf'.
Unfortunately the issue persists, even though I'm already using release v1.10.0, where the pull request #1022 should have been merged.
Same here: using onnx-tf 1.10.0 but still seeing this.
I was in a similar situation. I had to use opset 16 for the pt -> onnx conversion, because the "grid_sample" node is only supported in opset 16. Then the error BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented. was raised. How can I solve this?
Still facing the same issue using onnx-tf 1.10.0. I tried opsets 13, 14, and 16, but that does not solve the problem. I could not try opset 12 or lower because I get this error: RuntimeError: D:\a\onnx\onnx\onnx\onnx/version_converter/BaseConverter.h:70: adapter_lookup: Assertion
Have you solved this problem?
Right now I am trying to port the silero_vad model from ONNX format to TensorFlow with onnx_tf.
However, after .export_graph the following error occurs:
BackendIsNotSupposedToImplementIt: in user code:
You can get the model here
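Before attempting the conversion, it can help to check which opset the model actually declares, since that determines whether onnx-tf will hit the unimplemented Unsqueeze/Squeeze version 13. A small sketch; the file name is a placeholder:

```python
# Print the opset version(s) an ONNX model imports. If the default
# domain shows version 13 or higher, onnx-tf 1.10.0 can raise
# BackendIsNotSupposedToImplementIt on Unsqueeze/Squeeze.
import onnx

model = onnx.load("silero_vad.onnx")  # placeholder path
for imp in model.opset_import:
    print(imp.domain or "ai.onnx", imp.version)
```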