
BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented. #997

Open
rtm-1dyakonov opened this issue Dec 29, 2021 · 21 comments

Comments

@rtm-1dyakonov

rtm-1dyakonov commented Dec 29, 2021

Right now I am trying to port the silero_vad model from ONNX format to TensorFlow with onnx_tf.

However, after calling .export_graph the following error occurs:

BackendIsNotSupposedToImplementIt: in user code:

File "c:\users\rtm51\downloads\onnx-tensorflow\onnx_tf\backend_tf_module.py", line 99, in __call__  *
    output_ops = self.backend._onnx_node_to_tensorflow_op(onnx_node,
File "c:\users\rtm51\downloads\onnx-tensorflow\onnx_tf\backend.py", line 347, in _onnx_node_to_tensorflow_op  *
    return handler.handle(node, tensor_dict=tensor_dict, strict=strict)
File "c:\users\rtm51\downloads\onnx-tensorflow\onnx_tf\handlers\handler.py", line 61, in handle  *
    raise BackendIsNotSupposedToImplementIt("{} version {} is not implemented.".format(node.op_type, cls.SINCE_VERSION))

BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented.

You can get the model here

@anshudaur

Hello,

I am also getting the same error while converting a YOLOv5 ONNX model to TensorFlow format:
BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented.

Can you please suggest how I can fix this error?

Thank you

@chinhuang007
Collaborator

Unfortunately the spec change for Unsqueeze version 13 has not been implemented. Contributions are certainly welcome!
In the meantime, you could export the model with ONNX opset 12, and the conversion should work.

@rtm-1dyakonov
Author

Do you mean that I first have to export my model to ONNX format with opset 12 and then use the current version of onnx-tensorflow?

@anshudaur

Do you mean that I first have to export my model to ONNX format with opset 12 and then use the current version of onnx-tensorflow?

Yes.

@Aayush2007

Aayush2007 commented Jan 21, 2022

I can't export my PyTorch model except with opset_version=14. And when I call export_graph I get the same error: "BackendIsNotSupposedToImplementIt: Squeeze version 13 is not implemented."

Currently, I'm on onnx-tf 1.9.0, onnx 1.10.2, tensorflow 2.7.0.

So I won't be able to go via ONNX opset 12. Is there any workaround possible for me?

@Valdiolus

I also have the same issue with yolov3.onnx model (from https://github.com/ultralytics/yolov3)

@EuphoriaYan

I have the same issue with fairseq transformer translate model.

@EuphoriaYan

When I try to export my PyTorch translation model to ONNX (opset_version=12), an error is raised:
RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Support for this operator was added in version 14, try exporting with this version.
So I have to export the translation model with opset_version=14. However, when I try to convert the ONNX model into a TensorFlow pb checkpoint, the same error as the author's occurs.

@ghost

ghost commented Apr 9, 2022

I experienced the same issue. The model I'm trying to convert to TF must use opset 13 when converted from PT -> ONNX.
Thanks so much to @krishnanNuance for working on it!! See here adding squeeze/unsqueeze for opset 13 👍

@ghost

ghost commented Apr 11, 2022

I think you can close this issue @chinhuang007 😊

@MarcoEsposito890

Hello, I am facing the same issue. Originally I had an onnx model with opset 14, then I downgraded it to both 12 and 13 but I keep getting the same error. What do I have to do to add squeeze/unsqueeze to the opsets?

@krishnanNuance
Contributor

Hello, I am facing the same issue. Originally I had an onnx model with opset 14, then I downgraded it to both 12 and 13 but I keep getting the same error. What do I have to do to add squeeze/unsqueeze to the opsets?

You need to add the changes from this branch: #1022
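Roughly, that means installing onnx-tensorflow from source with the PR branch checked out; something like the following (the local branch name pr-1022 is just an example):

```shell
# Example sketch: install onnx-tensorflow from source with the changes
# from PR #1022, instead of the PyPI release.
git clone https://github.com/onnx/onnx-tensorflow.git
cd onnx-tensorflow
git fetch origin pull/1022/head:pr-1022   # fetch the PR branch locally
git checkout pr-1022
pip install -e .                          # editable install from source
```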

@leeqiaogithub

When I try to export my PyTorch translation model to ONNX (opset_version=12), an error is raised: RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Support for this operator was added in version 14, try exporting with this version. So I have to export the translation model with opset_version=14. However, when I try to convert the ONNX model into a TensorFlow pb checkpoint, the same error as the author's occurs.

I was in a similar situation. I had to use opset 16 for the PT->ONNX conversion because the "grid_sample" op is only supported from opset 16. How did you solve the problem?

@leeqiaogithub

I can't export my PyTorch model except with opset_version=14. And when I call export_graph I get the same error: "BackendIsNotSupposedToImplementIt: Squeeze version 13 is not implemented."

Currently, I'm on onnx-tf 1.9.0, onnx 1.10.2, tensorflow 2.7.0.

So I won't be able to go via ONNX opset 12. Is there any workaround possible for me?

I was in a similar situation. I had to use opset 16 for the PT->ONNX conversion because the "grid_sample" op is only supported from opset 16. How did you solve the problem?

@leeqiaogithub

Currently, I'm on onnx-tf 1.10.0, onnx 1.13.0, tensorflow 2.6.0.

@johnkennyy

What should we do to solve this issue?
https://stackoverflow.com/questions/75969364/backendisnotsupposedtoimplementit-error-converting-onnx-to-tensorflow
"BackendIsNotSupposedToImplementIt Error: Converting ONNX to Tensorflow"

I saw a comment in the link below saying it was fixed on the ONNX developers' side. I'm running in Google Colab, but this issue still persists. I don't think I can build onnx_tf from source in Colab, as opposed to running 'pip install onnx_tf'.
#1022
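For what it's worth, a full source build shouldn't be necessary in Colab; pip can install straight from the repository (prefix the command with ! in a notebook cell):

```shell
# Example sketch: install onnx-tensorflow from the GitHub repository
# instead of PyPI, so fixes merged after the last release (such as
# PR #1022) are included.
pip install git+https://github.com/onnx/onnx-tensorflow.git
```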

@jianyuzzz

I experienced the same issue. The model I'm trying to convert to TF must use opset 13 when converted from PT -> ONNX. Thanks so much to @krishnanNuance for working on it!! See here adding squeeze/unsqueeze for opset 13 👍

Unfortunately the issue persists, even though I'm already using release v1.10.0, in which pull request #1022 should have been merged.

@summerisc

Same here: using onnx-tf 1.10.0 but still seeing this.

@Magnificent-01

I am in a similar situation. I had to use opset 16 for the PT->ONNX conversion because the "grid_sample" node is only supported in opset 16. Then the error BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented. was raised. How can I solve this?

@xedrion

xedrion commented May 2, 2024

Still facing the same issue with onnx-tf 1.10.0. I tried opsets 13, 14, and 16, but that does not solve the problem. I could not try opset 12 or lower because I get this error: RuntimeError: D:\a\onnx\onnx\onnx\onnx/version_converter/BaseConverter.h:70: adapter_lookup: Assertion false failed: No Adapter To Version $12 for Relu
What can I do?

@lzfff12

lzfff12 commented May 31, 2024

Currently, I'm on onnx-tf 1.10.0, onnx 1.13.0, tensorflow 2.6.0.

Have you solved this problem?
