
"AliasWithName is not a registered function/op" when run converted onnx model #1868

Closed
hieubz opened this issue Aug 5, 2020 · 5 comments

Comments

hieubz commented Aug 5, 2020

I used tools/deploy/caffe2_converter.py to convert my model to ONNX. But when I loaded this model with onnxruntime on Google Colab, it threw an exception:

[ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op

My code to load the onnx model:

```python
import onnxruntime
import torch


def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()


class ONNX_DETECT:

    def __init__(self, onnx_model_path, device):
        self.onnx_model_path = onnx_model_path
        self.device = device.upper()
        print(' ---- running on ', onnxruntime.get_device())

        self.detector = onnxruntime.InferenceSession(self.onnx_model_path)

    def detect(self, input_image):
        inputs = {self.detector.get_inputs()[0].name: to_numpy(input_image)}
        outputs = self.detector.run(None, inputs)
        return outputs


device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
detectron_model = ONNX_DETECT('/content/model.onnx', device.type)
```
Has anyone faced this issue?

ppwwyyxx (Contributor) commented Aug 5, 2020

As https://detectron2.readthedocs.io/modules/export.html#detectron2.export.Caffe2Tracer.export_onnx says:

> Note that the exported model contains custom ops only available in caffe2, therefore it cannot be directly executed by other runtime. Post-processing or transformation passes may be applied on the model to accommodate different runtimes.

so this is working as expected.

ppwwyyxx closed this as completed Aug 5, 2020
hieubz (Author) commented Aug 6, 2020

Could you please give me some ideas on how to deal with this? I'm still vague on it, especially on how to run detectron2 on onnxruntime.
Thanks.

hieubz (Author) commented Aug 6, 2020

I saw that there are 3 options when exporting the model (caffe2, onnx, torchscript). Why are the exported model's ops only available in caffe2? Why can the onnx exported model not run on onnxruntime without post-processing?

menchunlei commented

> I saw that there are 3 options when I export model (caffe2, onnx, torchscript), why exported model only available in caffe2 ? why onnx exported model can not run on onnxruntime but needs post-processing ?

@PhamDuyHieutb because the op isn't implemented in the backend of onnxruntime.

Muhammad-Talha-MT commented

I am facing the same issue. What are post-processing or transformation passes?

github-actions bot locked as resolved and limited conversation to collaborators Jun 1, 2022