
[wasm runtime] Could not find an implementation for ArgMax(12) node with name 'ArgMax_1382' #9760

Open
josephrocca opened this issue Nov 14, 2021 · 13 comments
Labels
feature request request for unsupported feature or enhancement

Comments

@josephrocca
Contributor

Is your feature request related to a problem? Please describe.
I've exported OpenAI's CLIP text encoder to ONNX with this command (as shown in this Colab notebook):

```python
torch.onnx.export(
    model, text_tokens, "clip-text-vit-32.onnx",
    export_params=True, opset_version=12, do_constant_folding=True,
    input_names=['input'], output_names=['output'],
    dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}})
```

You can see here: https://josephrocca.github.io/openai-clip-js/onnx-text-demo.html that it's throwing an error when trying to create a session (Could not find an implementation for ArgMax(12) node with name 'ArgMax_1382'). Just click that link, and then click the "start" button, and open the browser console to see the error.

Here's a direct link to the onnx file: https://huggingface.co/rocca/openai-clip-js/resolve/main/clip-text-vit-32-float32.onnx

I'm guessing this means that ArgMax is not yet supported by the wasm runtime?

System information

Describe the solution you'd like
Support for ArgMax in the wasm runtime, and for any other operators used by the above-mentioned ONNX file that might also be missing.

Describe alternatives you've considered
Not sure of any alternatives here.

Additional context
I've made a CLIP image encoding demo and it works fine. It was exported using the same command.

Here's the error stack (apologies for the screenshot - not sure how to copy-paste Chrome DevTools logs without it getting mangled):
[screenshot of the error stack]

@hanbitmyths
Contributor

ORT supports types selectively to keep the binary size as small as possible. For ArgMax, ORT supports only tensor(double), tensor(float), and tensor(int32), as described in https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md#cpuexecutionprovider.

In your model, the ArgMax input is of type int64, so if you change the input type to float32, it should work properly.

@josephrocca
Contributor Author

@hanbitmyths Ah, I see, thank you. It would be helpful for newbies like me if the error message mentioned the data type - can I submit a pull request for this?

Also, is there a tool in the ONNX ecosystem to do this conversion? I've been hacking on two scripts (float16-to-float32 and int64-to-int32), but they're very haphazard, and even after fixing the above bug (here's the new model file) I've run into Type Error: Type parameter (T) of Optype (Div) bound to different types (tensor(int64) and tensor(int32) in node (Div_27). Netron says the axis value of the Gather_25 op that feeds into that Div is int64, but the ONNX protobuf just says "INT": attribute { name: "axis", i: 0, type: INT }, and an AttributeProto.INT32 type doesn't exist. I wouldn't have thought the axis attribute would affect the output type in any case.

Here's a video of netron showing the data for the relevant nodes (the axis attribute of Gather is the only int64 datatype, but as previously mentioned the protobuf just says INT):

simplescreenrecorder-2021-11-15_22.19.41.mp4

@hariharans29 hariharans29 added the feature request request for unsupported feature or enhancement label Nov 15, 2021
@wschin
Contributor

wschin commented Nov 15, 2021

It looks like ORT could print all supported schemas whenever it finds an unsupported one. For this ArgMax issue, we should print something like:

Encounter unsupported ArgMax in opset ?? with input types to (int64,) and output types (int64,)
Found supported ArgMax(in=float,out=int64), ArgMax(in=double,out=int64) in opset ??, ArgMax(in=float,out=int64), ArgMax(in=double,out=int64) in opset ??, ...

@hariharans29
Member

hariharans29 commented Nov 15, 2021

We do have a way to print that but it is not at the default logging level.

Here is the relevant PR: #3473. At the INFO/VERBOSE logging levels, you should find such logging messages about ops that are supported but are just missing type support. The default logging level is WARNING, and only logs at level >= WARNING are printed.

I think the reason this can't be made a WARNING/ERROR-level log is that it is printed while searching within one kernel registry. At that point, we have no way of knowing whether a kernel in another registry supports this type, so a WARNING/ERROR log could be misleading if one does. Anyway, I will leave it to the original implementer of this logging functionality (@pranavsharma) to see if he has a way to print such useful logging messages at the default logging level.

@dengfenglai321
Copy link

Hi, did you fix it? I ran into the same problem: ArgMax does not support int64. Could you tell me how to solve it? Thanks.

@josephrocca
Contributor Author

@cendelian As mentioned above, I used this to convert to int32: https://github.com/josephrocca/onnx-typecast/blob/master/convert-int64-to-int32.py But I've unfortunately run into more int64 related problems, also mentioned above. You might have better luck.

@dengfenglai321

> @cendelian As mentioned above, I used this to convert to int32: https://github.com/josephrocca/onnx-typecast/blob/master/convert-int64-to-int32.py But I've unfortunately run into more int64 related problems, also mentioned above. You might have better luck.

I think I did the same thing: I converted CLIP's text model to ONNX and hit the same error, and I also unfortunately ran into more int64-related problems. Did you solve all the problems with CLIP's text model?

How can we get around this problem?

I'm trying to use CLIP's ONNX model for deployment. And you?

@dengfenglai321

I solved it: change only ArgMax's input to int32, keep everything else the same, and it works!

@nbardy

nbardy commented Jan 1, 2022

> https://github.com/josephrocca/onnx-typecast/blob/master/convert-int64-to-int32.py

How did you do this? I'm looking at the ONNX file for CLIP, and I only see "INT" defined for the type of ArgMax:

[screenshot]

All of the fields with a changeable data_type are the constants, implying there would be a lot to change:

[screenshot]

@lonngxiang

> I solved it: change only ArgMax's input to int32, keep everything else the same, and it works!

how to solve it?

@dengfenglai321

> I solved it: change only ArgMax's input to int32, keep everything else the same, and it works!
>
> how to solve it?

Find the ArgMax operation in the code, implement the ArgMax outside the model, and then pass the result to the network.
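One workaround suggested above is to compute the argmax on the host after (or alongside) inference rather than inside the exported graph. A numpy sketch; "logits" here is a stand-in for whatever tensor fed the removed ArgMax node:

```python
# Compute argmax outside the ONNX model, avoiding the unsupported int64 kernel.
import numpy as np

logits = np.array([[0.1, 0.9, 0.3],
                   [2.0, 0.5, 1.0]], dtype=np.float32)
# numpy returns int64 indices, but that's fine because this runs on the host,
# not inside the wasm runtime.
positions = np.argmax(logits, axis=-1)
print(positions)  # [1 0]
```

In CLIP's text encoder the ArgMax is used to pick out one token position per sequence, so the selected indices would then be fed back in (or used to index the model's output) on the host side.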

@josephrocca
Contributor Author

@hanbitmyths Is there any chance that some official conversion tools could be developed to help with issues like this? Or some automatic casting at runtime (perhaps only if an option is set)?

@ramkrishna1121

Having an official tool to get around such conversion issues would be great.


10 participants