[wasm runtime] Could not find an implementation for ArgMax(12) node with name 'ArgMax_1382' #9760
ORT supports types selectively to keep the binary size as small as possible. In the case of ArgMax, ORT supports only tensor(double), tensor(float), and tensor(int32), as described in https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md#cpuexecutionprovider. In your model, ArgMax takes an int64 input, so if you change the input type to float32, it should work properly.
@hanbitmyths Ah, I see, thank you. It would be helpful for newbies like me if the error message mentioned the data type; can I submit a pull request for this? Also, is there a tool in the ONNX ecosystem to do this conversion? I've been hacking on two scripts (float16-to-float32 and int64-to-int32), but they're very haphazard, and even after fixing the above bug (here's the new model file) I've run into more int64-related problems. Here's a video of Netron showing the data for the relevant nodes (the axis attribute of Gather is the only int64 datatype, but as previously mentioned the protobuf just says simplescreenrecorder-2021-11-15_22.19.41.mp4
Looks like ORT can print all supported schemas when it finds an unsupported one. In this ArgMax issue, we should print the types that are actually supported.
We do have a way to print that, but it is not at the default logging level. Here is the relevant PR: #3473. At the INFO/VERBOSE logging level, you should find such logging messages about ops that are supported but just missing type support. The default logging level is WARNING, and by default only logs of level >= WARNING are printed. I think the reason this can't be made a WARNING/ERROR-level log is that it is printed while searching within one kernel registry. At that point, we have no way of knowing whether a kernel in another kernel registry supports this type, and a WARNING/ERROR log could be misleading if that were the case. Anyway, I will leave it to the original implementer of this logging functionality (@pranavsharma) to see if he has a way to print such useful logging messages at the default logging level.
Hi, did you fix it? I ran into the same problem: ArgMax does not support int64. Could you tell me how you solved it? Thanks.
@cendelian As mentioned above, I used this to convert to int32: https://github.com/josephrocca/onnx-typecast/blob/master/convert-int64-to-int32.py But I've unfortunately run into more int64 related problems, also mentioned above. You might have better luck. |
I think I did the same thing: I converted the CLIP text model to ONNX and hit the same error. How can we get around this problem? I'm trying to deploy CLIP's ONNX model; are you?
I solved it: I only changed ArgMax's input to int32, everything else stays the same, and it works!
How did you do this? I'm looking at the onnx file for CLIP and I only see "INT" defined for the type of ArgMax. All of the fields with data_type that are changeable are the constants, implying there would be a lot to change.
How did you solve it?
Find the code that performs the ArgMax operation, implement ArgMax outside the model, and then pass the result to the network.
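The workaround of computing ArgMax outside the model amounts to exporting the model so it stops just before the ArgMax node, then doing the reduction on the host. A sketch with numpy; the commented-out session call and the tensor names are placeholders, not from this thread.

```python
import numpy as np

# logits = sess.run(None, {"input_ids": tokens})[0]  # placeholder: a model
# truncated to output the tensor that previously fed the ArgMax node.
logits = np.array([[0.1, 2.3, 0.5],
                   [1.0, 0.2, 0.9]], dtype=np.float32)

# Host-side replacement for the unsupported in-graph ArgMax.
indices = np.argmax(logits, axis=-1)  # -> one index per row
```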
@hanbitmyths Is there any chance that some official conversion tools could be developed to help with issues like this? Or some automatic casting at runtime (perhaps only if an option is set)? |
Having an official tool to get around such conversion issues would be great. |
Is your feature request related to a problem? Please describe.
I've exported OpenAI's CLIP text encoder to ONNX with this command (as shown in this Colab notebook):
You can see here: https://josephrocca.github.io/openai-clip-js/onnx-text-demo.html that it's throwing an error when trying to create a session (
Could not find an implementation for ArgMax(12) node with name 'ArgMax_1382'
). Just click that link, then click the "start" button, and open the browser console to see the error. Here's a direct link to the onnx file: https://huggingface.co/rocca/openai-clip-js/resolve/main/clip-text-vit-32-float32.onnx
I'm guessing this means that ArgMax is not yet supported by the wasm runtime?
System information
Describe the solution you'd like
Support for ArgMax, and the other operators in the above-mentioned onnx file, in case there are other operators that are also missing from the wasm runtime.
Describe alternatives you've considered
Not sure of any alternatives here.
Additional context
I've made a CLIP image encoding demo and it works fine. It was exported using the same command.
Here's the error stack (apologies for the screenshot - not sure how to copy-paste Chrome DevTools logs without it getting mangled):