Optimizer is broken for new PyTorch exports and segfault in onnx.checker #2417
If I export using
+1. It is because the current code (https://github.com/onnx/onnx/blob/master/onnx/common/ir_pb_converter.cc#L230-L237) only adds the graph proto's inputs, but not its initializers, into the map. I think it is a very critical bug: it affects all models exported from PyTorch 1.3.
It's not just a PyTorch version issue. I converted a model with keras2onnx and then used onnxsim to optimize away Identity nodes, and hit the same problem.
It is not so easy to fix. #2247 contains a failed attempt. @prasanthpul @linkerzhang Is there a plan from the ONNX maintainers to fix it?
Would a lower version of Keras work?
Maybe :) You can give it a try.
@dashesy If I run the checker on the original exported model (without adding keep_initializers_as_inputs=True) it's happy. Is the checker only unhappy with the model after you attempt to run the optimizers?

FWIW onnxruntime has working implementations of the optimizations you're attempting, and in the latest version it can save an updated model post-optimization: set SessionOptions.optimized_model_filepath before loading the model and it will write the optimized ONNX model out to that path. It will check that an initializer is actually constant before fusing, and can also search parent graphs for initializers in order to optimize subgraphs in control-flow nodes.

It will take a fairly significant overhaul of the IR and optimizer setup in onnx to make the implementations there correct, so onnxruntime is maybe your best short-term option.
onnx-simplifier now also supports the case where initializers are not in the inputs :) It generates optimized and clean ONNX models.
@daquexian I actually rely on
@skottmckay I tried:

```python
sess_options = rt.SessionOptions()
# sess_options.graph_optimization_level = rt.GraphOptimizationLevel.ORT_ENABLE_EXTENDED
sess_options.optimized_model_filepath = "/mnt/output/gr/model_optimzied.onnx"
sess = rt.InferenceSession(onnxfile, sess_options)
```

The model is twice the size, but I still get these errors (which is what I used onnx.optimizer for):

I used to be able to fix these using the onnx optimizer, but now that is broken.
@dashesy There is a gap in how ORT handles initializers that become redundant during optimizations. microsoft/onnxruntime#2320 should address that.
@skottmckay I applied that PR as a patch against current master, and it works! I used Which is all I wanted to do.
Excellent. One note in case it's relevant: FusedConv is not an official ONNX operator, so this saved model can only be run by ORT. If you need a model that conforms to the ONNX spec, a lower optimization level would need to be used. For this model that would still mean Conv + BN gets fused and unused initializers removed, but the (Conv + BN result) + Relu fusion that is handled by FusedConv would be missing.
This issue deserves the highest priority to resolve, as it affects many models, and we cannot make every user or even every library (e.g., the caffe2 ONNX backend, #2458) use onnxruntime just for optimizing :( What's your opinion on it?
Thank you @daquexian for investigating the issue and finding the root cause! Sorry that I didn't chime in on this topic earlier. Yes, this is indeed an issue with the ONNX IR (C++) and the optimizer in the ONNX repo. As the ONNX spec (model format and op spec) keeps moving on, the ONNX IR (C++) and optimizers have not been maintained properly. That said, the ONNX IR (C++) and optimizers in the current repo are not taken as part of the ONNX standard right now. So there are two options for us (the full community):

Meanwhile, I do think that ONNX Runtime maintains a fair list of optimizers better, so you may choose that. The ONNX Runtime team will make the optimizer library more general and easier to use.
Please note that ONNX optimizer has been moved to another repo https://github.com/onnx/optimizer since ONNX 1.9. If you still have questions related to the optimizer, please raise an issue there. Thank you! |
First export a model (as I did for this issue) in the latest PyTorch.
Now try the optimizer.
And you get this cryptic error message:
And the checker segfaults!
Related to issue #1385, but with a repro.