This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

import_onnx.py parser for onnx opset >= 9 has bug #16590

Open
deHsien opened this issue Oct 23, 2019 · 7 comments

Comments

@deHsien

deHsien commented Oct 23, 2019

The parser raises a KeyError because a node input name cannot be found:

File "/usr/local/lib/python2.7/dist-packages/mxnet/contrib/onnx/onnx2mx/import_onnx.py", line 115, in from_onnx
    inputs = [self._nodes[i] for i in node.input]
KeyError: u'Input/sub/y:0'

The model runs fine with onnxruntime.
Moreover, models exported with opset <= 8 import fine as well.

@deHsien deHsien added the Bug label Oct 23, 2019
@deHsien deHsien changed the title import_onnx.py node name not found import_onnx.py parser for onnx opset >= 9 has bug Oct 24, 2019
@zachgk zachgk added the ONNX label Nov 7, 2019
@samskalicky
Contributor

@lanking520 assign @oorqueda

@oavision7946

Please assign it to me until I find an owner

@kice
Contributor

kice commented Dec 9, 2019

UPDATE:

I think MXNet should look at the initializers first. I checked the documentation for exporting ONNX models from PyTorch, and found that I have to set keep_initializers_as_inputs=True for the MXNet import to work.

Doc for torch.onnx.export

keep_initializers_as_inputs (bool, default None) – If True, all the initializers (typically corresponding to parameters) in the exported graph will also be added as inputs to the graph. If False, then initializers are not added as inputs to the graph, and only the non-parameter inputs are added as inputs. This may allow for better optimizations (such as constant folding etc.) by backends/runtimes that execute these graphs. If unspecified (default None), then the behavior is chosen automatically as follows. If operator_export_type is OperatorExportTypes.ONNX, the behavior is equivalent to setting this argument to False. For other values of operator_export_type, the behavior is equivalent to setting this argument to True.
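A minimal sketch of the fallback suggested above (hypothetical function and variable names, not MXNet's actual code): resolve each node input against the already-parsed nodes first, and only then against the graph's initializer table, instead of assuming every weight appears in graph.input.

```python
# Hypothetical sketch of the suggested fix: fall back to the graph's
# initializer table when an input name is not among the parsed nodes.
def resolve_inputs(node_input_names, nodes, initializers):
    resolved = []
    for name in node_input_names:
        if name in nodes:
            resolved.append(nodes[name])
        elif name in initializers:
            # Weight/bias stored as an initializer rather than a graph input
            # (i.e. exported with keep_initializers_as_inputs=False).
            resolved.append(initializers[name])
        else:
            raise KeyError(name)
    return resolved
```

With this fallback, a name like 'head.0.weight' that lives only in graph.initializer would still resolve instead of raising the KeyError above.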

@oorqueda Would you like to fix it?


In my case, I used PyTorch to export ONNX with an opset lower than 8, and the model still cannot be imported into MXNet.

>>> mxnet.__version__
'1.6.0'
>>> torch.__version__
'1.3.1'
>>> onnx.__version__
'1.6.0'
onnx_model = onnx.load(model_file)
for node in onnx_model.graph.node:
    for i in node.input:
        print(i)

Here is the output

data
head.0.weight
head.0.bias
19
body.0.body.0.weight
body.0.body.0.bias
20
21
body.0.body.2.weight
body.0.body.2.bias
22
19
23
body.1.body.0.weight
body.1.body.0.bias
24
25
body.1.body.2.weight
body.1.body.2.bias
26
23
27
body.2.weight
body.2.bias
28
19
29
tail.0.0.weight
tail.0.0.bias
31
30
43
32
34
33
44
35
tail.0.2.weight
tail.0.2.bias
37
36
45
38
40
39
46
41
tail.1.weight
tail.1.bias

Here is the error when I import the model.

c:\program files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_model.py in import_model(model_file)
     57     # loads model file and returns ONNX protobuf object
     58     model_proto = onnx.load_model(model_file)
---> 59     sym, arg_params, aux_params = graph.from_onnx(model_proto.graph)
     60     return sym, arg_params, aux_params
     61 

c:\program files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_onnx.py in from_onnx(self, graph)
    113             node_name = node_name if node_name else None
    114             onnx_attr = self._parse_attr(node.attribute)
--> 115             inputs = [self._nodes[i] for i in node.input]
    116             mxnet_sym = self._convert_operator(node_name, op_name, onnx_attr, inputs)
    117 

c:\program files\python37\lib\site-packages\mxnet\contrib\onnx\onnx2mx\import_onnx.py in <listcomp>(.0)
    113             node_name = node_name if node_name else None
    114             onnx_attr = self._parse_attr(node.attribute)
--> 115             inputs = [self._nodes[i] for i in node.input]
    116             mxnet_sym = self._convert_operator(node_name, op_name, onnx_attr, inputs)
    117 

KeyError: 'head.0.weight'

I even tried adding the model's parameter names to the input names for the PyTorch export:

opset = 7
input_names = ['data'] + list(model.state_dict().keys())
torch.onnx.export(model,                     # model being run
    x,                         # model input (or a tuple for multiple inputs)
    onnx_name,                 # where to save the model (can be a file or file-like object)
    export_params=True,        # store the trained parameter weights inside the model file
    opset_version=opset,       # the ONNX version to export the model to
    do_constant_folding=True,  # whether to execute constant folding for optimization
    input_names = input_names, # the model's input names
    output_names = ['output']  # the model's output names
)

@lilipj

lilipj commented Feb 13, 2020

I have the same problem with keras2onnx.
I also tried onnxmltools for the Keras-to-ONNX conversion, and it fails with the same error. But when I change target_opset to 8 in the onnxmltools.convert_keras() function, it works.

@TristonC
Contributor

TristonC commented May 5, 2021

@szha @Zha0q1 Has this issue been solved?

@Monteiro4

When exporting from torch to ONNX, adding the keep_initializers_as_inputs=True parameter to the export() function solves the issue.

@gcunhase

Is there a solution for this in TensorFlow?

Development

No branches or pull requests

9 participants