Can't convert Upsample to onnx #18113
Either you're forgetting the " " somewhere, and did you forget to add *args **kwargs somewhere? I think you should take your time to look over this error output and carefully read your code, just to spot any mistakes ;)
@Protocal13, thank you for the answer, but can you clarify what the problem is with the 'space' and *args **kwargs?
@Protocal13 your answers on this issue and other issues are wrong or not helpful. Please only make comments if you think they are the right answer / direction.
I have this issue too in my ResNet-based project, and this bug has existed from 1.0 until now.
Any updates?
I face the same problem with upsample.
I am trying to convert UNet to Caffe2 using ONNX and I am also facing the same problem as others. Will it be solved soon?
I face the problem too:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Test(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        # return F.upsample(x, size=(x.shape[2] * 2, x.shape[3] * 2), mode='bilinear', align_corners=True)
        #   RuntimeError: ONNX symbolic expected a constant value in the trace
        # return F.interpolate(x, size=(x.shape[2] * 2, x.shape[3] * 2), mode='bilinear', align_corners=True)
        #   RuntimeError: ONNX symbolic expected a constant value in the trace
        # return F.upsample(x, size=(600, 600), mode='bilinear', align_corners=False)
        #   UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
        # return F.interpolate(x, size=(600, 600), mode='bilinear', align_corners=True)
        #   UserWarning: ONNX export failed on upsample_bilinear2d because align_corners == True not supported
        #   RuntimeError: ONNX export failed: Couldn't export operator aten::upsample_bilinear2d
        return F.interpolate(x, size=(600, 600), mode='bilinear', align_corners=False)  # no warning, all clear

model = Test()
x = torch.zeros((1, 3, 300, 300))
torch.onnx._export(model, x, "test.onnx", verbose=True)
```
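For background (a sketch added here for context, not code from the thread): `align_corners` changes how each output pixel index is mapped back to a source coordinate, which is why the exporter treats the two settings so differently:

```python
def src_coord(dst, scale, align_corners, in_size, out_size):
    """Map an output pixel index to its source sampling coordinate."""
    if align_corners:
        # corner pixels of the input and output grids coincide
        return dst * (in_size - 1) / (out_size - 1)
    # half-pixel-centers convention (PyTorch's align_corners=False)
    return (dst + 0.5) / scale - 0.5

# Upsampling a 2-pixel row to 4 pixels (scale factor 2):
aligned = [src_coord(d, 2.0, True, 2, 4) for d in range(4)]
default = [src_coord(d, 2.0, False, 2, 4) for d in range(4)]
print(aligned)  # endpoints land exactly on 0 and in_size - 1
print(default)  # coordinates can fall outside the grid and get clamped
```

With `align_corners=True` the mapping depends on the concrete input and output sizes, which is one reason the tracer demands constant sizes during export.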
I meet this too.
Please fix this ASAP!
I believe it has been fixed in 93d5503 four hours ago :D
The conversion from PyTorch to ONNX works fine; only when I go from ONNX to OpenVINO do I get this error: [ ERROR ] Unexpected exception happened during extracting attributes for node 43. Don't know if this has anything to do with it.
cc @pk-g @houseroad re Aeroxander's latest comment
@Aeroxander do you want to provide your ONNX model? I think it may be due to a different opset version.
@houseroad sure! Here it is: decoder.onnx Edit: I was able to convert it from v9 to v8 with ONNX and now the "Upsample" problem is solved!
@Aeroxander How did you do the conversion to v8? When I tried with my model using a Python script with ONNX 1.5.0, at the line `converted_model = version_converter.convert_version(original_model, 8)` it gives the following error: adapt_upsample_9_8: Assertion
Hi,
Here is the code to export PyTorch to ONNX:
Here is the verbose information of the ONNX export:
If I understand your solution properly, it consists in forcing V8 of the Upsample operator when exporting from PyTorch to ONNX. But how do you achieve this? Thanks a lot
@Aeroxander @ftaralle The problem is that PyTorch does not put the scale values in the Upsample layer. I have not tried to change the PyTorch code that generates the ONNX output; since I am using ONNX only as an intermediate stage to OpenVINO, I have hacked the OpenVINO code to set the scale values to 2.0. If you wanted to change the ONNX file you could either rewrite the PyTorch exporter to add the scale values, or alternatively write a script that deserializes the file afterwards (ONNX is a protobuf), makes the corrections, and serializes it back out again.
Hi, following the suggestion of @jjhw I finally managed to make the Upsample accepted by OpenVINO:

```python
import onnx
from onnx import version_converter, helper

# load model
original_model = onnx.load(model_path)

# convert opset v9 to v8
converted_model = version_converter.convert_version(original_model, 8)

# change the attributes of all Upsample nodes
for node in converted_model.graph.node:
    if node.op_type == 'Upsample':
        # map attribute name -> index
        index = {attribute.name: i for i, attribute in enumerate(node.attribute)}
        # get & remove the "scales" attribute
        att_scales = node.attribute.pop(index['scales'])
        # CARE: this depends on the order; [N, C, H, W] is expected here
        _, _, scale_height, scale_width = att_scales.floats
        # append the new attributes 'width_scale' & 'height_scale'
        node.attribute.extend([
            helper.make_attribute('width_scale', scale_width),
            helper.make_attribute('height_scale', scale_height),
        ])

# save
onnx.save(converted_model, result_path)
```

Here are OpenVINO's error messages that I followed:
Note the misspelling in the last error message :p
Thanks @ftaralle, your Python code works for me.
@ftaralle thanks for your code; when I run it, I get this error:
File "/usr/local/lib/python3.6/dist-packages/onnx/version_converter.py", line 166, in convert_version
I don't know how to fix it; can you give me some advice? Thanks
Hi @guoguangchao
@ftaralle thanks for your reply. In the original_model.graph, one of the Upsample layers is as follows:
I used the FPN structure in the model, and there is an F.interpolate call in the model. The definition is as follows:
Printing of FPN:
So I guess it is about your
You are using a fixed-size resizing operation, so indeed there is no scaling factor here. I'm not sure, but I think that scaling to a fixed size is not supported (yet). If I understand properly, you are processing 3 inputs in parallel, then merging them. ;p
@ftaralle Thanks for your advice.
Does the problem still exist in master?
@guoguangchao, the Upsample node you copied in your comment seems correct for opset 9. If I understand correctly you are trying to convert the model from opset 9 to opset 8?
@lara-hdr Thanks for your reply. My PyTorch version is 1.2.0, and I try to export the model in the following way:
I have the same problem when I export F.interpolate. My torch version is 1.2.0. When I set opset_version to 9, everything is fine. However, the problem happens when I set opset_version to 7. Any suggestions?
@kealennieh - the ONNX version of the operator that supports F.interpolate, onnx::Resize, has undergone significant changes since opset 7. Not all scenarios of F.interpolate can be supported in the opset 7 version, which is one reason why the op was upgraded in ONNX in subsequent versions. My suggestion would be to consider using opset 9 (or even higher) with the latest PyTorch 1.3 (or even the nightly build if possible). Is there any reason you cannot use opset 9?
@spandantiwari Thanks for your suggestion. The reason is that my current TensorRT can only support opset 7.
to
Then F.interpolate can be exported correctly in opset 7. |
Could you show me how to convert it from v9 to v8? I have the same problem.
@voqtuyen It's quite easy, just do it with the ONNX Python API: https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md#converting-version-of-an-onnx-model-within-default-domain-aionnx
ENV: I can convert the model to ONNX, but the outputs of PyTorch and ONNX do not match.

```python
import numpy as np
import onnx
import onnxruntime
import torch
import torch.nn as nn
import torch.nn.functional as F

class Test(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return F.interpolate(x, size=(400, 600), mode='bilinear', align_corners=False)  # no warning, all clear

model = Test()
x = torch.rand((1, 3, 200, 300))
torch.onnx._export(model, x, "test.onnx", verbose=True)

model.eval()
with torch.no_grad():
    torch_out = model(x)

ort_session = onnxruntime.InferenceSession("test.onnx")
ort_input = {ort_session.get_inputs()[0].name: x.cpu().numpy()}
ort_out = ort_session.run(None, ort_input)[0]

np.testing.assert_allclose(torch_out.cpu().numpy(), ort_out, rtol=1e-03, atol=1e-05)
```

Output:
Any suggestions?
@nieyan I met a similar problem; have you made any progress on it?
I think converting to ONNX opset version 11 gives the correct result. But I do need an ONNX file with opset version 9 for the next step, which is converting to an embedded device's special model format.
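That matches a known difference between the opsets (my summary, not stated explicitly in the thread): opset-9 Upsample only defines the "asymmetric" mapping x_src = x_dst / scale, while PyTorch's bilinear with align_corners=False uses half-pixel centers; opset-11 Resize added a coordinate_transformation_mode attribute that can express the PyTorch behavior. The two mappings in plain Python:

```python
def half_pixel(dst, scale):
    # PyTorch bilinear, align_corners=False
    return (dst + 0.5) / scale - 0.5

def asymmetric(dst, scale):
    # the only mapping opset-9 Upsample defines
    return dst / scale

scale = 2.0
print([half_pixel(d, scale) for d in range(4)])   # [-0.25, 0.25, 0.75, 1.25]
print([asymmetric(d, scale) for d in range(4)])   # [0.0, 0.5, 1.0, 1.5]
```

Since every interior sample lands on different source coordinates, the two conventions interpolate different pixel values, which explains a numeric mismatch at opset 9 that disappears at opset 11.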
Hi @nieyan, thanks for pointing that out.
@blueardour try to upgrade your onnx.
Original model reported by @E1eMenta can be exported with
If someone still needs Upsample to be exported specifically for opset version 9, please open a new issue and please note what is going to consume the ONNX model so that we can prioritize the issue.
🐛 Bug
pytorch == 1.0.1.post2
onnx == 1.4.1
I'm trying to convert the 'upsample' op from PyTorch to ONNX.
Code:
Output error:
I've tried different modes and nn.functional.interpolate; the result is the same. What is the problem?