ONNX export of MaxUnpool2d is not supported #25088
Comments
I am facing the same error. It looks like ONNX does not support max_unpool2d.

The ONNX people say the issue is a torch bug.
Hi, is there any progress here?
Currently, I think supporting max_unpool2d is not easy... MaxUnpool in ONNX is ambiguous. I will create an issue.

Hi, is there any progress here?
Any update on this? I'm having the same issue.

Same problem.

@habib-19 same problem with the exact same use case. Were you able to transform ENet to ONNX? I am aware that `max_unpool2d` is still not supported, but did you eventually replace it?
Hi @PatrickNa, I was not able to use the conversion tool (for PyTorch to TensorFlow).
@habib-19 would you mind sharing your implementation? I would be interested in testing it with TensorFlow, too.

@PatrickNa I did this last year in my previous company. Sorry, I don't have access to my implementation.

That's alright. Was ENet suitable in your application back then, in accuracy and performance?

@PatrickNa Yes, for our application it was good; it was a slightly modified version of ENet. Both the PyTorch and TensorFlow implementations were acceptable.

Exporting the operator max_unpool2d to ONNX opset version 11 is not supported. Please let me know if this issue has been fixed?

Hi, has anyone found a solution yet?

Still an issue with

Is there any progress on this problem?
Check out this link. It provides a MaxUnpool2d module which can be converted to ONNX successfully.
This is being tracked internally at Microsoft by https://msdata.visualstudio.com/Vienna/_workitems/edit/1444696 |
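For context, the mismatch that a workaround has to correct can be seen with a short sketch (illustrative, not from the thread): PyTorch's pooling indices are flattened within each (batch, channel) slice, while ONNX's MaxUnpool expects indices into the fully flattened N*C*H*W tensor, so a per-slice offset must be added before export.

```python
import torch
import torch.nn as nn

# PyTorch's MaxPool2d returns indices flattened within each (n, c) slice,
# so slices with the same value layout yield identical indices regardless
# of n and c. ONNX's MaxUnpool instead expects indices into the flattened
# N*C*H*W tensor, which is why a per-slice offset has to be added.
x = torch.arange(2 * 2 * 4 * 4, dtype=torch.float32).reshape(2, 2, 4, 4)
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
_, idx = pool(x)
# Same per-slice indices for every (n, c) pair:
print(torch.equal(idx[0, 0], idx[1, 1]))  # True
print(idx[0, 0].flatten().tolist())  # [5, 7, 13, 15]
```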
Copy-pasting the work-around from @markson14's link. I have not verified this is correct (an unused mmcv import has been dropped so it runs without mmcv):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function
from torch.nn.modules.pooling import _MaxUnpoolNd
from torch.nn.modules.utils import _pair


class MaxUnpool2dop(Function):
    """We wrap `torch.nn.functional.max_unpool2d`
    with an extra `symbolic` method, which is needed when exporting to ONNX.
    Users should not call this function directly.
    """

    @staticmethod
    def forward(ctx, input, indices, kernel_size, stride, padding,
                output_size):
        """Forward function of MaxUnpool2dop.

        Args:
            input (Tensor): Tensor needed to upsample.
            indices (Tensor): Indices output of the previous MaxPool.
            kernel_size (Tuple): Size of the max pooling window.
            stride (Tuple): Stride of the max pooling window.
            padding (Tuple): Padding that was added to the input.
            output_size (List or Tuple): The shape of output tensor.

        Returns:
            Tensor: Output tensor.
        """
        return F.max_unpool2d(input, indices, kernel_size, stride, padding,
                              output_size)

    @staticmethod
    def symbolic(g, input, indices, kernel_size, stride, padding, output_size):
        # get shape
        input_shape = g.op('Shape', input)
        const_0 = g.op('Constant', value_t=torch.tensor(0))
        const_1 = g.op('Constant', value_t=torch.tensor(1))
        batch_size = g.op('Gather', input_shape, const_0, axis_i=0)
        channel = g.op('Gather', input_shape, const_1, axis_i=0)

        # height = (height - 1) * stride + kernel_size
        height = g.op(
            'Gather',
            input_shape,
            g.op('Constant', value_t=torch.tensor(2)),
            axis_i=0)
        height = g.op('Sub', height, const_1)
        height = g.op('Mul', height,
                      g.op('Constant', value_t=torch.tensor(stride[1])))
        height = g.op('Add', height,
                      g.op('Constant', value_t=torch.tensor(kernel_size[1])))

        # width = (width - 1) * stride + kernel_size
        width = g.op(
            'Gather',
            input_shape,
            g.op('Constant', value_t=torch.tensor(3)),
            axis_i=0)
        width = g.op('Sub', width, const_1)
        width = g.op('Mul', width,
                     g.op('Constant', value_t=torch.tensor(stride[0])))
        width = g.op('Add', width,
                     g.op('Constant', value_t=torch.tensor(kernel_size[0])))

        # step of channel
        channel_step = g.op('Mul', height, width)
        # step of batch
        batch_step = g.op('Mul', channel_step, channel)

        # channel offset
        range_channel = g.op('Range', const_0, channel, const_1)
        range_channel = g.op(
            'Reshape', range_channel,
            g.op('Constant', value_t=torch.tensor([1, -1, 1, 1])))
        range_channel = g.op('Mul', range_channel, channel_step)
        range_channel = g.op('Cast', range_channel, to_i=7)  # 7 is int64

        # batch offset
        range_batch = g.op('Range', const_0, batch_size, const_1)
        range_batch = g.op(
            'Reshape', range_batch,
            g.op('Constant', value_t=torch.tensor([-1, 1, 1, 1])))
        range_batch = g.op('Mul', range_batch, batch_step)
        range_batch = g.op('Cast', range_batch, to_i=7)  # 7 is int64

        # update indices
        indices = g.op('Add', indices, range_channel)
        indices = g.op('Add', indices, range_batch)

        return g.op(
            'MaxUnpool',
            input,
            indices,
            kernel_shape_i=kernel_size,
            strides_i=stride)


class MaxUnpool2d(_MaxUnpoolNd):
    """This module is modified from Pytorch `MaxUnpool2d` module.

    Args:
        kernel_size (int or tuple): Size of the max pooling window.
        stride (int or tuple): Stride of the max pooling window.
            Default: None (It is set to `kernel_size` by default).
        padding (int or tuple): Padding that is added to the input.
            Default: 0.
    """

    def __init__(self, kernel_size, stride=None, padding=0):
        super(MaxUnpool2d, self).__init__()
        self.kernel_size = _pair(kernel_size)
        self.stride = _pair(stride or kernel_size)
        self.padding = _pair(padding)

    def forward(self, input, indices, output_size=None):
        """Forward function of MaxUnpool2d.

        Args:
            input (Tensor): Tensor needed to upsample.
            indices (Tensor): Indices output of the previous MaxPool.
            output_size (List or Tuple): The shape of output tensor.
                Default: None.

        Returns:
            Tensor: Output tensor.
        """
        return MaxUnpool2dop.apply(input, indices, self.kernel_size,
                                   self.stride, self.padding, output_size)
```
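For reference, a minimal smoke test of how such a module is used (the stock `nn.MaxUnpool2d` is aliased here so the snippet runs standalone; the custom class above is intended as a drop-in replacement at export time):

```python
import torch
import torch.nn as nn

# Stand-in for the custom MaxUnpool2d above; same call signature.
MaxUnpool2d = nn.MaxUnpool2d

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 3, 8, 8)
pooled, indices = pool(x)           # spatial size halves to (4, 4)
restored = unpool(pooled, indices)  # zeros everywhere except max positions
print(restored.shape)  # torch.Size([1, 3, 8, 8])
```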
Hi @habib-19, we've gone ahead and closed this issue because it has a workaround. The workaround can be found here. Thanks,
The workaround in the code shared by @markson14 @garymm didn't work for me when passing an optional

The above gives the following error:

I realized that the error came from

I then loaded the exported model with ONNX, but it failed at checking inference shapes, in particular because of the

Any ideas? Did anyone export to ONNX using

Versions:
More info about this from here: pytorch/pytorch#25088
Hmm, I am not sure I understand the problem completely. I am trying to export a pre-trained model to ONNX without the possibility to modify or re-train the model. I have updated to torch 1.13.1 and I set opset_version=16 when exporting. Trying 18 gave me an error. I still get the following error:

Should this work without the workaround now? How can I apply the workaround in my situation?
Hello!
Hey all! The workaround is not working for me. Trying to use it leads to

I tried the conversion on PyTorch versions 2.0.1 and 2.1. (Error without the workaround: "torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::max_unpool2d' to ONNX opset version 17 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.")
Hi @besserai! I solved the issues with dimensions in my post. I'll check and share it for others. For your
Hey @carpemonf, this is my conversion code:

And I am using this ENet implementation by iArunava; the MaxUnpool2d function is called in the UBNeck block. I also added my model which I am trying to convert here.
@besserai I think you are seeing

Below is the code I ended up with. It adds the output size to the `MaxUnpool` op:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function
from torch.nn.modules.pooling import _MaxUnpoolNd
from torch.nn.modules.utils import _pair


class MaxUnpool2dop(Function):
    """We wrap `torch.nn.functional.max_unpool2d`
    with an extra `symbolic` method, which is needed when exporting to ONNX.
    Users should not call this function directly.
    """

    @staticmethod
    def forward(ctx, input, indices, kernel_size, stride, padding,
                output_size):
        """Forward function of MaxUnpool2dop.

        Args:
            input (Tensor): Tensor needed to upsample.
            indices (Tensor): Indices output of the previous MaxPool.
            kernel_size (Tuple): Size of the max pooling window.
            stride (Tuple): Stride of the max pooling window.
            padding (Tuple): Padding that was added to the input.
            output_size (List or Tuple): The shape of output tensor.

        Returns:
            Tensor: Output tensor.
        """
        return F.max_unpool2d(input, indices, kernel_size, stride, padding,
                              output_size)

    @staticmethod
    def symbolic(g, input, indices, kernel_size, stride, padding, output_size):
        # get shape
        input_shape = g.op('Shape', input)
        const_0 = g.op('Constant', value_t=torch.tensor(0))
        const_1 = g.op('Constant', value_t=torch.tensor(1))
        output_size_list = list(output_size)
        const_size = g.op('Constant', value_t=torch.tensor(output_size_list))
        batch_size = g.op('Gather', input_shape, const_0, axis_i=0)
        channel = g.op('Gather', input_shape, const_1, axis_i=0)

        # height = (height - 1) * stride + kernel_size
        height = g.op(
            'Gather',
            input_shape,
            g.op('Constant', value_t=torch.tensor(2)),
            axis_i=0)
        height = g.op('Sub', height, const_1)
        height = g.op('Mul', height,
                      g.op('Constant', value_t=torch.tensor(stride[1])))
        height = g.op('Add', height,
                      g.op('Constant', value_t=torch.tensor(kernel_size[1])))

        # width = (width - 1) * stride + kernel_size
        width = g.op(
            'Gather',
            input_shape,
            g.op('Constant', value_t=torch.tensor(3)),
            axis_i=0)
        width = g.op('Sub', width, const_1)
        width = g.op('Mul', width,
                     g.op('Constant', value_t=torch.tensor(stride[0])))
        width = g.op('Add', width,
                     g.op('Constant', value_t=torch.tensor(kernel_size[0])))

        # step of channel
        channel_step = g.op('Mul', height, width)
        # step of batch
        batch_step = g.op('Mul', channel_step, channel)

        # channel offset
        range_channel = g.op('Range', const_0, channel, const_1)
        range_channel = g.op(
            'Reshape', range_channel,
            g.op('Constant', value_t=torch.tensor([1, -1, 1, 1])))
        range_channel = g.op('Mul', range_channel, channel_step)
        range_channel = g.op('Cast', range_channel, to_i=7)  # 7 is int64

        # batch offset
        range_batch = g.op('Range', const_0, batch_size, const_1)
        range_batch = g.op(
            'Reshape', range_batch,
            g.op('Constant', value_t=torch.tensor([-1, 1, 1, 1])))
        range_batch = g.op('Mul', range_batch, batch_step)
        range_batch = g.op('Cast', range_batch, to_i=7)  # 7 is int64

        # update indices
        indices = g.op('Add', indices, range_channel)
        indices = g.op('Add', indices, range_batch)

        return g.op(
            'MaxUnpool',
            input,
            indices,
            const_size,
            kernel_shape_i=kernel_size,
            strides_i=stride)


class MaxUnpool2d(_MaxUnpoolNd):
    """This module is modified from Pytorch `MaxUnpool2d` module.

    Args:
        kernel_size (int or tuple): Size of the max pooling window.
        stride (int or tuple): Stride of the max pooling window.
            Default: None (It is set to `kernel_size` by default).
        padding (int or tuple): Padding that is added to the input.
            Default: 0.
    """

    def __init__(self, kernel_size, stride=None, padding=0):
        super(MaxUnpool2d, self).__init__()
        self.kernel_size = _pair(kernel_size)
        self.stride = _pair(stride or kernel_size)
        self.padding = _pair(padding)

    def forward(self, input, indices, output_size=None):
        """Forward function of MaxUnpool2d.

        Args:
            input (Tensor): Tensor needed to upsample.
            indices (Tensor): Indices output of the previous MaxPool.
            output_size (List or Tuple): The shape of output tensor.
                Default: None.

        Returns:
            Tensor: Output tensor.
        """
        if output_size is not None and isinstance(output_size, torch.Size):
            output_size = tuple(s.item() for s in output_size)
        return MaxUnpool2dop.apply(input, indices, self.kernel_size,
                                   self.stride, self.padding, output_size)
```
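A sketch of how such a workaround can be wired into a model for export (`SmallNet` is an illustrative module, not one from this thread; the stock `nn.MaxUnpool2d` is aliased so the snippet runs standalone, and the custom class would be swapped in before exporting):

```python
import torch
import torch.nn as nn

# Swap this alias for the custom MaxUnpool2d above when exporting to ONNX.
MaxUnpool2d = nn.MaxUnpool2d


class SmallNet(nn.Module):
    """Illustrative pool/unpool pair, not a model from the thread."""

    def __init__(self):
        super().__init__()
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = MaxUnpool2d(2, stride=2)

    def forward(self, x):
        y, idx = self.pool(x)
        # Pass the input size so the unpooled shape is unambiguous.
        return self.unpool(y, idx, output_size=x.size())


net = SmallNet()
out = net(torch.randn(1, 3, 16, 16))
print(out.shape)  # torch.Size([1, 3, 16, 16])
# With the custom class in place, export would then look like:
# torch.onnx.export(net, torch.randn(1, 3, 16, 16), "smallnet.onnx",
#                   opset_version=11)
```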
Upon further investigation, I realized that

Check if that's the case for you:

I've updated the code I shared above. To address the situation I convert
Hey @carpemonf, the conversion did the trick! 🎉 Now the export works, thank you so much! 🙏 Indeed, I got the same types for the output before (

What would be the best place to fix the bug for the future? I could copy the workaround to the ENet repo I used, as many people seem to end up here using that. But it would be better to fix it in the PyTorch/ONNX repos, no?
Glad it worked! I'm not sure about the best place. Ideally this would be a supported operator in Torch instead of needing the workaround... How do you use the workaround in ENet? I'm doing the above for my model:

The other option is to put this inside the custom "MaxUnpool" to avoid passing the

With both you will get a warning because of the size conversion type.
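The size-type issue discussed here can be handled with a small normalization helper; a sketch, assuming traced sizes may arrive as 0-d tensors during export (`to_plain_size` is a hypothetical name, not from the thread):

```python
import torch

def to_plain_size(output_size):
    # Convert an output_size whose elements may be traced 0-d tensors
    # (as can happen with x.size() during ONNX export) into a tuple of
    # plain Python ints, which the symbolic helper needs when building
    # the Constant node for MaxUnpool.
    return tuple(
        s.item() if isinstance(s, torch.Tensor) else int(s)
        for s in output_size
    )

print(to_plain_size(torch.randn(1, 3, 8, 8).size()))  # (1, 3, 8, 8)
```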
I use the workaround like you, just switching to the other import when I want to export the model. Actually, I have not been able to use the converted model for inference so far, but that might be an error in my hard-to-debug opencv.js.
🐛 Bug
Unable to convert ENet to ONNX because of a missing max_unpool2d operator:

ONNX export failed on ATen operator max_unpool2d because torch.onnx.symbolic_opset9.max_unpool2d does not exist

The ONNX issue told me to report this in torch.
To Reproduce
Steps to reproduce the behavior:
1. Download the ENet model
2. Use ONNX export to convert the PyTorch model to ONNX
Expected behavior
Environment
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX TITAN X
GPU 1: GeForce GTX TITAN X
GPU 2: GeForce GTX TITAN X
GPU 3: GeForce GTX TITAN X
Nvidia driver version: 410.48
cuDNN version: /usr/local/cuda-9.2/targets/x86_64-linux/lib/libcudnn.so.7.2.1
Versions of relevant libraries:
[pip3] numpy==1.16.4
[pip3] numpy-indexed==0.3.5
[pip3] torch==1.1.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.3.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.0.2 py36h7b6447c_0
[conda] mkl_fft 1.0.14 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] pytorch 1.2.0 py3.6_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torchvision 0.4.0 py36_cu100 pytorch
Additional context
I am using conda env to run the code
cc @BowenBao @neginraoof