
[INSTANCENORM] Instance Normalization ignores track_running_stats=True when exporting to ONNX. #72057

Closed
Mypathissional opened this issue Jan 31, 2022 · 3 comments
Labels
module: onnx Related to torch.onnx onnx-needs-info needs information from the author / reporter before ONNX team can take action triaged This issue has been looked at by a team member and triaged and prioritized into an appropriate module

Comments

Mypathissional commented Jan 31, 2022

🐛 Describe the bug

Hi,

I was exporting https://github.com/yunjey/stargan to ONNX; it uses Instance Normalization layers with track_running_stats=True. In this case the layer keeps four parameters: running_mean, running_var, weight, and bias.
Here is a minimal illustration.

import torch
import torch.nn as nn

norm = nn.InstanceNorm2d(64, affine=True, track_running_stats=True)
input = torch.randn(1, 64, 128, 128)
norm(input)  # forward pass in training mode updates running_mean/running_var

norm.eval()
torch.onnx.export(norm,                         # model being run
                  torch.rand(1, 64, 128, 128),  # dummy input
                  "./norm.onnx")
with torch.no_grad():
    torchout = norm(input)
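Listing the module's state dict (a quick check in place of the screenshot) confirms that the running statistics are stored alongside the affine parameters:

print(list(norm.state_dict().keys()))
# ['weight', 'bias', 'running_mean', 'running_var', 'num_batches_tracked']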

But when I export it to ONNX, the exported graph keeps only the weight and bias parameters. Can this be fixed?
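One way to confirm this without Netron (a sketch, assuming the onnx package is installed) is to list the initializers of the exported graph:

import onnx

model = onnx.load("./norm.onnx")
# Only the affine parameters survive; running_mean/running_var are absent
# because ONNX's InstanceNormalization op has no inputs for them.
print([init.name for init in model.graph.initializer])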

I have also tested the exported model with onnxruntime, and the absolute and relative errors against the PyTorch eval-mode output are large.

import numpy as np
import onnxruntime

ort_session = onnxruntime.InferenceSession("./norm.onnx")

def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

# compute ONNX Runtime output prediction
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(input)}
ort_outs = ort_session.run(None, ort_inputs)

# compare ONNX Runtime and PyTorch results
np.testing.assert_allclose(to_numpy(torchout), ort_outs[0], rtol=1e-03, atol=1e-05)

print("Exported model has been tested with ONNXRuntime, and the result looks good!")

The assertion fails: the ONNX Runtime output of the instance normalization layer differs from the PyTorch eval-mode output by large absolute and relative margins.

Versions

Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.8.1+cpu
[pip3] torchaudio==0.8.1
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.9.1+cpu
[pip3] torchviz==0.0.1
[conda] Could not collect

@VitalyFedyunin VitalyFedyunin added module: onnx Related to torch.onnx triaged This issue has been looked at by a team member and triaged and prioritized into an appropriate module labels Feb 1, 2022
@garymm garymm added the onnx-needs-info needs information from the author / reporter before ONNX team can take action label Feb 3, 2022
garymm (Collaborator) commented Feb 3, 2022

Please provide minimal repro code showing the model definition, the call to torch.onnx.export, and the bug.
Ideally take a screenshot of the exported model in Netron and explain what is wrong with it.

Mypathissional (Author) commented
I have run some experiments (https://discuss.pytorch.org/t/understanding-instance-normalization-2d-with-running-mean-and-running-var/144139/3), and if my understanding is correct, it makes sense that the running mean and variance are not exported to ONNX: the ONNX InstanceNormalization op always normalizes with per-instance statistics computed from the input. It was just confusing that the results of the PyTorch model in eval mode and of the ONNX model were different.
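For illustration, a small sketch of where the mismatch comes from, relying only on the documented InstanceNorm2d semantics:

import torch
import torch.nn as nn

norm = nn.InstanceNorm2d(64, affine=True, track_running_stats=True)
x = torch.randn(1, 64, 128, 128)

# Training mode normalizes with per-instance statistics (what ONNX's
# InstanceNormalization op always does) and updates the running estimates.
norm.train()
out_instance_stats = norm(x)

# Eval mode with track_running_stats=True switches to the accumulated
# running_mean/running_var, so it no longer matches the ONNX graph.
norm.eval()
out_running_stats = norm(x)

print(torch.allclose(out_instance_stats, out_running_stats))  # False in general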

garymm (Collaborator) commented Feb 16, 2022

OK, seems this is working as intended. Closing.
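For anyone hitting the same mismatch, one possible workaround (a sketch, relying on the track_running_stats attribute being honored at forward time, not an official export option) is to make eval mode fall back to per-instance statistics before exporting, so the PyTorch and ONNX outputs agree:

# Hypothetical workaround: normalize with input statistics instead of the
# running buffers, mirroring the exported InstanceNormalization op.
norm.track_running_stats = False
norm.eval()
with torch.no_grad():
    torchout = norm(input)  # should now match ONNX Runtime within tolerance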

garymm closed this as completed Feb 16, 2022