
Rounding behavior of Resize when interpolating on integral types is unspecified #3390

Closed
pranav-prakash opened this issue Apr 2, 2021 · 4 comments
Labels: bug, no-issue-activity, operator (Issues related to ONNX operators), spec clarification (Clarification of the ONNX spec needed)

Comments

@pranav-prakash (Contributor)

Bug Report

When the input to the Resize operator is of an integral type and interpolation must be performed, the rounding mode for the intermediate floating-point interpolated value is unspecified. ONNX Runtime currently appears to cast the intermediate float directly to the output type, which truncates (effectively a floor for non-negative values). This floor behavior can introduce a large systematic bias when interpolating between two close values. Rounding to nearest even and then casting would instead produce output that tracks the floating-point version much more closely.

Could the intended behavior be clarified?
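
To illustrate the bias concretely, here is a minimal NumPy sketch (hypothetical, not taken from any runtime): interpolating midpoints between adjacent integers, every truncated result lands low, while round-to-even is roughly unbiased on average.

import numpy as np

# Hypothetical illustration: interpolate halfway between adjacent integers.
rng = np.random.default_rng(0)
a = rng.integers(0, 255, size=100_000).astype(np.float64)
mid = 0.5 * a + 0.5 * (a + 1)              # interpolated value is always x.5

truncated = mid.astype(np.uint8)           # cast truncates (floors non-negative values)
rounded = np.rint(mid).astype(np.uint8)    # round-half-to-even, then cast

print((truncated - mid).mean())            # -0.5: systematic downward bias
print((rounded - mid).mean())              # ~0.0: unbiased on average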

@daquexian (Member)

@pranav-prakash I'm the main author of the Resize op spec. If needed, I'll take a look at how PyTorch/TF/etc. do int8 interpolation and update the ONNX spec accordingly.

@pranav-prakash (Contributor, Author) commented Apr 18, 2021

@daquexian Thanks for looking into this. I did some brief experimentation, and it seems that PyTorch does indeed round when interpolating. See the following sample case:

>>> import torch
>>> input_tensor = torch.FloatTensor([0, 1, 1, 0]).resize(1, 1, 2, 2)
>>> q = torch._make_per_tensor_quantized_tensor(input_tensor.type(torch.uint8), 1, 0)
>>> q
tensor([[[[0., 1.],
          [1., 0.]]]], size=(1, 1, 2, 2), dtype=torch.quint8,
       quantization_scheme=torch.per_tensor_affine, scale=1.0, zero_point=0)
>>> torch.nn.functional.interpolate(q, scale_factor=5, mode='bilinear')
tensor([[[[0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],
          [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],
          [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],
          [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],
          [0., 0., 0., 0., 0., 1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1., 0., 0., 0., 0., 0.],
          [1., 1., 1., 1., 1., 0., 0., 0., 0., 0.],
          [1., 1., 1., 1., 1., 0., 0., 0., 0., 0.],
          [1., 1., 1., 1., 1., 0., 0., 0., 0., 0.],
          [1., 1., 1., 1., 1., 0., 0., 0., 0., 0.]]]], size=(1, 1, 10, 10),
       dtype=torch.quint8, quantization_scheme=torch.per_tensor_affine,
       scale=1.0, zero_point=0)
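
As a follow-up check (a sketch under the same setup, not something I've run exhaustively), the quantized result can be compared against rounding the float path directly:

x = torch.tensor([[[[0., 1.], [1., 0.]]]])
f = torch.nn.functional.interpolate(x, scale_factor=5, mode='bilinear')
q = torch._make_per_tensor_quantized_tensor(x.type(torch.uint8), 1, 0)
qi = torch.nn.functional.interpolate(q, scale_factor=5, mode='bilinear')

# If the quantized path rounds rather than truncates, the two should agree
# (modulo tie-breaking at exact .5 values):
print(torch.equal(f.round().to(torch.uint8), qi.int_repr()))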

@stale (bot)

stale bot commented Apr 19, 2022

Is this still relevant? If so, what is blocking it? Is there anything you can do to help move it forward?

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

stale bot added the stale label on Apr 19, 2022
stale bot closed this as completed on May 10, 2022
@jcwchen (Member) commented May 10, 2022

I think this issue might still exist, so I will reopen it to track. Thanks.

jcwchen reopened this on May 10, 2022
stale bot removed the stale label on May 10, 2022
jcwchen added the spec clarification label on May 10, 2022
github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jun 12, 2023