
[ONNX] Fix argmin and argmax test cases #79503

Closed
wants to merge 6 commits

Conversation

qqaatw
Collaborator

@qqaatw qqaatw commented Jun 14, 2022

Part of #79263

The keepdim argument is theoretically ignored when dim is not specified (see the docs).

Unfortunately, the PyTorch implementation still appears to take it into account, resulting in a non-fully-reduced tensor, which is undefined behavior. Thus, I add a dim argument to the tests to make the outputs of PyTorch and ONNX Runtime consistent.
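For reference, the intended semantics can be sketched with NumPy's analogous `argmin` (the values here are illustrative, not taken from the test suite):

```python
import numpy as np

x = np.array([[3., 1., 2.],
              [0., 5., 4.]])

# No axis given: the reduction runs over all elements and returns a
# single index into the flattened array, so keepdim-style behavior
# should be irrelevant here.
full = np.argmin(x)             # flat index of the minimum element

# Explicit axis: one index per row, shape (2,).
per_row = np.argmin(x, axis=1)
```

With an explicit reduction axis, the output shape is unambiguous, which is what the test change relies on.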

@facebook-github-bot
Contributor

facebook-github-bot commented Jun 14, 2022

🔗 Helpful links

✅ No Failures (26 Pending)

As of commit 263ae6b (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

@mikaylagawarecki mikaylagawarecki added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Jun 14, 2022
@justinchuby justinchuby added the module: onnx Related to torch.onnx label Jun 15, 2022
Collaborator

@justinchuby justinchuby left a comment

LGTM

@titaiwangms
Collaborator

Raised an issue for the doc in #79953.

@titaiwangms titaiwangms self-assigned this Jun 21, 2022
Collaborator

@titaiwangms titaiwangms left a comment

💯

BowenBao
BowenBao previously approved these changes Jun 21, 2022
@BowenBao
Collaborator

@pytorchbot merge -g

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here

@pytorchmergebot
Collaborator

Merge failed: This PR is too stale; the last push was more than 3 days ago. Please rebase and try again.
Raised by https://github.com/pytorch/pytorch/actions/runs/2537466773

@lazycal
Contributor

lazycal commented Jun 21, 2022

raise an issue for the doc in #79953.

@AllenTiTaiWang FYI the doc issue has been explained in #78791

Collaborator

@justinchuby justinchuby left a comment

Thanks for the link to the doc. So the behavior is defined and expected. @qqaatw could you retain the lines and add your changes as additional tests instead?

@qqaatw
Collaborator Author

qqaatw commented Jun 22, 2022

Thanks for the link to the doc. So the behavior is defined and expected. @qqaatw could you retain the lines and add your changes as additional tests instead?

AFAIK, ONNX ArgMin and ArgMax don't allow axis=None, so in this case we have to flatten the input before feeding it, which loses the dimension info. In any case, retaining the lines would definitely make the proposed check in #79263 fail (unless we intentionally reshape the output to make it consistent with PyTorch).
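A minimal NumPy sketch of the flattening workaround described above (toy values, not the actual exporter code):

```python
import numpy as np

x = np.array([[3., 1., 2.],
              [0., 5., 4.]])

# ONNX ArgMin/ArgMax require an integer axis, so a reduce-over-all-dims
# export has to flatten the input first and reduce along axis 0.
flat_idx = np.argmin(x.reshape(-1), axis=0)

# After flattening, the original (2, 3) shape is gone, so honoring
# keepdim=True for the all-dims case would require reshaping the
# output afterwards.
```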

@justinchuby
Collaborator

@BowenBao thoughts?

@titaiwangms
Collaborator

titaiwangms commented Jun 23, 2022

Personally, I feel it depends on the optimization goal of the ONNX opset. If the goal is to align the functionality with torch as much as we can, then maybe ArgMin/ArgMax should be modified to support this behavior instead, and then the test case added. I don't think changing the input/output format in the test case would be wise (it would definitely affect a lot of other cases).

@qqaatw
Collaborator Author

qqaatw commented Jun 23, 2022

I actually meant resizing the output in the symbolic functions, not in the test cases. I agree that modifying the functionality of ArgMin and ArgMax on the ONNX side would be the best choice.
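In NumPy terms, "resizing the output in the symbolic functions" could look roughly like the following sketch (a hypothetical helper, not the actual symbolic code):

```python
import numpy as np

def argmin_keepdim_all(x):
    # Flatten, reduce, then reshape the flat index to all-singleton
    # dims so the result shape matches a PyTorch output that kept
    # dims under keepdim=True with no dim specified.
    flat_idx = np.argmin(x.reshape(-1))
    return np.asarray(flat_idx).reshape([1] * x.ndim)

out = argmin_keepdim_all(np.zeros((2, 3, 4)))
```

The reshape only restores the rank, not the original sizes, which matches the "all dims reduced to 1" shape under discussion.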

@qqaatw
Collaborator Author

qqaatw commented Jun 23, 2022

@pytorchbot rebase

@pytorchmergebot
Collaborator

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased fix_argmin_argmax_test_onnx onto refs/remotes/origin/master, please pull locally before adding more changes (for example, via git checkout fix_argmin_argmax_test_onnx && git pull --rebase)

@justinchuby
Collaborator

ONNX definitions are fixed for any particular opset. IMO, if resizing is needed to match PyTorch's behavior, then we should do that.

@qqaatw
Collaborator Author

qqaatw commented Jul 1, 2022

All should be good now. (If there are any coding-style issues, feel free to push to this branch.)

@justinchuby justinchuby dismissed BowenBao’s stale review July 2, 2022 07:02

Reset based on new changes

Collaborator

@justinchuby justinchuby left a comment

Thanks for the change! Some minor comments

torch/onnx/symbolic_helper.py (review comments, resolved)
Collaborator

@justinchuby justinchuby left a comment

LGTM🚀

@justinchuby justinchuby requested review from BowenBao and removed request for BowenBao August 5, 2022 15:02
@justinchuby
Collaborator

@pytorchbot merge

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here

@github-actions

github-actions bot commented Aug 5, 2022

Hey @qqaatw.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@titaiwangms titaiwangms added release notes: onnx torch.onnx related changes that should show up in the release notes topic: bug fixes topic category labels Aug 5, 2022
facebook-github-bot pushed a commit that referenced this pull request Aug 7, 2022
Summary:
Part of #79263

The `keepdim` argument is theoretically ignored when `dim` is not specified (See [docs](https://pytorch.org/docs/stable/generated/torch.argmin.html)).

Unfortunately, the PyTorch implementation still appears to take it into account, resulting in a non-fully-reduced tensor, which is undefined behavior. Thus, I add a `dim` argument to the tests to make the outputs of PyTorch and ONNX Runtime consistent.

Pull Request resolved: #79503
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/BowenBao

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/6bdf89b0c71a827be2a0969afa33f151e87a32f3

Reviewed By: kit1980

Differential Revision: D38478785

fbshipit-source-id: cd9493d6d088c4e32eb11bd19924efc0b6e6335c
pytorchmergebot pushed a commit that referenced this pull request Aug 10, 2022
Currently we don't have a dtype check when verifying the consistency between PyTorch and ONNX outputs. As a result, some dtype inconsistencies were found and reported: #77842, #77845

This is a POC.

Failed workflows:
- [linux-xenial-py3.7-clang7-onnx / test (default, 2, 2, linux.2xlarge)]
  - inconsistent shape
    - TestONNXRuntime_opset10.test_all (#79371)
    - TestONNXRuntime_opset10.test_any (#79371)
    - TestONNXRuntime_opset10.test_argmin_argmax (#79503)
    - TestONNXRuntime_opset10.test_hardshrink (#79695)
    - TestONNXRuntime_opset10.test_linalg_norm (#79506)
    - TestONNXRuntime_opset10.test_linalg_vector_norm (#79506)
    - TestONNXRuntime_opset10.test_prelu_scalar (#79846)
    - TestONNXRuntime_opset10.test_softshrink (#79695)
    - TestONNXRuntime_opset10.test_sum_empty_tensor (skipped)
    - TestONNXRuntime_opset10.test_tolist (skipped)
  - inconsistent dtype
    - test_arithmetic_prim_bool (skipped)
    - test_arithmeticOps_with_low_precision (skipped)
    - test_arithmetic_prim_float (skipped)
    - test_logical_and (#79339)
    - test_logical_or (#79339)
    - test_logical_xor (#79339)
    - test_pow (skipped)
    - test_primitive_input_floating (skipped)
    - test_quantize_per_tensor (#79690)
    - test_quantized_adaptive_avg_pool2d (#79690)
    - test_quantized_arithmetic (#79690)
    - test_quantized_arithmetic_qfunctional (#79690)
    - test_quantized_conv2d (#79690)
    - test_quantized_conv2d_relu (#79690)
    - test_quantized_flatten (#79690)
    - test_quantized_hardsigmoid (#79690)
    - test_quantized_hardswish (#79690)
    - test_quantized_linear (#79690)
    - test_quantized_sigmoid (#79690)
    - test_item (skipped)
    - test_full_like_value (skipped)
    - TestONNXRuntime_opset7.test_div_rounding_mode (skipped)
    - TestONNXRuntime_opset8.test_div_rounding_mode (skipped)
    - TestONNXRuntime_opset9.test_div_rounding_mode (skipped)
    - TestONNXRuntime_opset9_IRv4.test_div_rounding_mode (skipped)
    - test_outer (skipped)
    - test_symbolic_shape_inference_arange_2 (skipped)
Pull Request resolved: #79263
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
facebook-github-bot pushed a commit that referenced this pull request Aug 10, 2022
Summary: same as the commit message above.

Pull Request resolved: #79263
Approved by: https://github.com/justinchuby, https://github.com/BowenBao

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/d9a7e93aaf3166e639ea413123bd6c38b9144adc

Reviewed By: seemethere

Differential Revision: D38585848

fbshipit-source-id: 9da98581ceec51142ae31d3f8a06f9f296a16b23