
[ONNX] Fix any and all outputs' shape #79371

Closed · qqaatw wants to merge 3 commits

Conversation

@qqaatw (Collaborator) commented Jun 12, 2022

Part of #79263

Before: When `dim` is `None` and `keepdim` is `0` (`False`), the exported reduced output has shape `[1]`.
After: Squeeze the output so that its shape is `[]`, matching PyTorch's behavior.
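
The behavior being matched can be illustrated on the PyTorch side (a minimal sketch; before this fix, exporting the full reduction to ONNX produced a `[1]`-shaped output instead of `[]`):

```python
import torch

x = torch.tensor([[True, False], [False, False]])

# Full reduction (dim omitted, keepdim=False): PyTorch returns a
# 0-dimensional tensor, i.e. shape [] -- the exported ONNX graph
# previously returned shape [1] here.
full = torch.any(x)
print(full.shape)  # torch.Size([])

# With an explicit dim and keepdim=True, the reduced dim is kept as 1.
kept = torch.any(x, dim=0, keepdim=True)
print(kept.shape)  # torch.Size([1, 2])

# torch.all behaves the same way.
print(torch.all(x).shape)  # torch.Size([])
```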

@facebook-github-bot (Contributor) commented Jun 12, 2022

✅ No Failures (0 Pending) as of commit 1dfec2b (more details on the Dr. CI page).

💚 Looks good so far! There are no failures yet.

@qqaatw qqaatw changed the title Fix shapes [ONNX] Fix any and all outputs' shape Jun 12, 2022
@mikaylagawarecki added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jun 13, 2022
@justinchuby added the module: onnx label (Related to torch.onnx) Jun 13, 2022
@qqaatw qqaatw force-pushed the fix_any_all_shape_onnx branch 2 times, most recently from 67e3fbb to ab599c5 Compare June 18, 2022 12:14
@titaiwangms (Collaborator) left a comment

💯

@qqaatw (Collaborator, Author) commented Jun 21, 2022

@pytorchbot rebase

@pytorchmergebot (Collaborator)

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot (Collaborator)

Successfully rebased fix_any_all_shape_onnx onto refs/remotes/origin/master; please pull locally before adding more changes (for example, via `git checkout fix_any_all_shape_onnx && git pull --rebase`)

@qqaatw (Collaborator, Author) commented Jun 21, 2022

@pytorchbot merge

@pytorchmergebot (Collaborator)

@pytorchbot successfully started a merge job. Check the current status here

@pytorchmergebot (Collaborator)

@qqaatw your PR has been successfully merged.

@github-actions

Hey @qqaatw.
You've committed this PR, but it does not have both a 'release notes: ...' and a 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc.), and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc.). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Jun 22, 2022
Summary:
Part of #79263

Before: When `dim` is `None` and `keepdim` is `0` (`False`), the exported reduced output has shape `[1]`.
After: Squeeze the output so that its shape is `[]`, matching PyTorch's behavior.

Pull Request resolved: #79371
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/4b52babcd972ffe92ee55412af1da131f61ffa06

Reviewed By: atalman

Differential Revision: D37343997

Pulled By: atalman

fbshipit-source-id: 07558d99636a1552a99ddfc53b4143c8eea0650c
miladm pushed a commit to miladm/pytorch that referenced this pull request Jun 27, 2022
Part of pytorch#79263

Before: When `dim` is `None` and `keepdim` is `0` (`False`), the exported reduced output has shape `[1]`.
After: Squeeze the output so that its shape is `[]`, matching PyTorch's behavior.

Pull Request resolved: pytorch#79371
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
justinchuby pushed a commit to justinchuby/pytorch that referenced this pull request Jul 27, 2022
Part of pytorch#79263

Before: When `dim` is `None` and `keepdim` is `0` (`False`), the exported reduced output has shape `[1]`.
After: Squeeze the output so that its shape is `[]`, matching PyTorch's behavior.

Pull Request resolved: pytorch#79371
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
pytorchmergebot pushed a commit that referenced this pull request Aug 10, 2022
Currently we don't have a dtype check when verifying the consistency between PyTorch and ONNX outputs. As a result, some dtype inconsistencies were found and reported: #77842 #77845

This is a POC.

Failed workflows:
- [linux-xenial-py3.7-clang7-onnx / test (default, 2, 2, linux.2xlarge)]
  - inconsistent shape
    - TestONNXRuntime_opset10.test_all (#79371)
    - TestONNXRuntime_opset10.test_any (#79371)
    - TestONNXRuntime_opset10.test_argmin_argmax (#79503)
    - TestONNXRuntime_opset10.test_hardshrink (#79695)
    - TestONNXRuntime_opset10.test_linalg_norm (#79506)
    - TestONNXRuntime_opset10.test_linalg_vector_norm (#79506)
    - TestONNXRuntime_opset10.test_prelu_scalar (#79846)
    - TestONNXRuntime_opset10.test_softshrink (#79695)
    - TestONNXRuntime_opset10.test_sum_empty_tensor (skipped)
    - TestONNXRuntime_opset10.test_tolist (skipped)
  - inconsistent dtype
    - test_arithmetic_prim_bool (skipped)
    - test_arithmeticOps_with_low_precision (skipped)
    - test_arithmetic_prim_float (skipped)
    - test_logical_and (#79339)
    - test_logical_or (#79339)
    - test_logical_xor (#79339)
    - test_pow (skipped)
    - test_primitive_input_floating (skipped)
    - test_quantize_per_tensor (#79690)
    - test_quantized_adaptive_avg_pool2d (#79690)
    - test_quantized_arithmetic (#79690)
    - test_quantized_arithmetic_qfunctional (#79690)
    - test_quantized_conv2d (#79690)
    - test_quantized_conv2d_relu (#79690)
    - test_quantized_flatten (#79690)
    - test_quantized_hardsigmoid (#79690)
    - test_quantized_hardswish (#79690)
    - test_quantized_linear (#79690)
    - test_quantized_sigmoid (#79690)
    - test_item (skipped)
    - test_full_like_value (skipped)
    - TestONNXRuntime_opset7.test_div_rounding_mode (skipped)
    - TestONNXRuntime_opset8.test_div_rounding_mode (skipped)
    - TestONNXRuntime_opset9.test_div_rounding_mode (skipped)
    - TestONNXRuntime_opset9_IRv4.test_div_rounding_mode (skipped)
    - test_outer (skipped)
    - test_symbolic_shape_inference_arange_2 (skipped)
Pull Request resolved: #79263
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
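
The kind of check this commit adds can be sketched as follows; `assert_outputs_match` is a hypothetical helper written for illustration, not the actual `torch.onnx` verification API:

```python
import numpy as np
import torch

def assert_outputs_match(torch_out, ort_out, rtol=1e-3, atol=1e-5):
    """Compare an ONNX Runtime output against the PyTorch reference,
    failing on shape and dtype mismatches, not only on values."""
    ref = torch_out.detach().cpu().numpy()
    if ref.shape != ort_out.shape:
        raise AssertionError(f"shape mismatch: {ref.shape} vs {ort_out.shape}")
    if ref.dtype != ort_out.dtype:
        raise AssertionError(f"dtype mismatch: {ref.dtype} vs {ort_out.dtype}")
    np.testing.assert_allclose(ref, ort_out, rtol=rtol, atol=atol)

# A ()-shaped PyTorch output vs a (1,)-shaped ONNX-style output now fails:
torch_result = torch.any(torch.tensor([True, False]))  # shape ()
onnx_style = np.array([True])                          # shape (1,)
try:
    assert_outputs_match(torch_result, onnx_style)
except AssertionError as e:
    print("caught:", e)
```

With only a value comparison after broadcasting, this pair would silently pass; checking shape and dtype explicitly is what surfaced the test failures listed above.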
@qqaatw (Collaborator, Author) commented Aug 10, 2022

@pytorchbot label "release notes: onnx" "topic: bug fixes"

@pytorch-bot pytorch-bot bot added release notes: onnx torch.onnx related changes that should show up in the release notes topic: bug fixes topic category labels Aug 10, 2022
facebook-github-bot pushed a commit that referenced this pull request Aug 10, 2022
Summary:
Currently we don't have a dtype check when verifying the consistency between PyTorch and ONNX outputs. As a result, some dtype inconsistencies were found and reported: #77842 #77845

This is a POC. (The failed-workflow list is identical to the commit message above.)

Pull Request resolved: #79263
Approved by: https://github.com/justinchuby, https://github.com/BowenBao

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/d9a7e93aaf3166e639ea413123bd6c38b9144adc

Reviewed By: seemethere

Differential Revision: D38585848

fbshipit-source-id: 9da98581ceec51142ae31d3f8a06f9f296a16b23
Labels
cla signed · Merged · module: onnx (Related to torch.onnx) · open source · release notes: onnx (torch.onnx related changes that should show up in the release notes) · topic: bug fixes (topic category) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

8 participants