
[ONNX] Use parameter values in onnx shape inference #49706

Merged: 9 commits into pytorch:onnx_ms_1, Jan 14, 2021

Conversation

BowenBao (Collaborator)
Adds an additional run of ONNX shape inference after constant folding, since initializers may have changed in a way that affects shape inference.

@facebook-github-bot added the labels cla signed and oncall: jit (JIT oncall triage queue) on Dec 21, 2020
facebook-github-bot (Contributor) commented Dec 21, 2020

💊 CI failures summary and remediations

As of commit 82d3a69 (more details on the Dr. CI page):



❄️ 1 failure tentatively classified as flaky

but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test2 (1/1)

Step: "Run tests" ❄️

Jan 14 08:12:39 RuntimeError: CUDA error: an illegal memory access was encountered
Jan 14 08:12:39                        ~~~~ <--- HERE
Jan 14 08:12:39 RuntimeError: CUDA error: an illegal memory access was encountered
Jan 14 08:12:39 
Jan 14 08:12:39 
Jan 14 08:12:39 ======================================================================
Jan 14 08:12:39 ERROR [0.203s]: test_where_and_typing (__main__.TestTEFuser)
Jan 14 08:12:39 ----------------------------------------------------------------------
Jan 14 08:12:39 Traceback (most recent call last):
Jan 14 08:12:39   File "test_jit_fuser_te.py", line 1142, in test_where_and_typing
Jan 14 08:12:39     x = torch.randn(4, 4, dtype=torch.double, device=device)
Jan 14 08:12:39 RuntimeError: CUDA error: an illegal memory access was encountered
Jan 14 08:12:39 
Jan 14 08:12:39 ======================================================================
Jan 14 08:12:39 ERROR [0.174s]: test_zero_element_tensors_cuda (__main__.TestTEFuser)
Jan 14 08:12:39 ----------------------------------------------------------------------
Jan 14 08:12:39 Traceback (most recent call last):
Jan 14 08:12:39   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 888, in wrapper
Jan 14 08:12:39     method(*args, **kwargs)
Jan 14 08:12:39   File "test_jit_fuser_te.py", line 178, in test_zero_element_tensors_cuda
Jan 14 08:12:39     self._test_zero_element_tensors(device="cuda")
Jan 14 08:12:39   File "test_jit_fuser_te.py", line 174, in _test_zero_element_tensors

ci.pytorch.org: 1 failed



@neginraoof (Contributor) left a comment:
LGTM. Thanks!

@BowenBao BowenBao merged commit eaad7f2 into pytorch:onnx_ms_1 Jan 14, 2021
BowenBao added a commit that referenced this pull request Jan 21, 2021
Adds an additional run of ONNX shape inference after constant folding, since initializers may have changed in a way that affects shape inference.

[ghstack-poisoned]
BowenBao added further commits referencing this pull request on Jan 21, 22, 25, and 26, 2021, each repeating the same description ([ghstack-poisoned]).

Differential Revisions: [D26023935](https://our.internmc.facebook.com/intern/diff/D26023935), [D26050881](https://our.internmc.facebook.com/intern/diff/D26050881)
facebook-github-bot pushed a commit that referenced this pull request Jan 28, 2021
Summary:
Pull Request resolved: #50905

Adds an additional run of ONNX shape inference after constant folding, since initializers may have changed in a way that affects shape inference.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050881

Pulled By: SplitInfinity

fbshipit-source-id: 9e5d69c52b647133cd3a0781988e2ad1d1a9c09d
BowenBao added a commit to BowenBao/pytorch that referenced this pull request Jan 28, 2021
Adds an additional run of ONNX shape inference after constant folding, since initializers may have changed in a way that affects shape inference.

ghstack-source-id: 8fecc35f33504d1634135b78235f195d998fc95e
Pull Request resolved: pytorch#50905

fix clang-format
Labels: cla signed, oncall: jit, open source

4 participants