
[ONNX] Fix bug in unfold symbolic (#50504) #51515

Closed · wants to merge 3 commits

Conversation

@BowenBao (Collaborator) commented Feb 2, 2021

Stack from ghstack:

Fix bug in unfold symbolic

Differential Revision: D26203113
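For context (not part of the PR itself): `Tensor.unfold(dimension, size, step)` returns all contiguous slices of length `size` along the given dimension, with starts `step` apart, and the ONNX symbolic must reproduce this slicing with ONNX ops. A minimal pure-Python sketch of the 1-D semantics, using a hypothetical helper name `unfold_1d`:

```python
def unfold_1d(xs, size, step):
    """Pure-Python sketch of torch.Tensor.unfold along one dimension:
    every contiguous window of length `size`, with starts `step` apart."""
    n = len(xs)
    if size > n:
        return []
    # Window starts run from 0 to n - size inclusive, stepping by `step`.
    return [xs[i:i + size] for i in range(0, n - size + 1, step)]

# Mirrors torch.arange(1., 8.).unfold(0, 2, 2) -> [[1, 2], [3, 4], [5, 6]]
print(unfold_1d([1, 2, 3, 4, 5, 6, 7], size=2, step=2))
```

An exported model's `unfold` must produce exactly these windows for every valid `size`/`step` combination, which is what the symbolic fix addresses.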

Fix bug in unfold symbolic

[ghstack-poisoned]
@facebook-github-bot (Contributor) commented Feb 2, 2021

💊 CI failures summary and remediations

As of commit 5959531 (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 1/2 non-CircleCI failure(s)

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test2 (1/1)

Step: "Run tests"

Feb 03 23:47:53 RuntimeError: CUDA error: an illegal memory access was encountered
Feb 03 23:47:53   File "test_optim.py", line 1994, in test_update_bn_dnn
Feb 03 23:47:53     self._test_update_bn(dnn.cuda(), dl_x, dl_xy, True)
Feb 03 23:47:53   File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in cuda
Feb 03 23:47:53     return self._apply(lambda t: t.cuda(device))
Feb 03 23:47:53   File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 387, in _apply
Feb 03 23:47:53     module._apply(fn)
Feb 03 23:47:53   File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 409, in _apply
Feb 03 23:47:53     param_applied = fn(param)
Feb 03 23:47:53   File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in <lambda>
Feb 03 23:47:53     return self._apply(lambda t: t.cuda(device))
Feb 03 23:47:53 RuntimeError: CUDA error: an illegal memory access was encountered
Feb 03 23:47:53 
Feb 03 23:47:53 ----------------------------------------------------------------------
Feb 03 23:47:53 Ran 104 tests in 39.948s
Feb 03 23:47:53 
Feb 03 23:47:53 FAILED (errors=13)
Feb 03 23:47:53 
Feb 03 23:47:53 Generating XML reports...
Feb 03 23:47:53 Generated XML report: test-reports/dist-gloo/TEST-TestLRScheduler-20210203234713.xml
Feb 03 23:47:53 Generated XML report: test-reports/dist-gloo/TEST-TestOptim-20210203234713.xml
Feb 03 23:47:53 Generated XML report: test-reports/dist-gloo/TEST-TestSWAUtils-20210203234713.xml

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@facebook-github-bot (Contributor): @SplitInfinity merged this pull request in 8dd9fef.

BowenBao added a commit to BowenBao/pytorch that referenced this pull request Feb 5, 2021
Fix bug in unfold symbolic

ghstack-source-id: 51f0c4d4aff14a82f225f79e2b704b3d56e158a8
Pull Request resolved: pytorch#51515
@facebook-github-bot facebook-github-bot deleted the gh/BowenBao/12/head branch February 8, 2021 15:20
3 participants