Replace CUDA 11.1 Linux CI with CUDA 11.2 #51905
Conversation
Force-pushed from ac30156 to e633c90.
💊 CI failures summary and remediations

As of commit ca64aa4 (more details on the Dr. CI page):

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

Job | Step | Action
---|---|---
 | Build | 🔁 rerun
 | Report results | 🔁 rerun
 | Build | 🔁 rerun
 | Build | 🔁 rerun
This comment was automatically generated by Dr. CI.
Force-pushed from 3071d7a to ca64aa4.
LGTM. The Windows CI failures seem unrelated?
self.assertEqual(p1, p2)

@unittest.skipIf(True, "test does not pass for CUDA 11.2")
nit: Link the GitHub issue here in a comment, as in #51598?
I added the link to the issue in the description of this PR. Interestingly, these same tests now fail on Windows as well.
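As a hedged sketch of what a version-gated skip could look like (not the code actually landed in this PR, which skips unconditionally): `torch.version.cuda` reports the CUDA version PyTorch was built against, and the skip reason can point at the tracking issue. The "11.2" prefix check and the issue reference are assumptions based on this PR's description.

```python
import unittest

import torch

# Sketch only: gate the skip on the detected CUDA build version instead of
# skipping unconditionally. The "11.2" prefix and the issue reference are
# assumptions based on this PR, not the merged code.
ON_CUDA_11_2 = torch.version.cuda is not None and torch.version.cuda.startswith("11.2")

class OptimSkipExample(unittest.TestCase):
    @unittest.skipIf(ON_CUDA_11_2, "fails on CUDA 11.2, see pytorch/pytorch#51992")
    def test_adam_placeholder(self):
        # Placeholder body standing in for the real optimizer test.
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
```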
@janeyx99 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Blocked by #52054.
Summary: This fixes an issue (currently blocking #51905) where the test time regression reporting step will fail if none of the most recent `master` ancestors have any reports in S3 (e.g. if a new job is added).

Pull Request resolved: #52054

Test Plan:
```
python test/test_testing.py
```

Reviewed By: walterddr

Differential Revision: D26369507

Pulled By: samestep

fbshipit-source-id: 4c4e1e290cb943ce8fcdadacbf51d66b31c3262a
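The guard described in that summary could look roughly like the sketch below. The function and parameter names here are hypothetical illustrations, not the actual test_testing.py / stats-reporting code.

```python
from typing import Dict, Optional

# Hypothetical illustration of the fix described above: if none of the recent
# `master` ancestors has a test time report in S3, skip the regression
# comparison instead of failing the step. Names are made up for this sketch.
def pick_baseline(reports_by_commit: Dict[str, Optional[dict]]) -> Optional[dict]:
    """Return the newest ancestor report that exists, or None if there is none."""
    for _sha, report in reports_by_commit.items():  # assumed newest-first ordering
        if report is not None:
            return report
    return None

def report_regressions(current: dict, reports_by_commit: Dict[str, Optional[dict]]) -> None:
    baseline = pick_baseline(reports_by_commit)
    if baseline is None:
        # Previously this path would error out; now it degrades gracefully.
        print("No baseline reports found in S3 for recent master ancestors; skipping comparison.")
        return
    # ... compare `current` against `baseline` and report slow tests here ...
```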
Summary: Adding 11.2 to CI with BUILD_SPLIT_CUDA enabled. Disabled the following tests as they were failing in test_optim.py: test_adadelta, test_adam, test_adamw, test_multi_tensor_optimizers, test_rmsprop (Issue tracking that is here: pytorch#51992)

Pull Request resolved: pytorch#51905

Reviewed By: VitalyFedyunin

Differential Revision: D26368575

Pulled By: janeyx99

fbshipit-source-id: 31612c7d04d51afb3f18956e43dc7f7db8a91749
Adding 11.2 to CI with BUILD_SPLIT_CUDA enabled.
Disabled the following tests as they were failing in test_optim.py:
test_adadelta
test_adam
test_adamw
test_multi_tensor_optimizers
test_rmsprop
(Tracking issue: #51992; a sketch for running just these tests locally follows below.)
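To reproduce the failures locally on a CUDA 11.2 build, one option is to run only the listed tests. The sketch below assumes it is run from the pytorch/test directory and that the test class in test_optim.py is named TestOptim; both are assumptions, so check the file before relying on them.

```python
import unittest

# Hedged sketch for running only the optimizer tests disabled in this PR.
# Assumes the current directory is pytorch/test and that test_optim.py
# defines a TestOptim class (an assumption for this example).
from test_optim import TestOptim

DISABLED = [
    "test_adadelta",
    "test_adam",
    "test_adamw",
    "test_multi_tensor_optimizers",
    "test_rmsprop",
]

if __name__ == "__main__":
    suite = unittest.TestSuite(TestOptim(name) for name in DISABLED)
    unittest.TextTestRunner(verbosity=2).run(suite)
```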