Enable UFMT on test/test_cuda*.py
#124352
Conversation
🔗 Helpful Links: 🧪 see artifacts and rendered test results at hud.pytorch.org/pr/124352
Note: links to docs will display an error until the docs builds have completed. ✅ No failures as of commit 5dcc005 with merge base 25c0d3f. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 6e629e5 to 83d4510.
Need to resolve the merge conflict.
Is clean.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 mandatory check(s) failed. Dig deeper by viewing the failures on hud.
Seems to be failing lint.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 mandatory check(s) failed. Dig deeper by viewing the failures on hud.
Merge conflict now.
lg
@pytorchbot merge
Merge failed. Reason: 1 mandatory check(s) are pending/not yet run. Dig deeper by viewing the pending checks on hud.
@pytorchbot merge -f "easycla go away little lily wants to play" |
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 mandatory check(s) are pending/not yet run. Dig deeper by viewing the pending checks on hud.
Sorry about all the conflicts. Rebase it again and that should resolve the EasyCLA problem (which was an outage on GitHub).
@ezyang You are welcome. All conflicts are resolved, see eec97ab. Diff:

```diff
diff --git a/test/test_cuda.py b/test/test_cuda.py
index fca8039a62..c9b6063322 100644
--- a/test/test_cuda.py
+++ b/test/test_cuda.py
@@ -2658,10 +2658,8 @@ exit(2)
             # These stat checks are specific to the native allocator.
             if share_mem != "Don't share":
                 self.assertEqual(
-                    reserved_no_sharing
-                    - torch.cuda.memory_stats()[
-                        "reserved_bytes.all.current"
-                    ],  # noqa: F821
+                    reserved_no_sharing  # noqa: F821
+                    - torch.cuda.memory_stats()["reserved_bytes.all.current"],
                     kSmallBuffer,
                 )
             else:
```
🤔 Still conflicted.
Fixed :p. Diff:

```
+ lintrunner -a --take UFMT --all-files
Warning: Could not find a lintrunner config at: '.lintrunner.private.toml'. Continuing without using configuration file.
ok No lint issues.
Successfully applied all patches.
+ git diff 5dcc005748241f6f53d86b02b7b5169f19f692b9
diff --git a/test/test_cuda.py b/test/test_cuda.py
index 24acfb0dc2..96c62a408a 100644
--- a/test/test_cuda.py
+++ b/test/test_cuda.py
@@ -2663,8 +2663,10 @@ exit(2)
             # These stat checks are specific to the native allocator.
             if share_mem != "Don't share":
                 self.assertEqual(
-                    reserved_no_sharing  # noqa: F821
-                    - torch.cuda.memory_stats()["reserved_bytes.all.current"],
+                    reserved_no_sharing
+                    - torch.cuda.memory_stats()[
+                        "reserved_bytes.all.current"
+                    ],  # noqa: F821
                     kSmallBuffer,
                 )
             else:
```
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Part of: #123062

Ran lintrunner on:
- test/test_cuda.py
- test/test_cuda_expandable_segments.py
- test/test_cuda_multigpu.py
- test/test_cuda_nvml_based_avail.py
- test/test_cuda_primary_ctx.py
- test/test_cuda_sanitizer.py
- test/test_cuda_trace.py

Detail:

```bash
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```

Pull Request resolved: #124352
Approved by: https://github.com/ezyang
cc @albanD