
Enable UFMT on test/test_cuda*.py #124352

Closed
wants to merge 8 commits

Conversation

shink
Contributor

@shink shink commented Apr 18, 2024

Part of: #123062

Ran lintrunner on:

  • test/test_cuda.py
  • test/test_cuda_expandable_segments.py
  • test/test_cuda_multigpu.py
  • test/test_cuda_nvml_based_avail.py
  • test/test_cuda_primary_ctx.py
  • test/test_cuda_sanitizer.py
  • test/test_cuda_trace.py

Detail:

```bash
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```
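For reference, which files the UFMT linter touches is controlled by the repository's `.lintrunner.toml`. A hypothetical sketch of such an entry follows — the field names match lintrunner's config schema, but the patterns and command here are illustrative, not copied from the PyTorch repo:

```toml
# Illustrative lintrunner entry; the real UFMT entry in
# pytorch/.lintrunner.toml differs in its exact patterns and command.
[[linter]]
code = 'UFMT'
include_patterns = [
    'test/test_cuda*.py',
]
command = [
    'python3',
    'tools/linter/adapters/ufmt_linter.py',
    '--',
    '@{{PATHSFILE}}',
]
is_formatter = true
```

With an entry like this in place, `lintrunner -a --take UFMT` formats only the files matched by `include_patterns`.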

cc @albanD


pytorch-bot bot commented Apr 18, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/124352

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 5dcc005 with merge base 25c0d3f:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@shink shink force-pushed the style/ufmt/test/test_cuda branch from 6e629e5 to 83d4510 on April 18, 2024 06:59
@bdhirsh bdhirsh requested a review from ezyang April 21, 2024 16:16
@bdhirsh bdhirsh added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Apr 21, 2024
@ezyang
Contributor

ezyang commented Apr 21, 2024

need to resolve merge conflict

@ezyang
Contributor

ezyang commented Apr 22, 2024

is clean

@ezyang
Contributor

ezyang commented Apr 22, 2024

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Apr 22, 2024
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) failed. The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team Raised by workflow job

Failing merge rule: Core Maintainers

@ezyang
Contributor

ezyang commented Apr 23, 2024

seems to be failing lint

@shink
Contributor Author

shink commented Apr 23, 2024

@ezyang fixed, please check it out: 6e0e19d

@ezyang
Contributor

ezyang commented Apr 23, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) failed. The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team Raised by workflow job

Failing merge rule: Core Maintainers

@ezyang
Contributor

ezyang commented Apr 23, 2024

merge conflict now

@ezyang
Contributor

ezyang commented Apr 24, 2024

```diff
diff --git b/test/test_cuda.py a/test/test_cuda.py
index 27549eb82dd..9e9f6dc8cc1 100644
--- b/test/test_cuda.py
+++ a/test/test_cuda.py
@@ -2650,10 +2650,8 @@ exit(2)
                 # These stat checks are specific to the native allocator.
                 if share_mem != "Don't share":
                     self.assertEqual(
-                        reserved_no_sharing
-                        - torch.cuda.memory_stats()[
-                            "reserved_bytes.all.current"
-                        ],  # noqa: F821
+                        reserved_no_sharing  # noqa: F821
+                        - torch.cuda.memory_stats()["reserved_bytes.all.current"],
                         kSmallBuffer,
                     )
                 else:
```

lg

@ezyang
Contributor

ezyang commented Apr 24, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) are pending/not yet run. The first few are:

  • EasyCLA

Dig deeper by viewing the pending checks on hud

Details for Dev Infra team Raised by workflow job

Failing merge rule: Core Maintainers

@ezyang
Contributor

ezyang commented Apr 24, 2024

@pytorchbot merge -f "easycla go away little lily wants to play"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) are pending/not yet run. The first few are:

  • EasyCLA

Dig deeper by viewing the pending checks on hud

Details for Dev Infra team Raised by workflow job

Failing merge rule: Core Maintainers

@ezyang
Contributor

ezyang commented Apr 25, 2024

Sorry about all the conflicts. Rebase it again and that should resolve the EasyCLA problem (which was a GitHub outage).

@shink
Contributor Author

shink commented Apr 25, 2024

@ezyang No worries. All conflicts are resolved, see eec97ab

diff:

```diff
diff --git a/test/test_cuda.py b/test/test_cuda.py
index fca8039a62..c9b6063322 100644
--- a/test/test_cuda.py
+++ b/test/test_cuda.py
@@ -2658,10 +2658,8 @@ exit(2)
                 # These stat checks are specific to the native allocator.
                 if share_mem != "Don't share":
                     self.assertEqual(
-                        reserved_no_sharing
-                        - torch.cuda.memory_stats()[
-                            "reserved_bytes.all.current"
-                        ],  # noqa: F821
+                        reserved_no_sharing  # noqa: F821
+                        - torch.cuda.memory_stats()["reserved_bytes.all.current"],
                         kSmallBuffer,
                     )
                 else:
```

@ezyang
Contributor

ezyang commented Apr 25, 2024

🤔 still conflicted

@shink
Contributor Author

shink commented Apr 25, 2024

fixed :p

diff:

```
+ lintrunner -a --take UFMT --all-files
Warning: Could not find a lintrunner config at: '.lintrunner.private.toml'. Continuing without using configuration file.
ok No lint issues.
Successfully applied all patches.
+ git diff 5dcc005748241f6f53d86b02b7b5169f19f692b9
diff --git a/test/test_cuda.py b/test/test_cuda.py
index 24acfb0dc2..96c62a408a 100644
--- a/test/test_cuda.py
+++ b/test/test_cuda.py
@@ -2663,8 +2663,10 @@ exit(2)
                 # These stat checks are specific to the native allocator.
                 if share_mem != "Don't share":
                     self.assertEqual(
-                        reserved_no_sharing  # noqa: F821
-                        - torch.cuda.memory_stats()["reserved_bytes.all.current"],
+                        reserved_no_sharing
+                        - torch.cuda.memory_stats()[
+                            "reserved_bytes.all.current"
+                        ],  # noqa: F821
                         kSmallBuffer,
                     )
                 else:
```
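For context, the assertion that keeps getting re-wrapped above compares the allocator's reserved-byte count with and without sharing. A minimal self-contained sketch of that arithmetic follows — the numbers and the `memory_stats` stand-in are invented for illustration, and `kSmallBuffer` is assumed to be 2 MiB rather than read from the real allocator:

```python
# Stand-in for torch.cuda.memory_stats(); these values are invented
# for illustration, not taken from a real CUDA run.
def memory_stats():
    return {"reserved_bytes.all.current": 4 * 1024 * 1024}

# Small-pool buffer size assumed by the sketch (2 MiB).
kSmallBuffer = 2 * 1024 * 1024

# Reserved bytes "measured" earlier, before memory was shared.
reserved_no_sharing = 6 * 1024 * 1024

# The check from the diff: sharing should release exactly one
# small-pool buffer's worth of reserved memory.
delta = reserved_no_sharing - memory_stats()["reserved_bytes.all.current"]
assert delta == kSmallBuffer
print(delta)  # 2097152
```

The `# noqa: F821` in the real test suppresses flake8's undefined-name warning, which is why its placement keeps moving with the formatter: the comment only works on the physical line that triggers the warning.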

@ezyang
Contributor

ezyang commented Apr 25, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorch-bot bot pushed a commit that referenced this pull request May 3, 2024
Part of: #123062

Ran lintrunner on:

- test/test_cuda.py
- test/test_cuda_expandable_segments.py
- test/test_cuda_multigpu.py
- test/test_cuda_nvml_based_avail.py
- test/test_cuda_primary_ctx.py
- test/test_cuda_sanitizer.py
- test/test_cuda_trace.py

Detail:

```bash
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```

Pull Request resolved: #124352
Approved by: https://github.com/ezyang
Labels: ciflow/trunk, Merged, open source, skip-pr-sanity-checks, topic: not user facing, triaged
5 participants