Remove unused parameter on CUDA AllocParams #159159
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159159
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure
As of commit 125f772 with merge base 51eb41a.
NEW FAILURE - The following job has failed:
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased |
        BlockPool* pool,
-       size_t alloc_size,
-       DeviceStats& stats)
+       size_t alloc_size)
Is this part of our public API? How did you check that there is no usage of it in the wild?
It is defined in a .cpp file and not exposed as public API.
Ok!
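For readers skimming the diff, here is a minimal self-contained C++ sketch of the change. The placeholder `BlockPool`/`DeviceStats` types and the reduced member list are assumptions for illustration; the real struct lives in CUDACachingAllocator.cpp and has more fields. The point is simply that the constructor used to accept a `DeviceStats&` it never stored or read.

```cpp
#include <cstddef>

// Placeholder types standing in for the real PyTorch definitions.
struct BlockPool {};
struct DeviceStats {};

struct AllocParams {
  // Before this PR the constructor also took a `DeviceStats&` argument, but
  // the reference was never stored or read, so the parameter is dropped.
  AllocParams(BlockPool* pool, std::size_t alloc_size)
      : pool(pool), alloc_size(alloc_size) {}

  BlockPool* pool;
  std::size_t alloc_size;
};

int main() {
  BlockPool pool;
  // Call sites simply stop passing the stats reference; nothing else changes.
  AllocParams params(&pool, 1024);
  return params.alloc_size == 1024 ? 0 : 1;
}
```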
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 2 checks: pull / linux-jammy-py3_9-clang9-xla / test (xla, 1, 1, linux.12xlarge, unstable), trunk / linux-jammy-rocm-py3.10 / test (distributed, 1, 1, linux.rocm.gpu.4). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
# Motivation
While refactoring the caching allocator, I noticed that the `AllocParams` constructor on CUDA had an unused parameter. This change removes that unused argument to avoid potential confusion.
# Additional Context
I noticed that `AllocParams` is defined in a .cpp file, so it should be safe to make this change.
Pull Request resolved: #159159
Approved by: https://github.com/cyyever, https://github.com/albanD
# Motivation
While refactoring the caching allocator, I noticed that the `ExpandableSegment` constructor on CUDA had an unused parameter. This change removes that unused argument to avoid potential confusion.
# Additional Context
I noticed that `ExpandableSegment` is defined in a .cpp file, so it should be safe to make this change.
Pull Request resolved: #159356
Approved by: https://github.com/ngimel, https://github.com/albanD
ghstack dependencies: #159159
Stack from ghstack (oldest at bottom):

Motivation

While refactoring the caching allocator, I noticed that the `AllocParams` constructor on CUDA had an unused parameter. This change removes that unused argument to avoid potential confusion.

Additional Context

I noticed that `AllocParams` is defined in a .cpp file, so it should be safe to make this change.
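To make the "defined in a .cpp file" argument concrete, here is a minimal sketch; the anonymous namespace and helper names are illustrative assumptions, not the actual layout of CUDACachingAllocator.cpp. A type that exists only inside one translation unit cannot be constructed by code outside that file, so changing its constructor signature is not a public-API change.

```cpp
#include <cstddef>

// Everything in this anonymous namespace has internal linkage, so no other
// translation unit (and no user of installed headers) can name AllocParams.
namespace {

struct AllocParams {
  explicit AllocParams(std::size_t alloc_size) : alloc_size(alloc_size) {}
  std::size_t alloc_size;
};

}  // namespace

// Only free functions like this one are visible to callers, so changing the
// AllocParams constructor signature cannot break external code.
std::size_t allocate_rounded(std::size_t requested) {
  AllocParams params(requested < 512 ? 512 : requested);
  return params.alloc_size;
}

int main() {
  return allocate_rounded(100) == 512 ? 0 : 1;
}
```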