Fix AllocatorConfig parse roundup division bug #165304
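The conversation below does not spell out the failure mode behind the "roundup division" mentioned in the title. As a hypothetical illustration only (not the actual patch), the sketch below shows the pattern such a bug typically involves: truncating integer division used where ceiling division is needed when rounding a parsed size up to a block multiple. All function names and values here are assumptions for illustration, not code from this PR.

```cpp
// Hypothetical illustration of a "roundup division" bug -- not the actual PyTorch fix.
#include <cstddef>
#include <iostream>

// Buggy pattern: plain integer division truncates, so sizes that are not an
// exact multiple of the block size get rounded *down* instead of up.
size_t roundup_truncating(size_t bytes, size_t block) {
  return (bytes / block) * block;
}

// Intended behavior: ceiling division rounds up to the next block multiple.
size_t roundup_ceiling(size_t bytes, size_t block) {
  return ((bytes + block - 1) / block) * block;
}

int main() {
  std::cout << roundup_truncating(1000, 512) << "\n";  // prints 512 (too small)
  std::cout << roundup_ceiling(1000, 512) << "\n";     // prints 1024
  return 0;
}
```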
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/165304
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
✅ No Failures as of commit c0bf0d1 with merge base 556fc09.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@albanD has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
…5129)
# Motivation
This PR aims to restore `AcceleratorAllocatorConfig` to avoid the potential regression mentioned in #160666 (comment). These code changes will be reverted in the following PR, #165304.
Pull Request resolved: #165129
Approved by: https://github.com/albanD
Hi @guangyey, thanks very much for taking the extra effort to split the original PR into the stack! I have completed the validation for the whole stack internally, and the performance regression that we previously observed is resolved on this new stack. Therefore, the new stack is in good shape to merge from our side, except for the bug I found in the second-to-last PR, #165298, as shown in the attached screenshot (our internal tool is broken right now, so I cannot import my fix to GitHub). In retrospect, do you have any insight into which of the changes between this diff and the original PR might be the root cause of this issue? Thanks again!
@pytorchbot merge |
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@joshuuuasu Thanks for your verification and fix. I have updated the bug fix in #165298.
* pytorch#165288
Pull Request resolved: pytorch#165304
Approved by: https://github.com/albanD
ghstack dependencies: pytorch#165288, pytorch#165289, pytorch#165291, pytorch#165298
Stack from ghstack (oldest at bottom):