
Conversation


pytorch-bot commented Oct 13, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/165304

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ No Failures

As of commit c0bf0d1 with merge base 556fc09:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

[ghstack-poisoned]
@albanD
Collaborator

albanD commented Oct 13, 2025

@albanD has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

guangyey added a commit that referenced this pull request Oct 14, 2025
ghstack-source-id: 076e9a5
Pull Request resolved: #165304
guangyey added a commit that referenced this pull request Oct 14, 2025
ghstack-source-id: 7a96f84
Pull Request resolved: #165304
guangyey added a commit that referenced this pull request Oct 14, 2025
ghstack-source-id: 7a96f84
Pull Request resolved: #165304
[ghstack-poisoned]
pytorchmergebot pushed a commit that referenced this pull request Oct 16, 2025
…5129)

# Motivation
This PR aims to restore `AcceleratorAllocatorConfig` to avoid the potential regression mentioned in #160666 (comment).
These code changes will be reverted in the follow-up PR #165304.
Pull Request resolved: #165129
Approved by: https://github.com/albanD
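
For background on the knob being restored: `AcceleratorAllocatorConfig` is the device-generic C++ class that parses caching-allocator settings, and it is not exposed to Python directly. Below is a minimal sketch of how such settings are commonly exercised, assuming a CUDA build; `max_split_size_mb` is a documented `PYTORCH_CUDA_ALLOC_CONF` option, while `torch.cuda.memory._set_allocator_settings` is a private helper whose availability may vary across releases.

```python
import os

# The caching allocator reads PYTORCH_CUDA_ALLOC_CONF when it is first
# initialized, so the variable must be set before the first CUDA allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

if torch.cuda.is_available():
    x = torch.randn(1024, device="cuda")  # first allocation picks up the setting

    # Private helper (assumption: present in this build); feeds a settings
    # string through the same config-parsing path at runtime.
    torch.cuda.memory._set_allocator_settings("max_split_size_mb:256")
```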
guangyey added a commit that referenced this pull request Oct 17, 2025
ghstack-source-id: 63a7e0e
Pull Request resolved: #165304
guangyey added a commit that referenced this pull request Oct 17, 2025
ghstack-source-id: 6e9e38f
Pull Request resolved: #165304
[ghstack-poisoned]
@joshuuuasu
Contributor

Hi @guangyey, thanks very much for taking the extra effort to split the original PR into this stack! I have completed validation of the whole stack internally, and the performance regression we previously observed is resolved on the new stack. The stack is therefore in good shape to merge from our side, except for a bug I found in the second-to-last PR, #165298, shown in the attached screenshot (our internal tool is broken right now, so I cannot import my fix to GitHub).
[Screenshot: proposed fix, 2025-10-17 5:28 PM]

In retrospect, do you have any insight into which changes between this diff and the original PR might be the root cause of this issue? Thanks again!

@albanD
Collaborator

albanD commented Oct 19, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@guangyey
Collaborator Author

@joshuuuasu Thanks for your verification and fix. I have updated the bug fix on #165298.
Actually, I’m not sure why the regression occurred or why it went away. I just tried to preserve the original behavior as much as possible in this new stack.

Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Oct 21, 2025
…orch#165129)

# Motivation
This PR aims to restore `AcceleratorAllocatorConfig` to avoid the potential regression mentioned in pytorch#160666 (comment).
These code changes will be reverted in the follow-up PR pytorch#165304.
Pull Request resolved: pytorch#165129
Approved by: https://github.com/albanD
Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Oct 21, 2025