fix inference_mode with torch.compile #101219
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/101219
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit a469373.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: 96a941f43bbc85d927166fd541d45fb57dc416d0 Pull Request resolved: #101219
// E.g. when running torch.compile under inference mode, we need to make sure that
// for any inputs that were created outside of inference mode (so they are not inference tensors),
// the functional wrappers that we wrap them with are also not inference tensors.
version_counter_ = value_.unsafeGetTensorImpl()->version_counter();
Wouldn't this access to the version_counter raise an error on inference Tensors?
Talked offline - we're not accessing the version counter, just straight up copying the struct onto the wrapper.
Also - we copy the dispatch keyset from the inner tensor onto the wrapper, so if the inner tensor has the Autograd dispatch key (because it was created outside of inference mode), then the wrapper will as well (even though it was created in inference mode).
Sounds ok!
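For context, here is a minimal Python sketch (not from the PR) of the invariant this review thread is about: a tensor created outside `inference_mode` keeps its version counter and Autograd dispatch keys even when it is later used inside an `inference_mode` region, while a tensor created inside is a true inference tensor. The functional wrapper built for the outer tensor during compilation has to report the same thing, which is what copying the version counter struct and dispatch keyset achieves.

```python
import torch

x = torch.ones(3)                  # created outside inference mode: has a version counter
with torch.inference_mode():
    y = torch.ones(3)              # created inside: a true inference tensor
    print(torch.is_inference(x))   # False - x keeps its autograd metadata
    print(torch.is_inference(y))   # True
```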
It looks like inference_mode wasn't playing well with functionalization. If you run torch.compile on a function, and the inputs to the function are tensors created outside of inference mode, then we need to make sure that when we create functional tensor wrappers for those inputs during compilation, those wrappers properly mirror whether or not the original tensor is an inference tensor. Hopefully fixes #101151.
ghstack-source-id: 7c24b4ff106383a8840c86c6a24a57eb92a0676f Pull Request resolved: #101219
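As a rough illustration of the scenario the description refers to (a hedged sketch, not the exact repro from #101151): the input is created before entering `inference_mode`, and only the compiled call happens inside it.

```python
import torch

def f(x):
    return x.cos() + 1

x = torch.randn(8)             # input created outside inference mode

compiled_f = torch.compile(f)
with torch.inference_mode():
    out = compiled_f(x)        # functionalization wraps x during compilation; with this fix
                               # the wrapper mirrors the fact that x is not an inference tensor
print(out.shape)
```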
Thank you for fixing this!!! What a pain haha
Thank you for fixing this!!! What a pain haha
Thank you for fixing this!!! What a pain haha
Thank you for fixing this!!! What a pain haha
Wow Ed is really thankful for this fix.
ghstack-source-id: 5bc15b0c2ec4cae3db00f095e875fbb0bee21ddc Pull Request resolved: #101219
@pytorchbot merge
Merge failed. Reason: This PR needs a label. To add a label, you can comment to pytorchbot. Details for Dev Infra team: raised by workflow job.
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot revert -c "nosignal" -m "breaking inductor tests"
@pytorchbot successfully started a revert job. Check the current status here.
@bdhirsh your PR has been successfully reverted.
This reverts commit 11f7ae1. Reverted #101219 on behalf of https://github.com/PaliC due to breaking inductor tests.
This reverts commit 1fabee3. Reverted #100570 on behalf of https://github.com/PaliC due to breaking inductor tests along with #101219.
Fixes #100977. This will hopefully fix this error (from issue #99616). This PR fixes an internal model: we were running an inductor inference graph, but `torch.is_grad_enabled()` was True, causing us to error inside of the inference graph when we encountered an out= operator. I haven't been able to create a smaller repro yet - before landing this, I want to create one to convince myself of why we need to separate out these guards. Pull Request resolved: #100570. Approved by: https://github.com/ezyang
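A hedged sketch (not the internal repro) of the kind of grad-mode dependence those guards have to capture: a graph traced while grad is disabled should not simply be reused once `torch.is_grad_enabled()` is True again.

```python
import torch

@torch.compile
def step(x):
    return x + 1

x = torch.randn(4)

with torch.no_grad():
    step(x)                            # traced while grad mode is off
    print(torch.is_grad_enabled())     # False inside the no_grad region

print(torch.is_grad_enabled())         # True again here; blindly reusing the graph
step(x)                                # compiled under no_grad could be unsound, hence
                                       # the need to keep the grad-mode guards separate
```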
ghstack-source-id: fc724bccbfeb8315dc90599b7bd5e8299a11a652 Pull Request resolved: #101219
ghstack-source-id: 4aef060e6ca8055b49b354d2f8f1ace49f962a01 Pull Request resolved: #101219
ghstack-source-id: 2061c74c08bd4a763a31ab973b1e113a639bd840 Pull Request resolved: #101219
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.