fix inference_mode with torch.compile (#101219)
It looks like `inference_mode` wasn't playing well with functionalization. If you run `torch.compile` on a function, and the inputs to the function are tensors created outside of inference mode, then we need to make sure that when we create functional tensor wrappers for those inputs during compilation, those functional wrappers properly mirror whether or not the original tensor is an inference tensor.

Hopefully fixes #101151

Pull Request resolved: #101219
Approved by: https://github.com/albanD, https://github.com/ezyang
Commit: 11f7ae1
Reverted #101219 on behalf of https://github.com/PaliC due to breaking inductor tests