Fix nested int caching for NJT inference tensor #133061
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/133061. Note: links to docs will display an error until the docs builds have completed. ❌ 1 new failure as of commit 261c8e6 with merge base a8f0979.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
torch/_subclasses/fake_tensor.py (outdated)

        setattr(obj, self._memo_vc(obj), None)
        setattr(obj, self._memo_epoch(obj), None)
-   elif not torch.is_inference_mode_enabled():
+   elif not torch.is_inference_mode_enabled() or self._opt_vc:
I wonder why we don't just check obj.is_inference() here.
Yeah, is_inference seems better.
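For context, a minimal standalone demo (illustrative, not the PR's code) of the distinction being pointed at: a tensor created under inference mode remains an inference tensor after the mode exits, so checking the tensor itself is more precise than checking only the global mode.

```python
import torch

# Illustrative demo, not PR code: inference-ness is a property of the
# tensor itself, independent of whether inference mode is currently on.
with torch.inference_mode():
    t = torch.ones(3)

assert not torch.is_inference_mode_enabled()  # global mode is off again
assert t.is_inference()                       # but t is still an inference tensor

# Inference tensors do not track a version counter, so reading
# t._version raises rather than returning a stale value.
try:
    _ = t._version
except RuntimeError as e:
    print("no version counter:", e)
```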
In general, I don't think you should need an opt_vc. Instead, you check whether you have an inference tensor / are in inference mode before you do a vc test. If you are, you skip the vc test.
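A rough sketch of this proposal, using a hypothetical version_unchanged helper (the real memoization logic in torch/_subclasses/fake_tensor.py differs in detail):

```python
import torch
from typing import Optional

def version_unchanged(t: torch.Tensor, memo_version: Optional[int]) -> bool:
    # Hypothetical helper, not the actual fake_tensor.py code: gate the
    # version-counter comparison on the tensor/mode state first, since
    # inference tensors track no version counter at all.
    if t.is_inference() or torch.is_inference_mode_enabled():
        # Skip the vc test entirely; conservatively treat the memoized
        # value as unusable, because mutations cannot be detected here.
        return False
    return memo_version is not None and t._version == memo_version
```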
Hmm, I feel like I don't completely understand. Are you proposing that opt_vc always be True? In my head, the idea behind the logic today is that we want to be conservative in inference mode, e.g. always assume that a mutation happened. That makes sense because otherwise we could incorrectly infer that two nonzero calls have the same shape when they don't. Do we no longer want to do that?
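To make that concern concrete, an illustrative (non-PR) example of the nonzero case: under inference mode an in-place write is not observable through a version counter, yet it changes a data-dependent output shape.

```python
import torch

# Illustrative only: the in-place write below bumps no observable
# version counter under inference mode, but it changes how many rows
# nonzero returns, so assuming "unchanged" would cache a wrong shape.
with torch.inference_mode():
    x = torch.tensor([1.0, 0.0, 2.0])
    n1 = torch.nonzero(x)   # two nonzero elements -> shape [2, 1]
    x[1] = 3.0              # in-place mutation
    n2 = torch.nonzero(x)   # three nonzero elements -> shape [3, 1]
    assert n1.shape != n2.shape
```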
Closing as these changes have been added to the reland of the previous PR: #133196
This reverts commit 154d40c and adds changes from #133061. Pull Request resolved: #133196. Approved by: https://github.com/ezyang. ghstack dependencies: #133145.
Stack from ghstack (oldest at bottom):
Tries to solve https://fb.workplace.com/groups/296513499814673/permalink/465746352891386/
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames