[ExecuTorch] Store the Tensor inline in TensorPtr #5684
Conversation
As an optimization, we can avoid an unnecessary heap allocation. Differential Revision: [D63468988](https://our.internmc.facebook.com/intern/diff/D63468988/)
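A hedged sketch of the idea (simplified stand-in types, not the actual ExecuTorch implementation): instead of heap-allocating the Tensor behind a smart pointer, the wrapper can hold the Tensor as an inline member and keep the reference-counted TensorImplPtr alongside it, so creating a TensorPtr costs no allocation beyond the TensorImpl itself.

```cpp
// Hedged sketch, not the real ExecuTorch code; all type names below
// are simplified stand-ins for exec_aten::Tensor / TensorImplPtr.
#include <memory>
#include <utility>

struct TensorImpl {};
using TensorImplPtr = std::shared_ptr<TensorImpl>;

struct Tensor {
  explicit Tensor(TensorImpl* impl) : impl_(impl) {}
  TensorImpl* impl_;  // non-owning
};

// Before: something like std::unique_ptr<Tensor> paid for a second
// heap allocation just to hold the Tensor wrapper.
// After: the Tensor lives inline; only the TensorImpl is heap-allocated
// and reference counted via the shared_ptr.
class TensorPtr {
 public:
  explicit TensorPtr(TensorImplPtr impl)
      : tensor_(impl.get()), tensor_impl_(std::move(impl)) {}
  Tensor& operator*() const { return tensor_; }
  Tensor* operator->() const { return &tensor_; }

 private:
  mutable Tensor tensor_;      // stored inline: no extra allocation
  TensorImplPtr tensor_impl_;  // keeps the TensorImpl alive
};
```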
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5684
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit a2fbefd with merge base d2ba238. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D63468988
mutable exec_aten::Tensor tensor_{nullptr};
TensorImplPtr tensor_impl_;
Note that tensor_ and tensor_impl_ are redundant -- under the hood, they are both just TensorImpl* pointers to the same thing. PyTorch core has also wrestled with this problem. IIRC I finally cracked it within the past year or so, but never committed the PR because I didn't have a clear reason; I will try to dig it up.
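A hedged illustration of that redundancy, using the same simplified stand-in types as the sketch above (assuming a Tensor is essentially a wrapper around a raw TensorImpl*):

```cpp
// Hedged sketch: with simplified stand-in types, both members of the
// pair boil down to the same TensorImpl*.
#include <cassert>
#include <memory>

struct TensorImpl {};
struct Tensor { TensorImpl* impl; };  // stand-in: just a wrapped pointer

int main() {
  std::shared_ptr<TensorImpl> impl = std::make_shared<TensorImpl>();
  Tensor tensor{impl.get()};  // the inline Tensor member

  // The redundancy: the Tensor already records the pointer that the
  // shared_ptr owns, so the same address is stored twice.
  assert(tensor.impl == impl.get());
  return 0;
}
```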
I was thinking of pytorch/pytorch#95418. We don't have the same problem here; rather than grafting not-reference-counting onto a reference-counting Tensor, we want to graft reference counting onto a not-reference-counting Tensor. I'll give it some more thought.
This is solvable if we are able to remove get() and rely on -Waddress-of-temporary; diffs coming.
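For context, a hedged illustration of the diagnostic being relied on (make_tensor is a hypothetical accessor returning a Tensor by value): Clang rejects taking the address of a temporary, so once get() is gone, code that tries to hold a raw Tensor* into a temporary fails to compile.

```cpp
// Hedged sketch of Clang's -Waddress-of-temporary (an error by default).
// make_tensor() is a hypothetical accessor that returns a Tensor by value.
struct Tensor {};
Tensor make_tensor();

void misuse() {
  Tensor* p = &make_tensor();  // error: taking the address of a temporary
                               // object of type 'Tensor'
                               // [-Waddress-of-temporary]
  (void)p;
}
```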
We can preserve the existing interface (except release(), which is problematic anyway!) and avoid an unnecessary heap allocation. Differential Revision: [D63468988](https://our.internmc.facebook.com/intern/diff/D63468988/)
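A hedged sketch of why release() specifically cannot survive (hypothetical, simplified types): a unique_ptr-style release() hands the caller an owning raw pointer, which presupposes a separately heap-allocated Tensor; once the Tensor is an inline member, any such pointer would point into the TensorPtr itself.

```cpp
// Hedged sketch, simplified types: why release() is problematic once
// the Tensor is stored inline in TensorPtr.
struct TensorImpl {};
struct Tensor { TensorImpl* impl; };

struct TensorPtr {
  Tensor tensor_;  // inline member, not a separate heap allocation

  // A unique_ptr-style release() would have to return an owning
  // Tensor*, but &tensor_ is a subobject of *this: the caller cannot
  // delete it, and it dangles as soon as this TensorPtr is destroyed.
  // Tensor* release();  // no sound implementation remains
};
```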
This pull request has been merged in 53936dc.
Stack from ghstack (oldest at bottom):
We can preserve the existing interface (except release(), which is problematic anyway!) and avoid an unnecessary heap allocation.
Differential Revision: D63468988