
Conversation


@swolchok swolchok commented Sep 26, 2024

Stack from ghstack (oldest at bottom):

We can preserve the existing interface (except release(), which is problematic anyway!) and avoid an unnecessary heap allocation.
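The heap allocation being avoided can be illustrated with a simplified sketch (the type names below are hypothetical stand-ins, not the actual ExecuTorch classes): instead of something like `std::shared_ptr<Tensor>`, which heap-allocates a Tensor object in addition to the TensorImpl, the wrapper can store the Tensor inline and reference-count only the impl.

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-ins for the exec_aten types; not the real API.
struct TensorImpl {
  int numel = 0;
};

struct Tensor {  // a non-owning view over a TensorImpl
  TensorImpl* impl = nullptr;
  explicit Tensor(TensorImpl* i) : impl(i) {}
};

// Sketch of the approach: keep the Tensor inline (no extra heap
// allocation for it) and share ownership of the TensorImpl only.
class TensorPtr {
 public:
  explicit TensorPtr(std::shared_ptr<TensorImpl> impl)
      : tensor_(impl.get()), tensor_impl_(std::move(impl)) {}

  Tensor& operator*() { return tensor_; }
  Tensor* operator->() { return &tensor_; }

 private:
  Tensor tensor_;                            // stored inline
  std::shared_ptr<TensorImpl> tensor_impl_;  // keeps the impl alive
};
```

Compared with heap-allocating the Tensor, dereferencing costs nothing extra and the smart pointer's footprint is just the inline view plus the control-block pointer pair.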


Differential Revision: [D63468988](https://our.internmc.facebook.com/intern/diff/D63468988/)


pytorch-bot bot commented Sep 26, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5684

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit a2fbefd with merge base d2ba238:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Sep 26, 2024
swolchok added a commit that referenced this pull request Sep 26, 2024
As an optimization, we can avoid an unnecessary heap allocation.

Differential Revision: [D63468988](https://our.internmc.facebook.com/intern/diff/D63468988/)

ghstack-source-id: 244871312
Pull Request resolved: #5684
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D63468988

Comment on lines +86 to +87
mutable exec_aten::Tensor tensor_{nullptr};
TensorImplPtr tensor_impl_;

@swolchok swolchok Sep 26, 2024


note that tensor_ and tensor_impl_ are redundant -- under the hood, they are both just a TensorImpl* pointing to the same thing. PyTorch core has also wrestled with this problem; IIRC I finally cracked it within the past year or so, but never committed the PR because I didn't have a clear reason. I will try to dig it up.

Contributor Author


I was thinking of pytorch/pytorch#95418 . We don't have the same problem here; rather than grafting not-reference-counting onto a reference-counting Tensor, we want to graft reference counting onto a not-reference-counting Tensor. I'll give it some more thought.
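The distinction drawn here can be sketched concretely (illustrative names only, not the ExecuTorch or PyTorch API): with a non-reference-counting view type, copying the view never extends the lifetime of the underlying storage, so grafted-on reference counting has to live *beside* the view rather than inside it.

```cpp
#include <cassert>
#include <memory>

// Illustrative only: a non-reference-counting view type, in the spirit of
// exec_aten::Tensor.
struct Impl {
  int value = 42;
};
struct View {
  Impl* impl = nullptr;  // no ownership: copying a View extends nothing
};

// Reference counting grafted on top: the ownership handle sits next to
// the view; only `owner` actually keeps the Impl alive.
struct CountedView {
  View view;
  std::shared_ptr<Impl> owner;

  static CountedView make() {
    auto impl = std::make_shared<Impl>();
    return CountedView{View{impl.get()}, impl};
  }
};
```

The hazard this creates: copying `view` out of a CountedView and letting the CountedView die leaves a dangling `Impl*`, which is why the wrapper's accessor design (discussed below in this thread) matters.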

Contributor Author


This is solvable if we are able to remove get() and rely on -Waddress-of-temporary; diffs coming.
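A sketch of the idea (hypothetical types, not the actual diffs): if the smart pointer has no get() and its accessors return the view *by value*, then the only way for a caller to keep a raw pointer to the view is to take the address of a temporary, which clang rejects under -Waddress-of-temporary (an error by default).

```cpp
#include <cassert>
#include <memory>

struct Impl {
  int n = 7;
};
struct Tensor {  // non-owning view, as above
  Impl* impl;
};

class TensorPtr {
 public:
  explicit TensorPtr(std::shared_ptr<Impl> impl) : impl_(std::move(impl)) {}

  // No get(): callers can only obtain a Tensor by value.
  Tensor operator*() const { return Tensor{impl_.get()}; }

 private:
  std::shared_ptr<Impl> impl_;
};

int use(const TensorPtr& p) {
  Tensor t = *p;         // fine: the caller gets its own copy of the view
  // Tensor* raw = &*p;  // rejected by clang: address of a temporary
  return t.impl->n;
}
```

Returning by value means no caller can observe (or dangle on) the wrapper's internal storage, at the cost of copying a small view object on each access.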


swolchok added a commit that referenced this pull request Sep 26, 2024
Pull Request resolved: #5684

We can preserve the existing interface (except release(), which is problematic anyway!) and avoid an unnecessary heap allocation.

ghstack-source-id: 244938553

Differential Revision: [D63468988](https://our.internmc.facebook.com/intern/diff/D63468988/)

swolchok added a commit that referenced this pull request Sep 26, 2024
Pull Request resolved: #5684

We can preserve the existing interface (except release(), which is problematic anyway!) and avoid an unnecessary heap allocation.

ghstack-source-id: 244947185

Differential Revision: [D63468988](https://our.internmc.facebook.com/intern/diff/D63468988/)
swolchok added a commit that referenced this pull request Sep 26, 2024
Pull Request resolved: #5684

We can preserve the existing interface (except release(), which is problematic anyway!) and avoid an unnecessary heap allocation.

ghstack-source-id: 244948657

Differential Revision: [D63468988](https://our.internmc.facebook.com/intern/diff/D63468988/)


@facebook-github-bot

This pull request has been merged in 53936dc.

kedarnath03 pushed a commit to kedarnath03/executorch that referenced this pull request Jun 25, 2025
Pull Request resolved: pytorch/executorch#5684

We can preserve the existing interface (except release(), which is problematic anyway!) and avoid an unnecessary heap allocation.

ghstack-source-id: 244967796

Differential Revision: [D63468988](https://our.internmc.facebook.com/intern/diff/D63468988/)

Labels

CLA Signed · fb-exported · Merged

4 participants