Fix dangling pointer in TextTokenGenerator non-kv-cache path (#18725)#18725

Merged
meta-codesync[bot] merged 1 commit into main from export-D99408541
Apr 6, 2026
Conversation

@kirklandsign
Contributor

@kirklandsign kirklandsign commented Apr 6, 2026

Summary:

In the non-kv-cache branch of TextTokenGenerator::generate(), push_back()
on token_data can trigger vector reallocation, but the tensor created via
from_blob still points to the old data address. resize_tensor_ptr only
updates shape metadata, not the data pointer, resulting in a dangling
pointer.

Fix by pre-allocating the vector with reserve() before creating the
tensor, ensuring push_back never triggers reallocation during the
generate loop.
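
The failure mode and the fix can be reproduced with a minimal stand-alone check. This is an illustrative sketch, not the ExecuTorch code: `data_ptr_stable` is a hypothetical helper, and capturing the raw pointer once mimics what from_blob does with token_data.data().

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Returns true when the vector's data pointer stays stable across
// n push_back calls. reserve_first toggles the fix described above.
bool data_ptr_stable(std::size_t n, bool reserve_first) {
  std::vector<int64_t> token_data;
  if (reserve_first) {
    token_data.reserve(n);  // the fix: pre-allocate capacity up front
  }
  // Capture the raw pointer once, as from_blob would.
  const int64_t* blob = token_data.data();
  for (std::size_t i = 0; i < n; ++i) {
    token_data.push_back(static_cast<int64_t>(i));
    if (token_data.data() != blob) {
      return false;  // reallocation moved the storage; blob now dangles
    }
  }
  return true;
}
```

With reserve(), the C++ standard guarantees no reallocation until size exceeds the reserved capacity, so the captured pointer remains valid for the whole generate loop; without it, growth-triggered reallocation moves the storage out from under the tensor.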

Reviewed By: larryliu0820

Differential Revision: D99408541

Copilot AI review requested due to automatic review settings April 6, 2026 19:24
@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 6, 2026
@meta-codesync
Contributor

meta-codesync bot commented Apr 6, 2026

@kirklandsign has exported this pull request. If you are a Meta employee, you can view the originating Diff in D99408541.

@pytorch-bot

pytorch-bot bot commented Apr 6, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18725

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 9 Pending

As of commit ab0446e with merge base 19bbeac:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions

github-actions bot commented Apr 6, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment


Pull request overview

Fixes a memory-safety issue in the LLM token generation loop when KV-cache is disabled: from_blob() wraps token_data.data(), but subsequent push_back() could reallocate the vector and leave the tensor with a dangling data pointer.

Changes:

  • Pre-reserve token_data capacity in the non-KV-cache path to prevent reallocation during generation.


Contributor

@larryliu0820 larryliu0820 left a comment


Review automatically exported from Phabricator review in Meta.

@meta-codesync meta-codesync bot changed the title from "Fix dangling pointer in TextTokenGenerator non-kv-cache path" to "Fix dangling pointer in TextTokenGenerator non-kv-cache path (#18725)" Apr 6, 2026
meta-codesync bot pushed a commit that referenced this pull request Apr 6, 2026
@meta-codesync meta-codesync bot force-pushed the export-D99408541 branch from b5e15c8 to 3795dee Compare April 6, 2026 21:36
meta-codesync bot pushed a commit that referenced this pull request Apr 6, 2026
Copilot AI review requested due to automatic review settings April 6, 2026 21:44
@meta-codesync meta-codesync bot force-pushed the export-D99408541 branch from 3795dee to fb668c3 Compare April 6, 2026 21:44
@kirklandsign kirklandsign review requested due to automatic review settings April 6, 2026 21:44
kirklandsign added a commit that referenced this pull request Apr 6, 2026
meta-codesync bot pushed a commit that referenced this pull request Apr 6, 2026
Copilot AI review requested due to automatic review settings April 6, 2026 22:01
@meta-codesync meta-codesync bot force-pushed the export-D99408541 branch from 20b4866 to 6584d2c Compare April 6, 2026 22:01
@kirklandsign kirklandsign review requested due to automatic review settings April 6, 2026 22:01
meta-codesync bot pushed a commit that referenced this pull request Apr 6, 2026
@meta-codesync meta-codesync bot force-pushed the export-D99408541 branch from 6584d2c to c47eabc Compare April 6, 2026 22:03
kirklandsign added a commit that referenced this pull request Apr 6, 2026
Copilot AI review requested due to automatic review settings April 6, 2026 22:24
@meta-codesync meta-codesync bot force-pushed the export-D99408541 branch from 0bd02e3 to ab0446e Compare April 6, 2026 22:24
@kirklandsign kirklandsign review requested due to automatic review settings April 6, 2026 22:24
@meta-codesync meta-codesync bot merged commit e0e10cc into main Apr 6, 2026
165 of 170 checks passed
@meta-codesync meta-codesync bot deleted the export-D99408541 branch April 6, 2026 23:38

Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. fb-exported meta-exported


4 participants