Conversation

JacobSzwejbka
Contributor

Summary: Adds an API that lets you place the same state tensor at the same memory id and offset across entry points. This lets the runtime get and set state more natively when the underlying arenas are the same.
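To make the intent concrete, here is a minimal sketch of what same-id, same-offset placement means at the plan level. The Spec class, the entry-point names, and the buffer name are hypothetical stand-ins for illustration, not ExecuTorch's actual types:

from dataclasses import dataclass

@dataclass
class Spec:
    fqn: str          # fully qualified name of the state tensor
    mem_id: int       # which memory arena the tensor lives in
    mem_offset: int   # byte offset within that arena

# After shared planning, every entry point places the buffer identically.
plans = {
    "encode": {"kv_cache": Spec("kv_cache", mem_id=2, mem_offset=0)},
    "decode": {"kv_cache": Spec("kv_cache", mem_id=2, mem_offset=0)},
}
assert plans["encode"]["kv_cache"] == plans["decode"]["kv_cache"]

# If the runtime backs mem_id=2 of both methods with the same arena,
# state written by encode is visible to decode with no explicit copies.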

Differential Revision: D82250153

pytorch-bot bot commented Sep 11, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14230

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Unrelated Failure

As of commit 12ecb5a with merge base e31cef6:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Sep 11, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D82250153

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

GregoryComer (Member) left a comment

I don't have full context on the memory planning logic, but this looks reasonable to me. This will be a very nice feature for encoder-decoder models. As an aside, do you think it will be feasible to default share_mutable_buffers to true in the future?

    return specs[0]

def _insert_mutable_buffer_specs(state: "_MemoryPlanningState", gm: torch.fx.GraphModule, gs: ExportGraphSignature):
    for node in gm.graph.nodes:
Contributor

Would be good to have a docstring here.

    return PassResult(graph_module, True)

def run_multimethod(self):
    "Resolve any memory planning done across entry points"
Contributor

This should be a docstring, right?

Suggested change:
- "Resolve any memory planning done across entry points"
+ """Resolve any memory planning done across entry points"""

        assert fqn
        spec = _get_spec_from_node(node)
        if getattr(spec, "mem_id", None) is not None or getattr(spec, "mem_offset", None) is not None:
            raise ValueError("Cannot share mutable buffers if they already have a mem_id or mem_offset assigned")
Contributor

Should this be an exception or just a warning?

Contributor Author

I think I'd rather start with an exception and relax it later if we have a use case that needs it. If a mem_id or mem_offset is already present, that means someone ran a custom memory plan before this pass. That scenario probably doesn't compose well with this anyway, because we just place the buffers on mem_id=2 and assert everything else is on 1 or not planned.
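A minimal sketch of the constraint described here, with hypothetical names (Spec, place_shared, SHARED_ARENA are illustrative, not the PR's actual code): shared mutable buffers are pinned to a dedicated arena, pre-assigned specs trigger the exception, and everything else must be on arena 1 or unplanned:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Spec:
    nbytes: int
    mem_id: Optional[int] = None
    mem_offset: Optional[int] = None

SHARED_ARENA = 2  # hypothetical constant for the shared-state arena

def place_shared(shared_specs, other_specs):
    offset = 0
    for spec in shared_specs:
        if spec.mem_id is not None or spec.mem_offset is not None:
            # A custom memory plan already ran; fail loudly instead of
            # producing a plan that silently fails to compose.
            raise ValueError(
                "Cannot share mutable buffers if they already have a "
                "mem_id or mem_offset assigned"
            )
        spec.mem_id = SHARED_ARENA
        spec.mem_offset = offset
        offset += spec.nbytes
    # Everything else must live on arena 1 or stay unplanned.
    for spec in other_specs:
        assert spec.mem_id in (None, 1)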

JacobSzwejbka added a commit to JacobSzwejbka/executorch-1 that referenced this pull request Sep 12, 2025
…ross entry points (pytorch#14230)

Summary:

API that lets you place the same state tensor on the same id and offset across entry points. Lets you have get and set state more natively in the runtime if the underlying arenas are the same.

Reviewed By: GregoryComer

Differential Revision: D82250153
@facebook-github-bot
Contributor

@JacobSzwejbka has exported this pull request. If you are a Meta employee, you can view the originating diff in D82250153.


@facebook-github-bot facebook-github-bot merged commit d43cde5 into pytorch:main Sep 18, 2025
123 of 128 checks passed
StrycekSimon pushed a commit to nxp-upstream/executorch that referenced this pull request Sep 23, 2025
…ross entry points

Differential Revision: D82250153

Pull Request resolved: pytorch#14230
Labels: CLA Signed, fb-exported, meta-exported