Add option in memory planning to put shared state on same location across entry points #14230
Conversation
Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14230. Note: links to docs will display an error until the docs builds have completed. As of commit 12ecb5a with merge base e31cef6: 2 new failures, plus 1 unrelated failure that was likely due to flakiness present on trunk. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D82250153
I don't have full context on the memory planning logic, but this looks reasonable to me. This will be a very nice feature for encoder-decoder models. As an aside, do you think it will be feasible to default share_mutable_buffers to true in the future?
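For readers without that context, here is a minimal sketch of how the option might be enabled at export time. The `share_mutable_buffers=True` keyword, the toy model, and the wrapper modules are assumptions for illustration based on the flag name discussed in this thread; only `torch.export.export`, `to_edge`, `ExecutorchBackendConfig`, and `MemoryPlanningPass` are existing APIs, and the exact signature should be checked against the merged diff.

```python
# Hypothetical sketch: enabling shared mutable-buffer placement across entry
# points. `share_mutable_buffers=True` is an assumption, not a confirmed API.
import torch
from torch.export import export
from executorch.exir import ExecutorchBackendConfig, to_edge
from executorch.exir.passes import MemoryPlanningPass


class StatefulModel(torch.nn.Module):
    """Toy model whose mutable buffer both entry points touch."""

    def __init__(self):
        super().__init__()
        self.register_buffer("cache", torch.zeros(4))

    def increment(self, x):
        self.cache.add_(x)
        return self.cache

    def scaled(self, x):
        return self.cache * x


class Increment(torch.nn.Module):
    def __init__(self, m):
        super().__init__()
        self.m = m

    def forward(self, x):
        return self.m.increment(x)


class Scaled(torch.nn.Module):
    def __init__(self, m):
        super().__init__()
        self.m = m

    def forward(self, x):
        return self.m.scaled(x)


model = StatefulModel()
example = (torch.ones(4),)

# One ExportedProgram per entry point; both reference the same buffer
# (fully-qualified name "m.cache") on the shared underlying module.
programs = {
    "increment": export(Increment(model), example),
    "scaled": export(Scaled(model), example),
}

edge = to_edge(programs)
et_program = edge.to_executorch(
    ExecutorchBackendConfig(
        memory_planning_pass=MemoryPlanningPass(share_mutable_buffers=True),
    )
)
```

Per the PR summary, the point is that when every method is planned this way, the same state tensor ends up at the same mem_id and mem_offset in each entry point, so the runtime can get and set state directly if the underlying arenas are the same.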
```python
return specs[0]


def _insert_mutable_buffer_specs(state: "_MemoryPlanningState", gm: torch.fx.GraphModule, gs: ExportGraphSignature):
    for node in gm.graph.nodes:
```
It would be good to have a docstring here.
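One possible docstring, inferred from how this helper appears to be used by the pass (the wording is a suggestion, not text from the PR; the body is elided):

```python
def _insert_mutable_buffer_specs(state: "_MemoryPlanningState", gm: torch.fx.GraphModule, gs: ExportGraphSignature):
    """Record the TensorSpecs of this entry point's mutable buffers.

    Walks the graph's nodes and, for every node the export graph signature
    identifies as a mutable buffer, stores its spec in ``state`` keyed by the
    buffer's fully-qualified name, so the same buffer can later be assigned
    the same mem_id/mem_offset in every entry point that uses it.
    """
    for node in gm.graph.nodes:
        ...  # original body elided
```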
```python
return PassResult(graph_module, True)


def run_multimethod(self):
    "Resolve any memory planning done across entry points"
```
This should be a docstring, right?

```diff
-"Resolve any memory planning done across entry points"
+"""Resolve any memory planning done across entry points"""
```
In exir/passes/memory_planning_pass.py (outdated diff):
```python
assert(fqn)
spec = _get_spec_from_node(node)
if getattr(spec, 'mem_id', None) is not None or getattr(spec, 'mem_offset', None) is not None:
    raise ValueError("Cannot share mutable buffers if they already have a mem_id or mem_offset assigned")
```
Should this be an exception or just a warning?
I think I'd rather start with an exception and relax it later if we have a use case that needs it. If a mem_id or mem_offset is already present, that means someone is running a custom memory plan before this pass. That probably doesn't compose well with this feature anyway, because we just place the shared buffers on mem_id=2 and assert everything else is on mem_id=1 or not planned.
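A rough sketch of the invariant described here; the constant names and the `check_placement` helper are illustrative, not code from the PR:

```python
# Illustrative only: the mem_id constants and the `specs` / `shared_buffer_fqns`
# structures are hypothetical stand-ins for what the pass tracks internally.
SHARED_BUFFER_MEM_ID = 2  # arena reserved for state shared across entry points
DEFAULT_MEM_ID = 1        # arena used by ordinary planned tensors


def check_placement(specs, shared_buffer_fqns):
    """Every shared buffer lands on mem_id 2; everything else is on 1 or unplanned."""
    for fqn, spec in specs.items():
        if fqn in shared_buffer_fqns:
            assert spec.mem_id == SHARED_BUFFER_MEM_ID, fqn
        else:
            assert spec.mem_id in (DEFAULT_MEM_ID, None), fqn
```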
@JacobSzwejbka has exported this pull request. If you are a Meta employee, you can view the originating diff in D82250153.
Add option in memory planning to put shared state on same location across entry points (pytorch#14230)

Summary: API that lets you place the same state tensor on the same id and offset across entry points. Lets you get and set state more natively in the runtime if the underlying arenas are the same.

Differential Revision: D82250153
Pull Request resolved: pytorch#14230