[export] Update dynamo_graph_capture_for_export to return GraphModule. #166091
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166091
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
❌ 2 New Failures, 1 Unrelated Failure, as of commit 3e31eb6 with merge base 1e836bc.
NEW FAILURES - The following jobs have failed:
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed c5e4a57 to 4331a46.
else:
    return "\n " + "".join(x + "; " for x in has_annotation) + "\n"

def gen_var_bindings(self, fn_args, free_vars, expanded_def) -> str:
Is it just a codemod?
I added a subclass called _ExportCodeGen which adds the shuffling.
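A minimal sketch of the idea, not the PR's code: a codegen subclass that routes the flat inputs/outputs through extra "shuffle" callables. The real _ExportCodeGen in this PR emits the shuffling into the generated forward() (see the snippet further down); this sketch instead uses the runtime process_inputs/process_outputs hooks of torch.fx's pytree codegen purely to keep the example short.

```python
# Hedged sketch only; _ExportCodeGenSketch, in_shuffle and out_shuffle are
# hypothetical stand-ins for the PR's actual classes and shuffle graphs.
from torch.fx.graph import _PyTreeCodeGen, _PyTreeInfo


class _ExportCodeGenSketch(_PyTreeCodeGen):
    def __init__(self, pytree_info: _PyTreeInfo, in_shuffle, out_shuffle):
        super().__init__(pytree_info)
        # Small callables (e.g. GraphModules) that reorder the flat argument /
        # output lists into the order dynamo traced with, and back.
        self._in_shuffle = in_shuffle
        self._out_shuffle = out_shuffle

    def process_inputs(self, *args):
        flat_args = super().process_inputs(*args)  # pytree-flatten the user inputs
        return self._in_shuffle(*flat_args)        # then reorder for the traced graph

    def process_outputs(self, outputs):
        shuffled = self._out_shuffle(*outputs)     # undo the reorder on the way out
        return super().process_outputs(shuffled)   # then unflatten to the user pytree
```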
    gm_torch_level._in_spec,
    out_spec,
)
gm_torch_level.graph._codegen.pytree_info = _PyTreeInfo(
Hmm, why did it change?
We will have a different subclass called _ExportCodeGen here, so it's wrong to always assign a _PyTreeCodeGen here.
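A hedged sketch of that point: keep whatever codegen the graph already carries (it may be the _ExportCodeGen subclass) and only refresh its pytree_info, instead of unconditionally constructing a fresh _PyTreeCodeGen. It assumes the existing codegen already has a pytree_info carrying the original argument names.

```python
# Sketch, not the PR's code; the helper name is hypothetical.
from torch.fx.graph import _PyTreeInfo


def _refresh_pytree_info(gm, in_spec, out_spec):
    codegen = gm.graph._codegen              # _PyTreeCodeGen or a subclass of it
    codegen.pytree_info = _PyTreeInfo(
        codegen.pytree_info.orig_args,       # keep the user-facing argument names
        in_spec,
        out_spec,
    )
    gm.recompile()                           # regenerate forward() with the new specs
    return gm
```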
Force-pushed 99ea1aa to 1b303ae.
Force-pushed a7bf577 to 031371e.
@anijain2305 I tested this with autoparallel and the PyTorch unit tests. With this diff we can now make the dynamo -> AOT Autograd path work without:
My plan is to codemod the API usage in autoparallel first. Then I will go for 6lib.
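A hedged sketch of the flow being tested: capture with dynamo, then hand the resulting GraphModule to AOT Autograd. The import path and call convention of dynamo_graph_capture_for_export, and the use of aot_export_module here, are assumptions for illustration rather than APIs this PR guarantees.

```python
# Sketch only; the dynamo_graph_capture_for_export import path and calling
# convention are assumptions, as is the choice of aot_export_module.
import torch
from torch._dynamo.functional_export import dynamo_graph_capture_for_export  # assumed path
from torch._functorch.aot_autograd import aot_export_module


class M(torch.nn.Module):
    def forward(self, x):
        return (x.sin() + 1).relu()


x = torch.randn(4)
gm = dynamo_graph_capture_for_export(M())(x)               # now returns a GraphModule
fw_graph, signature = aot_export_module(gm, [x], trace_joint=False)
print(fw_graph.graph)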
Force-pushed 031371e to 8b81d5b.
Looks mostly good. Please get a review from @anijain2305 as well.
Force-pushed a0fd1b6 to b43dcec.
def forward(self, args_0, args_1):
    _tree_leaf_0, _tree_leaf_1, _tree_leaf_2, = pytree.tree_leaves((self, args_0, args_1,))
    L_fw_in_ , L_bw_in_ , = self._in_shuffle_graph(_tree_leaf_0, _tree_leaf_1, _tree_leaf_2)
    l_fw_in_ = L_fw_in_
nit: this feels a little redundant; any chance this can be simplified?
what do you mean specifically by redundant?
    return types.MethodType(pytree_call, mod.__self__)
else:
    return pytree_call

def normalize_graph_module(gm):
I am pretty sure this is not enough for export, but probably fine for now.
yeah, let's tackle torch.export separately.
]
graph_module.graph._codegen = _ExportCodeGen(
    _PyTreeInfo(
        argument_names(inspect.signature(mod), args, kwargs),
Do we actually use the argument names in the torch IR graph this produces? If not, can we add this logic in a follow-up PR? If the previous export behaviour was that it didn't produce torch IR with correct user argument names, I am OK with it, btw.
OK, I can add a TODO here if you're ok with that.
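For context, a hedged sketch of what "correct user argument names" would mean here: recover the parameter names from the module's forward signature so the generated forward() can use them instead of generic placeholder names. The argument_names() call in the diff above is assumed to do something along these lines.

```python
# Sketch only; _argument_names_sketch is a hypothetical stand-in, not the PR's helper.
import inspect
import torch


def _argument_names_sketch(sig: inspect.Signature, args, kwargs):
    bound = sig.bind(*args, **kwargs)   # match example inputs to the signature
    bound.apply_defaults()              # include defaulted parameters too
    return list(bound.arguments.keys())


class M(torch.nn.Module):
    def forward(self, x, scale=1.0):
        return x * scale


print(_argument_names_sketch(inspect.signature(M().forward), (torch.randn(2),), {}))
# -> ['x', 'scale']
```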
assert not hasattr(graph_module, "_out_shuffle_graph")
graph_module._in_shuffle_graph = pyt.in_shuffle_graph
graph_module._out_shuffle_graph = pyt.out_shuffle_graph
delattr(graph_module, "_param_name_to_source")
Hmm, was this an issue in the previous torch IR graph capture as well?
I think in the previous impl you just transformed and returned a new graph module, which didn't have all of these attached attributes.
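A rough sketch of the contrast being discussed: the previous capture path constructed and returned a brand-new GraphModule, so dynamo-internal attributes never survived; this PR instead mutates the traced module, so capture-only attributes have to be stripped explicitly before returning it.

```python
# Sketch only; the helper name is hypothetical, the attribute name is taken
# from the diff above.
import torch.fx as fx


def _strip_capture_only_attrs(gm: fx.GraphModule) -> fx.GraphModule:
    for name in ("_param_name_to_source",):
        if hasattr(gm, name):
            delattr(gm, name)
    gm.recompile()   # regenerate forward() after mutating the module
    return gm
```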
graph_module._out_shuffle_graph = pyt.out_shuffle_graph
delattr(graph_module, "_param_name_to_source")
graph_module.recompile()
graph_module.meta["module_call_specs"] = (
Just for my understanding: technically you don't need this, right, because you are running the bytecode anyway?
Technically I'm not using this; it's just for passing some basic torch.export tests.
Force-pushed b43dcec to b251c24.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed, first few of them are: trunk / inductor-build / build. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 1 check: trunk / inductor-build / build. Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed, first few of them are: trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (distributed, 1, 3, lf.linux.g4dn.12xlarge.nvidia.gpu). Details for Dev Infra team: raised by workflow job.
Force-pushed b251c24 to 6f5b716.
Force-pushed 6f5b716 to 3e31eb6.
@pytorchbot merge -i
The test failures seem unrelated.
Merge started. Your change will be merged while ignoring the following 3 checks: trunk / linux-jammy-py3-clang12-executorch / test (executorch, 1, 1, lf.linux.2xlarge, unstable), trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 3, 5, lf.linux.g6.4xlarge.experimental.nvidia.gpu), trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 1, 5, lf.linux.g6.4xlarge.experimental.nvidia.gpu). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Make dynamo_graph_capture_for_export return a more compatible GraphModule object whose behavior is closer to the original behavior of dynamo.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @Lucaskabela
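A hedged illustration of "closer to the original behavior of dynamo": the returned object is a plain torch.fx.GraphModule whose forward accepts the same inputs as the eager module, so it can be used as a drop-in replacement. The import path and call convention are assumptions for illustration, not guaranteed by this PR.

```python
# Sketch only; dynamo_graph_capture_for_export's import path and calling
# convention are assumptions here.
import torch
from torch._dynamo.functional_export import dynamo_graph_capture_for_export  # assumed path


class M(torch.nn.Module):
    def forward(self, x, y):
        return (x + y).relu()


m = M()
x, y = torch.randn(4), torch.randn(4)
gm = dynamo_graph_capture_for_export(m)(x, y)
assert isinstance(gm, torch.fx.GraphModule)
torch.testing.assert_close(gm(x, y), m(x, y))  # same calling convention as the eager module
```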