
Conversation


@zhxchen17 zhxchen17 commented Oct 22, 2025

Make dynamo_graph_capture_for_export return a more compatible GraphModule object that is closer to the original behavior of dynamo.

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @Lucaskabela
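
For context, a minimal usage sketch of the API this PR touches. The import path and calling convention below are assumptions for illustration (they are not spelled out in this description); the point is only that the returned object behaves like a regular GraphModule with the original module's calling convention.

    import torch
    # NOTE: the import path below is an assumption, not taken from this PR.
    from torch._dynamo.functional_export import dynamo_graph_capture_for_export

    class M(torch.nn.Module):
        def forward(self, x, y):
            return (x + y).relu()

    mod = M()
    example_inputs = (torch.randn(4), torch.randn(4))

    # Capture the module with dynamo; with this PR the result is a GraphModule
    # that can be called the same way as the original module.
    gm = dynamo_graph_capture_for_export(mod)(*example_inputs)
    out = gm(*example_inputs)
    assert torch.allclose(out, mod(*example_inputs))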


pytorch-bot bot commented Oct 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166091

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 2 New Failures, 1 Unrelated Failure

As of commit 3e31eb6 with merge base 1e836bc:

NEW FAILURES - The following jobs have failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the ciflow/inductor, module: dynamo, and oncall: distributed labels on Oct 22, 2025

linux-foundation-easycla bot commented Oct 22, 2025

CLA Signed

The committers listed below are authorized under a signed CLA.

  • ✅ login: zhxchen17 / name: Zhengxu Chen (3e31eb6)

@zhxchen17 zhxchen17 force-pushed the zhxchen17/precompile/export_gm branch 4 times, most recently from c5e4a57 to 4331a46 on October 23, 2025 17:14
        else:
            return "\n " + "".join(x + "; " for x in has_annotation) + "\n"

    def gen_var_bindings(self, fn_args, free_vars, expanded_def) -> str:
Contributor:

Is it just codemod?

Contributor (Author):

I added a subclass called _ExportCodegen which adds shuffling.
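
For readers not familiar with fx codegen, the snippet below sketches the general shape of such a CodeGen subclass. The class and attribute names are taken from this discussion, but the body is illustrative only; the real _ExportCodeGen in this PR emits the shuffle calls directly into the generated forward (see the wrapper snippet later in this thread).

    from torch.fx.graph import _PyTreeCodeGen, _PyTreeInfo

    class _ExportCodeGen(_PyTreeCodeGen):  # name from the discussion; body is a sketch
        def __init__(self, pytree_info: _PyTreeInfo, in_shuffle_graph, out_shuffle_graph):
            super().__init__(pytree_info)
            self.in_shuffle_graph = in_shuffle_graph
            self.out_shuffle_graph = out_shuffle_graph

        def process_inputs(self, *args):
            # flatten the user inputs, then reorder the leaves into the
            # placeholder order the captured graph expects
            flat = super().process_inputs(*args)
            return self.in_shuffle_graph(*flat)

        def process_outputs(self, outputs):
            # undo the shuffle before unflattening back into the user's
            # original output structure
            return super().process_outputs(self.out_shuffle_graph(*outputs))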

            gm_torch_level._in_spec,
            out_spec,
        )
        gm_torch_level.graph._codegen.pytree_info = _PyTreeInfo(
Contributor:

Hmm why did it change?

Contributor (Author):

We will have a different subclass, _ExportCodeGen, here, so it is wrong to always assign _PyTreeCodegen.

@zhxchen17 zhxchen17 force-pushed the zhxchen17/precompile/export_gm branch 2 times, most recently from 99ea1aa to 1b303ae on October 23, 2025 19:36
@zhxchen17 zhxchen17 requested a review from anijain2305 October 23, 2025 20:09
@zhxchen17 zhxchen17 force-pushed the zhxchen17/precompile/export_gm branch 2 times, most recently from a7bf577 to 031371e on October 23, 2025 20:42
@zhxchen17 (Contributor Author) commented:

@anijain2305 I tested this with autoparallel and the PyTorch unit tests.

Now, with this diff, we can make the dynamo -> aot autograd flow work without the following (see the sketch below):

  1. install_free_tensors
  2. restore_state_dict

My plan is to codemod the API usage in autoparallel first. Then I will go for 6lib.
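
A rough sketch of that flow, for reference. The capture import path and the choice of aot_export_module as the AOT entry point are assumptions here, not taken from this PR; the point is that the captured GraphModule keeps its parameters and buffers attached, so no extra tensor-installation or state-dict restoration step is needed before AOT autograd.

    import torch
    # NOTE: both import paths below are assumptions for illustration.
    from torch._dynamo.functional_export import dynamo_graph_capture_for_export
    from torch._functorch.aot_autograd import aot_export_module

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(4, 4)

        def forward(self, x):
            return self.linear(x).relu()

    mod = M()
    args = (torch.randn(2, 4),)

    # Capture with dynamo; the returned GraphModule still owns the parameters,
    # so it can go straight into AOT autograd.
    gm = dynamo_graph_capture_for_export(mod)(*args)
    aot_gm, signature = aot_export_module(gm, args, trace_joint=False)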

@zhxchen17 zhxchen17 force-pushed the zhxchen17/precompile/export_gm branch from 031371e to 8b81d5b on October 24, 2025 03:10

@tugsbayasgalan tugsbayasgalan left a comment


Looks mostly good. Please get a review from @anijain2305 as well.

@zhxchen17 zhxchen17 force-pushed the zhxchen17/precompile/export_gm branch 2 times, most recently from a0fd1b6 to b43dcec on October 24, 2025 18:33
@zhxchen17 zhxchen17 requested a review from anijain2305 October 24, 2025 18:43
def forward(self, args_0, args_1):
    _tree_leaf_0, _tree_leaf_1, _tree_leaf_2, = pytree.tree_leaves((self, args_0, args_1,))
    L_fw_in_ , L_bw_in_ , = self._in_shuffle_graph(_tree_leaf_0, _tree_leaf_1, _tree_leaf_2)
    l_fw_in_ = L_fw_in_
Contributor:

nit: this feels a little redundant; any chance this can be simplified?

Contributor (Author):

What do you mean specifically by redundant?
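
For readers following the thread, a rough annotation of the generated wrapper above; the semantics are inferred from the _ExportCodeGen discussion earlier in this PR rather than stated explicitly.

    def forward(self, args_0, args_1):
        # flatten (self, args_0, args_1) into a flat list of leaves
        _tree_leaf_0, _tree_leaf_1, _tree_leaf_2, = pytree.tree_leaves((self, args_0, args_1,))
        # reorder/select the leaves into the placeholder order the captured graph expects
        L_fw_in_ , L_bw_in_ , = self._in_shuffle_graph(_tree_leaf_0, _tree_leaf_1, _tree_leaf_2)
        # rebind to the local name used by the traced graph body that follows
        l_fw_in_ = L_fw_in_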

        return types.MethodType(pytree_call, mod.__self__)
    else:
        return pytree_call
def normalize_graph_module(gm):
Contributor:

I am pretty sure this is not enough for export, but probably fine for now.

Contributor (Author):

Yeah, let's tackle torch.export separately.

    ]
    graph_module.graph._codegen = _ExportCodeGen(
        _PyTreeInfo(
            argument_names(inspect.signature(mod), args, kwargs),
Contributor:

Do we actually use argument names in the Torch IR graph we produce? If not, can we add this logic in a follow-up PR? If the previous export behavior was that it didn't produce Torch IR with the correct user argument names, I'm OK with it, btw.

Contributor (Author):

OK, I can add a TODO here if you're OK with that.

assert not hasattr(graph_module, "_out_shuffle_graph")
graph_module._in_shuffle_graph = pyt.in_shuffle_graph
graph_module._out_shuffle_graph = pyt.out_shuffle_graph
delattr(graph_module, "_param_name_to_source")
Contributor:

Hmm, was this an issue in the previous torch IR graph capture as well?

Contributor (Author):

I think in the previous impl you just transform and return a new graph module, which doesn't carry all of the attached attributes the way this one does.
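
A small self-contained illustration of that difference; the attribute name mirrors the one in this diff, but the module and value here are made up.

    import torch

    class M(torch.nn.Module):
        def forward(self, x):
            return x + 1

    gm = torch.fx.symbolic_trace(M())
    gm._param_name_to_source = {"w": "example-source"}  # ad-hoc attribute attached after tracing

    # Re-wrapping the same graph in a fresh GraphModule (roughly what the previous
    # implementation did) does not carry over arbitrary attributes:
    new_gm = torch.fx.GraphModule(gm, gm.graph)
    print(hasattr(new_gm, "_param_name_to_source"))  # False

    # Mutating the captured module in place keeps such attributes around,
    # hence the explicit delattr in this PR.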

graph_module._out_shuffle_graph = pyt.out_shuffle_graph
delattr(graph_module, "_param_name_to_source")
graph_module.recompile()
graph_module.meta["module_call_specs"] = (
Contributor:

Just for my understanding: technically you don't need this, right, because you are running the bytecode anyway?

Contributor (Author):

Technically I'm not using this; it's just for passing some basic torch.export tests.

@zhxchen17 zhxchen17 force-pushed the zhxchen17/precompile/export_gm branch from b43dcec to b251c24 on October 27, 2025 19:15
@zhxchen17 (Contributor Author) commented:

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk label on Oct 27, 2025
@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot (Collaborator) commented:

Merge failed

Reason: 1 jobs have failed, first few of them are: trunk / inductor-build / build

Details for Dev Infra team (raised by workflow job)

@zhxchen17 (Contributor Author) commented:

@pytorchbot merge -i

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged while ignoring the following 1 checks: trunk / inductor-build / build

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot (Collaborator) commented:

Merge failed

Reason: 1 jobs have failed, first few of them are: trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (distributed, 1, 3, lf.linux.g4dn.12xlarge.nvidia.gpu)

Details for Dev Infra team (raised by workflow job)

@zhxchen17 zhxchen17 force-pushed the zhxchen17/precompile/export_gm branch from b251c24 to 6f5b716 on October 28, 2025 01:26
@zhxchen17 zhxchen17 force-pushed the zhxchen17/precompile/export_gm branch from 6f5b716 to 3e31eb6 on October 28, 2025 01:43
@zhxchen17 (Contributor Author) commented:

@pytorchbot merge -i

The test failures seem unrelated.

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged while ignoring the following 3 checks: trunk / linux-jammy-py3-clang12-executorch / test (executorch, 1, 1, lf.linux.2xlarge, unstable), trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 3, 5, lf.linux.g6.4xlarge.experimental.nvidia.gpu), trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 1, 5, lf.linux.g6.4xlarge.experimental.nvidia.gpu)

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.


Labels

ciflow/inductor, ciflow/trunk, Merged, module: dynamo, oncall: distributed, topic: not user facing


6 participants