Get export APIs ready for PTC #566
Conversation
This pull request was exported from Phabricator. Differential Revision: D49742989
Summary:
X-link: pytorch/pytorch#110410
https://docs.google.com/document/d/1QJJEGnj2nHGPODlw38BEG3KLLCOTfdOVjPrNQbz_LM8/edit#bookmark=id.lp80wfshq130

Changes:
* `torch.export` will return a functional ATen graph that is not lowered to core ATen decompositions (CompositeImplicitAutograd decomps still run).
* `exported_program.run_decompositions(decomposition_table)` will optionally take a decomposition table and run decompositions on the exported program, returning a new exported program. By default we will run the Core ATen decomposition table.

The calling convention for ExecuTorch stays the same:

```
pre_autograd_graph = capture_pre_autograd_graph(f, args, ...)
aten_graph_no_decomps = torch.export.export(pre_autograd_graph, args, ...)

# Within to_edge we decompose to core aten and then convert to edge
edge_graph = exir.to_edge(aten_graph_no_decomps)
```

Reviewed By: larryliu0820, guangy10
Differential Revision: D49742989
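As a rough illustration of the two-step flow described above, here is a minimal sketch. It is not code from this PR: `ToyModule` and the example inputs are made up, and the exact import paths (`torch._decomp`) may differ across PyTorch versions.

```python
import torch
from torch._decomp import core_aten_decompositions

class ToyModule(torch.nn.Module):  # placeholder module for the sketch
    def forward(self, x):
        return torch.nn.functional.gelu(x)

example_args = (torch.randn(4, 8),)

# Step 1: torch.export returns a functional ATen graph without running core
# ATen decompositions (CompositeImplicitAutograd decomps still run).
ep = torch.export.export(ToyModule(), example_args)

# Step 2: decompositions are applied explicitly afterwards, returning a new
# ExportedProgram. With no argument the Core ATen table is used by default;
# here it is passed explicitly for clarity.
ep_core = ep.run_decompositions(core_aten_decompositions())

print(ep_core.graph_module.graph)
```

This mirrors the split the PR describes: export captures the graph, and decomposition becomes an explicit, separate step.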
This pull request has been merged in a4a86df.
This pull request has been reverted by 0f2bc69.
Summary:
https://docs.google.com/document/d/1QJJEGnj2nHGPODlw38BEG3KLLCOTfdOVjPrNQbz_LM8/edit#bookmark=id.lp80wfshq130

Changes:
* `torch.export` will return a functional ATen graph without decompositions.
* `exported_program.run_decompositions(decomposition_table)` will optionally take a decomposition table and run decompositions on the exported program, returning a new exported program. By default we will run the Core ATen decomposition table.

The calling convention for ExecuTorch stays the same (see the code block in the summary above).

Differential Revision: D49742989
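For callers that do not want the default Core ATen table, the new API also accepts an explicit decomposition table. A minimal sketch under assumptions not stated in the PR: `TinyModel` is a made-up module, `torch._decomp.get_decompositions` is assumed to be available in your build, and `aten.silu` is used only as an example of an op with a registered decomposition.

```python
import torch
from torch._decomp import get_decompositions

class TinyModel(torch.nn.Module):  # placeholder module for the sketch
    def forward(self, x):
        return torch.nn.functional.silu(x) + 1

# Build a table containing only the decompositions we care about (assuming a
# decomposition is registered for aten.silu), rather than the default Core
# ATen table.
decomp_table = get_decompositions([torch.ops.aten.silu.default])

# Export first: a functional ATen graph with no core ATen decompositions run yet.
ep = torch.export.export(TinyModel(), (torch.randn(2, 4),))

# Then decompose explicitly with the custom table; this returns a new
# ExportedProgram and leaves `ep` unchanged.
ep_decomposed = ep.run_decompositions(decomp_table)
print(ep_decomposed.graph_module.graph)
```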