[hop free symbols][refactor] make create_graph_input always take example_value #138428
Conversation
…ple_value [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/138428
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (3 Unrelated Failures)
As of commit 0a8e9df with merge base 8e27833.
BROKEN TRUNK - The following jobs failed but were present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures
This comment was automatically generated by Dr. CI and updates every 15 minutes.
…s take example_value" cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
…s take example_value" cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
…s take example_value" cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
example_value = wrap_to_fake_tensor_and_record(
    tensor_value,
    tx=self.tx,
    is_tensor=False,
is_tensor is determined by whether target_cls is in (TensorVariable, TensorWithTFOverrideVariable). NumpyNdarrayVariable is not one of them, so is_tensor = False. This PR keeps this behavior unchanged and only does code refactoring.
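For readers unfamiliar with the builder internals, here is a minimal, self-contained sketch of the rule being described (the class stubs stand in for the real dynamo VariableTracker classes; this is not the literal builder.py code):

```python
# Hedged sketch: is_tensor is True only when target_cls is one of the two
# tensor VariableTracker classes; NumpyNdarrayVariable is not, so the call
# site above keeps is_tensor=False.
class TensorVariable: ...
class TensorWithTFOverrideVariable: ...
class NumpyNdarrayVariable: ...

def compute_is_tensor(target_cls) -> bool:
    return target_cls in (TensorVariable, TensorWithTFOverrideVariable)

assert compute_is_tensor(TensorVariable) is True
assert compute_is_tensor(NumpyNdarrayVariable) is False
```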
torch/_dynamo/variables/builder.py
# TODO: Maybe the tensor-ification should be built into the source,
# rather than by special pattern match
example_value = wrap_to_fake_tensor_and_record(
    wrapped_value, tx=self.tx, is_tensor=False, source=self.get_source()
same here for is_tensor=False
options.update({"raw_value": value})

example_value = wrap_to_fake_tensor_and_record(
    wrapped_value, tx=self.tx, is_tensor=False, source=self.get_source()
same here
# the subgraph.
# See NOTE [HigherOrderOperator tracing design] for more details.

example_value = wrap_to_fake_tensor_and_record(
We move the wrap_to_fake_tensor_and_record logic out of wrap_fx_proxy so that we can provide an example_value when calling create_graph_input. Since the example_value has already been fakified, it won't get fakified again in wrap_fx_proxy.
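As a rough sketch of that flow (helper names and signatures are approximate, not the exact code in this PR):

```python
# Hedged sketch of the reordering: fakify first, hand the fake value to
# create_graph_input, and let wrap_fx_proxy reuse it instead of fakifying again.
def _wrap_placeholder_sketch(self, name, wrapped_value):
    # 1) Fakify the underlying value up front.
    example_value = wrap_to_fake_tensor_and_record(
        wrapped_value, tx=self.tx, is_tensor=False, source=self.get_source()
    )
    # 2) Create the placeholder with its example_value already attached.
    proxy = self.tx.output.root_tracer.create_graph_input(
        name, type(example_value), example_value, source=self.get_source()
    )
    # 3) wrap_fx_proxy sees an already-fake example_value and skips re-fakification.
    return wrap_fx_proxy(
        tx=self.tx, proxy=proxy, example_value=example_value, source=self.get_source()
    )
```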
…s take example_value" Code refactoring only. We move the wrap_to_fake_tensor_logic out of wrap_fx_proxy for placeholders to provide the invariant that **all graph inputs must have example values** cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
…s take example_value" Code refactoring only. We move the wrap_to_fake_tensor_logic out of wrap_fx_proxy for placeholders to provide the invariant that **all graph inputs must have example values** cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
class fwd_body_0(torch.nn.Module):
-    def forward(self, ctx, x: "f32[]", z: "f32[]", l_weird_b: "f32[]", l_weird_c: "f32[]"):
+    def forward(self, ctx : torch.autograd.function.Function, x: "f32[]", z: "f32[]", l_weird_b: "f32[]", l_weird_c: "f32[]"):
ctx's type is torch.autograd.function.Function instead of FunctionCtx because of https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/side_effects.py#L270
What causes some of these to state their types but some of them not to?
…s take example_value" Code refactoring only. We move the wrap_to_fake_tensor_logic out of wrap_fx_proxy for placeholders to provide the invariant that **all graph inputs must set their example values when creating the inputs**. This invariant helps us to identify all the free symbols in the graph in top-level and sub-graphs. cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
…s take example_value" Code refactoring only. We move the wrap_to_fake_tensor_logic out of wrap_fx_proxy for placeholders to provide the invariant that **all graph inputs must set their example values when creating the inputs**. This invariant helps us to identify all the free symbols in the graph in top-level and sub-graphs. cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
…s take example_value" Code refactoring only. We move the wrap_to_fake_tensor_logic out of wrap_fx_proxy for placeholders to provide the invariant that **all graph inputs must set their example values when creating the inputs**. This invariant helps us to identify all the free symbols in the graph in top-level and sub-graphs. cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
gm.cond_fn_0.code.strip(),
"""\
-def forward(self, l_iter_, l_x_, l_self_buffers_dec__cond_fn, l_self_modules_linear_parameters_bias__body_fn, l_self_modules_linear_parameters_weight__body_fn):
+def forward(self, l_iter_ : torch.Tensor, l_x_ : torch.Tensor, l_self_buffers_dec__cond_fn, l_self_modules_linear_parameters_bias__body_fn, l_self_modules_linear_parameters_weight__body_fn):
Why do inputs like l_self_buffers_dec__cond_fn not have types?
if self.backward_state_proxy is None:
    if self.export:
        unimplemented("backward_state does not support export")
    example_value = BackwardState()
Can you check (add a test?) that this doesn't cause a warning? Direct initialization of FunctionCtx usually induces a warning.
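Something along these lines, perhaps (a minimal sketch of the requested check; the test name and the import path for BackwardState are assumptions, not part of this PR):

```python
import warnings

from torch._dynamo.external_utils import BackwardState  # assumed location

def test_backward_state_example_value_emits_no_warning():
    # Constructing the example_value for the backward-state placeholder
    # should not surface any user-facing warning.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        BackwardState()
    assert not caught, [str(w.message) for w in caught]
```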
proxy = self.root_tracer.create_graph_input(
    str(s0),
    torch.SymInt,
    type(s),
Is there a reason to pass type(s) if there's always an example value? create_graph_input can always get the type from the example value.
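For illustration, a hedged sketch of what that could look like (a standalone torch.fx example, not the dynamo tracer code):

```python
import torch

def create_graph_input_sketch(graph: torch.fx.Graph, name: str, example_value):
    # If example_value is mandatory, the type annotation can be derived here
    # instead of every call site passing type(s) separately.
    type_expr = type(example_value)  # e.g. torch.SymInt for a SymInt input
    node = graph.placeholder(name, type_expr=type_expr)
    node.meta["example_value"] = example_value
    return node
```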
Can you test that creating the backward state does not add user-facing warnings? We've had that problem in the past.
Other than that, this looks fine to me
…s take example_value" Code refactoring only. We move the wrap_to_fake_tensor_logic out of wrap_fx_proxy for placeholders to provide the invariant that **all graph inputs must set their example values when creating the inputs**. This invariant helps us to identify all the free symbols in the graph in top-level and sub-graphs. cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
…s take example_value" Code refactoring only. We move the wrap_to_fake_tensor_logic out of wrap_fx_proxy for placeholders to provide the invariant that **all graph inputs must set their example values when creating the inputs**. This invariant helps us to identify all the free symbols in the graph in top-level and sub-graphs. cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames rec [ghstack-poisoned]
…nt (#138558) Pull Request resolved: #138558 Approved by: https://github.com/zou3519 ghstack dependencies: #138345, #138428
…nts (#138737) Pull Request resolved: #138737 Approved by: https://github.com/drisspg, https://github.com/zou3519, https://github.com/Chillee ghstack dependencies: #138345, #138428, #138558
…ing to subgraph (pytorch#138559) This refactoring is for getting a deterministic ordering of binding tensors and sizes of tensors. When seeing a free tensor x with shape (s0,) in a subgraph, the ordering of lifting changes from

```
lift_x_in_child, lift_s0_in_child, lift_s0_in_parent, lift_x_in_parent
```

to

```
lift_x_in_parent, lift_s0_in_parent, lift_x_in_child, lift_s0_in_child
```

This produces a deterministic ordering of handling the symints in lifted tensors. This is also the current contract of dynamo's top-level graph: we lift the free symbols in sizes after tensor x and insert the free symbols before tensor x's proxy. Pull Request resolved: pytorch#138559 Approved by: https://github.com/zou3519 ghstack dependencies: pytorch#138345, pytorch#138428, pytorch#138558, pytorch#138737
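A tiny illustration of the new ordering (a hypothetical lift() recorder, not dynamo code):

```python
# Records the lift order for a free tensor x with shape (s0,) under the new
# contract: bind in the parent first (tensor, then its size symbol), then
# replay the same order in the child subgraph.
order = []

def lift(name: str, scope: str) -> None:
    order.append(f"lift_{name}_in_{scope}")

for scope in ("parent", "child"):
    lift("x", scope)
    lift("s0", scope)

assert order == [
    "lift_x_in_parent", "lift_s0_in_parent",
    "lift_x_in_child", "lift_s0_in_child",
]
```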
…p_fx_proxy. (#139663) Refactoring only. Previously, we manually called SymNodeVariable.create; now we handle it with wrap_fx_proxy. This unifies the handling of operations that produce symints in wrap_fx_proxy. Pull Request resolved: #139663 Approved by: https://github.com/zou3519 ghstack dependencies: #138345, #138428, #138558, #138737, #138559
Stack from ghstack (oldest at bottom):
Code refactoring only. We move the wrap_to_fake_tensor_and_record logic out of wrap_fx_proxy for placeholders to provide the invariant that all graph inputs must set their example values when the inputs are created. This invariant helps us identify all the free symbols in the graph, both in the top-level graph and in sub-graphs.
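To make the payoff concrete, here is a hedged sketch of how the free symbols can then be read off the inputs (it uses the public torch.fx.experimental.symbolic_shapes.free_symbols helper for illustration; the real dynamo bookkeeping is more involved):

```python
import torch
from torch.fx.experimental.symbolic_shapes import free_symbols

def placeholder_free_symbols(graph: torch.fx.Graph) -> set:
    # The invariant guarantees every placeholder carries an example_value,
    # so a graph's free symbols can be collected straight from its inputs,
    # for the top-level graph and sub-graphs alike.
    syms = set()
    for node in graph.nodes:
        if node.op == "placeholder":
            syms.update(free_symbols(node.meta["example_value"]))
    return syms
```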
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @rec