[AOTI] Improve the two-pass wrapper codegen #114067
Conversation
Summary: For the second-pass, we don't have to rerun the whole inductor flow again. This PR moves that second-pass to the codegen time. This change not only speeds up the compilation, but also removes kernel scheduling inconsistency between the two passes. Another future improvement is to make the second-pass reuse the scheduler and do the wrapper codegen only. This is a copy of #113762 to land in github first. [ghstack-poisoned]
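As a rough illustration of the flow change described in the summary, here is a hedged sketch: the function names below (`run_passes`, `codegen_wrapper`, and friends) are invented for this sketch and are not Inductor's actual API. The point is that the second pass goes from a full pipeline re-run to a wrapper-codegen-only step over the already-scheduled kernels, so both passes see identical scheduling.

```python
# Hypothetical sketch of the two-pass restructuring; names are
# illustrative stand-ins, not Inductor's real API.

def run_passes(graph):
    """Stand-in for Inductor's lowering + kernel scheduling."""
    return {"graph": graph, "kernels": ["kernel_0"]}

def codegen_wrapper(scheduled, wrapper):
    """Stand-in for wrapper code generation only (no re-scheduling)."""
    return f"// {wrapper} wrapper calling {scheduled['kernels']}"

def compile_two_pass(graph):
    # Old flow: the whole pipeline runs twice, which is slow and lets
    # kernel scheduling diverge between the two passes.
    run_passes(graph)                 # first pass
    second = run_passes(graph)        # expensive full re-run
    return codegen_wrapper(second, "cpp")

def compile_single_pass(graph):
    # New flow: schedule once, then do only the wrapper codegen at
    # codegen time, reusing the already-scheduled kernels.
    scheduled = run_passes(graph)
    return codegen_wrapper(scheduled, "cpp")
```

Under these stand-ins, both flows emit the same wrapper, but the new one schedules only once.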
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/114067
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit af16cb8 with merge base 0bd4d1f. This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: 40f58d193fb3424d162ad0be43ae69f2a8cbe691
Pull Request resolved: #114067
@pytorchbot merge
Merge failed. Reason: this PR needs a label; if it is missing, please add it. To add a label, you can comment to pytorchbot. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
# example_inputs will be used by AOTInductor to dry-run the generated code for Triton kernel tuning.
# For the forward pass, we have the real inputs to be used as example_inputs. For the backward pass,
# we currently use fake tensors and defake them later.
example_inputs=V.real_inputs if is_inference else example_inputs,
@desertfire this line is wrong. `real_inputs` does not correspond in any way to all inputs to the graph module. In particular, it ignores all of the `Parameter` inputs to the graph module.
The mistake is that previously, during cpp codegen, we would actually patch `real_inputs` with `example_inputs`. So this line is mistaken; it should just use `example_inputs`.
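A minimal sketch of the temporary patching described in this comment; the context manager and the `SimpleNamespace` standing in for Inductor's `V` namespace are both hypothetical, written only to illustrate the swap-and-restore behavior.

```python
# Illustrative sketch, not Inductor's actual code.
import contextlib
from types import SimpleNamespace

@contextlib.contextmanager
def patch_real_inputs(ns, example_inputs):
    # Temporarily swap ns.real_inputs for example_inputs, restoring the
    # original afterwards; this mirrors how the old cpp-codegen path
    # reportedly patched real_inputs with example_inputs.
    saved = ns.real_inputs
    ns.real_inputs = example_inputs
    try:
        yield ns
    finally:
        ns.real_inputs = saved

# Stand-in for Inductor's V namespace.
V = SimpleNamespace(real_inputs=["real"])
with patch_real_inputs(V, ["example"]):
    inside = list(V.real_inputs)   # sees the patched value
after = list(V.real_inputs)        # restored after the block
```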
Repro (note the missing `import torch` in the original snippet):

import torch
from torch import nn
from torch._inductor import config

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 10, device='cuda')

    def forward(self, x):
        return self.linear(x)

with torch.no_grad(), config.patch({"cpp_wrapper": True}):
    model = Model()
    model_opt = torch.compile(model)
    model_opt(torch.zeros(10, device="cuda"))
Stack from ghstack (oldest at bottom):
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler