
[inductor][cpp] epilogue support for gemm template #126019

Closed · wants to merge 15 commits

Conversation

jgong5 (Collaborator) commented May 12, 2024

Stack from ghstack (oldest at bottom):

As part of #125683, this PR adds epilogue support for the C++ GEMM template by reusing the C++ vector codegen on sub-slices of tensors. This is implemented by retracing the epilogue IR nodes with new ranges and offsets. New `codegen_loop_bodies` and `codegen_functions` methods are added to the C++ vector codegen for this purpose; the `store_output` method of the template kernel leverages them for epilogue codegen and for storing to the final result.
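For intuition, here is a minimal NumPy sketch of the fusion idea. This is a hypothetical illustration, not inductor's codegen (the PR emits C++): the function name, tile sizes, and the bias-plus-ReLU epilogue are all assumptions. The point is that the epilogue is re-applied per output tile, with ranges and offsets adjusted to that sub-slice, instead of running as a separate pass over the full result.

```python
import numpy as np

def gemm_with_fused_epilogue(A, B, bias, tile_m=32, tile_n=32):
    # Hypothetical sketch: micro-GEMM per tile, with the epilogue applied
    # to the same sub-slice before the final store.
    M, K = A.shape
    _, N = B.shape
    C = np.empty((M, N), dtype=A.dtype)
    for m0 in range(0, M, tile_m):
        for n0 in range(0, N, tile_n):
            m1, n1 = min(m0 + tile_m, M), min(n0 + tile_n, N)
            acc = A[m0:m1, :] @ B[:, n0:n1]         # micro-GEMM on the tile
            acc = np.maximum(acc + bias[n0:n1], 0)  # epilogue retraced onto the sub-slice
            C[m0:m1, n0:n1] = acc                   # store to the final result
    return C
```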

cc @voznesenskym @penguinwu @EikanWang @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang


pytorch-bot bot commented May 12, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/126019

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (4 Unrelated Failures)

As of commit b30c694 with merge base 7a506dd:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following jobs failed, but likely due to flakiness present on trunk, and have been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

jgong5 added a commit that referenced this pull request May 12, 2024
ghstack-source-id: 5c5aa78127c399dc804cc7f768fe038cbf05a7e4
Pull Request resolved: #126019
jgong5 marked this pull request as draft May 12, 2024 08:52
jgong5 added a commit that referenced this pull request May 13, 2024
ghstack-source-id: 58c56a7ef3271a127573415e5391b8f1ac5d1875
Pull Request resolved: #126019
jgong5 marked this pull request as ready for review May 13, 2024 01:46
jgong5 requested review from jansel and lezcano May 14, 2024 12:16
jgong5 (Collaborator, Author) commented May 15, 2024

@pytorchbot merge

pytorch-bot added the ciflow/trunk label (Trigger trunk jobs on your pull request) May 15, 2024
pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request May 16, 2024
…ue fusion (#126068)

As part of #125683, this PR adds the initial bf16/fp16 GEMM template support, with the micro-GEMM implemented via fused type casting and fp32 computation. It doesn't provide epilogue fusion support yet; that will be added in the next PR.

Pull Request resolved: #126068
Approved by: https://github.com/jansel
ghstack dependencies: #126019
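A rough sketch of the fused-casting idea in the commit message above, using NumPy and fp16 as stand-ins (the PR generates C++ and targets bf16 as well; the function name is hypothetical): operands are upcast to fp32 on load, the computation accumulates in fp32, and the result is cast back to low precision on store.

```python
import numpy as np

def micro_gemm_lowp(A16: np.ndarray, B16: np.ndarray) -> np.ndarray:
    # Fused type casting: upcast the low-precision operands while loading
    A32 = A16.astype(np.float32)
    B32 = B16.astype(np.float32)
    C32 = A32 @ B32                  # fp32 computation and accumulation
    return C32.astype(np.float16)    # cast back to low precision on store
```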
pytorchmergebot pushed a commit that referenced this pull request May 24, 2024
…ue fusion (#126068)

As part of #125683, this PR adds the initial bf16/fp16 GEMM template support, with the micro-GEMM implemented via fused type casting and fp32 computation. It doesn't provide epilogue fusion support yet; that will be added in the next PR.

Pull Request resolved: #126068
Approved by: https://github.com/jansel
ghstack dependencies: #124021, #126019
pytorchmergebot pushed a commit that referenced this pull request May 24, 2024

As part of #125683, this PR adds epilogue fusion support for bf16/fp16 GEMMs. The key changes are as follows:
1. bf16 linear with epilogue fusion of some ops was originally supported via the ATen oneDNN linear pointwise ops. To match the ATen op semantics, in-template epilogue support is added to the cpp gemm template so that we have "gemm + in-template epilogues -> template buffer". If the template is chosen for codegen, the in-template epilogues are concatenated with the out-of-template epilogues that are appended during scheduling.
2. Support bf16/fp16 legalization for `codegen_loop_bodies`, which is used to generate the epilogue loops.
3. We used to leverage the in-place buffer mechanism to handle buffer reuse in the epilogue codegen, in particular the reuse of the output buffers of the GEMM, the template, and the epilogues. This is not correct, since the output buffer is an "output", not an "in-place" buffer, of the template kernel itself. Now we use a dedicated "aliases" dict to manage such buffer reuses, and the intermediate aliasing buffers are removed after codegen.
4. Add a `localize_buffer` method to `LocalBufferScope` to allow replacing a global buffer with a local one in the given inductor IR nodes. This lets the fused loops work on smaller local buffers for better data locality (see the sketch after this commit message).

Pull Request resolved: #126545
Approved by: https://github.com/jansel
ghstack dependencies: #124021, #126019, #126068
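To illustrate the buffer-localization idea in item 4, a minimal NumPy sketch follows. The helper name and the add-plus-ReLU epilogue are hypothetical, and this is not the `LocalBufferScope.localize_buffer` implementation itself, which rewrites inductor IR nodes rather than copying data at runtime.

```python
import numpy as np

def epilogue_on_local_tile(global_buf, m0, n0, tile_m, tile_n):
    # Hypothetical sketch: copy the tile into a small local buffer, run the
    # epilogue there for better data locality, then write back once.
    local = np.array(global_buf[m0:m0 + tile_m, n0:n0 + tile_n])  # local buffer
    local = np.maximum(local + 1.0, 0.0)  # example epilogue: add + ReLU
    global_buf[m0:m0 + tile_m, n0:n0 + tile_n] = local
```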
pytorchmergebot added a commit that referenced this pull request May 27, 2024
This reverts commit 56c412d.

Reverted #126019 on behalf of https://github.com/DanilBaibak due to breaking an internal build (comment on #124021)
pytorchmergebot (Collaborator) commented:

@jgong5 your PR has been successfully reverted.

```python
cpp_argdefs, _, _ = self.args.cpp_argdefs()
return f"void {self.kernel_name}({', '.join(cpp_argdefs)})"

placeholder = "<DEFINE_KERNEL>"
```
Contributor commented:

btw, rename this to <DEF_KERNEL>, or it's going to merge conflict with #127144

Collaborator (Author) replied:

Thanks for the reminder. Fixed.

titaiwangms pushed a commit to titaiwangms/pytorch that referenced this pull request May 28, 2024
As part of pytorch#125683, this PR adds epilogue support for the C++ GEMM template by reusing the C++ vector codegen on sub-slices of tensors. This is implemented by retracing the epilogue IR nodes with new ranges and offsets. New `codegen_loop_bodies` and `codegen_functions` methods are added to the C++ vector codegen for this purpose; the `store_output` method of the template kernel leverages them for epilogue codegen and for storing to the final result.

Pull Request resolved: pytorch#126019
Approved by: https://github.com/jansel
ghstack dependencies: pytorch#124021
titaiwangms pushed a commit to titaiwangms/pytorch that referenced this pull request May 28, 2024
…ue fusion (pytorch#126068)

As part of pytorch#125683, this PR adds the initial bf16/fp16 GEMM template support, with the micro-GEMM implemented via fused type casting and fp32 computation. It doesn't provide epilogue fusion support yet; that will be added in the next PR.

Pull Request resolved: pytorch#126068
Approved by: https://github.com/jansel
ghstack dependencies: pytorch#124021, pytorch#126019
titaiwangms pushed a commit to titaiwangms/pytorch that referenced this pull request May 28, 2024
…rch#126545)

As part of pytorch#125683, this PR adds epilogue fusion support for bf16/fp16 GEMMs. The key changes are as follows:
1. bf16 linear with epilogue fusion of some ops was originally supported via the ATen oneDNN linear pointwise ops. To match the ATen op semantics, in-template epilogue support is added to the cpp gemm template so that we have "gemm + in-template epilogues -> template buffer". If the template is chosen for codegen, the in-template epilogues are concatenated with the out-of-template epilogues that are appended during scheduling.
2. Support bf16/fp16 legalization for `codegen_loop_bodies`, which is used to generate the epilogue loops.
3. We used to leverage the in-place buffer mechanism to handle buffer reuse in the epilogue codegen, in particular the reuse of the output buffers of the GEMM, the template, and the epilogues. This is not correct, since the output buffer is an "output", not an "in-place" buffer, of the template kernel itself. Now we use a dedicated "aliases" dict to manage such buffer reuses, and the intermediate aliasing buffers are removed after codegen.
4. Add a `localize_buffer` method to `LocalBufferScope` to allow replacing a global buffer with a local one in the given inductor IR nodes. This lets the fused loops work on smaller local buffers for better data locality.

Pull Request resolved: pytorch#126545
Approved by: https://github.com/jansel
ghstack dependencies: pytorch#124021, pytorch#126019, pytorch#126068
titaiwangms pushed a commit to titaiwangms/pytorch that referenced this pull request May 28, 2024
jgong5 (Collaborator, Author) commented May 29, 2024

@pytorchbot rebase

pytorchmergebot (Collaborator) commented:

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

pytorchmergebot (Collaborator) commented:

Tried to rebase and push PR #126019, but it was already up to date. Try rebasing against main by issuing:
@pytorchbot rebase -b main

pytorchmergebot pushed a commit that referenced this pull request May 29, 2024
ghstack-source-id: 26e170c08eb2d226dcefcc77be2669ebff9eb9ee
Pull Request resolved: #126019
jgong5 (Collaborator, Author) commented May 29, 2024

@pytorchbot merge

pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request May 29, 2024
…ue fusion (#126068)

As part of #125683, this PR adds the initial bf16/fp16 GEMM template support, with the micro-GEMM implemented via fused type casting and fp32 computation. It doesn't provide epilogue fusion support yet; that will be added in the next PR.

Pull Request resolved: #126068
Approved by: https://github.com/jansel
ghstack dependencies: #124021, #126019
kit1980 removed the Reverted label May 29, 2024
Aidyn-A pushed a commit to tinglvv/pytorch that referenced this pull request May 30, 2024
Aidyn-A pushed a commit to tinglvv/pytorch that referenced this pull request May 30, 2024
As part of pytorch#125683, this PR adds epilogue support for the C++ GEMM template by reusing the C++ vector codegen on sub-slices of tensors. This is implemented by retracing the epilogue IR nodes with new ranges and offsets. New `codegen_loop_bodies` and `codegen_functions` methods are added to the C++ vector codegen for this purpose; the `store_output` method of the template kernel leverages them for epilogue codegen and for storing to the final result.

Pull Request resolved: pytorch#126019
Approved by: https://github.com/jansel
ghstack dependencies: pytorch#124021
Aidyn-A pushed a commit to tinglvv/pytorch that referenced this pull request May 30, 2024
…ue fusion (pytorch#126068)

As part of pytorch#125683, this PR adds the initial bf16/fp16 GEMM template support, with the micro-GEMM implemented via fused type casting and fp32 computation. It doesn't provide epilogue fusion support yet; that will be added in the next PR.

Pull Request resolved: pytorch#126068
Approved by: https://github.com/jansel
ghstack dependencies: pytorch#124021, pytorch#126019
github-actions bot deleted the gh/jgong5/45/head branch June 29, 2024 01:54