Add 3d Attn Pattern to match HF Whisper #109156

Closed
wants to merge 15 commits

Conversation

@eellison (Contributor) commented Sep 12, 2023

Adds a 3d pattern that improves the perf of HF Whisper from 1.3x to 4.1x. We could be matching more generally on 3d, but I'll leave that for another PR.

Thanks to @drisspg for helping me write the pattern.

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 ipiszy ngimel yf225 chenyang78 kadeng muchulee8 aakhundov
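
For context, here is a minimal sketch (illustrative shapes and names, not the PR's actual pattern-matcher code) of the 3d attention computation HF Whisper emits, which the new pattern rewrites into a single scaled_dot_product_attention call:

import math

import torch
import torch.nn.functional as F

def whisper_style_attention_3d(q, k, v):
    # q, k, v: (batch * num_heads, seq_len, head_dim), already flattened to 3d
    scale = 1.0 / math.sqrt(q.shape[-1])
    scores = torch.bmm(q * scale, k.transpose(1, 2))  # materializes (B*H, T, T)
    probs = scores.softmax(dim=-1)
    return torch.bmm(probs, v)                        # (B*H, T, head_dim)

def fused_equivalent(q, k, v):
    # SDPA accepts 3d (N, L, E) inputs and applies the same 1/sqrt(head_dim)
    # scaling by default; the fused kernel avoids materializing the T x T scores.
    return F.scaled_dot_product_attention(q, k, v)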

@pytorch-bot (bot) commented Sep 12, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/109156

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 71c03e1 with merge base 518308a:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

eellison added a commit that referenced this pull request Sep 13, 2023
ghstack-source-id: 074872c
Pull Request resolved: #109156
Adds a 3d pattern that improves the perf of HF Whisper from 1.3x to 4.1x. We could be matching more generally on 3d, but I'll leave that for another PR.

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 ipiszy ngimel yf225 chenyang78 kadeng muchulee8 aakhundov

[ghstack-poisoned]
@eellison added the ciflow/trunk (Trigger trunk jobs on your pull request) label Sep 13, 2023
@eellison (Contributor Author)

cc @Valentine233, this is causing some CPU failures because the CPU attention meta is incorrect.

@Chillee (Collaborator) commented Sep 13, 2023

Can we use my recent faketensor updater to update the meta?

@eellison (Contributor Author)

@Chillee that wouldn't help in this case. The meta function is incorrect, so when we run the op as an ExternKernelFallback we generate incorrect strides within Inductor. Those incorrect strides then surface as runtime assertion failures.
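
A hypothetical way to surface this kind of mismatch in isolation (not from the PR; the op call and shapes below are assumptions): run the op for real on CPU and again under FakeTensorMode, then compare output strides.

import torch
from torch._subclasses.fake_tensor import FakeTensorMode

q = torch.randn(2, 4, 8, 16)  # (batch, heads, seq, head_dim); made-up shapes
real_out = torch.ops.aten._scaled_dot_product_flash_attention(q, q, q)[0]

with FakeTensorMode() as mode:
    fq = mode.from_tensor(q)
    fake_out = torch.ops.aten._scaled_dot_product_flash_attention(fq, fq, fq)[0]

# If the meta registration disagrees with the real kernel, Inductor plans around
# the wrong strides for the fallback's output, which later trips runtime asserts.
assert real_out.stride() == fake_out.stride(), (real_out.stride(), fake_out.stride())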

@Valentine233 (Collaborator) commented Sep 14, 2023

> cc @Valentine233, this is causing some CPU failures because the CPU attention meta is incorrect.

Please apply the following diff patch; it should avoid the AssertionError. I am also happy to open a separate PR for this fix.

diff --git a/torch/_meta_registrations.py b/torch/_meta_registrations.py
index 82b9d1bf3dd..fb5fbc51491 100644
--- a/torch/_meta_registrations.py
+++ b/torch/_meta_registrations.py
@@ -4811,14 +4811,12 @@ def meta__scaled_dot_product_flash(
     max_seqlen_batch_k = key.size(2)
     Nnz_q = batch_size * max_seqlen_batch_q

-    query_t = query.transpose(1, 2)
-    query_reshaped = query_t.reshape(Nnz_q, num_heads, head_dim)
-    attention = torch.empty_like(query_reshaped, device=query.device)
-    attention = attention.view(
-        batch_size, max_seqlen_batch_q, num_heads, head_dim
-    ).transpose(1, 2)
-
     if device_hint(query) == "cpu":
+        attention = torch.empty(
+            (batch_size, max_seqlen_batch_q, num_heads, head_dim),
+            dtype=query.dtype,
+            device=query.device,
+        ).transpose(1, 2)
         logsumexp = torch.empty(
             (
                 batch_size,
@@ -4839,6 +4837,12 @@ def meta__scaled_dot_product_flash(
             torch.empty((), dtype=torch.long, device="meta"),
             torch.empty((), dtype=query.dtype, device=query.device),
         )
+    query_t = query.transpose(1, 2)
+    query_reshaped = query_t.reshape(Nnz_q, num_heads, head_dim)
+    attention = torch.empty_like(query_reshaped, device=query.device)
+    attention = attention.view(
+        batch_size, max_seqlen_batch_q, num_heads, head_dim
+    ).transpose(1, 2)
     max_seqlen_q = math.ceil(max_seqlen_batch_q / 16) * 16
     logsumexp = torch.empty(
         (batch_size, num_heads, max_seqlen_q),

@eellison (Contributor Author)

@Valentine233, updated, but your diff didn't apply cleanly. Could you post the new meta registration for SDPA in its entirety, not just the diff?

@Valentine233 (Collaborator)

> @Valentine233, updated, but your diff didn't apply cleanly. Could you post the new meta registration for SDPA in its entirety, not just the diff?

Sure! Please try again, thanks.

@register_meta(
    [
        aten._scaled_dot_product_flash_attention,
    ]
)
def meta__scaled_dot_product_flash(
    query: Tensor,
    key: Tensor,
    value: Tensor,
    dropout_p: float = 0.0,
    is_causal: bool = False,
    return_debug_mask: bool = False,
    scale: Optional[float] = None,
):
    batch_size = query.size(0)
    num_heads = query.size(1)
    max_seqlen_batch_q = query.size(2)
    head_dim = query.size(3)

    max_seqlen_batch_k = key.size(2)
    Nnz_q = batch_size * max_seqlen_batch_q

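    # CPU branch: allocate the output contiguous in (batch, seq, heads, head_dim)
    # and return a (batch, heads, seq, head_dim) view, matching the CPU kernel's layout.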
    if device_hint(query) == "cpu":
        attention = torch.empty(
            (batch_size, max_seqlen_batch_q, num_heads, head_dim),
            dtype=query.dtype,
            device=query.device,
        ).transpose(1, 2)
        logsumexp = torch.empty(
            (
                batch_size,
                max_seqlen_batch_q,
                num_heads,
            ),
            dtype=torch.float,
            device=query.device,
        ).transpose(1, 2)
        return (
            attention,
            logsumexp,
            torch.empty((), dtype=torch.int32, device="meta"),
            torch.empty((), dtype=torch.int32, device="meta"),
            0,
            0,
            torch.empty((), dtype=torch.long, device="meta"),
            torch.empty((), dtype=torch.long, device="meta"),
            torch.empty((), dtype=query.dtype, device=query.device),
        )
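    # Non-CPU (e.g. CUDA) path: allocate like the (Nnz_q, num_heads, head_dim)
    # reshape of the transposed query, then view/transpose back to
    # (batch, num_heads, seq, head_dim).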
    query_t = query.transpose(1, 2)
    query_reshaped = query_t.reshape(Nnz_q, num_heads, head_dim)
    attention = torch.empty_like(query_reshaped, device=query.device)
    attention = attention.view(
        batch_size, max_seqlen_batch_q, num_heads, head_dim
    ).transpose(1, 2)
    max_seqlen_q = math.ceil(max_seqlen_batch_q / 16) * 16
    logsumexp = torch.empty(
        (batch_size, num_heads, max_seqlen_q),
        dtype=torch.float,
        device=query.device,
    )
    cumulative_sequence_length_q = torch.empty(
        batch_size + 1, dtype=torch.int32, device="meta"
    )
    cumulative_sequence_length_k = torch.empty(
        batch_size + 1, dtype=torch.int32, device="meta"
    )

    if return_debug_mask:
        blocksize_c = 128 if head_dim > 64 else 256
        max_seqlen_k = math.ceil(max_seqlen_batch_q / blocksize_c)
        if max_seqlen_batch_k <= 128:
            max_seqlen_k = 128
        elif max_seqlen_batch_k <= 256:
            max_seqlen_k = 256
        debug_mask = torch.empty(
            (batch_size, num_heads, max_seqlen_q, max_seqlen_k),
            dtype=query.dtype,
            device=query.device,
        )
    else:
        debug_mask = torch.empty(0, dtype=query.dtype, device=query.device)

    # Note [Seed and Offset]: device for seed and offset below depends on whether we are
    # capturing or not, but at the time of tracing we don't know if we
    # are going to use cudagraphs or not, so we return meta tensors here
    # it's possible we'll need to have some special handling in inductor for sdpa

    return (
        attention,
        logsumexp,
        cumulative_sequence_length_q,
        cumulative_sequence_length_k,
        max_seqlen_batch_q,
        max_seqlen_batch_k,
        torch.empty((), dtype=torch.long, device="meta"),
        torch.empty((), dtype=torch.long, device="meta"),
        debug_mask,
    )
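
As an aside, a standalone sketch (arbitrary sizes) of the output layout the CPU branch above produces: the tensor is contiguous in (batch, seq, heads, head_dim) and only viewed as (batch, heads, seq, head_dim), so its strides differ from a freshly allocated contiguous (batch, heads, seq, head_dim) tensor.

import torch

B, H, Q, D = 2, 4, 8, 16  # arbitrary sizes
attn = torch.empty((B, Q, H, D)).transpose(1, 2)
print(attn.shape)    # torch.Size([2, 4, 8, 16])
print(attn.stride()) # (512, 16, 64, 1); a contiguous (B, H, Q, D) tensor would be (512, 128, 16, 1)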

@eellison (Contributor Author)

cc @Valentine233, accuracy is failing on HF Whisper CPU with these changes.

@eellison (Contributor Author)

@pytorchbot merge

@pytorchmergebot (Collaborator)

Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Raised by workflow job

@eellison (Contributor Author)

@pytorchbot merge

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

pytorchmergebot pushed a commit that referenced this pull request Sep 20, 2023
The pretty print is faster and more concise because it memoizes objects.

Pull Request resolved: #109066
Approved by: https://github.com/yanboliang
ghstack dependencies: #109663, #108894, #108917, #109142, #109156
@facebook-github-bot deleted the gh/eellison/540/head branch September 24, 2023 14:23