
How to disable flash_attention #6

Open
lqniunjunlper opened this issue Feb 28, 2024 · 6 comments

@lqniunjunlper

When I run the test code, I get this error:
RuntimeError: FlashAttention only supports Ampere GPUs or newer.

@Sunwood-ai-labs

I also encountered the same problem.

[screenshot of the same error]

@OmkarThawakar
Member

Hi @lqniunjunlper and @Sunwood-ai-labs,

Thanks for your interest in our work.

In place of flash_attention you can use the default PyTorch attention.

Just remove the import "from flash_attn import flash_attn_func" and call "torch.nn.functional.scaled_dot_product_attention" wherever "flash_attn_func" was used.
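
For reference, here is a minimal sketch of such a drop-in wrapper (the name sdpa_flash_attn_func is mine, not from the repo), assuming q, k, v arrive in flash-attn's usual (batch, seqlen, nheads, headdim) layout:

```python
import torch
import torch.nn.functional as F

def sdpa_flash_attn_func(q, k, v, dropout_p=0.0, softmax_scale=None, causal=False):
    """Stand-in for flash_attn.flash_attn_func built on PyTorch SDPA.

    flash_attn_func takes (batch, seqlen, nheads, headdim), while
    F.scaled_dot_product_attention takes (batch, nheads, seqlen, headdim),
    so we transpose on the way in and back on the way out.
    """
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(
        q, k, v,
        dropout_p=dropout_p,
        is_causal=causal,
        scale=softmax_scale,  # the scale kwarg requires PyTorch >= 2.1
    )
    return out.transpose(1, 2)  # back to (batch, seqlen, nheads, headdim)
```

Note that the two APIs disagree on tensor layout, so forgetting the transposes will produce shape errors like the one reported below.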

@lqniunjunlper
Author

@OmkarThawakar Thanks, it worked!
But I got another error:
attn_output = torch.nn.functional.scaled_dot_product_attention(
RuntimeError: The size of tensor a (10) must match the size of tensor b (19) at non-singleton dimension 1
Maybe the shapes of q, k, and v don't match?
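
This error is consistent with the layout mismatch between the two APIs: in flash-attn's (batch, seqlen, nheads, headdim) layout, dimension 1 is the sequence length, so SDPA ends up comparing the query length (10) against the key/value length (19) where it expects matching head counts. A small repro/fix sketch with illustrative shapes (the head count and head dim are guesses, not from the repo):

```python
import torch
import torch.nn.functional as F

B, H, D = 1, 32, 128
q = torch.randn(B, 10, H, D)  # 10 query tokens, flash-attn (B, S, H, D) layout
k = torch.randn(B, 19, H, D)  # 19 key tokens, e.g. prompt plus KV cache
v = torch.randn(B, 19, H, D)

# Passing flash-layout tensors straight in reproduces the error:
#   RuntimeError: The size of tensor a (10) must match the size of
#   tensor b (19) at non-singleton dimension 1
# out = F.scaled_dot_product_attention(q, k, v)

# Transposing to SDPA's (B, H, S, D) layout fixes it:
out = F.scaled_dot_product_attention(
    q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
).transpose(1, 2)  # back to (B, S, H, D)
print(out.shape)   # torch.Size([1, 10, 32, 128])
```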

@simplew2011

same

@Luoyingfeng8

> @OmkarThawakar Thanks, it worked! But I got another error ... "The size of tensor a (10) must match the size of tensor b (19) at non-singleton dimension 1"

Did you solve the problem?

@zimenglan-sysu-512

+1, @OmkarThawakar did you solve it?
Thanks
