
fix bugs in llama flash attention #681

Merged: 2 commits into OptimalScale:main on Nov 27, 2023
Conversation

yaoguany (Collaborator)

fix bugs in llama2-70b multi-query attention

research4pan (Contributor) left a comment:

LGTM, thanks! (This should support llama models with multi-query attention, such as Llama-2-70b, whose num_heads != num_key_value_heads)
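For context: Llama-2-70b uses grouped-query attention, where 64 query heads share 8 key/value heads, so attention code that assumes num_heads == num_key_value_heads breaks. The PR diff itself isn't shown here, but a minimal sketch of the key/value-head expansion such a fix typically involves (mirroring the repeat_kv helper in Hugging Face's Llama implementation) looks like this:

```python
import torch

def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Expand key/value heads so their count matches the query heads.

    Input shape:  (batch, num_key_value_heads, seq_len, head_dim)
    Output shape: (batch, num_key_value_heads * n_rep, seq_len, head_dim)
    Equivalent to torch.repeat_interleave(hidden_states, n_rep, dim=1).
    """
    batch, num_key_value_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(
        batch, num_key_value_heads, n_rep, slen, head_dim
    )
    return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)

# Llama-2-70b: 64 query heads share 8 key/value heads.
num_heads, num_key_value_heads, head_dim = 64, 8, 128
keys = torch.randn(1, num_key_value_heads, 16, head_dim)
keys = repeat_kv(keys, num_heads // num_key_value_heads)
assert keys.shape == (1, num_heads, 16, head_dim)
```

With the key/value tensors expanded this way before the flash-attention call, models whose num_key_value_heads differs from num_heads work the same as standard multi-head models.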

research4pan merged commit 4124102 into OptimalScale:main on Nov 27, 2023 (1 check failed).