
feat(server): flash attention v2 #624

Merged
merged 6 commits into main from feat/flash_v2 on Jul 18, 2023
Conversation

OlivierDehaene (Member)

No description provided.

Narsil (Collaborator) previously approved these changes Jul 17, 2023

LGTM

krzim (Contributor) commented Jul 17, 2023

It looks like this will remove compatibility with compute capability 7.5. That means T4 cards and AWS G4 instance types will no longer be supported.
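(For context: Flash Attention v2's kernels target Ampere, i.e. compute capability 8.0 and newer, while T4 / AWS G4 GPUs are Turing at 7.5. Below is a minimal sketch, assuming PyTorch, of how a server could detect this at startup; it is illustrative, not this PR's actual code.)

```python
import torch

def supports_flash_attention_v2(device: int = 0) -> bool:
    """True if the GPU is Ampere (sm_80) or newer, as Flash Attention v2 requires."""
    major, minor = torch.cuda.get_device_capability(device)
    return (major, minor) >= (8, 0)

# A T4 (Turing) reports (7, 5), so this returns False there.
```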

Narsil (Collaborator) commented Jul 18, 2023

Support for 7.5 is apparently coming; in the meantime we might try to keep both versions.
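(One way to "keep both" is an import-time fallback: prefer the v2 kernels on capable hardware and fall back to v1 otherwise. A sketch of that idea follows; the module names `flash_attn_2_cuda` and `flash_attn_cuda` are assumptions for illustration, not necessarily what the server uses.)

```python
import torch

# Assumes a CUDA device is present; a real server would guard on
# torch.cuda.is_available() first.
major, _ = torch.cuda.get_device_capability()

HAS_FLASH_ATTN_V2 = False
try:
    if major < 8:
        # v2 kernels need Ampere (sm_80+); force the v1 path on older GPUs
        raise ImportError("Flash Attention v2 requires compute capability >= 8.0")
    import flash_attn_2_cuda  # assumed v2 extension module
    HAS_FLASH_ATTN_V2 = True
except ImportError:
    import flash_attn_cuda  # assumed v1 extension module (supports sm_75)
```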

Narsil previously approved these changes Jul 18, 2023
krzim (Contributor) commented Jul 18, 2023

That would be great. The motivation for my 4-bit bnb PR was to improve throughput on Falcon models on g4s. We have workloads that run in regions that only have g4 availability, so maintaining compatibility would be best.

OlivierDehaene (Member, Author)

I know. That's why the PR is not merged.

OlivierDehaene merged commit 3b71c38 into main on Jul 18, 2023
4 of 5 checks passed
OlivierDehaene deleted the feat/flash_v2 branch on July 18, 2023 at 14:21
3 participants