
[ROCm] include gfx908 as supported #2792

Merged · 3 commits merged into vllm-project:main on Feb 20, 2024
Conversation

@jamestwhedbee (Contributor)
ROCm/flash-attention supports the gfx908 architecture.

Without this change, vLLM appears to build successfully for me, but serving an LLM on an MI100 results in gibberish output.

With this change, everything works as expected.
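For context, a change like this amounts to adding gfx908 to the set of GPU architectures that vLLM's ROCm build treats as supported, so that the ROCm flash-attention path is built for the MI100. The sketch below shows that kind of check; the names used here (ROCM_SUPPORTED_ARCHS, get_amdgpu_offload_arch) and the other entries in the set are illustrative assumptions, not necessarily the exact identifiers touched by this PR.

```python
# Hypothetical sketch of a build-time architecture check, assuming the
# supported-architecture set lives in setup.py. Names and set contents
# are illustrative, not the exact code changed in this PR.
import subprocess

# gfx908 (MI100) included alongside whatever architectures were already listed.
ROCM_SUPPORTED_ARCHS = {"gfx908", "gfx90a", "gfx942"}


def get_amdgpu_offload_arch() -> str:
    """Query the local GPU architecture via the ROCm helper binary."""
    out = subprocess.check_output(["/opt/rocm/llvm/bin/amdgpu-offload-arch"])
    return out.decode().strip()


def check_rocm_arch() -> None:
    """Fail the build early if the detected GPU architecture is unsupported."""
    arch = get_amdgpu_offload_arch()
    if arch not in ROCM_SUPPORTED_ARCHS:
        raise RuntimeError(
            f"GPU architecture {arch!r} is not in the supported set "
            f"{sorted(ROCM_SUPPORTED_ARCHS)}."
        )
```

Without gfx908 in that set, the build can still appear to succeed while the kernels compiled for the MI100 are wrong, which matches the gibberish output described above.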

@jamestwhedbee jamestwhedbee changed the title include gfx908 as supported [ROCm] include gfx908 as supported Feb 6, 2024
@tjtanaa tjtanaa mentioned this pull request Feb 7, 2024
@jamestwhedbee (Contributor, Author)

@zhuohan123 would you have time to review this?

@jamestwhedbee (Contributor, Author)

@WoosukKwon is there anything I should be doing differently to get a review here?

@zhuohan123 (Collaborator) left a comment


LGTM! Thanks for the fix!

@zhuohan123 zhuohan123 merged commit 264017a into vllm-project:main Feb 20, 2024
17 checks passed
xjpang pushed a commit to xjpang/vllm that referenced this pull request Feb 22, 2024
xjpang pushed a commit to xjpang/vllm that referenced this pull request Mar 4, 2024
Labels: none yet
Projects: none yet
Linked issues: none yet
3 participants