
Fix weight loading for GQA with TP #2379

Merged 1 commit into vllm-project:main on Jan 15, 2024
Conversation

@zhangch9 (Contributor) commented on Jan 8, 2024

Fixes #1735.

This PR modifies the weight loading logic when tp_size is larger than num_kv_heads.
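For context, the failure mode arises because with grouped-query attention the checkpoint stores fewer K/V heads than there are tensor-parallel ranks, so several ranks must load the same K/V slice rather than distinct ones. Below is a minimal, hedged sketch of that shard-selection idea; it is not vLLM's actual code, and the function and parameter names are hypothetical.

```python
# Hedged sketch (not vLLM's actual implementation): selecting which slice of a
# checkpoint's K/V weight a tensor-parallel rank should load under GQA when
# tp_size > num_kv_heads. All names here are illustrative.

def kv_shard_id(tp_rank: int, tp_size: int, num_kv_heads: int) -> int:
    """Return the index of the checkpoint K/V slice that `tp_rank` loads.

    When tp_size <= num_kv_heads, the KV heads are split evenly and each rank
    loads its own slice. When tp_size > num_kv_heads, each KV head must be
    replicated across tp_size // num_kv_heads ranks, so groups of ranks load
    the same slice of the checkpoint weight.
    """
    if tp_size <= num_kv_heads:
        return tp_rank
    num_replicas = tp_size // num_kv_heads  # ranks that share one KV head
    return tp_rank // num_replicas


# Example resembling the linked issue: a model with 2 KV heads run with
# tensor_parallel_size=4. Ranks 0-1 load KV slice 0; ranks 2-3 load KV slice 1.
for rank in range(4):
    print(rank, kv_shard_id(rank, tp_size=4, num_kv_heads=2))
```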

@zhuohan123 (Collaborator) left a comment

Great catch! Thanks for the fix!

@zhuohan123 zhuohan123 merged commit f780504 into vllm-project:main Jan 15, 2024
2 of 4 checks passed
@zhangch9 zhangch9 deleted the fix-gqa-tp branch January 16, 2024 05:48
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Jan 18, 2024
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024
Development

Successfully merging this pull request may close these issues.

anyone test chatglm3-6b? set tensor_parallel_size=4, get wrong response