
[BugFix] Fix weight loading for Mixtral with TP #2208

Merged: 1 commit merged into main from fix-mixtral-tp on Dec 20, 2023
Conversation

WoosukKwon (Collaborator) commented:

Fixes #2202

Currently, the Mixtral model does not support quantization with tensor parallelism (TP > 1) because `DummyModule` does not use the quantized linear methods. This PR removes `DummyModule` and instead modifies the weight-loading logic to account for expert parallelism.
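
For readers following along, here is a minimal sketch of what "account for expert parallelism" means for weight loading, assuming each TP rank owns a contiguous slice of the experts and simply skips checkpoint tensors for experts it does not host (the function names and helper structure are illustrative, not the PR's actual diff; Mixtral checkpoints do name expert weights with an `.experts.<idx>.` segment):

```python
from typing import Dict, Iterable, Tuple

import torch


def expert_indices(num_experts: int, tp_rank: int, tp_size: int) -> range:
    """Experts assigned to this rank under a contiguous partition."""
    per_rank = num_experts // tp_size
    return range(tp_rank * per_rank, (tp_rank + 1) * per_rank)


def load_expert_weights(
    named_weights: Iterable[Tuple[str, torch.Tensor]],
    params: Dict[str, torch.Tensor],
    num_experts: int,
    tp_rank: int,
    tp_size: int,
) -> None:
    # Hypothetical sketch: skip experts owned by other ranks instead of
    # routing their weights into placeholder (DummyModule-style) modules.
    owned = expert_indices(num_experts, tp_rank, tp_size)
    for name, tensor in named_weights:
        # Checkpoint names look like "...experts.<idx>.w1.weight".
        if ".experts." in name:
            expert_idx = int(name.split(".experts.")[1].split(".")[0])
            if expert_idx not in owned:
                continue  # expert lives on another rank; skip its weights
        params[name].copy_(tensor)
```

Because unowned experts are never materialized, the quantized linear methods only ever see real, rank-local expert modules, which is what the placeholder approach broke.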

Yard1 (Collaborator) left a comment:

LGTM, thanks!

WoosukKwon merged commit ba4f826 into main on Dec 20, 2023
2 checks passed
WoosukKwon deleted the fix-mixtral-tp branch on December 20, 2023 at 00:16
WoosukKwon mentioned this pull request on Jan 4, 2024
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request on Feb 13, 2024
Development

Successfully merging this pull request may close these issues:

Mixtral-8x7B-Instruct-v0.1-GPTQ weight loading error