[Model][Quantization] Fix / Add GGUF support for Qwen2 MoE models #30307
Conversation
This commit fixes two issues preventing Qwen2 MoE + GGUF models from loading:
1. Initialization of `Qwen2MoeModel.embed_tokens` is fixed to use
correct prefix and respect quantization settings.
2. Added GGUF-specific compatibility layer for
`Qwen2MoeSparseMoeBlock.shared_expert_gate`
(GGUF: 1D tensor `(n)`, HF/vLLM: 2D tensor `(1, n)`).
The latter was implemented in HF Transformers as part of
`Qwen2MoeTensorProcessor`, but since vLLM's weight loader uses the
`gguf` library directly without such a compatibility layer, we need to
implement the equivalent here.
cf. <https://github.com/huggingface/transformers/blob/v4.57.3/src/transformers/modeling_gguf_pytorch_utils.py#L110-L113>
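For illustration, a minimal sketch of what such a compatibility shim can look like (the function name and placement are assumptions, not the actual vLLM diff):

```python
import torch

def fixup_gguf_shared_expert_gate(name: str,
                                  loaded_weight: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: reshape GGUF's 1D shared_expert_gate weight (n,)
    into the 2D (1, n) layout that the HF/vLLM parameter expects."""
    if name.endswith("shared_expert_gate.weight") and loaded_weight.dim() == 1:
        loaded_weight = loaded_weight.unsqueeze(0)
    return loaded_weight
```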
Beware that, due to a bug in HF Transformers, Qwen2-57B-A14B-Instruct
(with the same architecture but some non-default parameters) will not work
with this fix alone.
Signed-off-by: Tsukasa OI <floss_llm@irq.a4lg.com>
Code Review
This pull request introduces fixes to enable GGUF support for Qwen2 MoE models. The changes include correctly initializing `embed_tokens` to respect quantization settings and adding a compatibility layer to handle a shape mismatch for the `shared_expert_gate` weight in GGUF models.
The changes are correct and address the described issues. I have one suggestion to make the weight loading logic for `shared_expert_gate` more robust by adding more explicit checks against the model's parameter shape, which will prevent potential issues with other formats in the future.
Purpose
This is a follow-up to #30116 (which unblocks loading Qwen2/3 MoE + GGUF models).
This commit fixes two issues preventing Qwen2 MoE + GGUF models from loading:
1. Initialization of `Qwen2MoeModel.embed_tokens` is fixed to use the correct prefix and respect quantization settings.
2. A GGUF-specific compatibility layer is added for `Qwen2MoeSparseMoeBlock.shared_expert_gate` (GGUF: 1D tensor `(n)`, HF/vLLM: 2D tensor `(1, n)`).
The latter was implemented in HF Transformers as part of `Qwen2MoeTensorProcessor`, but since vLLM's weight loader uses the `gguf` library directly without such a compatibility layer, we need to implement the equivalent here.
Test Plan
You may download a quantized Qwen1.5-MoE-A2.7B-Chat model such as tensorblock/Qwen1.5-MoE-A2.7B-Chat-GGUF and run it with vLLM.
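For instance, a minimal offline-inference sketch with vLLM's Python API (the GGUF file name below is illustrative; any quantization level from that repository should work):

```python
from vllm import LLM, SamplingParams

# Load a locally downloaded GGUF file (illustrative file name) and point the
# tokenizer at the original HF repository.
llm = LLM(
    model="./Qwen1.5-MoE-A2.7B-Chat-Q4_K_M.gguf",
    tokenizer="Qwen/Qwen1.5-MoE-A2.7B-Chat",
)
outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(temperature=0.8, max_tokens=32))
print(outputs[0].outputs[0].text)
```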
WARNING: Qwen2-57B-A14B-Instruct (which shares the same architecture) won't work with this fix alone, because a bug in HF Transformers prevents loading its non-default parameters.
See #30116 for details (fixed in the upcoming V5 by huggingface/transformers#42650, but it is unclear whether V4, which vLLM currently depends on, will receive the same fix).
Test Result
Before this PR, one of the following errors occurs (each corresponds to a fix above):
- `KeyError: 'embed_tokens.qweight_type'` (you should normally see this), or
- `AssertionError: Attempted to load weight (torch.Size([2048])) into parameter (torch.Size([1, 2048]))` (2048 is the value for Qwen1.5-MoE-A2.7B{,-Chat}).
After this PR is applied, these errors go away and you should be able to use Qwen2 MoE + GGUF models.
Essential Elements of an Effective PR Description Checklist
`supported_models.md` and `examples` for a new model.