Fix GlmMoeDsaConfig default mlp_layer_types in modular conversion #43876
Merged
zucchini-nlp merged 3 commits into huggingface:main on Feb 10, 2026
Conversation
Contributor
Author
Thanks for the detailed review. I pushed a commit.
What I changed:
Validation run locally:
I also verified both
Member
run-slow: glm_moe_dsa
Contributor
This comment contains models: ["models/glm_moe_dsa"]

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Member
@bot /style
Contributor
Style fix bot fixed some files and pushed the changes. |
Contributor
[For maintainers] Suggested jobs to run (before merge): run-slow: glm_moe_dsa
jiosephlee pushed a commit to jiosephlee/transformers_latest that referenced this pull request on Feb 11, 2026
Fix GlmMoeDsaConfig default mlp_layer_types in modular conversion (huggingface#43876)

* Fix GlmMoeDsaConfig default mlp layer pattern
* fix(glm-moe-dsa): dedupe config init and colocate test
* Apply repo consistency fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Summary
This PR fixes #43864 by preserving the GlmMoeDsaConfig default mlp_layer_types from the modular source. GlmMoeDsaConfig should default to dense MLP for the first 3 layers and sparse afterward. During modular conversion, the parent init body was being inlined and overwrote that default with the parent pattern (["dense"] + ["sparse"] * ...).
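For illustration, here is a minimal sketch of the two defaults side by side. The variable names (first_k_dense, num_hidden_layers=8) are assumptions chosen to match the "first 3 layers dense" behavior and the value used in the new test; only the list patterns themselves come from the PR description.

```python
# Illustrative sketch only -- not the actual transformers code.
num_hidden_layers = 8  # value exercised by the added configuration test
first_k_dense = 3      # assumed knob for "first 3 layers are dense"

# Intended GlmMoeDsaConfig default: dense MLP for the first 3 layers, sparse afterwards.
intended_default = ["dense"] * first_k_dense + ["sparse"] * (num_hidden_layers - first_k_dense)
# -> ['dense', 'dense', 'dense', 'sparse', 'sparse', 'sparse', 'sparse', 'sparse']

# Parent pattern that the inlined parent init incorrectly re-applied: only layer 0 dense.
parent_default = ["dense"] + ["sparse"] * (num_hidden_layers - 1)
# -> ['dense', 'sparse', 'sparse', 'sparse', 'sparse', 'sparse', 'sparse', 'sparse']

print(intended_default)
print(parent_default)
```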
Changes
- In modular_glm_moe_dsa.py, call PreTrainedConfig.__init__(self, **kwargs) instead of super().__init__(**kwargs) to avoid inlining the parent init logic (a sketch of this pattern follows the list).
- Regenerate configuration_glm_moe_dsa.py via the modular converter, which removes the duplicated parent default block.
- Add tests/models/glm_moe_dsa/test_configuration_glm_moe_dsa.py to assert the expected default pattern for num_hidden_layers=8 (a sketch of the assertion appears after the Validation list).
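The sketch below shows the call pattern from the first change under stated assumptions: the parent class name (ParentMoeConfig) and the stand-in PreTrainedConfig base are invented for the example; only the idea of calling the grandparent init directly so the converter does not inline the parent's init body mirrors the actual fix in modular_glm_moe_dsa.py.

```python
# Illustrative sketch -- not the contents of modular_glm_moe_dsa.py.

class PreTrainedConfig:
    """Stand-in for transformers' base config so the snippet is self-contained."""
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)


class ParentMoeConfig(PreTrainedConfig):
    """Hypothetical parent whose init body the modular converter would inline."""
    def __init__(self, num_hidden_layers=8, mlp_layer_types=None, **kwargs):
        if mlp_layer_types is None:
            # Parent default: only the first layer is dense.
            mlp_layer_types = ["dense"] + ["sparse"] * (num_hidden_layers - 1)
        self.num_hidden_layers = num_hidden_layers
        self.mlp_layer_types = mlp_layer_types
        super().__init__(**kwargs)


class GlmMoeDsaConfig(ParentMoeConfig):
    def __init__(self, num_hidden_layers=8, mlp_layer_types=None, **kwargs):
        if mlp_layer_types is None:
            # Intended default: dense MLP for the first 3 layers, sparse afterwards.
            mlp_layer_types = ["dense"] * 3 + ["sparse"] * (num_hidden_layers - 3)
        self.num_hidden_layers = num_hidden_layers
        self.mlp_layer_types = mlp_layer_types
        # Calling the base init directly, instead of super().__init__(**kwargs),
        # keeps the converter from inlining ParentMoeConfig.__init__, which would
        # re-apply the parent default and overwrite mlp_layer_types.
        PreTrainedConfig.__init__(self, **kwargs)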
Validation
- PYTHONPATH=src python3 utils/modular_model_converter.py glm_moe_dsa
- PYTHONPATH=src python3 utils/check_modular_conversion.py --files src/transformers/models/glm_moe_dsa/modular_glm_moe_dsa.py
- PYTHONPATH=src python3 -m pytest tests/models/glm_moe_dsa/test_configuration_glm_moe_dsa.py -q
- PYTHONPATH=src python3 -m trace --count --summary --module unittest tests.models.glm_moe_dsa.test_configuration_glm_moe_dsa | grep -E "configuration_glm_moe_dsa|test_configuration_glm_moe_dsa" (reports configuration_glm_moe_dsa ... 100%)
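For reference, a hedged sketch of the kind of assertion the added test file makes for the num_hidden_layers=8 case. The test class and method names here are assumptions, as is the top-level export of GlmMoeDsaConfig; only the expected default pattern is taken from the PR description.

```python
# Sketch in the spirit of tests/models/glm_moe_dsa/test_configuration_glm_moe_dsa.py,
# not its actual contents.
import unittest

from transformers import GlmMoeDsaConfig  # assumes the config is exported at top level


class GlmMoeDsaConfigDefaultsTest(unittest.TestCase):
    def test_default_mlp_layer_types(self):
        config = GlmMoeDsaConfig(num_hidden_layers=8)
        # First 3 layers dense, remaining 5 sparse.
        self.assertEqual(config.mlp_layer_types, ["dense"] * 3 + ["sparse"] * 5)


if __name__ == "__main__":
    unittest.main()
```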