
[Modular Dependencies] Fixup qwen rms norms #43772

Merged

vasqu merged 5 commits into main from rm-torch-rms-norm on Feb 6, 2026
Conversation

@vasqu (Contributor) commented Feb 5, 2026

As per the title: the old setup led to weird dependencies where modeling files used direct imports from other models' modeling files.
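For context, here is a minimal sketch of the dependency shape being fixed (illustrative file and class names, not the exact PR diff). Before, a modular file reused qwen2's norm class directly, so the generated modeling file carried a cross-model import; after, each model gets its own copy derived from the shared llama definition:

# Before (sketch): a generated modeling file reached across model boundaries:
#     from transformers.models.qwen2.modeling_qwen2 import Qwen2RMSNorm
# After (sketch): the modular file subclasses the canonical llama norm, and the
# converter emits a self-contained copy into the generated modeling file.
from transformers.models.llama.modeling_llama import LlamaRMSNorm

class Qwen2_5OmniRMSNorm(LlamaRMSNorm):
    pass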

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@zucchini-nlp (Member) left a comment


Super nice, happy to get rid of bad modular patterns!

 @use_kernel_forward_from_hub("RMSNorm")
 class Dots1RMSNorm(nn.Module):
-    def __init__(self, hidden_size, eps: float = 1e-6) -> None:
+    def __init__(self, hidden_size, eps=1e-6):
A Member commented on this diff:

the type hints were pretty, can we add them in llama so it's copied to all models?

@vasqu (Contributor, Author) replied:

Fair, added it in c857489
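For reference, the annotated norm that llama now propagates to copied models would look roughly like this (a sketch based on the standard transformers RMSNorm; the exact body in c857489 may differ):

import torch
from torch import nn

class LlamaRMSNorm(nn.Module):
    def __init__(self, hidden_size: int, eps: float = 1e-6) -> None:
        super().__init__()
        # Learnable per-channel scale; RMSNorm has no bias and no mean subtraction.
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        input_dtype = hidden_states.dtype
        # Compute in float32 for numerical stability, then cast back.
        hidden_states = hidden_states.to(torch.float32)
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return self.weight * hidden_states.to(input_dtype)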

Comment on lines -1067 to +1089
-        self.norm1 = Qwen2RMSNorm(config.hidden_size, eps=1e-6)
-        self.norm2 = Qwen2RMSNorm(config.hidden_size, eps=1e-6)
+        self.norm1 = Qwen2_5OmniRMSNorm(config.hidden_size, eps=1e-6)
+        self.norm2 = Qwen2_5OmniRMSNorm(config.hidden_size, eps=1e-6)
A Member commented on this diff:

finally, have been annoyed by this as well!

@vasqu enabled auto-merge (squash) on February 5, 2026 at 17:51
@ArthurZucker (Collaborator) left a comment

we should still keep the alias IMO

"Qwen2PreTrainedModel",
"Qwen2Model",
"Qwen2ForCausalLM",
"Qwen2RMSNorm",
A Collaborator commented:

this is breaking haha

@vasqu (Contributor, Author) replied:

Fair enough, it shouldn't have been this way in the first place, but it's not worth breaking existing imports over.
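A sketch of the compromise (assuming the usual modular pattern): Qwen2RMSNorm stays defined and exported so existing downstream imports keep working, even though generated modeling files no longer depend on it across models:

# modular_qwen2.py (sketch): keep the class purely as a backward-compatible
# alias; downstream code may still import it from qwen2 directly.
from transformers.models.llama.modeling_llama import LlamaRMSNorm

class Qwen2RMSNorm(LlamaRMSNorm):
    pass

__all__ = [
    "Qwen2PreTrainedModel",
    "Qwen2Model",
    "Qwen2ForCausalLM",
    "Qwen2RMSNorm",  # kept so external imports do not break
]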

@github-actions bot (Contributor) commented Feb 6, 2026

[For maintainers] Suggested jobs to run (before merge)

run-slow: afmoe, aimv2, apertus, arcee, aria, bamba, bitnet, blt, chameleon, clvp, csm, cwm, deepseek_v2, deepseek_v3, dia, diffllama

@vasqu merged commit 8c3ac8f into main on Feb 6, 2026
26 checks passed
@vasqu deleted the rm-torch-rms-norm branch on February 6, 2026 at 12:30
jiosephlee pushed a commit to jiosephlee/transformers_latest referencing this pull request on Feb 11, 2026