
Layer/RMS Norm + Quantization Fusion #2406

Closed

yaox12 wants to merge 1 commit into NVIDIA:main from yaox12:xiny/norm_quant_fusion_main

Conversation

@yaox12 (Member) commented Nov 26, 2025

What does this PR do?

Ideally, we should use LayerNormLinear to fuse normalization and quantization. But there are some exceptions: in MLA, for example, we use separate input LayerNorm and Linear layers. So this PR provides an option to fuse quantization into LayerNorm/RMSNorm.
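To illustrate what the fusion buys, here is a minimal PyTorch sketch of the concept (this is not Transformer Engine's actual kernel or API; `rmsnorm_quant`, `FP8_E4M3_MAX`, and the per-tensor scaling scheme are illustrative assumptions). The point is that the normalized activations come out already quantized, so a fused kernel can skip the extra memory pass that a separate quantization op would need:

```python
import torch

FP8_E4M3_MAX = 448.0  # representable max of torch.float8_e4m3fn

def rmsnorm_quant(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-5):
    """RMSNorm followed by per-tensor FP8 quantization, fused at the
    module level. A real fused kernel would do both in one memory pass."""
    # RMSNorm: x * rsqrt(mean(x^2) + eps) * weight
    rms = x.float().pow(2).mean(dim=-1, keepdim=True).add(eps).rsqrt()
    y = x.float() * rms * weight.float()
    # Per-tensor scale mapping the max value into the FP8 range.
    scale = FP8_E4M3_MAX / y.abs().max().clamp(min=1e-12)
    y_fp8 = (y * scale).to(torch.float8_e4m3fn)
    return y_fp8, scale  # a downstream GEMM consumes y_fp8 with 1/scale
```

In a fused kernel the normalization and quantization share one read/write of the activations, which is where the bandwidth saving comes from.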

Another option is to use a single GEMM for both the kv_down and q_down projections in MLA, just like what we do for GQA/MHA, so that the fusion can go through TE's LayernormLinear. We will wait for the final decision.
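For comparison, here is a hedged sketch of that single-GEMM alternative, assuming DeepSeek-style MLA naming (`q_lora_rank`, `kv_lora_rank`) and plain PyTorch modules; Megatron's actual implementation may differ:

```python
import torch
import torch.nn as nn

class FusedMLADownProj(nn.Module):
    """Hypothetical module: both MLA down-projections behind one Linear."""

    def __init__(self, hidden_size: int, q_lora_rank: int, kv_lora_rank: int):
        super().__init__()
        self.split_sizes = [q_lora_rank, kv_lora_rank]
        # One weight matrix covering both down-projections -> a single GEMM,
        # so the preceding norm can fuse via a LayerNormLinear-style layer.
        self.proj = nn.Linear(hidden_size, q_lora_rank + kv_lora_rank, bias=False)

    def forward(self, x: torch.Tensor):
        # One GEMM instead of two, then split into the two ranks.
        q_down, kv_down = self.proj(x).split(self.split_sizes, dim=-1)
        return q_down, kv_down
```

With both projections behind one Linear, the norm plus GEMM form exactly the pattern that TE's LayernormLinear already fuses, which is why this alternative would make the separate norm+quant fusion unnecessary for MLA.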

Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@yaox12 yaox12 self-assigned this Nov 26, 2025
@copy-pr-bot bot commented Nov 26, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

```python
# Constructor arguments referenced in the review comment below:
config: TransformerConfig,
hidden_size: int,
eps: float = 1e-5,
persist_layer_norm: bool = True,
```
@yaox12 (Member, Author) commented Nov 26, 2025


These args were once removed but re-added in 67a0e5d. I thought it was by accident.

Signed-off-by: Xin Yao <xiny@nvidia.com>
@yaox12 yaox12 force-pushed the xiny/norm_quant_fusion_main branch from 3371d3f to 4836cda on November 28, 2025 01:18
@yaox12 (Member, Author) commented Feb 1, 2026

Closing this PR in favor of #3039.

@yaox12 yaox12 closed this Feb 1, 2026
