
General scaling policy for HP transfer + Depth MuP recipe #4381

Open
plugyawn wants to merge 28 commits into NVIDIA:main from plugyawn:feature/scaling-transfer

Conversation

@plugyawn
Contributor

@plugyawn plugyawn commented Apr 19, 2026

What does this PR do ?

Addresses part of #4088, introducing a scaling policy for transfer recipes.
This also subsumes and refactors #3058 and #3715 into a mup recipe.
For now, I've kept the --use-mup flag connected (since it was part of the last release); should I add a deprecation warning for it?

Current recipes:

  • none (default): the standard Megatron parameterization
  • mup: the refactored existing MuP behavior, preserved exactly
  • depth_mup (draft): supports Adam/AdamW + dense residual transformers, to demonstrate support for other scaling recipes

depth_mup recipe

We do the following:

  • dense residual branch multiplier: depth_mult^-1
  • hidden matrix-like LR depth multiplier: depth_mult^0
  • hidden Adam epsilon depth multiplier: depth_mult^-1

And also,

  • a dense block output-projection init compensation: depth_mult^+0.5
    • Megatron already uses layer-count-based residual output-projection initialization. This allows us to avoid accidentally double-counting depth at initialization while making the residual branch scaling explicit in the forward graph.
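As a minimal sketch, the depth multipliers listed above could be computed as follows. All names here (`depth_mup_multipliers`, the dict keys) are hypothetical illustrations, not the actual `megatron.core.parameterization` API:

```python
# Illustrative sketch of the depth-muP multipliers described above;
# function and key names are hypothetical, not Megatron's real API.

def depth_mup_multipliers(num_layers: int, base_num_layers: int) -> dict:
    """Compute depth-muP scaling factors relative to a base depth."""
    depth_mult = num_layers / base_num_layers
    return {
        # dense residual branch multiplier: depth_mult^-1
        "residual_branch": depth_mult ** -1.0,
        # hidden matrix-like LR depth multiplier: depth_mult^0 (unchanged)
        "hidden_lr": depth_mult ** 0.0,
        # hidden Adam epsilon depth multiplier: depth_mult^-1
        "adam_eps": depth_mult ** -1.0,
        # dense block output-projection init compensation: depth_mult^+0.5
        "out_proj_init": depth_mult ** 0.5,
    }

# e.g. scaling a 12-layer base model up to 48 layers (depth_mult = 4)
mults = depth_mup_multipliers(num_layers=48, base_num_layers=12)
```

Under this sketch, quadrupling depth scales the residual branch and Adam epsilon by 1/4, leaves the hidden LR untouched, and doubles the output-projection init compensation.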

Chiefly, this PR adds a new megatron.core.parameterization package, which handles:

  • canonical scaling recipe configs
  • final resolved model layer and trainer/optimizer scaling policies
  • parameter-role helpers for embedding/output/shared/vector/matrix-like parameters, to support extending to Muon.
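A rough sketch of what the parameter-role helpers might look like. This is illustrative only; the real `megatron.core.parameterization` interfaces, role names, and classification rules may differ:

```python
# Hypothetical sketch of parameter-role classification; names and
# heuristics are illustrative, not Megatron's actual implementation.
from enum import Enum, auto

class ParamRole(Enum):
    EMBEDDING = auto()    # input embedding tables
    OUTPUT = auto()       # output/unembedding projection
    SHARED = auto()       # weights tied between embedding and output
    VECTOR_LIKE = auto()  # 1-D params: biases, layernorm gains
    MATRIX_LIKE = auto()  # 2-D hidden weight matrices

def classify_param(name: str, shape: tuple) -> ParamRole:
    """Classify a parameter by name and shape (illustrative heuristic)."""
    if "embedding" in name:
        return ParamRole.EMBEDDING
    if "output_layer" in name or "lm_head" in name:
        return ParamRole.OUTPUT
    if len(shape) <= 1:
        return ParamRole.VECTOR_LIKE
    return ParamRole.MATRIX_LIKE

role = classify_param("decoder.layers.0.mlp.fc1.weight", (4096, 1024))
```

Separating role classification from the recipes themselves is what would let a new optimizer like Muon plug in its own per-role scaling laws without touching model code.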

For unsupported optimizers like SGD, we raise an error for now. Some math discovery might be necessary here. SGD depth transfer appears to need explicit hidden-weight, hidden-bias laws that would be complex to implement in one go.
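The optimizer gate described above could be sketched like this (again hypothetical; `resolve_scaling_policy` and `SUPPORTED_OPTIMIZERS` are illustrative names, not the PR's actual code):

```python
# Hypothetical sketch of rejecting unsupported recipe/optimizer combos;
# names are illustrative, not the PR's actual implementation.

SUPPORTED_OPTIMIZERS = {"adam", "adamw"}

def resolve_scaling_policy(recipe: str, optimizer: str) -> dict:
    """Resolve a recipe + optimizer pair, erroring on unsupported combos."""
    opt = optimizer.lower()
    if recipe == "depth_mup" and opt not in SUPPORTED_OPTIMIZERS:
        raise NotImplementedError(
            f"depth_mup does not yet support optimizer '{optimizer}': "
            "SGD depth transfer appears to need explicit hidden-weight "
            "and hidden-bias scaling laws."
        )
    return {"recipe": recipe, "optimizer": opt}
```

Failing loudly here, rather than silently applying Adam-derived multipliers to SGD, keeps the recipe's transfer guarantees honest.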

Plots for MuP (Adam, SGD, Muon respectively):
[three LR-sweep plots]
240 iterations, wikitext8, on an A100 80 GB.

Needs some discussion on the YAML changing and the resume loop.

One more thought: I think a scaling policy and training recipes could be a great feature going forward, since they let us do more with the CLI (Codex seems quite adept at using CLIs in general). I understand it's a convenience-oriented idea, but it would be exciting to see first-class support for autonomous, principled pretraining runs.

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@plugyawn plugyawn requested review from a team as code owners April 19, 2026 16:13
@copy-pr-bot

copy-pr-bot Bot commented Apr 19, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft April 19, 2026 16:13
@github-actions
Contributor

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@plugyawn plugyawn marked this pull request as ready for review April 19, 2026 16:14
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 19, 2026 16:14
@plugyawn
Contributor Author

Adding the plots for depth-MuP soon.

@plugyawn
Contributor Author

@Phlip79 @janEbert would love for you to have a look!

@plugyawn
Contributor Author

Shifting Depth-MuP implementation from TP VI (Yang's) to the Bytedance version, since it covers GPT-style transformers more comfortably. The paper is about MuonClip/Kimi-Muon, but also talks about Adam/AdamW.

@plugyawn
Contributor Author

plugyawn commented Apr 20, 2026

[plot] Preliminary depth-MuP results on wikitext8, 300M tokens; depths 4, 8, 16, and 48 underway!

Edit:
[two plots] For width 2048, across 4 GPUs (4×A100 80 GB)

@chtruong814 chtruong814 added the waiting-on-customer Waiting on the original author to respond label Apr 21, 2026
@plugyawn
Contributor Author

plugyawn commented Apr 21, 2026

Thanks for the review @janEbert! Could you take a look at the depth-MuP plots too? The optima seem to align well, and the curves seem bundled in the right way, but I'm not sure about the tail behaviour. Lower depths here prefer slightly higher LRs, which matches general wisdom.

@chtruong814 chtruong814 removed the waiting-on-customer Waiting on the original author to respond label Apr 21, 2026
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the waiting-on-maintainers Waiting on maintainers to respond label Apr 23, 2026
@plugyawn plugyawn requested a review from janEbert April 27, 2026 11:15
@janEbert
Contributor

Hey, this is a huge PR, so it will take some time to review. :)
Thanks for your patience in advance!

@svcnvidia-nemo-ci svcnvidia-nemo-ci added waiting-on-maintainers Waiting on maintainers to respond and removed waiting-on-maintainers Waiting on maintainers to respond labels Apr 27, 2026
@ericharper ericharper requested a review from mkhona-nvidia May 8, 2026 17:01
@janEbert
Contributor

janEbert commented May 8, 2026

Hey @plugyawn, we discussed in a smaller round how to tackle this PR and came to the conclusion that a design doc would be extremely helpful in order to better understand and review the PR. Ideally, you would include explanatory diagrams, considerations of edge cases, and explanations on why certain parts of the code needed to be touched. For example, the plots you already produced would also be perfect for this kind of document.

This is also part of the PR template, so I hope you don't feel too thrown off by this request:

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

@plugyawn plugyawn force-pushed the feature/scaling-transfer branch from 4b7b8e2 to 7647593 Compare May 11, 2026 05:44
@plugyawn plugyawn requested a review from a team as a code owner May 11, 2026 05:44
@plugyawn
Contributor Author

Fixed some bugs and redid the depth-MuP plots with width 2048, and verified across 4xA100s. Preparing the design doc, will share it soon!

@janEbert
Contributor

@plugyawn really appreciate you taking time for the additional input! Just tagging @NVIDIA/mcore-oncall should be good enough (I'll update the PR template with this; "@mcore-oncall" seems to be outdated). Oncall is a rotating member of our team and, coincidentally, it's me right now, so feel free to just ask here/reach out via mail. :)

Stacked PRs are not currently enabled in this repository.


Labels

community-request, waiting-on-customer

