
[DEV] fix(megatron-fsdp): preserve non-meta tensors during meta device materialization#4155

Open
xuwchen wants to merge 4 commits into NVIDIA:dev from xuwchen:fix_mfsdp_meta_device_init_dev

Conversation

@xuwchen
Contributor

@xuwchen xuwchen commented Apr 6, 2026

main PR: #4154

What does this PR do ?

During meta device initialization, the materialization path calls `m.to_empty(device, recurse=False)` before `reset_parameters()`. `to_empty()` unconditionally replaces all tensors (both parameters and buffers) in the module with uninitialized memory, regardless of whether they are actually on the meta device.

This becomes a problem for MoE when `--moe-router-enable-expert-bias` is enabled: `expert_bias` is registered with an explicit `device=torch.cuda.current_device()` [link], so it bypasses the meta context and is correctly initialized to zeros on GPU. But `to_empty()` clobbers it with uninitialized memory (NaN), and `reset_parameters()` only reinitializes parameters, not buffers, so `expert_bias` stays NaN for the entire training run.

The fix here replaces `m.to_empty()` with a targeted helper that only materializes `is_meta=True` tensors, leaving non-meta buffers (like `expert_bias`) untouched.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch, the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@xuwchen xuwchen requested review from a team as code owners April 6, 2026 10:37
@copy-pr-bot

copy-pr-bot Bot commented Apr 6, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@xuwchen xuwchen requested a review from shjwudp April 7, 2026 02:15
Contributor

@shjwudp shjwudp left a comment


Good catch! Some bias parameters are stored as nn.Module buffers (though that’s not a standard PyTorch practice). These buffers weren’t initialized on the meta device and shouldn’t be converted to empty tensors. This PR fixes that issue and also resolves the unexpected NaNs observed in the functional tests with the MoE layer.

```python
for name, param in module.named_parameters(recurse=False):
    if param.is_meta:
        new = torch.empty_like(param, device=device)
        setattr(module, name, torch.nn.Parameter(new, requires_grad=param.requires_grad))
```
Contributor


Please do not reconfigure module parameters; doing so might lose existing attributes on those parameters or invalidate maps that use them as keys.

Contributor Author

@xuwchen xuwchen Apr 20, 2026


Updated the implementation to use Module._apply() with an is_meta guard, which is the same mechanism that to_empty() uses internally. For meta→cuda conversion, _apply always creates new Parameter objects; this is a PyTorch behavior, not something we introduced. The existing _reset_parameters already handles dict remapping and attribute copying for this case.

@xuwchen xuwchen force-pushed the fix_mfsdp_meta_device_init_dev branch from 4f391be to 537f906 Compare April 20, 2026 14:52
@xuwchen xuwchen requested a review from shjwudp May 8, 2026 07:18
@yaox12 yaox12 enabled auto-merge May 11, 2026 01:06
@yaox12
Member

yaox12 commented May 11, 2026

/ok to test 48b0ff6

@yaox12 yaox12 added this pull request to the merge queue May 11, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25646899623

@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks May 11, 2026
