[DEV] fix(megatron-fsdp): preserve non-meta tensors during meta device materialization #4155

xuwchen wants to merge 4 commits into `dev`
Conversation
shjwudp left a comment
Good catch! Some bias parameters are stored as `nn.Module` buffers (though that's not a standard PyTorch practice). These buffers weren't initialized on the meta device and shouldn't be converted to empty tensors. This PR fixes that issue and also resolves the unexpected NaNs observed in the functional tests with the MoE layer.
```python
for name, param in module.named_parameters(recurse=False):
    if param.is_meta:
        new = torch.empty_like(param, device=device)
        setattr(module, name, torch.nn.Parameter(new, requires_grad=param.requires_grad))
```
Please do not reconfigure module parameters; doing so might drop existing attributes on those parameters or invalidate maps that used those parameters as keys.
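A contrived illustration of the hazard being described (the attribute and map names are hypothetical):

```python
import torch
import torch.nn as nn

lin = nn.Linear(2, 2)
# Bookkeeping elsewhere may key a dict by the parameter object (as optimizers
# and FSDP state often do) and stash custom attributes on the parameter.
param_state = {lin.weight: "sharded"}
lin.weight.my_flag = True

# Rebinding the attribute to a brand-new Parameter orphans both.
lin.weight = nn.Parameter(torch.empty_like(lin.weight))
print(lin.weight in param_state)       # False: the map lookup now misses
print(hasattr(lin.weight, "my_flag"))  # False: the attribute is gone
```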
Updated the implementation to use `Module._apply()` with an `is_meta` guard, which is the same mechanism that `to_empty()` uses internally. For meta→cuda conversion, `_apply()` always creates new `Parameter` objects; this is PyTorch behavior, not something we introduced. The existing `_reset_parameters` already handles dict remapping and attribute copying for this case.
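For reference, a minimal sketch of that guarded materialization, assuming a recent PyTorch where `Module._apply()` accepts `recurse` (the helper name is hypothetical; `to_empty()` itself is essentially `self._apply(lambda t: torch.empty_like(t, device=device))` without the guard):

```python
import torch
import torch.nn as nn

def materialize_meta_only(module: nn.Module, device: torch.device) -> nn.Module:
    """Like Module.to_empty(), but leaves already-materialized tensors alone."""

    def fn(t: torch.Tensor) -> torch.Tensor:
        # Only tensors still on the meta device get fresh (uninitialized)
        # storage on the target device; real tensors pass through unchanged.
        return torch.empty_like(t, device=device) if t.is_meta else t

    # _apply() walks both parameters and buffers; for meta tensors it wraps
    # the result in a new nn.Parameter, the PyTorch behavior noted above.
    return module._apply(fn, recurse=False)
```

Because non-meta tensors are returned as-is, a buffer like `expert_bias` that was already zero-initialized keeps both its values and its identity.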
Force-pushed from 4f391be to 537f906
/ok to test 48b0ff6

🔄 Merge queue validation started! You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25646899623
main PR: #4154
What does this PR do?
During meta device initialization, the materialization path calls `m.to_empty(device, recurse=False)` before `reset_parameters()`. `to_empty()` unconditionally replaces all tensors (both parameters and buffers) in the module with uninitialized memory, regardless of whether they are actually on the meta device.

This becomes a problem for MoE when `--moe-router-enable-expert-bias` is enabled: `expert_bias` is registered with an explicit `device=torch.cuda.current_device()` [link], so it bypasses the meta context and is correctly initialized to zeros on GPU. But `to_empty()` clobbers it with uninitialized memory (NaN), and `reset_parameters()` only reinitializes parameters, not buffers, so `expert_bias` stays NaN for the entire training run.
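A minimal repro of that failure mode, with a toy module standing in for the real router (names and shapes are hypothetical, CPU stands in for the GPU device used in the real code, and a recent PyTorch is assumed for `to_empty(..., recurse=...)`):

```python
import torch
import torch.nn as nn

class Router(nn.Module):
    # Toy stand-in for the MoE router.
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(4, 4))  # created on meta below
        # The explicit device bypasses the ambient meta context, so this
        # buffer is real and correctly zero-initialized.
        self.register_buffer("expert_bias", torch.zeros(4, device="cpu"))

with torch.device("meta"):
    m = Router()
assert m.weight.is_meta and not m.expert_bias.is_meta

# to_empty() replaces *every* tensor with uninitialized storage, meta or not:
m.to_empty(device="cpu", recurse=False)
print(m.expert_bias)  # uninitialized values; the zeros are gone, and
                      # reset_parameters() would not restore a buffer
```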
The fix here replaces `m.to_empty()` with a targeted helper that only materializes `is_meta=True` tensors, leaving non-meta buffers (like `expert_bias`) untouched.

Contribution process
Pre-checks
Code review
Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Expert reviewers are assigned based on `.github/CODEOWNERS`. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change `megatron/core`, once all expert reviewers have approved, the `Final Review` label is applied automatically and final reviewers are assigned. For PRs outside `megatron/core`, this step is skipped.

Step 3: Approved
Once all required reviewers have approved, the `Approved` label is applied automatically.

Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.