
Fix incorrectly set decoupled_grad in training.py for MFSDP.#4133

Open
cspades wants to merge 2 commits into NVIDIA:main from cspades:cye/decgrad-argfix

Conversation

Member

@cspades cspades commented Apr 3, 2026

What does this PR do?

Details

  • Megatron-FSDP does not use FusedAdam's master weights, but Megatron-LM hard-codes master_weights=True if OptimizerConfig.use_precision_aware_optimizer_no_fp8_or_ds_fp8 / use_decoupled_grad are True. This PR turns off FusedAdam master weights when using Megatron-FSDP, since FusedAdam should only provide an optimizer.step() for Megatron-FSDP's DTensor (FP32/BF16) main weights; see the sketch below.
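
For illustration only, a minimal sketch of the intended behavior at optimizer-construction time. The helper name `build_fused_adam` and the exact kwargs are assumptions for this example, not the actual Megatron-LM call site:

```python
from transformer_engine.pytorch.optimizers import FusedAdam  # import path assumed

def build_fused_adam(param_groups, config, use_megatron_fsdp: bool) -> FusedAdam:
    """Hypothetical helper: build FusedAdam without redundant master weights under MFSDP."""
    # Megatron-FSDP already owns the FP32/BF16 main weights (as DTensors), so
    # FusedAdam only needs to provide optimizer.step(); allocating its own FP32
    # master copies on top of that would just duplicate memory.
    master_weights = (
        config.use_precision_aware_optimizer_no_fp8_or_ds_fp8 and not use_megatron_fsdp
    )
    return FusedAdam(
        param_groups,
        lr=config.lr,
        betas=(config.adam_beta1, config.adam_beta2),
        eps=config.adam_eps,
        weight_decay=config.weight_decay,
        master_weights=master_weights,
    )
```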

Testing

  • Adding an E2E --use-precision-aware-optimizer unit test to temporarily guarantee functionality.
  • Using HFSDP + FP8 delayed scaling, we get a lot of memory back by turning off master_weights (deltas summarized below the logs):
# No FusedAdam Master Weights (HFSDP + FP8 Delayed Scaling)
[Rank 0] (after 2 iterations) memory (MB) | allocated: 19595.55 | max allocated: 27022.17 | reserved: 22504.00 | max reserved: 30774.00
[2026-04-03 12:37:38.220805] iteration      100/15258789 | consumed samples:        12800 | elapsed time per iteration (ms): 3263.1 | throughput per GPU (TFLOP/s/GPU): 230.1 | learning rate: 4.915198E-07 | global batch size:   128 | lm loss: 5.473558E+00 | loss scale: 1.0 | grad norm: 9.413 | num zeros: 0 | number of skipped iterations:   0 | number of nan iterations:   0 |

# FusedAdam Master Weights (HFSDP + FP8 Delayed Scaling)
[Rank 0] (after 2 iterations) memory (MB) | allocated: 23425.69 | max allocated: 30852.31 | reserved: 26344.00 | max reserved: 34788.00
[2026-04-03 12:44:12.848377] iteration      100/15258789 | consumed samples:        12800 | elapsed time per iteration (ms): 3149.4 | throughput per GPU (TFLOP/s/GPU): 238.4 | learning rate: 4.915198E-07 | global batch size:   128 | lm loss: 5.476564E+00 | loss scale: 1.0 | grad norm: 9.424 | num zeros: 0 | number of skipped iterations:   0 | number of nan iterations:   0 |
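
For reference, the deltas in this run: disabling FusedAdam master weights saves ~3.8 GB of allocated memory per rank (19595.55 vs. 23425.69 MB) and ~3.8–4.0 GB of reserved memory, at roughly 3.5% lower per-GPU throughput (230.1 vs. 238.4 TFLOP/s), with the lm loss essentially unchanged at iteration 100.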

TODO

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@cspades cspades self-assigned this Apr 3, 2026

copy-pr-bot bot commented Apr 3, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@cspades cspades force-pushed the cye/decgrad-argfix branch 3 times, most recently from 3860daf to a230105 on April 3, 2026 19:35
@cspades cspades marked this pull request as ready for review April 3, 2026 19:52
@cspades cspades requested review from a team as code owners April 3, 2026 19:52
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 3, 2026 19:52
@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Apr 3, 2026
@cspades cspades force-pushed the cye/decgrad-argfix branch from a230105 to 349b8ff on April 3, 2026 20:00
@cspades cspades force-pushed the cye/decgrad-argfix branch from 349b8ff to d562ada on April 3, 2026 20:02
@cspades cspades requested a review from a team April 3, 2026 20:02
Contributor

@shjwudp shjwudp left a comment


Overall LGTM, I just have a small concern about megatron_fsdp_use_decoupled_grad.

This PR prevents FusedAdam from redundantly creating master weights when M-FSDP already maintains FP32 main weights, which is the correct way to use FusedAdam under M-FSDP (in terms of memory usage).

# The same conditions under which the distributed optimizer uses decoupled gradients.
args.main_params_dtype != torch.float32
or (args.fp8_recipe is None or args.fp8_recipe == "delayed")
or args.optimizer_cpu_offload
Contributor

Do we really need to follow this combination of conditions? 🤔
Unless there are some specific constraints, I think we’d better avoid using such a complicated setup — it makes maintenance harder.

Theoretically, M-FSDP shouldn’t be restricted by these conditions and can freely use decoupled_grad.

Member Author

@cspades cspades Apr 7, 2026


I'm not sure if I can change the current logic for non-MFSDP, so if there are no constraints on the Megatron-FSDP side, I think matching the behavior should be inconsequential, right?

Otherwise, I will likely need to do something like this (see the sketch after this list):

  • If Megatron-FSDP, use a separate decoupled_grad argument to determine if it is used.
    • if use_megatron_fsdp and use_precision_aware_optimizer: ...
  • If not Megatron-FSDP, follow the MLM logic where only BF16 and FP8 DS use it.
    • if use_precision_aware_optimizer_no_fp8_or_ds_fp8: ...
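
For concreteness, a rough sketch of that branching (illustrative only, with argument names taken from the discussion above; not what this PR ends up doing):

```python
# Hypothetical sketch of the two-branch alternative described above.
if args.use_megatron_fsdp:
    # Megatron-FSDP: gate decoupled_grad on the precision-aware optimizer flag alone.
    use_decoupled_grad = args.use_precision_aware_optimizer
else:
    # Non-MFSDP: keep the existing Megatron-LM behavior, where only the
    # BF16 / FP8 delayed-scaling precision-aware path uses decoupled gradients.
    use_decoupled_grad = args.use_precision_aware_optimizer_no_fp8_or_ds_fp8
```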

Another thing I can do is to move the OptimizerConfig.__post_init__ logic outside so we can use the same variable for the DP wrapper and the optimizer, but it won't make it significantly easier to maintain.

Member Author

@cspades cspades Apr 8, 2026


Maybe this is the cleanest solution?

# Optimizer Functions
if use_precision_aware_optimizer_no_fp8_or_ds_fp8 or (
    use_megatron_fsdp and use_precision_aware_optimizer
):
    # Make sure FusedAdam master weights are not used when using Megatron-FSDP.

And Megatron-FSDP can just always use decoupled_grad when using FusedAdam; alternatively, it could be controlled by a new argument, but in my opinion users do not need to care about this.

Contributor


I think this approach is better — it avoids complex condition checks later that might make us wonder why M-FSDP doesn’t support using decoupled_grad in certain cases.

Signed-off-by: Cory Ye <cye@nvidia.com>
@cspades cspades force-pushed the cye/decgrad-argfix branch from d562ada to f30d840 on April 8, 2026 01:58
