
Fix E2E tests for HFSDP + EP.#4595

Open
cspades wants to merge 4 commits into NVIDIA:main from cspades:hfsdp-moe-outer-dp-grad-bugfix

Conversation

cspades (Member) commented May 3, 2026

What does this PR do ?

  • Clean up the logic for retrieving the DP-Outer group for non-expert versus expert parameter groups. (Note that the inter-distributed-optimizer group does not depend on EP size, so this PR should not change behavior.) A minimal sketch of the group selection appears after this list.
  • Fix the E2E unit tests, which were previously run with trivial DP and EP sizes.
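
For context, here is a minimal sketch of the outer-DP group selection described above. The names (ProcessGroups, get_outer_dp_group, is_expert_bucket) are hypothetical illustrations, not the actual Megatron-FSDP identifiers:

# Illustrative sketch only; identifiers below are hypothetical.
from dataclasses import dataclass
import torch.distributed as dist

@dataclass
class ProcessGroups:
    dp_outer: dist.ProcessGroup       # outer-DP group for non-expert parameters
    expt_dp_outer: dist.ProcessGroup  # outer-DP group for expert (MoE) parameters

def get_outer_dp_group(pgs: ProcessGroups, is_expert_bucket: bool) -> dist.ProcessGroup:
    """Select the outer-DP group used for the HFSDP gradient reduce-scatter.

    Expert-parallel buckets reduce over the expert outer-DP group; all other
    buckets use the regular outer-DP group. The inter-distributed-optimizer
    group is independent of EP size, so this selection does not affect it.
    """
    return pgs.expt_dp_outer if is_expert_bucket else pgs.dp_outer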

Testing

  • Tests are now properly exercising the parallelisms! Example module layouts printed by the updated tests:
# Dense TP2 (DDP)
MLP(
  (linear_fc1): TELayerNormColumnParallelLinear(in_features=128, out_features=320, bias=False, TP=2)
  (linear_fc2): TERowParallelLinear(in_features=160, out_features=128, bias=False, TP=2)
)
...
  (local_experts): ModuleList(
    (0-3): 4 x MLP(
      (linear_fc1): TEColumnParallelLinear(in_features=128, out_features=640, bias=False, TP=1)
      (linear_fc2): TERowParallelLinear(in_features=320, out_features=128, bias=False, TP=1)
    )
  )

# Dense TP2 (MFSDP)
MLP(
  (linear_fc1): TELayerNormColumnParallelLinear(in_features=128, out_features=320, bias=False, TP=2)
  (linear_fc2): TERowParallelLinear(in_features=160, out_features=128, bias=False, TP=2)
)
...
  (local_experts): ModuleList(
    (0-3): 4 x MLP(
      (linear_fc1): TEColumnParallelLinear(in_features=128, out_features=640, bias=False, TP=1)
      (linear_fc2): TERowParallelLinear(in_features=320, out_features=128, bias=False, TP=1)
    )
  )

# EP2 ETP2 (MFSDP)
  (local_experts): ModuleList(
    (0-1): 2 x MLP(
      (linear_fc1): TEColumnParallelLinear(in_features=128, out_features=320, bias=False, TP=1)
      (linear_fc2): TERowParallelLinear(in_features=160, out_features=128, bias=False, TP=1)
    )
  )

# EP2 HFSDP2 (MFSDP)
  (local_experts): ModuleList(
    (0-1): 2 x MLP(
      (linear_fc1): TEColumnParallelLinear(in_features=128, out_features=640, bias=False, TP=1)
      (linear_fc2): TERowParallelLinear(in_features=320, out_features=128, bias=False, TP=1)
    )
  )
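
As a sanity check on the printouts above, here is a small sketch that derives the expected per-rank shapes from the parallelism settings. The config values (hidden size 128, gated FFN hidden size 320, 4 experts total) are inferred from the printouts rather than copied from the test configuration:

# Sketch: derive per-rank shard shapes from TP/ETP/EP (config values inferred).
# fc1 in_features and fc2 out_features stay at the hidden size (128) on every rank.
FFN_HIDDEN = 320   # fc1 emits 2 * FFN_HIDDEN because of the gated activation
NUM_EXPERTS = 4

def expected_shapes(tp: int, etp: int, ep: int):
    dense_fc1_out = 2 * FFN_HIDDEN // tp   # column-parallel: output dim is split
    dense_fc2_in = FFN_HIDDEN // tp        # row-parallel: input dim is split
    expert_fc1_out = 2 * FFN_HIDDEN // etp
    expert_fc2_in = FFN_HIDDEN // etp
    local_experts = NUM_EXPERTS // ep      # experts are sharded across EP ranks
    return dense_fc1_out, dense_fc2_in, expert_fc1_out, expert_fc2_in, local_experts

print(expected_shapes(tp=2, etp=1, ep=1))  # Dense TP2   -> (320, 160, 640, 320, 4)
print(expected_shapes(tp=1, etp=2, ep=2))  # EP2 ETP2    -> (640, 320, 320, 160, 2)
print(expected_shapes(tp=1, etp=1, ep=2))  # EP2 HFSDP2  -> (640, 320, 640, 320, 2)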

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch

The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@cspades cspades self-assigned this May 3, 2026
@cspades cspades requested review from a team as code owners May 3, 2026 18:46
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft May 3, 2026 18:46
github-actions bot (Contributor) commented May 3, 2026

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

copy-pr-bot commented May 3, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@cspades cspades changed the title from "Fix incorrect gradient reduce-scatter for MoE buckets." to "Fix incorrect HFSDP gradient reduce-scatter for MoE buckets." May 3, 2026
@cspades cspades added the bug (Something isn't working) and module: megatron-fsdp labels May 3, 2026
@cspades cspades changed the title from "Fix incorrect HFSDP gradient reduce-scatter for MoE buckets." to "Fix E2E tests for HFSDP + EP." May 5, 2026
Signed-off-by: Cory Ye <cye@nvidia.com>
@cspades cspades force-pushed the hfsdp-moe-outer-dp-grad-bugfix branch from 5f9ae2f to ac4ca1c May 6, 2026 21:43
@cspades cspades marked this pull request as ready for review May 6, 2026 21:44
@cspades cspades added the Expert Review [deprecated] label May 6, 2026
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team May 6, 2026 21:44
Signed-off-by: Cory Ye <cye@nvidia.com>
@cspades cspades force-pushed the hfsdp-moe-outer-dp-grad-bugfix branch from 5ceda18 to 9285a82 May 7, 2026 16:25

Labels

bug (Something isn't working), complexity: low, Expert Review [deprecated], module: megatron-fsdp
