
MoE dispatcher fixes: size NVLS dispatcher buffers from actual tensor sizes#4576

Merged
ericharper merged 4 commits into NVIDIA:main from mathemakitten:helenn-fix-dispatcher-errors
May 2, 2026

Conversation

@mathemakitten
Contributor

What does this PR do ?

NVLSAllGatherVDispatcher.allocate_buffers was calling SymmetricMemoryManager.get_buffer(...) without size_mb=, so all EP symmetric-memory buffers shared the 256 MB default. For non-trivial max_tokens × hidden_size × ep_size configs (e.g. Nano on GB200), the two bf16 [global_max, hidden_size] buffers ep_agv_h and ep_rsv overflow that default, causing _can_allocate to return False. The dispatcher then incorrectly raised the catch-all error RuntimeError: ... requires Hopper+ GPUs with NVLink, misleading users into thinking it was a versioning problem.

Each buffer now sizes itself from shape * dtype.element_size, rounded up to whole megabytes.

The error message now fires only on a real symm_mem.rendezvous failure (EP group not fully NVLink-connected, or symmetric_memory missing).

Note: no changes to _default_size_mb or the TP buffer path in inference_layers.py.
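The per-buffer sizing described above can be sketched as follows. This is an illustrative sketch, not the actual Megatron-LM code: `buffer_size_mb` and the example shape are hypothetical, and the real implementation works with torch tensors and the SymmetricMemoryManager API.

```python
import math

MB = 1024 * 1024

def buffer_size_mb(shape, element_size):
    """Size a symmetric-memory buffer from the tensor it must hold.

    Returns the allocation size in whole megabytes, rounded up, so the
    buffer is never smaller than numel * element_size bytes.
    (Hypothetical helper; the real dispatcher derives element_size from
    the tensor dtype.)
    """
    numel = math.prod(shape)
    return math.ceil(numel * element_size / MB)

# e.g. a bf16 [global_max, hidden_size] buffer (2 bytes per element):
# 131072 * 4096 * 2 bytes = exactly 1024 MB, well over the 256 MB default.
size = buffer_size_mb((131072, 4096), element_size=2)
```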

Issue tracking

For PRs from open-source community contributors:

  • New features: a linked issue is required. Please open a feature request and reference it here before submitting the PR.
  • Small updates (bug fixes, minor improvements): a linked issue is recommended and will accelerate the PR review process.

Linked issue:

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch, the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@mathemakitten mathemakitten requested review from a team as code owners May 1, 2026 14:49
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft May 1, 2026 14:49
@github-actions
Contributor

github-actions Bot commented May 1, 2026

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@copy-pr-bot

copy-pr-bot Bot commented May 1, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@mathemakitten mathemakitten marked this pull request as ready for review May 1, 2026 15:02
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team May 1, 2026 15:03
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label May 1, 2026
@ericharper ericharper enabled auto-merge May 1, 2026 18:32
@ericharper
Contributor

/ok to test 18e61a8

@svcnvidia-nemo-ci svcnvidia-nemo-ci added Approved All necessary approvals have been made and removed Final Review PR is in the "final review" stage labels May 1, 2026
@ericharper
Contributor

/ok to test 9496cac

@ericharper ericharper added this pull request to the merge queue May 1, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25232577498

@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25235057776

@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25239439405

Merged via the queue into NVIDIA:main with commit 4e0f636 May 2, 2026
66 of 78 checks passed

Labels

Approved (all necessary approvals have been made), complexity: low

5 participants