MoE dispatcher fixes: size NVLS dispatcher buffers from actual tensor sizes #4576
Conversation
This PR has been automatically converted to draft because all PRs must start as drafts. When you are ready for review, click Ready for Review to begin the review process. See the contribution guide for more details.
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
/ok to test 18e61a8 |
/ok to test 9496cac |
🔄 Merge queue validation started! You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25232577498 |
🔄 Merge queue validation started! You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25235057776 |
🔄 Merge queue validation started! You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25239439405 |
What does this PR do?
`NVLSAllGatherVDispatcher.allocate_buffers` was calling `SymmetricMemoryManager.get_buffer(...)` without `size_mb=`, so every EP symmetric-memory buffer shared the 256 MB default. For non-trivial `max_tokens × hidden_size × ep_size` configs (e.g. Nano on GB200), the two bf16 `[global_max, hidden_size]` buffers `ep_agv_h` and `ep_rs` overflow it, `_can_allocate` returns `False`, and the dispatcher raises the catch-all `RuntimeError: ... requires Hopper+ GPUs with NVLink`, which misleads users into thinking it is a versioning problem.
Each buffer now self-sizes from `shape * dtype.element_size`, rounded up; see the sketch below.
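A minimal sketch of the sizing rule, not the PR's actual diff: `buffer_size_mb` is a hypothetical helper, and rounding up to whole megabytes is an assumption based on the `size_mb=` parameter.

```python
import math

import torch


def buffer_size_mb(shape: tuple, dtype: torch.dtype) -> int:
    """Size a symmetric-memory buffer from the tensor it will hold:
    numel * element size, rounded up to a whole MB (the assumed
    granularity of the manager's size_mb= parameter)."""
    elem_size = torch.empty((), dtype=dtype).element_size()
    return math.ceil(math.prod(shape) * elem_size / (1024 * 1024))


# Illustrative numbers only: a bf16 [global_max, hidden_size] buffer with
# global_max=65536 and hidden_size=8192 needs 1024 MB, four times the
# 256 MB default that all EP buffers previously shared.
print(buffer_size_mb((65536, 8192), torch.bfloat16))  # 1024
```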
The `Hopper+ GPUs with NVLink` error now fires only on a real `symm_mem.rendezvous` failure (EP group not fully NVLink-connected, or symmetric memory missing), as sketched below. Note: no changes to `_default_size_mb` or the TP buffer path in `inference_layers.py`.
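A hedged sketch of the narrowed error path, assuming PyTorch's `torch.distributed._symmetric_memory` module and its `rendezvous` entry point; `rendezvous_or_raise` and its wiring are a hypothetical reconstruction, not the PR's code.

```python
import torch
import torch.distributed._symmetric_memory as symm_mem


def rendezvous_or_raise(buf: torch.Tensor, group_name: str):
    """Raise the capability error only when rendezvous itself fails
    (EP group not fully NVLink-connected, or symmetric memory
    unavailable) -- never for a merely undersized buffer."""
    try:
        return symm_mem.rendezvous(buf, group_name)
    except Exception as err:
        raise RuntimeError(
            "NVLS dispatcher requires Hopper+ GPUs with NVLink; the EP "
            "group must be fully NVLink-connected with symmetric memory."
        ) from err
```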
Issue tracking
For PRs from open-source community contributors:
Linked issue:
Contribution process
Pre-checks
Code review
Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Expert reviewers are assigned based on `.github/CODEOWNERS`. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change `megatron/core`, once all expert reviewers have approved, the `Final Review` label is applied automatically and final reviewers are assigned. For PRs outside `megatron/core`, this step is skipped.
Step 3: Approved
Once all required reviewers have approved, the `Approved` label is applied automatically.
Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.