
[None][fix] Fix moe_chunking_tokens during MoE A2A #12929

Merged
Wanli-Jiang merged 1 commit into NVIDIA:main from Wanli-Jiang:user/williamj/fix-moe-chunking-tokens
Apr 15, 2026

Conversation

@Wanli-Jiang
Collaborator

@Wanli-Jiang Wanli-Jiang commented Apr 10, 2026

Summary by CodeRabbit

  • Improvements

    • Improved Mixture of Experts (MoE) distributed computation: optimized the chunk-count calculation when data parallelism with inter-rank communication is enabled, improving memory efficiency and keeping chunk sizes aligned with GPU workspace requirements in multi-rank deployments.
  • Documentation

    • Updated documentation to reflect the optimized MoE chunk calculation methodology.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@Wanli-Jiang Wanli-Jiang requested a review from a team as a code owner April 10, 2026 08:56
@Wanli-Jiang Wanli-Jiang requested a review from QiJune April 10, 2026 08:56
@Wanli-Jiang
Collaborator Author

/bot run --add-multi-gpu-test --disable-fail-fast

@coderabbitai
Contributor

coderabbitai bot commented Apr 10, 2026

📝 Walkthrough

The calculate_num_chunks() function was modified to compute the chunking row count differently when data-parallel mode is active with communication enabled: it now uses mapping.moe_ep_size * max(all_rank_num_tokens) instead of sum(all_rank_num_tokens). The docstring was updated to reflect this change.
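The change can be illustrated with a minimal, hypothetical Python sketch of the logic described above; it is not the actual TensorRT-LLM implementation, and moe_max_num_tokens (the number of rows a per-chunk workspace can hold) and the boolean flags are assumptions introduced here for illustration:

```python
# Minimal, hypothetical sketch of the chunk-count logic described above.
# Not the actual TensorRT-LLM code; moe_max_num_tokens and the flags are assumptions.

def calculate_num_chunks(all_rank_num_tokens, moe_ep_size, moe_max_num_tokens,
                         use_dp, use_a2a_comm):
    if use_dp and use_a2a_comm:
        # With A2A, each EP rank's recv buffer must hold up to
        # max(all_rank_num_tokens) rows from every one of the moe_ep_size
        # ranks, so chunking is sized against moe_ep_size * max(...).
        num_rows = moe_ep_size * max(all_rank_num_tokens)
    else:
        # Otherwise, size against the total number of tokens across ranks.
        num_rows = sum(all_rank_num_tokens)
    # Ceiling division: chunks needed so each chunk fits the workspace.
    return (num_rows + moe_max_num_tokens - 1) // moe_max_num_tokens
```

For example, with moe_ep_size = 4, per-rank token counts [5, 8, 3, 6], and a 24-row workspace, the A2A branch sizes for 4 * 8 = 32 rows and returns 2 chunks, while the sum-based formula would size for only 22 rows and return 1 chunk, under-provisioning the A2A recv buffer.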

Changes

Cohort / File(s): MoE Chunk Calculation (tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py)
Summary: Modified calculate_num_chunks() to use a different formula for data-parallel scenarios with communication enabled, switching from summing all rank token counts to multiplying the MoE EP size by the maximum rank token count. Docstring updated to document A2A communication and the recv buffer shape.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Description check: ⚠️ Warning. The PR description is incomplete: the template structure is present, but the required 'Description' and 'Test Coverage' sections are empty, containing only placeholder comments. Resolution: fill in the Description section explaining the issue and solution, and the Test Coverage section listing the relevant tests that safeguard the changes.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The title clearly describes the main change: fixing moe_chunking_tokens behavior during MoE A2A communication, which aligns with the code modification.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required 80.00% threshold.



Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py`:
- Around lines 357-360: The block computing num_rows from self.mapping.moe_ep_size and all_rank_num_tokens is guarded by "self.comm is not None", which incorrectly applies the A2A-specific max-based formula to non-A2A communicators. Change the conditional to explicitly detect the All-to-All (A2A) communicator instead of merely checking for non-None, for example via an explicit communicator type flag or enum (self.comm.comm_type == "A2A") or isinstance(self.comm, AllToAllComm), depending on the communicator API, so the max-based branch runs only for A2A. Update the guard on the num_rows computation in the method where self.use_dp, self.comm, mapping.moe_ep_size, and all_rank_num_tokens are referenced; a sketch follows below.
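A hypothetical sketch of what the suggested guard could look like; the attribute name comm_type and the "A2A" value are illustrative and may not match the actual communicator API:

```python
# Hypothetical sketch of the reviewer's suggestion; "comm_type" and the "A2A"
# value are illustrative names, not a confirmed TensorRT-LLM API.

def _is_a2a_comm(comm) -> bool:
    # Detect the All-to-All communicator explicitly instead of only
    # checking "comm is not None".
    return comm is not None and getattr(comm, "comm_type", None) == "A2A"

# In calculate_num_chunks(), the guard would then become something like:
#   if self.use_dp and _is_a2a_comm(self.comm):
#       num_rows = self.mapping.moe_ep_size * max(all_rank_num_tokens)
#   else:
#       num_rows = sum(all_rank_num_tokens)
```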

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 60e4d70f-be3b-4c5d-ba0e-38a70e0ccb8a

📥 Commits

Reviewing files that changed from the base of the PR and between 4811704 and c20ceb6.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py

Comment thread tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py
@tensorrt-cicd
Collaborator

PR_Github #42697 [ run ] triggered by Bot. Commit: c20ceb6 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #42697 [ run ] completed with state SUCCESS. Commit: c20ceb6
/LLM/main/L0_MergeRequest_PR pipeline #33392 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@Wanli-Jiang
Collaborator Author

/bot run --add-multi-gpu-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #42959 [ run ] triggered by Bot. Commit: c20ceb6 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #42959 [ run ] completed with state SUCCESS. Commit: c20ceb6
/LLM/main/L0_MergeRequest_PR pipeline #33616 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@Wanli-Jiang
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #43238 [ run ] triggered by Bot. Commit: c20ceb6 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #43238 [ run ] completed with state DISABLED
Freeze main and open the PR merge only after CI is back to healthy https://nvidia.slack.com/archives/C059LSY62BT/p1776141760843319?thread_ts=1775985925.442509&cid=C059LSY62BT

Link to invocation

Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/fix-moe-chunking-tokens branch from c20ceb6 to cf4333e on April 15, 2026 04:40
@Wanli-Jiang
Collaborator Author

/bot help

@github-actions

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@Wanli-Jiang
Collaborator Author

/bot run --stage-list "DGX_B200-4_GPUs-AutoDeploy-1"

@tensorrt-cicd
Collaborator

PR_Github #43387 [ run ] triggered by Bot. Commit: cf4333e Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #43387 [ run ] completed with state SUCCESS. Commit: cf4333e
/LLM/main/L0_MergeRequest_PR pipeline #33921 (Partly Tested) completed with status: 'SUCCESS'

CI Report

Link to invocation

@Wanli-Jiang
Collaborator Author

/bot skip --comment "all runs passed within different CI tests"

@tensorrt-cicd
Collaborator

PR_Github #43434 [ skip ] triggered by Bot. Commit: cf4333e Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #43434 [ skip ] completed with state SUCCESS. Commit: cf4333e
Skipping testing for commit cf4333e

Link to invocation

@Wanli-Jiang Wanli-Jiang merged commit fc83799 into NVIDIA:main Apr 15, 2026
5 checks passed
chienchunhung pushed a commit to chienchunhung/TensorRT-LLM that referenced this pull request Apr 16, 2026
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>