
[TRTLLM-11568][feat] Fix collective calls #11632

Open

greg-kwasniewski1 wants to merge 3 commits into NVIDIA:main from nv-auto-deploy:gk/dist_cleanup

Conversation

@greg-kwasniewski1 (Collaborator) commented Feb 22, 2026

Fixes #11568

Summary by CodeRabbit

  • Refactor
    • Internal distributed operations have been reorganized to use a centralized utility module for improved code organization and consistency.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions).

  • Any new dependencies have been scanned for license and vulnerability issues.

  • CODEOWNERS updated if ownership changes.

  • Documentation updated as needed.

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test [pipeline-id] --disable-reuse-test --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test [pipeline-id] (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
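
For example, the following invocation (used later in this conversation) runs the ordinary L0 pre-merge pipeline plus two extra AutoDeploy multi-GPU stages with fail-fast disabled:

    /bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" --disable-fail-fast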

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without careful validation can break the top of tree.
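
For example (the comment text here is illustrative):

    /bot skip --comment "Docs-only change, no build/test impact"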

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without careful validation can break the top of tree.

Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
@greg-kwasniewski1 (Collaborator Author)

/bot run

greg-kwasniewski1 removed the request for review from govind-ramnarayan on February 22, 2026 21:30
coderabbitai bot (Contributor) commented Feb 22, 2026

No actionable comments were generated in the recent review. 🎉


📝 Walkthrough

Replaced direct torch.distributed calls with a centralized distributed common wrapper module in the fused MOE implementation. Added import for dist_common and updated all_gather() and all_reduce() calls to dispatch through the wrapper instead of calling torch.distributed directly.

Changes

Cohort: Distributed Operations Refactoring
File(s): tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py
Summary: Replaced direct torch.distributed.all_gather() and torch.distributed.all_reduce() calls with dist_common wrapper equivalents; added an import of the distributed common module.
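
A rough sketch of the call-site change described above (the dist_common import path, signatures, and variable names here are assumptions for illustration, not the actual diff):

    # Illustrative sketch only; the import path and signatures are assumed.
    from tensorrt_llm._torch.auto_deploy.distributed import common as dist_common

    def _combine_moe_outputs(gathered, local_shard, partial_output):
        # Before, the fused-MoE op called torch.distributed directly:
        #   torch.distributed.all_gather(gathered, local_shard)
        #   torch.distributed.all_reduce(partial_output)
        # After, both collectives dispatch through the centralized wrapper,
        # which routes to the appropriate backend for the current runtime:
        dist_common.all_gather(gathered, local_shard)
        dist_common.all_reduce(partial_output)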

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Description check ⚠️ Warning: The PR description contains only the template boilerplate and provides no substantive information about the changes, objectives, or test coverage. Resolution: Complete the description sections with details about what was changed and why, list the specific test coverage, and ensure the PR checklist accurately reflects the actual work done.

✅ Passed checks (4 passed)

  • Title check: The title '[TRTLLM-11568][feat] Fix collective calls' directly corresponds to the PR objective of fixing collective operations by removing the torch.distributed fallback.
  • Linked Issues check: The code changes correctly replace torch.distributed.all_gather and all_reduce calls with dist_common wrappers, directly addressing issue #11568's requirement to remove the torch.distributed fallback in the collective operations path.
  • Out of Scope Changes check: All changes are confined to replacing torch.distributed calls with dist_common wrappers in the torch_moe module, which is directly scoped to the issue's objectives.
  • Docstring Coverage: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@tensorrt-cicd (Collaborator)

PR_Github #36436 [ run ] triggered by Bot. Commit: 26cbab0 Link to invocation

@tensorrt-cicd (Collaborator)

PR_Github #36436 [ run ] completed with state DISABLED
CI server is currently disabled for scheduled maintenance. Estimated completion time: 6 PM PST on 2/22.

Link to invocation

@greg-kwasniewski1 (Collaborator Author)

/bot run

@greg-kwasniewski1 (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #36465 [ run ] triggered by Bot. Commit: 26cbab0 Link to invocation

@tensorrt-cicd (Collaborator)

PR_Github #36463 [ run ] triggered by Bot. Commit: 26cbab0 Link to invocation

@tensorrt-cicd (Collaborator)

PR_Github #36465 [ run ] completed with state ABORTED. Commit: 26cbab0

Link to invocation

@tensorrt-cicd (Collaborator)

PR_Github #36463 [ run ] completed with state SUCCESS. Commit: 26cbab0
/LLM/main/L0_MergeRequest_PR pipeline #28211 completed with status: 'SUCCESS'

Link to invocation

@lucaslie (Member)

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #36525 [ run ] triggered by Bot. Commit: 26cbab0 Link to invocation

@tensorrt-cicd (Collaborator)

PR_Github #36525 [ run ] completed with state SUCCESS. Commit: 26cbab0
/LLM/main/L0_MergeRequest_PR pipeline #28261 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@greg-kwasniewski1 (Collaborator Author)

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" --disable-fail-fast --reuse-test

@tensorrt-cicd (Collaborator)

PR_Github #36641 [ run ] triggered by Bot. Commit: 26cbab0 Link to invocation

@tensorrt-cicd (Collaborator)

PR_Github #36641 [ run ] completed with state SUCCESS. Commit: 26cbab0
/LLM/main/L0_MergeRequest_PR pipeline #28364 completed with status: 'SUCCESS'

Link to invocation

@greg-kwasniewski1 (Collaborator Author)

/bot run



Development

Successfully merging this pull request may close these issues.

[Feature]: Remove torch.distributed from template_moe all_to_all

3 participants