[TRTLLM-12250][feat] added lm head sharding #12252

Merged

greg-kwasniewski1 merged 4 commits into NVIDIA:main from nv-auto-deploy:gk/sharding_lm_head on Mar 20, 2026

Conversation

@greg-kwasniewski1
Collaborator

@greg-kwasniewski1 greg-kwasniewski1 commented Mar 16, 2026

Fixes #12250

Summary by CodeRabbit

  • New Features
    • Added shard_all_unprocessed configuration option for enhanced sharding control in model deployment configurations.
    • Enhanced sharding transformation system with selective filtering capabilities for granular control over weight distribution.
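As a rough illustration of the new option, a minimal config fragment is shown below; the surrounding keys are assumptions based on the transform name mentioned in this PR, not a verbatim copy of the changed file:

```yaml
transforms:
  detect_sharding:
    # When true, linear nodes not already handled by a dedicated
    # sharding pattern receive a simple shard as a fallback.
    shard_all_unprocessed: true
```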

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
@greg-kwasniewski1 greg-kwasniewski1 requested a review from a team as a code owner March 16, 2026 17:05
@greg-kwasniewski1
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #39115 [ run ] triggered by Bot. Commit: 9045653 Link to invocation

@coderabbitai
Contributor

coderabbitai bot commented Mar 16, 2026

📝 Walkthrough

Walkthrough

Adds a new configuration option to enable selective sharding of lm_head nodes in the Qwen model. Introduces a key_filter parameter to the _process_simple_shard function, allowing callers to selectively filter which nodes undergo sharding based on node names.

Changes

Cohort: Configuration Enhancement
File: examples/auto_deploy/model_registry/configs/qwen3.5_moe_35b.yaml
Summary: Added a new boolean option shard_all_unprocessed: true under the transforms.detect_sharding section.

Cohort: Sharding Filter Logic
File: tensorrt_llm/_torch/auto_deploy/transform/library/sharding.py
Summary: Introduced an optional key_filter parameter to the _process_simple_shard() function to selectively filter nodes during processing. The default behavior is preserved when no filter is provided. Invocation sites were updated to apply the filter to nodes containing "lm_head" in their names.
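A minimal sketch of the filtering mechanism described above, using a simplified stand-in for FX graph nodes; the Node class, signature, and return value are illustrative assumptions, not the actual TensorRT-LLM implementation:

```python
from typing import Callable, Iterable, List, Optional


class Node:
    """Stand-in for a torch.fx graph node; only the name is used here."""

    def __init__(self, name: str):
        self.name = name


def process_simple_shard(
    nodes: Iterable[Node],
    key_filter: Optional[Callable[[Node], bool]] = None,
) -> List[str]:
    """Shard each node, optionally restricted by key_filter.

    When key_filter is None, every node is processed, preserving the
    pre-existing behavior; otherwise only matching nodes are sharded.
    """
    sharded = []
    for node in nodes:
        if key_filter is not None and not key_filter(node):
            continue  # skip nodes the caller filtered out
        sharded.append(node.name)  # placeholder for the real shard logic
    return sharded


nodes = [Node("model.layers.0.mlp.up_proj"), Node("lm_head")]
# Filter as described in the PR: only nodes whose name contains "lm_head".
print(process_simple_shard(nodes, key_filter=lambda n: "lm_head" in n.name))
# -> ['lm_head']
```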

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Description check: ⚠️ Warning. The PR description only contains 'Fixes #12250', with no explanation of the issue, solution, or test coverage despite the template requirements. Resolution: fill in the Description section explaining what lm_head sharding is and why it is needed, and add a Test Coverage section listing the tests that verify the sharding functionality.

✅ Passed checks (4 passed)

Title check: ✅ Passed. The title 'added lm head sharding' clearly and concisely describes the main change: implementing sharding for the lm_head component.
Linked Issues check: ✅ Passed. The PR implements lm_head sharding for the Qwen model as required by issue #12250, through the key_filter parameter and configuration changes.
Out of Scope Changes check: ✅ Passed. All changes are directly related to implementing lm_head sharding: adding key_filter to enable selective sharding and the configuration to apply it.
Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tensorrt_llm/_torch/auto_deploy/transform/library/sharding.py`:
- Around lines 3191-3194: The call guarded by config.shard_all_unprocessed
currently restricts sharding to nodes whose name contains "lm_head" via
key_filter=lambda n: "lm_head" in n.name, which contradicts the documented
shard_all_unprocessed semantics. Fix by removing the key_filter argument so
that _process_simple_shard(unprocessed_linear_nodes, transform_container)
shards all unprocessed nodes, or alternatively add a new config flag such as
shard_lm_head or shard_unprocessed_pattern and wire it to the key_filter if
the lm_head-only behavior is intended. If you choose the alternate approach,
update any related config handling to support the new flag.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 70eb3560-e2fd-4ef8-9b9f-1ae5d9955214

📥 Commits

Reviewing files that changed from the base of the PR and between 93b0dc7 and 9045653.

📒 Files selected for processing (2)
  • examples/auto_deploy/model_registry/configs/qwen3.5_moe_35b.yaml
  • tensorrt_llm/_torch/auto_deploy/transform/library/sharding.py

@tensorrt-cicd
Collaborator

PR_Github #39115 [ run ] completed with state SUCCESS. Commit: 9045653
/LLM/main/L0_MergeRequest_PR pipeline #30374 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@greg-kwasniewski1
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #39296 [ run ] triggered by Bot. Commit: 05dafee Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39296 [ run ] completed with state SUCCESS. Commit: 05dafee
/LLM/main/L0_MergeRequest_PR pipeline #30550 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@greg-kwasniewski1
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #39450 [ run ] triggered by Bot. Commit: aa03a85 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39450 [ run ] completed with state SUCCESS. Commit: aa03a85
/LLM/main/L0_MergeRequest_PR pipeline #30677 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

Replace hardcoded lm_head key_filter with configurable
simple_shard_filter field so shard_all_unprocessed preserves
its default semantics. Enable lm_head filtering in all Qwen
model configs.

Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
Made-with: Cursor
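One way the commit's configurable filter could be wired, sketched under assumptions: the config class, method, and plumbing below are hypothetical, and only the field name simple_shard_filter comes from the commit message:

```python
from typing import Callable, Optional


class DetectShardingConfig:
    """Illustrative stand-in for the sharding transform's config object."""

    def __init__(
        self,
        shard_all_unprocessed: bool = False,
        simple_shard_filter: Optional[str] = None,
    ):
        self.shard_all_unprocessed = shard_all_unprocessed
        self.simple_shard_filter = simple_shard_filter

    def key_filter(self) -> Optional[Callable[[str], bool]]:
        """Build a name filter only when a pattern is configured.

        Returning None preserves shard_all_unprocessed's default
        semantics: all unprocessed linear nodes get a simple shard.
        """
        if self.simple_shard_filter is None:
            return None
        pattern = self.simple_shard_filter
        return lambda name: pattern in name


cfg = DetectShardingConfig(shard_all_unprocessed=True,
                           simple_shard_filter="lm_head")
flt = cfg.key_filter()
print(flt("lm_head"), flt("model.layers.0.mlp.up_proj"))
# -> True False
```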
@greg-kwasniewski1
Collaborator Author

/bot run --disable-fail-fast --reuse-test --add-multi-gpu-test

@tensorrt-cicd
Collaborator

PR_Github #39621 [ run ] triggered by Bot. Commit: 2fe137c Link to invocation

@greg-kwasniewski1
Collaborator Author

/bot run --disable-fail-fast --reuse-test --add-multi-gpu-test

@tensorrt-cicd
Collaborator

PR_Github #39621 [ run ] completed with state SUCCESS. Commit: 2fe137c
/LLM/main/L0_MergeRequest_PR pipeline #30832 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39657 [ run ] triggered by Bot. Commit: 2fe137c Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39657 [ run ] completed with state SUCCESS. Commit: 2fe137c
/LLM/main/L0_MergeRequest_PR pipeline #30862 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@greg-kwasniewski1
Collaborator Author

/bot run --disable-fail-fast --reuse-test --add-multi-gpu-test

1 similar comment
@greg-kwasniewski1
Collaborator Author

/bot run --disable-fail-fast --reuse-test --add-multi-gpu-test

@tensorrt-cicd
Collaborator

PR_Github #39682 [ run ] triggered by Bot. Commit: 2fe137c Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39683 [ run ] triggered by Bot. Commit: 2fe137c Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39682 [ run ] completed with state ABORTED. Commit: 2fe137c

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39683 [ run ] completed with state SUCCESS. Commit: 2fe137c
/LLM/main/L0_MergeRequest_PR pipeline #30882 completed with status: 'SUCCESS'

CI Report

Link to invocation

@greg-kwasniewski1 greg-kwasniewski1 merged commit 7502e4b into NVIDIA:main Mar 20, 2026
5 checks passed


Development

Successfully merging this pull request may close these issues.

[AutoDeploy][Feature]: Add lm_head sharding for Qwen model
