[TRTLLM-12250][feat] added lm head sharding #12252
greg-kwasniewski1 merged 4 commits into NVIDIA:main from
Conversation
Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
/bot run

PR_Github #39115 [ run ] triggered by Bot. Commit:
📝 Walkthrough

Adds a new configuration option to enable selective sharding of lm_head nodes in Qwen models. Introduces a key_filter parameter to the _process_simple_shard function, allowing callers to selectively filter which nodes undergo sharding based on node names.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (4 passed)
Actionable comments posted: 1
Inline comments:
In `@tensorrt_llm/_torch/auto_deploy/transform/library/sharding.py`:
- Around lines 3191-3194: The call guarded by config.shard_all_unprocessed currently restricts sharding to nodes whose name contains "lm_head" via key_filter=lambda n: "lm_head" in n.name, which contradicts the documented shard_all_unprocessed semantics. Fix by removing the key_filter argument so that _process_simple_shard(unprocessed_linear_nodes, transform_container) shards all unprocessed nodes, or alternatively add a new config flag such as shard_lm_head or shard_unprocessed_pattern and wire it to key_filter if the lm_head-only behavior is intended. If you choose the alternate approach, update the related config handling to support the new flag.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 70eb3560-e2fd-4ef8-9b9f-1ae5d9955214
📒 Files selected for processing (2)
- examples/auto_deploy/model_registry/configs/qwen3.5_moe_35b.yaml
- tensorrt_llm/_torch/auto_deploy/transform/library/sharding.py
PR_Github #39115 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #39296 [ run ] triggered by Bot. Commit:

PR_Github #39296 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #39450 [ run ] triggered by Bot. Commit:

PR_Github #39450 [ run ] completed with state
Replace hardcoded lm_head key_filter with configurable simple_shard_filter field so shard_all_unprocessed preserves its default semantics. Enable lm_head filtering in all Qwen model configs.

Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
Made-with: Cursor
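The final design described in this commit, a configurable simple_shard_filter field replacing the hardcoded lambda, might be sketched as below. Only the field names shard_all_unprocessed and simple_shard_filter come from the commit message; the substring-match semantics, the Node/TransformConfig classes, and the select_unprocessed helper are assumptions.

```python
# Hedged sketch: a simple_shard_filter config field instead of a hardcoded
# lm_head lambda. Substring matching and all class/function names here are
# assumptions, not the real TensorRT-LLM schema.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Node:
    name: str


@dataclass
class TransformConfig:
    shard_all_unprocessed: bool = True
    simple_shard_filter: Optional[str] = None  # e.g. "lm_head" in Qwen configs


def select_unprocessed(nodes: List[Node], cfg: TransformConfig) -> List[Node]:
    """Pick which unprocessed linear nodes the simple-shard pass touches."""
    if not cfg.shard_all_unprocessed:
        return []  # pass disabled entirely
    if cfg.simple_shard_filter is None:
        return list(nodes)  # default semantics preserved: shard everything
    return [n for n in nodes if cfg.simple_shard_filter in n.name]
```

Under this assumed shape, a Qwen config such as qwen3.5_moe_35b.yaml would set the filter to lm_head while other models leave it unset, keeping the default shard-everything behavior.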
/bot run --disable-fail-fast --reuse-test --add-multi-gpu-test

PR_Github #39621 [ run ] triggered by Bot. Commit:

/bot run --disable-fail-fast --reuse-test --add-multi-gpu-test

PR_Github #39621 [ run ] completed with state

PR_Github #39657 [ run ] triggered by Bot. Commit:

PR_Github #39657 [ run ] completed with state

/bot run --disable-fail-fast --reuse-test --add-multi-gpu-test

PR_Github #39682 [ run ] triggered by Bot. Commit:

PR_Github #39683 [ run ] triggered by Bot. Commit:

PR_Github #39682 [ run ] completed with state

PR_Github #39683 [ run ] completed with state
Fixes #12250
Summary by CodeRabbit

- shard_all_unprocessed configuration option for enhanced sharding control in model deployment configurations.

Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.