[TRTLLM-11315][feat] Extend python cache transceiver to support Qwen-Next #12772
bo-nv merged 7 commits into NVIDIA:main
Conversation
…Next Signed-off-by: Bo Deng <deemod@nvidia.com>
/bot run --add-multi-gpu-test --disable-fail-fast

PR_Github #41948 [ run ] triggered by Bot. Commit:
📝 Walkthrough
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/pyexecutor/_util.py (1)
952-1080: ⚠️ Potential issue | 🟠 Major

Honor `layer_mask` in the Qwen3 hybrid path.

Unlike the Nemotron branch above, this branch always rebuilds the masks from `config` and then appends speculative layers. In one-model speculative decoding, the caller passes `layer_mask` to split target vs. draft ownership; dropping it here lets both KV cache managers claim the same spec layers.

Suggested fix:
```diff
     hybrid_layer_mask, mamba_layer_mask = get_qwen3_hybrid_layer_masks(
         config)
+    if layer_mask is not None:
+        base_hybrid_layer_mask = hybrid_layer_mask
+        base_mamba_layer_mask = mamba_layer_mask
+        pattern_len = len(base_hybrid_layer_mask)
+        hybrid_layer_mask = []
+        mamba_layer_mask = []
+        for i, include in enumerate(layer_mask):
+            is_attention = (base_hybrid_layer_mask[i]
+                            if i < pattern_len else True)
+            is_mamba = (base_mamba_layer_mask[i]
+                        if i < pattern_len else False)
+            hybrid_layer_mask.append(is_attention and include)
+            mamba_layer_mask.append(is_mamba and include)
     # For hybrid models, hybrid_layer_mask is always passed as
     # layer_mask to KVCacheManager, which means get_pp_layers
     # sees a non-None layer_mask and won't auto-add spec layers.
     # Extend the masks here to include MTP spec layers (full
     # attention, no linear states) so they get KV cache entries.
-    if spec_config is not None:
+    elif spec_config is not None:
         from ..speculative.utils import get_num_spec_layers
         num_spec_layers = get_num_spec_layers(spec_config)
         if num_spec_layers > 0:
             hybrid_layer_mask.extend([True] * num_spec_layers)
             mamba_layer_mask.extend([False] * num_spec_layers)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/_torch/pyexecutor/_util.py` around lines 952 - 1080, The Qwen3 hybrid path ignores an incoming layer_mask and rebuilds masks from config, causing spec layers to be claimed incorrectly; modify the branch that currently calls get_qwen3_hybrid_layer_masks(config) to first check if layer_mask is not None and, if so, compute hybrid_layer_mask and mamba_layer_mask from the provided layer_mask and the model's hybrid pattern (mirroring the Nemotron logic that builds hybrid_layer_mask/mamba_layer_mask and computes num_layers/num_mamba_layers), otherwise fall back to get_qwen3_hybrid_layer_masks(config); ensure you then extend both masks with spec layers when spec_config is present (use get_num_spec_layers(spec_config)), compute num_layers/num_mamba_layers from the resulting masks, and pass those masks into kv_cache_manager_cls (same arguments as before) so the KVCacheManager honors the caller's layer_mask.
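To make the masking logic above concrete, here is a minimal, self-contained sketch of the mask-merging step the suggested fix performs. `merge_layer_masks` is a hypothetical helper for illustration only, not part of TRT-LLM; it assumes the base masks describe the model's hybrid attention/mamba pattern and that `layer_mask` marks the layers this cache manager owns:

```python
def merge_layer_masks(base_hybrid, base_mamba, layer_mask):
    """Filter the per-layer attention/mamba masks by an ownership mask.

    Layers beyond the base pattern (e.g. appended speculative layers)
    are treated as full attention with no linear (mamba) state.
    """
    pattern_len = len(base_hybrid)
    hybrid_out, mamba_out = [], []
    for i, include in enumerate(layer_mask):
        is_attention = base_hybrid[i] if i < pattern_len else True
        is_mamba = base_mamba[i] if i < pattern_len else False
        # A layer only stays claimed if the caller's mask includes it.
        hybrid_out.append(is_attention and include)
        mamba_out.append(is_mamba and include)
    return hybrid_out, mamba_out


# Four-layer alternating pattern plus one extra (spec) layer in layer_mask:
h, m = merge_layer_masks(
    base_hybrid=[True, False, True, False],
    base_mamba=[False, True, False, True],
    layer_mask=[True, True, False, False, True],
)
print(h)  # [True, False, False, False, True]
print(m)  # [False, True, False, False, False]
```

The point of the `elif` in the suggested diff is the same as here: when the caller already supplies `layer_mask`, the speculative layers are covered by that mask, so they must not be appended a second time.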
🧹 Nitpick comments (1)
tests/integration/test_lists/test-db/l0_dgx_b200.yml (1)
151-154: Add the new Qwen3-Next case to this selector update.

This PR also adds `accuracy/test_disaggregated_serving.py::TestQwen3NextInstruct::test_auto_dtype`, but this lane update only wires the Nemotron variants. Please add the Qwen3-Next test here, and in `tests/integration/test_lists/qa/llm_function_core.txt`, or the new transceiver path will not run in the B200/QA coverage touched by this PR.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/test_lists/test-db/l0_dgx_b200.yml` around lines 151 - 154, Add the new Qwen3-Next test case to the selector update by including the test identifier accuracy/test_disaggregated_serving.py::TestQwen3NextInstruct::test_auto_dtype alongside the existing Nemotron entries in the l0_dgx_b200.yml selector, and also add that same test identifier into the qa list llm_function_core.txt so the new transceiver path for Qwen3-Next is covered by the B200/QA lane.
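For illustration only, the selector addition might look like the sketch below. The surrounding lane structure is an assumption about typical TRT-LLM test-db files, not the actual contents of `l0_dgx_b200.yml`; only the test identifier comes from this review comment:

```yaml
# Hypothetical fragment of l0_dgx_b200.yml; surrounding keys are assumed.
  tests:
    # ... existing Nemotron entries ...
    - accuracy/test_disaggregated_serving.py::TestQwen3NextInstruct::test_auto_dtype
```

The same identifier would be appended as a plain line in `tests/integration/test_lists/qa/llm_function_core.txt`.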
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9f34e5be-4e76-4738-a0a3-73e294dfbfc2
📒 Files selected for processing (5)
- tensorrt_llm/_torch/pyexecutor/_util.py
- tensorrt_llm/_torch/pyexecutor/mamba_cache_manager.py
- tests/integration/defs/accuracy/test_disaggregated_serving.py
- tests/integration/test_lists/qa/llm_function_core.txt
- tests/integration/test_lists/test-db/l0_dgx_b200.yml
PR_Github #41948 [ run ] completed with state

/bot run --add-multi-gpu-test --disable-fail-fast

PR_Github #42015 [ run ] triggered by Bot. Commit:

PR_Github #42015 [ run ] completed with state

Signed-off-by: Bo Deng <deemod@nvidia.com>

/bot run --add-multi-gpu-test --disable-fail-fast

PR_Github #42052 [ run ] triggered by Bot. Commit:

Signed-off-by: Bo Deng <deemod@nvidia.com>

/bot run --add-multi-gpu-test --disable-fail-fast

PR_Github #42055 [ run ] triggered by Bot. Commit:

PR_Github #42052 [ run ] completed with state

PR_Github #42055 [ run ] completed with state

/bot run --add-multi-gpu-test --disable-fail-fast

PR_Github #42212 [ run ] triggered by Bot. Commit:

/bot run --add-multi-gpu-test --disable-fail-fast

PR_Github #42348 [ run ] triggered by Bot. Commit:

PR_Github #42348 [ run ] completed with state

/bot run --add-multi-gpu-test --disable-fail-fast

PR_Github #42620 [ run ] triggered by Bot. Commit:

PR_Github #42620 [ run ] completed with state
…Next (NVIDIA#12772) Signed-off-by: Bo Deng <deemod@nvidia.com>
Summary by CodeRabbit
Release Notes
New Features
Tests
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment `/bot help`.