[TRTLLM-11315][feat] Extend python cache transceiver to support Qwen-Next #12772

Merged: bo-nv merged 7 commits into NVIDIA:main from bo-nv:main-mamba_python_transfer on Apr 13, 2026
Conversation

bo-nv (Collaborator) commented Apr 6, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for model type configuration in cache management systems to enable optimized handling across different model architectures.
  • Tests

    • Expanded disaggregated serving test coverage with parameterized test variants.
    • Added test support for Qwen3-Next model with accuracy validation.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

bo-nv added 3 commits April 6, 2026 12:20
…Next

Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
@bo-nv bo-nv requested review from a team as code owners April 6, 2026 16:21
bo-nv (Collaborator, Author) commented Apr 6, 2026

/bot run --add-multi-gpu-test --disable-fail-fast

tensorrt-cicd (Collaborator):

PR_Github #41948 [ run ] triggered by Bot. Commit: f3a2641 Link to invocation

coderabbitai bot (Contributor) commented Apr 6, 2026

📝 Walkthrough

The changes introduce a model_type parameter to the Mamba cache managers, enabling different conv-state dimension orderings for Nemotron-hybrid and Qwen3-Next models. The parameter is threaded from KV cache manager initialization down through the hybrid and Python implementations. Test infrastructure is updated with parameterized variants and a new Qwen3-Next test class.

Changes

  • KV Cache Manager Parameter Threading (tensorrt_llm/_torch/pyexecutor/_util.py, tensorrt_llm/_torch/pyexecutor/mamba_cache_manager.py): Added a model_type parameter to the KV cache manager constructors ("nemotron_hybrid" or "qwen3_next"). The Mamba cache manager now conditionally assigns conv_section_dims based on model type, with different dimension orderings: [ng_ds_local, ng_ds_local, d_inner_local] for Qwen3-Next and [d_inner_local, ng_ds_local, ng_ds_local] for Nemotron-Hybrid.
  • Disaggregated Serving Tests (tests/integration/defs/accuracy/test_disaggregated_serving.py): Parameterized TestNemotron3Super120B.test_auto_dtype with use_py_transceiver boolean variants. Added a new TestQwen3NextInstruct test class with the Qwen3-Next-80B model, the PYTHON transceiver runtime, and GSM8K accuracy validation.
  • Test List Updates (tests/integration/test_lists/qa/llm_function_core.txt, tests/integration/test_lists/test-db/l0_dgx_b200.yml): Replaced the single test_auto_dtype entry with two parameterized variants (use_py_transceiver=True and False). Added the new test_ctx_dp2_gen_tp4 test case to the matrix.
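The model_type-conditioned conv-state ordering described above can be sketched as a standalone function. This is illustrative only, not the actual TensorRT-LLM implementation; the dimension names (d_inner_local, ng_ds_local) and the model_type values are taken from the summary above.

```python
# Illustrative sketch of selecting conv-state section dimensions by
# model type, per the change summary. Not the real TensorRT-LLM code.
def select_conv_section_dims(model_type: str, d_inner_local: int,
                             ng_ds_local: int) -> list:
    if model_type == "qwen3_next":
        # Qwen3-Next: grouped-state sections first, inner dim last.
        return [ng_ds_local, ng_ds_local, d_inner_local]
    if model_type == "nemotron_hybrid":
        # Nemotron-Hybrid: inner dim first, grouped-state sections after.
        return [d_inner_local, ng_ds_local, ng_ds_local]
    raise ValueError(f"unsupported model_type: {model_type!r}")
```

Threading the parameter down from the KV cache manager then amounts to forwarding the same string into the Mamba cache manager constructor.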

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 10.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Description check ⚠️ Warning: The PR description is entirely the template, with no actual content filled in. Required sections such as 'Description' and 'Test Coverage' are blank or contain only template comments. Resolution: fill in the Description section explaining what was changed and why, and the Test Coverage section listing the relevant tests that safeguard the changes.

✅ Passed checks (1 passed)
  • Title check ✅ Passed: The title clearly describes the main feature, extending the Python cache transceiver to support the Qwen-Next model, which aligns with the core changes across multiple files.

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot (Contributor) left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/pyexecutor/_util.py (1)

952-1080: ⚠️ Potential issue | 🟠 Major

Honor layer_mask in the Qwen3 hybrid path.

Unlike the Nemotron branch above, this branch always rebuilds the masks from config and then appends speculative layers. In one-model speculative decoding, the caller passes layer_mask to split target vs. draft ownership; dropping it here lets both KV cache managers claim the same spec layers.

Suggested fix
         hybrid_layer_mask, mamba_layer_mask = get_qwen3_hybrid_layer_masks(
             config)
+        if layer_mask is not None:
+            base_hybrid_layer_mask = hybrid_layer_mask
+            base_mamba_layer_mask = mamba_layer_mask
+            pattern_len = len(base_hybrid_layer_mask)
+            hybrid_layer_mask = []
+            mamba_layer_mask = []
+            for i, include in enumerate(layer_mask):
+                is_attention = (base_hybrid_layer_mask[i]
+                                if i < pattern_len else True)
+                is_mamba = (base_mamba_layer_mask[i]
+                            if i < pattern_len else False)
+                hybrid_layer_mask.append(is_attention and include)
+                mamba_layer_mask.append(is_mamba and include)
         # For hybrid models, hybrid_layer_mask is always passed as
         # layer_mask to KVCacheManager, which means get_pp_layers
         # sees a non-None layer_mask and won't auto-add spec layers.
         # Extend the masks here to include MTP spec layers (full
         # attention, no linear states) so they get KV cache entries.
-        if spec_config is not None:
+        elif spec_config is not None:
             from ..speculative.utils import get_num_spec_layers
             num_spec_layers = get_num_spec_layers(spec_config)
             if num_spec_layers > 0:
                 hybrid_layer_mask.extend([True] * num_spec_layers)
                 mamba_layer_mask.extend([False] * num_spec_layers)
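Distilled into a standalone function, the mask merge in the suggested fix works as below. The function name and signature are hypothetical; the real logic lives inline in _util.py.

```python
# Standalone sketch of the mask merge from the suggested fix above.
# Layers beyond the base pattern are treated as attention spec layers
# (hybrid=True, mamba=False); the caller's layer_mask then gates both.
def merge_layer_masks(layer_mask, base_hybrid, base_mamba):
    pattern_len = len(base_hybrid)
    hybrid_layer_mask, mamba_layer_mask = [], []
    for i, include in enumerate(layer_mask):
        is_attention = base_hybrid[i] if i < pattern_len else True
        is_mamba = base_mamba[i] if i < pattern_len else False
        hybrid_layer_mask.append(is_attention and include)
        mamba_layer_mask.append(is_mamba and include)
    return hybrid_layer_mask, mamba_layer_mask
```

With this shape, a caller that owns only the target layers gets masks that exclude the draft layers, so the two KV cache managers no longer claim the same spec layers.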
🧹 Nitpick comments (1)
tests/integration/test_lists/test-db/l0_dgx_b200.yml (1)

151-154: Add the new Qwen3-Next case to this selector update.

This PR also adds accuracy/test_disaggregated_serving.py::TestQwen3NextInstruct::test_auto_dtype, but this lane update only wires the Nemotron variants. Please add the Qwen3-Next test here—and in tests/integration/test_lists/qa/llm_function_core.txt—or the new transceiver path will not run in the B200/QA coverage touched by this PR.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9f34e5be-4e76-4738-a0a3-73e294dfbfc2

📥 Commits

Reviewing files that changed from the base of the PR and between 662e45f and f3a2641.

📒 Files selected for processing (5)
  • tensorrt_llm/_torch/pyexecutor/_util.py
  • tensorrt_llm/_torch/pyexecutor/mamba_cache_manager.py
  • tests/integration/defs/accuracy/test_disaggregated_serving.py
  • tests/integration/test_lists/qa/llm_function_core.txt
  • tests/integration/test_lists/test-db/l0_dgx_b200.yml

tensorrt-cicd (Collaborator):

PR_Github #41948 [ run ] completed with state SUCCESS. Commit: f3a2641
/LLM/main/L0_MergeRequest_PR pipeline #32804 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

bo-nv (Collaborator, Author) commented Apr 7, 2026

/bot run --add-multi-gpu-test --disable-fail-fast

tensorrt-cicd (Collaborator):

PR_Github #42015 [ run ] triggered by Bot. Commit: f3a2641 Link to invocation

tensorrt-cicd (Collaborator):

PR_Github #42015 [ run ] completed with state FAILURE. Commit: f3a2641
/LLM/main/L0_MergeRequest_PR pipeline #32861 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

bo-nv added 2 commits April 7, 2026 03:36
Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
bo-nv (Collaborator, Author) commented Apr 7, 2026

/bot run --add-multi-gpu-test --disable-fail-fast

tensorrt-cicd (Collaborator):

PR_Github #42052 [ run ] triggered by Bot. Commit: d12b6a8 Link to invocation

bo-nv added 2 commits April 7, 2026 03:48
Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
bo-nv (Collaborator, Author) commented Apr 7, 2026

/bot run --add-multi-gpu-test --disable-fail-fast

tensorrt-cicd (Collaborator):

PR_Github #42055 [ run ] triggered by Bot. Commit: f5f8b50 Link to invocation

tensorrt-cicd (Collaborator):

PR_Github #42052 [ run ] completed with state ABORTED. Commit: d12b6a8

Link to invocation

tensorrt-cicd (Collaborator):

PR_Github #42055 [ run ] completed with state SUCCESS. Commit: f5f8b50
/LLM/main/L0_MergeRequest_PR pipeline #32896 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

bo-nv (Collaborator, Author) commented Apr 8, 2026

/bot run --add-multi-gpu-test --disable-fail-fast

tensorrt-cicd (Collaborator):

PR_Github #42212 [ run ] triggered by Bot. Commit: f5f8b50 Link to invocation

bo-nv (Collaborator, Author) commented Apr 8, 2026

/bot run --add-multi-gpu-test --disable-fail-fast

tensorrt-cicd (Collaborator):

PR_Github #42348 [ run ] triggered by Bot. Commit: f5f8b50 Link to invocation

tensorrt-cicd (Collaborator):

PR_Github #42348 [ run ] completed with state ABORTED. Commit: f5f8b50

Link to invocation

bo-nv (Collaborator, Author) commented Apr 10, 2026

/bot run --add-multi-gpu-test --disable-fail-fast

tensorrt-cicd (Collaborator):

PR_Github #42620 [ run ] triggered by Bot. Commit: f5f8b50 Link to invocation

tensorrt-cicd (Collaborator):

PR_Github #42620 [ run ] completed with state SUCCESS. Commit: f5f8b50
/LLM/main/L0_MergeRequest_PR pipeline #33339 completed with status: 'SUCCESS'

CI Report

Link to invocation

@bo-nv bo-nv enabled auto-merge (squash) April 13, 2026 02:03
@bo-nv bo-nv merged commit 5092f82 into NVIDIA:main Apr 13, 2026
6 of 7 checks passed
bmarimuthu-nv pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Apr 16, 2026
7 participants