
Conversation

@lfr-0531 (Collaborator) commented Nov 10, 2025

Summary by CodeRabbit

  • New Features
    • Added support for multi-draft token generation in speculative decoding workflows
    • Optimized attention computation for efficient handling of multiple draft tokens during inference
    • Maintains full backward compatibility with existing single-draft configurations

Description

In the DeepSeek-v3.2 model, the deep_gemm.fp8_paged_mqa_logits kernel cannot support next_n > 2, i.e., MTP > 1. In this PR, we flatten q_decode along the batch_size and seq_len dimensions for the MTP > 1 cases, so the fp8_paged_mqa_logits kernel can still be used to compute the MQA logits. For causality, we apply a mask to the logits before the top-k selection.
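To make the flattening concrete, here is a minimal PyTorch sketch of the idea. The tensor shapes, the causal-horizon formula, and the top-k size are illustrative assumptions for this sketch, not the actual dsa.py implementation, which calls the deep_gemm kernel rather than plain PyTorch:

```python
import torch

# Assumed decode-time query layout: [batch, next_n, heads, head_dim].
batch, next_n, heads, head_dim = 4, 4, 16, 128
q_decode = torch.randn(batch, next_n, heads, head_dim)

# Flatten batch_size and seq_len so each draft position looks like an
# independent single-token decode request the kernel can handle.
q_flat = q_decode.reshape(batch * next_n, 1, heads, head_dim)

# Suppose the MQA kernel then returns logits of shape
# [batch * next_n, max_kv_len] (stand-in values here).
kv_len, max_kv_len = 1024, 1024
logits = torch.randn(batch * next_n, max_kv_len)

# Causal mask before top-k: draft position i (0-based) may only attend
# to the first kv_len - next_n + 1 + i cached tokens of its request.
draft_idx = torch.arange(next_n).repeat(batch)        # draft index per row
valid_len = kv_len - next_n + 1 + draft_idx           # causal horizon per row
cols = torch.arange(max_kv_len)
masked = cols.unsqueeze(0) >= valid_len.unsqueeze(1)  # True where out of range
logits = logits.masked_fill(masked, float("-inf"))

topk_indices = logits.topk(k=256, dim=-1).indices     # top-k on masked logits
```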

Accuracy with the NVFP4 model and MTP=3:

[TRT-LLM] [I] lm-eval gpqa_diamond_cot_zeroshot_aa results (scores normalized to range 0~100):
|           Tasks            |Version|   Filter   |n-shot|  Metric   |   | Value |   |Stderr|
|----------------------------|------:|------------|-----:|-----------|---|------:|---|-----:|
|gpqa_diamond_cot_zeroshot_aa|      1|strict-match|     0|exact_match|↑  |81.8182|±  | 2.748|

[TRT-LLM] [I] lm-eval gpqa_diamond_cot_zeroshot_aa average accuracy: 81.82

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
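
For example, a valid invocation that combines several of the options above (reusing the example stage name from this help text) would be:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast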

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@lfr-0531 requested review from chang-l and hlu1 on November 10, 2025 15:08
@lfr-0531 requested a review from a team as a code owner on November 10, 2025 15:08
@coderabbitai bot (Contributor) commented Nov 10, 2025

📝 Walkthrough

Adds multi-draft token expansion support to the DSA attention backend. Introduces expanded KV and block-table buffers that activate when max_draft_tokens > 1. Modifies the decode-time logic to conditionally use the expanded buffers based on the next_n value. Adds a max_draft_tokens field to propagate the parameter through the attention metadata.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **DSA Sparse Attention Expansion**<br>`tensorrt_llm/_torch/attention_backend/sparse/dsa.py` | Introduces expanded KV lens and block table buffers (`kv_lens_expanded_cuda`, `kv_lens_expanded_host`, `block_table_expanded`, `scheduler_metadata_buffer_expanded`). Extends the prepare flow to populate these buffers when `max_draft_tokens > 1`. Branches the decode-time path: if `next_n <= 2`, use the original context and metadata; if `next_n > 2`, use the expanded buffers. Expands scheduler metadata preparation with optional expanded-metadata generation. |
| **Attention Metadata Configuration**<br>`tensorrt_llm/_torch/attention_backend/trtllm.py` | Adds a public field `max_draft_tokens: int = 0` to the `TrtllmAttentionMetadata` class and propagates the value into runtime behavior via the `update_spec_dec_param` method. |
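
As a rough illustration of the prepare-time expansion described in the table above, the sketch below shows one plausible way the expanded buffers could be populated. The helper function and its indexing are assumptions inferred from the walkthrough, not the actual dsa.py code; causality is assumed to be enforced later by masking the logits, as the PR description states:

```python
import torch

def expand_for_drafts(kv_lens: torch.Tensor, block_table: torch.Tensor,
                      next_n: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Hypothetical expansion: kv_lens is [batch], block_table is
    [batch, max_blocks]; each request contributes next_n flattened rows."""
    # Every flattened row inherits its parent request's full KV length.
    kv_lens_expanded = kv_lens.repeat_interleave(next_n)
    # Every flattened row also reuses the parent request's block offsets.
    block_table_expanded = block_table.repeat_interleave(next_n, dim=0)
    return kv_lens_expanded, block_table_expanded

kv_lens = torch.tensor([100, 37], dtype=torch.int32)
block_table = torch.arange(8, dtype=torch.int32).reshape(2, 4)
lens_x, bt_x = expand_for_drafts(kv_lens, block_table, next_n=4)
assert lens_x.shape == (8,) and bt_x.shape == (8, 4)
```

Each flattened row then consumes one entry of the expanded KV lens buffer and one row of the expanded block table, matching the next_n > 2 branch in the sequence diagram below.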

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Prepare as Prepare Flow
    participant Buffer as Buffer Management
    participant Decode as Decode-Time Path
    participant Metadata as Scheduler Metadata

    Prepare->>Buffer: Check max_draft_tokens
    
    alt max_draft_tokens > 1
        Prepare->>Buffer: Allocate expanded buffers<br/>(kv_lens_expanded, block_table_expanded)
        Prepare->>Metadata: Generate expanded scheduler metadata
        Prepare->>Buffer: Populate expanded buffers with<br/>KV lens expansion & block-offset expansion
    else max_draft_tokens <= 1
        Prepare->>Buffer: Use original buffers
    end
    
    Prepare->>Decode: Pass configuration
    
    Decode->>Decode: Check next_n value
    
    alt next_n <= 2
        Decode->>Buffer: Use original context, generation lens,<br/>and block table with original metadata
    else next_n > 2
        Decode->>Buffer: Use expanded buffers<br/>(kv_lens_expanded, block_table_expanded,<br/>scheduler_metadata_expanded)
    end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • dsa.py conditional logic: Verify correctness of the two decode-time paths (next_n <= 2 vs next_n > 2) and ensure expanded buffer population aligns with block-offset expansion semantics
  • Buffer lifecycle: Confirm expanded buffers are properly allocated, initialized, and cleaned up only when max_draft_tokens > 1
  • Metadata propagation: Validate that max_draft_tokens is correctly threaded through TrtllmAttentionMetadata and reaches DSA decision points
  • Integration points: Ensure the conditional branching in decode-time path doesn't introduce edge cases when transitioning between single and multi-draft scenarios

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ⚠️ Warning | PR description is incomplete: it is missing the title format, comprehensive description details, and explicit test coverage information. | Add a properly formatted [None][feat] title, expand the description with a clear problem/solution explanation, and explicitly list the test cases that validate the MTP>1 changes. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title '[None][feat] Add MTP>1 support for DS-v3.2' clearly describes the main change: adding multi-draft token (MTP) support greater than 1 for DeepSeek-v3.2, which aligns with the changeset that extends DSA attention and TrtllmAttentionMetadata to support multiple draft tokens. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate; skipping the docstring coverage check. |

@lfr-0531 force-pushed the user/fanrongl/mtp3_support_for_ds32 branch from 9d7e52b to b7551b0 on November 10, 2025 15:13
@lfr-0531 (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #24029 [ run ] triggered by Bot. Commit: 10b5ded

@tensorrt-cicd (Collaborator):

PR_Github #24029 [ run ] completed with state SUCCESS. Commit: 10b5ded
/LLM/main/L0_MergeRequest_PR pipeline #18103 completed with status: 'FAILURE'

@lfr-0531 (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #24108 [ run ] triggered by Bot. Commit: 7611b09

@tensorrt-cicd (Collaborator):

PR_Github #24108 [ run ] completed with state FAILURE. Commit: 7611b09
/LLM/main/L0_MergeRequest_PR pipeline #18178 completed with status: 'FAILURE'

@lfr-0531 (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #24127 [ run ] triggered by Bot. Commit: af5528f

@tensorrt-cicd (Collaborator):

PR_Github #24127 [ run ] completed with state FAILURE. Commit: af5528f
/LLM/main/L0_MergeRequest_PR pipeline #18189 completed with status: 'FAILURE'

@lfr-0531 force-pushed the user/fanrongl/mtp3_support_for_ds32 branch from af5528f to 2ba510b on November 11, 2025 09:10
@lfr-0531 (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #24154 [ run ] triggered by Bot. Commit: 2ba510b

@tensorrt-cicd (Collaborator):

PR_Github #24154 [ run ] completed with state SUCCESS. Commit: 2ba510b
/LLM/main/L0_MergeRequest_PR pipeline #18211 completed with status: 'FAILURE'

@lfr-0531 (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #24183 [ run ] triggered by Bot. Commit: 9e6c4b4

@tensorrt-cicd (Collaborator):

PR_Github #24183 [ run ] completed with state SUCCESS. Commit: 9e6c4b4
/LLM/main/L0_MergeRequest_PR pipeline #18234 completed with status: 'FAILURE'

@lfr-0531 (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #24199 [ run ] triggered by Bot. Commit: 9e6c4b4

@tensorrt-cicd (Collaborator):

PR_Github #24199 [ run ] completed with state SUCCESS. Commit: 9e6c4b4
/LLM/main/L0_MergeRequest_PR pipeline #18247 completed with status: 'SUCCESS'

@lfr-0531 requested a review from yuxianq on November 12, 2025 02:38
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
@lfr-0531 force-pushed the user/fanrongl/mtp3_support_for_ds32 branch from 9e6c4b4 to 311e743 on November 12, 2025 05:45
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
@lfr-0531 (Collaborator, Author):

/bot run

@lfr-0531 enabled auto-merge (squash) on November 12, 2025 05:47
@tensorrt-cicd (Collaborator):

PR_Github #24251 [ run ] triggered by Bot. Commit: 9b56f8a

@tensorrt-cicd (Collaborator):

PR_Github #24251 [ run ] completed with state SUCCESS. Commit: 9b56f8a
/LLM/main/L0_MergeRequest_PR pipeline #18296 completed with status: 'FAILURE'

@lfr-0531 (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #24306 [ run ] triggered by Bot. Commit: 9b56f8a

@tensorrt-cicd (Collaborator):

PR_Github #24306 [ run ] completed with state SUCCESS. Commit: 9b56f8a
/LLM/main/L0_MergeRequest_PR pipeline #18337 completed with status: 'SUCCESS'

@lfr-0531 merged commit 780d4f9 into NVIDIA:main on Nov 12, 2025
5 checks passed
