
[https://nvbugs/6032056][fix] Clamp block indices to prevent OOB in DSA with MTP#12657

Merged
sunnyqgg merged 1 commit into NVIDIA:main from sunnyqgg:bug_6032056
Apr 2, 2026

Conversation

@sunnyqgg
Collaborator

@sunnyqgg sunnyqgg commented Apr 1, 2026

Summary

  • Fix out-of-bounds block index access in _compute_slot_mappings when using DSA (DeepSeek Sparse Attention) with MTP (Multi-Token Prediction) during CUDA graph capture/replay.
  • Stale token-to-sequence mappings during CUDA graph padding can produce block indices that exceed block_offsets.shape[1], causing illegal memory access. This fix clamps the indices on GPU to stay within bounds.

Changes

  • tensorrt_llm/_torch/attention_backend/sparse/dsa.py: Move max_blocks computation before the branch. On CUDA tensors, clamp block_indices_in_seq to [0, max_blocks-1] instead of skipping the check entirely. CPU path retains the assertion.

Test plan

  • Run MTP + DSA throughput benchmark with GLM-5-NVFP4 on 4 GPUs (TP=4, EP=4)
  • Verify no OOB errors during CUDA graph capture/replay
  • CI pre-merge tests pass

Summary by CodeRabbit

Bug Fixes

  • Improved sparse attention operation handling on CUDA devices to prevent out-of-bounds errors during graph execution.

@sunnyqgg sunnyqgg requested a review from a team as a code owner April 1, 2026 11:31
@sunnyqgg sunnyqgg requested a review from pengbowang-nv April 1, 2026 11:31
@sunnyqgg
Collaborator Author

sunnyqgg commented Apr 1, 2026

/bot run

@coderabbitai
Contributor

coderabbitai bot commented Apr 1, 2026

📝 Walkthrough

The change modifies boundary checking logic in _compute_slot_mappings to handle CUDA tensor indexing during graph capture. For CUDA tensors, the code now clamps indices to valid bounds; for non-CUDA tensors, it retains explicit assertions.

Changes

  • Sparse DSA CUDA graph safe clamping — tensorrt_llm/_torch/attention_backend/sparse/dsa.py: Modified _compute_slot_mappings to unconditionally compute max_blocks and apply index clamping for CUDA tensors, while preserving explicit bounds assertions for non-CUDA tensors. Addresses out-of-bounds indexing caused by stale token→sequence mappings during CUDA graph capture and replay.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 3 passed
  • Title check — ✅ Passed: The title clearly identifies the specific issue (OOB in DSA with MTP), the fix approach (clamping block indices), and includes the NVBugs ticket identifier.
  • Docstring coverage — ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.
  • Description check — ✅ Passed: The PR description clearly explains the issue, solution, and test plan with sufficient detail and follows the repository's guidelines.


@tensorrt-cicd
Collaborator

PR_Github #41192 [ run ] triggered by Bot. Commit: a05184e Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41192 [ run ] completed with state SUCCESS. Commit: a05184e
/LLM/main/L0_MergeRequest_PR pipeline #32155 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@sunnyqgg
Collaborator Author

sunnyqgg commented Apr 1, 2026

/bot run

@sunnyqgg sunnyqgg requested a review from NVShreyas April 1, 2026 14:25
@tensorrt-cicd
Collaborator

PR_Github #41203 [ run ] triggered by Bot. Commit: a05184e Link to invocation


@NVShreyas NVShreyas left a comment


LGTM

@tensorrt-cicd
Collaborator

PR_Github #41203 [ run ] completed with state SUCCESS. Commit: a05184e
/LLM/main/L0_MergeRequest_PR pipeline #32164 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@sunnyqgg
Collaborator Author

sunnyqgg commented Apr 2, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #41311 [ run ] triggered by Bot. Commit: a05184e Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41311 [ run ] completed with state SUCCESS. Commit: a05184e
/LLM/main/L0_MergeRequest_PR pipeline #32264 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

…SA with MTP during CUDA graph capture

Signed-off-by: qgai <qgai@nvidia.com>
@longlee0622
Collaborator

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #41334 [ run ] triggered by Bot. Commit: 398d76f Link to invocation


@pengbowang-nv pengbowang-nv left a comment


LGTM.

@tensorrt-cicd
Collaborator

PR_Github #41334 [ run ] completed with state SUCCESS. Commit: 398d76f
/LLM/main/L0_MergeRequest_PR pipeline #32282 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@longlee0622
Collaborator

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #41423 [ run ] triggered by Bot. Commit: 398d76f Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41423 [ run ] completed with state SUCCESS. Commit: 398d76f
/LLM/main/L0_MergeRequest_PR pipeline #32356 completed with status: 'SUCCESS'

CI Report

Link to invocation

@sunnyqgg sunnyqgg merged commit c60615a into NVIDIA:main Apr 2, 2026
5 checks passed
karen-sy pushed a commit to karen-sy/TensorRT-LLM that referenced this pull request Apr 7, 2026


5 participants