
[https://nvbugs/6098442][fix] Add fix for IMA with TRTLLM-Gen GmemReductionWithSeparateKernel#13541

Merged
liji-nv merged 1 commit into NVIDIA:main from pengbowang-nv:dev-fix-trtllm-gen-gmem-reduction-with-seperate-kernel
Apr 30, 2026

Conversation

@pengbowang-nv
Collaborator

@pengbowang-nv pengbowang-nv commented Apr 28, 2026

Summary by CodeRabbit

Bug Fixes

  • Fixed a stability issue in Flash Multi-Head Attention (FMHA) kernels that could cause crashes when specific memory conditions occurred. The safety validation mechanism has been enhanced to cover additional operational mode variants, improving overall kernel reliability across different configurations.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>
@pengbowang-nv
Collaborator Author

/bot run --disable-fail-fast

@pengbowang-nv pengbowang-nv enabled auto-merge (squash) April 28, 2026 03:07
@coderabbitai
Contributor

coderabbitai Bot commented Apr 28, 2026

📝 Walkthrough


The hotfix that prevents crashes from null multiCtasKvScratchPtr or multiCtasKvCounterPtr pointers was extended to cover the GmemReductionWithSeparateKernel mode in addition to the existing GmemReduction mode. The safeguard was applied consistently in both kernel selection and execution code paths.

Changes

Cohort / File(s): FMHA Kernel Header — cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaKernels.h
Summary: Extended null-pointer safety checks to handle GmemReductionWithSeparateKernel mode alongside GmemReduction mode in both checkIfKernelExist and run functions.
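The extended safeguard can be sketched as follows. This is a minimal illustration, not the actual fmhaKernels.h code: the enum layout, RunnerParams, and applySafeguard are hypothetical names standing in for the real kernel-selection logic.

```cpp
#include <cstddef>

// Hypothetical enum mirroring TRTLLM-Gen's multi-CTA KV reduction modes.
enum class MultiCtasKvMode
{
    Disabled,
    GmemReduction,
    GmemReductionWithSeparateKernel
};

// Hypothetical subset of the runner parameters relevant to the fix.
struct RunnerParams
{
    void* multiCtasKvScratchPtr = nullptr;
    void* multiCtasKvCounterPtr = nullptr;
};

// Sketch of the safeguard: before the fix, only GmemReduction fell back to
// Disabled when the scratch/counter buffers were missing; the fix applies the
// same check to GmemReductionWithSeparateKernel, in both the kernel-existence
// check and the run path.
MultiCtasKvMode applySafeguard(MultiCtasKvMode mode, RunnerParams const& params)
{
    bool const usesGmemReduction = mode == MultiCtasKvMode::GmemReduction
        || mode == MultiCtasKvMode::GmemReductionWithSeparateKernel;
    if (usesGmemReduction
        && (params.multiCtasKvScratchPtr == nullptr || params.multiCtasKvCounterPtr == nullptr))
    {
        // Missing scratch buffers: fall back rather than launch a kernel that
        // would perform an illegal memory access (IMA).
        return MultiCtasKvMode::Disabled;
    }
    return mode;
}
```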

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Description check — ⚠️ Warning. The PR description contains only the template structure with empty sections and no actual implementation details, rationale, test coverage, or context. Resolution: fill in the Description section explaining the issue and solution, and the Test Coverage section listing relevant tests that safeguard the changes.
✅ Passed checks (4 passed)
Title check — ✅ Passed. The title clearly identifies the fix (IMA with TRTLLM-Gen GmemReductionWithSeparateKernel) and includes proper ticket format [https://nvbugs/6098442][fix].
Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.
Linked Issues check — ✅ Passed. Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check — ✅ Passed. Check skipped because no linked issues were found for this pull request.



Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaKernels.h (1)

312-320: ⚠️ Potential issue | 🔴 Critical

Null-pointer undefined behavior still occurs in setFmhaData() despite mode guard.

The guard at lines 307–315 disables MultiCtasKvMode when pointers are null, but setFmhaData() (called at line 338) unconditionally performs pointer arithmetic on potentially null params.multiCtasKvScratchPtr at lines 712–715:

  • Line 712: reinterpret_cast<float2*>(params.multiCtasKvScratchPtr) on null pointer
  • Line 715: pointer arithmetic on the result

Both operations constitute undefined behavior under the C++ standard. The mode being disabled prevents kernel execution but does not prevent the UB in setFmhaData().

Conditionally initialize pointers to nullptr and only set them when GmemReduction modes are enabled with valid pointers, using TLLM_CHECK_WITH_INFO to enforce the invariant.
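The suggested fix can be sketched as follows. This is an illustrative sketch under the reviewer's assumptions, not the real setFmhaData() code: selectPartialBuffer, the float2 stand-in, and the plain assert (standing in for TLLM_CHECK_WITH_INFO) are all hypothetical.

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for the CUDA built-in float2 vector type.
struct float2
{
    float x, y;
};

// Hypothetical enum mirroring TRTLLM-Gen's multi-CTA KV reduction modes.
enum class MultiCtasKvMode
{
    Disabled,
    GmemReduction,
    GmemReductionWithSeparateKernel
};

// Sketch of the conditional initialization the review suggests: keep the
// partial-sum pointer null unless a GMEM-reduction mode is active, and only
// then cast and offset into the scratch buffer. Pointer arithmetic on a null
// pointer is undefined behavior, so it must be guarded, not just the launch.
float2* selectPartialBuffer(MultiCtasKvMode mode, void* scratchPtr, std::size_t offsetElems)
{
    float2* partial = nullptr;
    bool const reductionActive = mode == MultiCtasKvMode::GmemReduction
        || mode == MultiCtasKvMode::GmemReductionWithSeparateKernel;
    if (reductionActive)
    {
        // Enforce the invariant (TLLM_CHECK_WITH_INFO in the real codebase).
        assert(scratchPtr != nullptr && "scratch buffer required for GMEM reduction");
        partial = reinterpret_cast<float2*>(scratchPtr) + offsetElems;
    }
    return partial;
}
```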

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaKernels.h` around lines
312 - 320, The code currently casts and does pointer arithmetic on
params.multiCtasKvScratchPtr inside setFmhaData(), causing undefined behavior
when that pointer is null even if options.mMultiCtasKvMode was later forced to
Disabled; to fix, change setFmhaData() so it initializes any local multi-cta
pointer variables to nullptr and only perform reinterpret_cast<float2*>(...) and
subsequent pointer arithmetic when options.mMultiCtasKvMode is one of
GmemReduction or GmemReductionWithSeparateKernel AND
params.multiCtasKvScratchPtr (and multiCtasKvCounterPtr if used) are non-null,
and add TLLM_CHECK_WITH_INFO guards to assert those invariants (referencing
options.mMultiCtasKvMode, params.multiCtasKvScratchPtr,
params.multiCtasKvCounterPtr, and setFmhaData) so the function never performs
pointer arithmetic on a null pointer.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: fd7863a8-3222-4f98-a302-8b7c88f4c6df

📥 Commits

Reviewing files that changed from the base of the PR and between 3a790bd and 6f13f2b.

📒 Files selected for processing (1)
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaKernels.h

@tensorrt-cicd
Collaborator

PR_Github #45838 [ run ] triggered by Bot. Commit: 6f13f2b Link to invocation

@pengbowang-nv pengbowang-nv disabled auto-merge April 28, 2026 04:51
@tensorrt-cicd
Collaborator

PR_Github #45838 [ run ] completed with state ABORTED. Commit: 6f13f2b

Link to invocation

@pengbowang-nv
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #46272 [ run ] triggered by Bot. Commit: 6f13f2b Link to invocation

@liji-nv liji-nv enabled auto-merge (squash) April 30, 2026 08:18
@tensorrt-cicd
Collaborator

PR_Github #46272 [ run ] completed with state SUCCESS. Commit: 6f13f2b
/LLM/main/L0_MergeRequest_PR pipeline #36378 completed with status: 'SUCCESS'

CI Report

Link to invocation

@liji-nv liji-nv merged commit e09f5ef into NVIDIA:main Apr 30, 2026
9 checks passed
evezhier pushed a commit to evezhier/TensorRT-LLM that referenced this pull request May 4, 2026
…uctionWithSeparateKernel (NVIDIA#13541)

Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>
