[TRTLLM-11587][feat] Enable chunked prefix for Nemotron models on sm120 #12414

Merged
pamelap-nvidia merged 1 commit into NVIDIA:main from pamelap-nvidia:nemotron_chunked_prefill
Mar 21, 2026

Conversation

@pamelap-nvidia
Collaborator

@pamelap-nvidia pamelap-nvidia commented Mar 20, 2026

Summary by CodeRabbit

  • New Features
  • Expanded flash attention kernel configurations with bfloat16 output support across additional head sizes, improving performance on SM120 GPUs.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: Pamela <179191831+pamelap-nvidia@users.noreply.github.com>
@pamelap-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #39749 [ run ] triggered by Bot. Commit: 47f2448 Link to invocation

@pamelap-nvidia pamelap-nvidia marked this pull request as ready for review March 20, 2026 16:54
@coderabbitai
Contributor

coderabbitai bot commented Mar 20, 2026

📝 Walkthrough

Walkthrough

Kernel enumeration for SM120 flash attention is expanded to generate additional configurations that produce bf16 output when the input dtype is e4m3_fp32. The supported head_sizes for these MLA kernels increase from [192, 576] to [128, 192, 576], with the comment updated accordingly.

Changes

  • SM120 Kernel Enumeration — cpp/kernels/fmha_v2/setup.py: Expanded the head_sizes list for bf16-output MLA flash attention kernels with e4m3_fp32 input from [192, 576] to [128, 192, 576]. Updated the accompanying comment to reflect the new bf16 output kernel generation scope.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~5 minutes

🚥 Pre-merge checks | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)

  • Description check — ⚠️ Warning: The PR description is completely unfilled; it contains only the template with no actual description, test coverage, or implementation details provided by the author. Resolution: Add a detailed description explaining the changes, the reason for enabling the SM120 kernel configurations, relevant test coverage, and which Nemotron models are affected.
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: Write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check — ❓ Inconclusive: The PR title describes enabling chunked prefix for Nemotron models on sm120, but the actual changes only expand kernel enumeration head sizes in setup.py without mentioning Nemotron or chunked prefix functionality. Resolution: Clarify whether the title accurately reflects the kernel enumeration changes or if the PR scope differs from the code changes.

Comment @coderabbitai help to get the list of available commands and usage tips.



@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/kernels/fmha_v2/setup.py (1)

1-2: ⚠️ Potential issue | 🟠 Major

Update the SPDX copyright year for this modified file.

Line 1 still ends at 2025, but this file is modified in this PR and should be updated to include 2026.

As per coding guidelines: "Add NVIDIA copyright header on ALL new files, and update year on modified files".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/kernels/fmha_v2/setup.py` around lines 1 - 2, Update the SPDX copyright
header in cpp/kernels/fmha_v2/setup.py by changing the year range on the first
line from "2020-2025" to "2020-2026" so the modified file reflects the current
year per the project's header policy; ensure the SPDX-License-Identifier line
remains unchanged.


📥 Commits

Reviewing files that changed from the base of the PR and between 68001ce and 47f2448.

📒 Files selected for processing (1)
  • cpp/kernels/fmha_v2/setup.py

@pamelap-nvidia
Collaborator Author

/bot help

@github-actions

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@tensorrt-cicd
Collaborator

PR_Github #39749 [ run ] completed with state SUCCESS. Commit: 47f2448
/LLM/main/L0_MergeRequest_PR pipeline #30944 completed with status: 'SUCCESS'

CI Report

Link to invocation

@pamelap-nvidia
Collaborator Author

/bot run --stage-list "Build-Docker-Images"

@tensorrt-cicd
Collaborator

PR_Github #39767 [ run ] triggered by Bot. Commit: 47f2448 Link to invocation

@pamelap-nvidia
Collaborator Author

/bot reuse-pipeline

@tensorrt-cicd
Collaborator

PR_Github #39767 [ run ] completed with state SUCCESS. Commit: 47f2448
/LLM/main/L0_MergeRequest_PR pipeline #30962 (Partly Tested) completed with status: 'SUCCESS'

CI Report

Link to invocation

@pamelap-nvidia
Collaborator Author

/bot reuse-pipeline

@tensorrt-cicd
Collaborator

PR_Github #39787 [ reuse-pipeline ] triggered by Bot. Commit: 47f2448 Link to invocation

@pamelap-nvidia pamelap-nvidia enabled auto-merge (squash) March 21, 2026 05:10
@tensorrt-cicd
Collaborator

PR_Github #39787 [ reuse-pipeline ] completed with state SUCCESS. Commit: 47f2448
Reusing PR_Github #39767 (Partly Tested) for commit 47f2448

Link to invocation

@pamelap-nvidia pamelap-nvidia merged commit 45c1d93 into NVIDIA:main Mar 21, 2026
9 of 10 checks passed