
[None][revert] Revert "[TRTLLM-11119][feat] Blackwell SageAttention, Integrate into … #12679

Merged
yuxianq merged 1 commit into NVIDIA:main from yunruis:user/yunruis/revert_visual_gen
Apr 2, 2026

Conversation

@yunruis
Contributor

@yunruis yunruis commented Apr 2, 2026

…AttentionOp API (#11718)"

This reverts commit 1b66e96.

Summary by CodeRabbit

Release Notes

  • Removed Features

    • Removed SageAttention support, including per-block INT8 quantization for attention operations and associated configuration options.
    • Eliminated --enable_sage_attention flag from example scripts and simplified attention backend configuration.
    • Removed SageAttention-specific kernel variants and related parameters from public APIs.
  • Simplified APIs

    • Attention backend constructors and factory functions now accept fewer parameters.
    • Streamlined configuration to remove attention metadata state management.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

…AttentionOp API (NVIDIA#11718)"

This reverts commit 1b66e96.

Signed-off-by: yunruis <205571022+yunruis@users.noreply.github.com>
@yunruis yunruis requested review from a team as code owners April 2, 2026 03:20
@yunruis yunruis requested review from QiJune and chang-l April 2, 2026 03:20
@yunruis yunruis changed the title [None][Feat] Revert "[TRTLLM-11119][feat] Blackwell SageAttention, Integrate into … [None][revert] Revert "[TRTLLM-11119][feat] Blackwell SageAttention, Integrate into … Apr 2, 2026
@yunruis
Contributor Author

yunruis commented Apr 2, 2026

/bot run --disable-fail-fast

@coderabbitai
Contributor

coderabbitai bot commented Apr 2, 2026

📝 Walkthrough

Walkthrough

Comprehensive removal of SageAttention (per-block quantized attention) support from TensorRT-LLM, including deletion of SageQuant kernels, simplification of FMHA kernel metadata/loading logic, removal of Sage parameters from attention operators and Python APIs, and elimination of VisualGen-specific kernel infrastructure.

Changes

Cohort / File(s) Summary
SageQuant Implementation Removal
cpp/tensorrt_llm/common/sageQuant.cu, cpp/tensorrt_llm/common/sageQuant.h
Removed entire SageQuant quantization implementation including CUDA kernels for per-token-block Q/K quantization and per-channel V quantization, host-side dispatch functions, and the SageQuantParams struct.
Attention Operator Cleanup
cpp/tensorrt_llm/common/attentionOp.cpp, cpp/tensorrt_llm/common/attentionOp.h
Removed SageAttention-specific parameters (sage_attn_sfs_* pointers, mSageAttnNumEltsPerBlk*, mSageAttnQkInt8) from EnqueueParams and AttentionOp class; simplified context FMHA runner setup; reduced workspace buffer count from 26 to 23.
FMHA Kernel Infrastructure
cpp/tensorrt_llm/kernels/fmhaDispatcher.cpp, cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaKernels.h, cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/kernelParamsVisualGen.h, cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention/fused_multihead_attention_common.h
Removed VisualGen kernel metadata integration, simplified kernel loading to standard metadata only (dropping SageAttention-specific field matching), removed KernelMetaVx type and related dataTypeQkReinterpret field, eliminated KernelParamsVisualGen struct and TMA descriptor handling for Sage variants.
FMHA Runner Simplification
cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunner.cpp, cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunner.h, cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunnerParams.h
Simplified TllmGenFmhaRunner constructor to accept only (dtypeQ, dtypeKv, dtypeOut), removing SageAttention block-size and int8 configuration parameters; removed sageAttnSfs*Ptr and mLogNumEltsPerSageAttnBlk* fields from TllmGenFmhaRunnerParams.
Prebuilt Kernel Artifacts
cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/*
Deleted 85 Git LFS pointer files for SageAttention-specific kernel variants (QkInt8, SageQ/K/V configurations across SM100a and SM103a, static/persistent context).
Kernel Metadata Header
cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/kernelMetaInfoVisualGen.h
Removed entire VisualGen kernel metadata header including version macro, extern cubin declarations for SM100/103, and static lookup table sTllmGenFmhaKernelMetaInfosVx with Sage-specific kernel configurations.
Python Extension Bindings
cpp/tensorrt_llm/nanobind/thop/bindings.cpp, cpp/tensorrt_llm/thop/attentionOp.cpp, cpp/tensorrt_llm/thop/attentionOp.h
Removed sage_attn_num_elts_per_blk_{q,k,v} and sage_attn_qk_int8 parameters from nanobind/torch extension attention function signatures and removed corresponding validation/initialization logic.
TensorRT-LLM Attention Backend
tensorrt_llm/_torch/attention_backend/trtllm.py
Removed SageAttention parameters (sage_attn_num_elts_per_blk_*, sage_attn_qk_int8) from TrtllmAttentionWrapper.run() and TrtllmAttention.forward() signatures and corresponding argument passing.
Visual Gen Configuration & Backends
tensorrt_llm/_torch/visual_gen/config.py, tensorrt_llm/_torch/visual_gen/attention_backend/trtllm.py, tensorrt_llm/_torch/visual_gen/attention_backend/utils.py
Removed SageAttentionConfig class, deleted sage_attention_config field from AttentionConfig, removed attention_metadata_state field from DiffusionModelConfig, eliminated shared metadata state parameter from attention backend initialization, simplified metadata allocation to internal tracking only.
Visual Gen Model Integration
tensorrt_llm/_torch/visual_gen/models/ltx2/transformer_ltx2.py, tensorrt_llm/_torch/visual_gen/modules/attention.py
Removed attention_config and attention_metadata_state argument passing to create_attention() calls during backend initialization.
Example Scripts & Documentation
examples/visual_gen/README.md, examples/visual_gen/visual_gen_wan_i2v.py, examples/visual_gen/visual_gen_wan_t2v.py
Removed --enable_sage_attention CLI option from WAN example scripts and deleted SageAttention documentation block from README.
Tests
tests/unittest/_torch/visual_gen/test_attention_trtllm_sage.py, tests/unittest/_torch/visual_gen/test_attention_integration.py, tests/unittest/_torch/visual_gen/test_*.py, tests/integration/test_lists/test-db/l0_b200.yml
Deleted entire SageAttention test module; removed test_sage_attention_self_attention test and all SageAttention parameter grids; removed create_attention_metadata_state() usage from multiple test files; removed SageAttention test from l0_b200 test list.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

VisualGen

Suggested reviewers

  • laikhtewari
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check (⚠️ Warning): The PR description is minimal, containing only a revert statement without explaining the reason, impact, or any test coverage details required by the template. Resolution: add a description explaining why the SageAttention integration is being reverted and any relevant test coverage that validates the revert.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 52.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title is a revert statement that clearly indicates the PR reverts a previous SageAttention feature commit, making it specific and understandable.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunner.h (1)

1-15: ⚠️ Potential issue | 🟡 Minor

Update copyright year to include 2025.

The file has been meaningfully modified (constructor signature simplified, SageAttention members removed) but the copyright year still shows 2020-2023. As per coding guidelines, the copyright header should include the year of its latest meaningful modification.

Suggested fix
 /*
- * Copyright (c) 2020-2023, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2020-2025, NVIDIA CORPORATION.  All rights reserved.
  *
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunner.h` around lines 1 -
15, Update the top-of-file copyright header in fmhaRunner.h to include 2025
(e.g., change "2020-2023" to "2020-2025") to reflect the latest meaningful
modifications; locate the header comment block at the top of the file and edit
the year range accordingly so it matches current guidelines.
🧹 Nitpick comments (4)
tests/unittest/_torch/visual_gen/test_attention_perf.py (1)

1-2: Update copyright year to include 2026.

The copyright header shows only 2025, but this file is being modified in 2026. Per coding guidelines, the copyright year should reflect the latest meaningful modification.

Proposed fix
-# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unittest/_torch/visual_gen/test_attention_perf.py` around lines 1 - 2,
Update the copyright header at the top of this file to include 2026 by changing
the existing year token "2025" to "2025-2026" (or another project-standard
format that includes 2026); locate the header comment lines at the top of
tests/unittest/_torch/visual_gen/test_attention_perf.py and replace the line
containing "Copyright (c) 2025" so the file reflects the latest modification
year.
cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunner.cpp (1)

1-15: Update copyright year to include 2026.

The copyright header shows 2020-2023, but this file is being modified in 2026.

Proposed fix
 /*
- * Copyright (c) 2020-2023, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2020-2026, NVIDIA CORPORATION.  All rights reserved.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunner.cpp` around lines 1
- 15, Update the copyright header block at the top of fmhaRunner.cpp to include
2026 (change "2020-2023" to "2020-2026") so the file's copyright range reflects
the current modification year; edit the leading comment block containing the
license text to update the year span accordingly.
cpp/tensorrt_llm/kernels/fmhaDispatcher.cpp (1)

2-2: Update copyright year to include 2026.

The copyright header shows 2020-2024, but this file is being modified in 2026.

Proposed fix
- * Copyright (c) 2020-2024, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2020-2026, NVIDIA CORPORATION.  All rights reserved.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/tensorrt_llm/kernels/fmhaDispatcher.cpp` at line 2, The file header
currently reads "2020-2024" and must be updated to include 2026; edit the
top-of-file copyright line in fmhaDispatcher.cpp by replacing the string
"2020-2024" with "2020-2026" (search for that exact substring in the header) so
the copyright range reflects the 2026 modification.
tensorrt_llm/_torch/visual_gen/attention_backend/trtllm.py (1)

80-88: Track the last prepared batch size separately.

_cached_seq_lens.shape[0] is buffer capacity, not the last prepared batch size. After one larger batch, every later smaller batch still trips the shape check and reruns BaseTrtllmAttentionMetadata.prepare() even when seq_lens are unchanged.

♻️ Suggested fix
         self._cached_seq_lens: Optional[torch.Tensor] = None
         self._prepared = False
+        self._prepared_batch_size = 0
@@
     def _needs_prepare(self, batch_size: int, seq_lens: torch.Tensor) -> bool:
         """Check if we need to call prepare() (seq_lens changed)."""
         if not self._prepared:
             return True
         if self._cached_seq_lens is None:
             return True
-        if self._cached_seq_lens.shape[0] != batch_size:
+        if self._prepared_batch_size != batch_size:
             return True
         return not torch.equal(self._cached_seq_lens[:batch_size], seq_lens)
@@
             else:
                 self._cached_seq_lens[:batch_size].copy_(seq_lens_tensor)
             self._prepared = True
+            self._prepared_batch_size = batch_size

Also applies to: 139-143
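As a standalone illustration of the suggested fix, the pattern of tracking the last prepared batch size separately from the cached buffer's capacity can be sketched as follows (hypothetical class and attribute names, plain Python lists standing in for the seq_lens tensors; the real code lives in the visual_gen trtllm attention backend):

```python
class MetadataCache:
    """Sketch: cache seq_lens and re-run prepare() only when the batch
    actually changed. The cached buffer may be larger than the last batch
    (capacity grows to the max batch size seen), so the batch-size check
    must use a separately tracked value, not the buffer length."""

    def __init__(self):
        self._cached_seq_lens = None        # capacity: max batch size seen
        self._prepared = False
        self._last_prepared_batch_size = 0  # batch size of last prepare()

    def _needs_prepare(self, batch_size, seq_lens):
        if not self._prepared or self._cached_seq_lens is None:
            return True
        # Key point: compare against the last prepared batch size, not
        # len(self._cached_seq_lens) (which is just buffer capacity).
        if self._last_prepared_batch_size != batch_size:
            return True
        return self._cached_seq_lens[:batch_size] != list(seq_lens)

    def prepare(self, batch_size, seq_lens):
        if self._cached_seq_lens is None or len(self._cached_seq_lens) < batch_size:
            self._cached_seq_lens = list(seq_lens)   # grow the buffer
        else:
            self._cached_seq_lens[:batch_size] = list(seq_lens)
        self._prepared = True
        self._last_prepared_batch_size = batch_size  # record what was prepared
```

With this, a small batch following a larger one no longer trips the check when its seq_lens are unchanged, which is exactly the spurious re-prepare the comment describes.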

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tensorrt_llm/_torch/visual_gen/attention_backend/trtllm.py` around lines 80 -
88, The _needs_prepare method incorrectly uses self._cached_seq_lens.shape[0]
(buffer capacity) to detect a changed batch size; add and use a separate
attribute (e.g. self._last_prepared_batch_size) to record the batch size used by
the last successful prepare(), and replace checks against
_cached_seq_lens.shape[0] with this new attribute in _needs_prepare (and the
analogous check around lines 139-143). Ensure prepare() updates
self._last_prepared_batch_size when it completes so subsequent calls correctly
detect whether a re-prepare is needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaKernels.h`:
- Line 2: Update the copyright year range in the header of fmhaKernels.h to
include 2026 (e.g., change "2020-2025" to "2020-2026") so the file's NVIDIA
copyright header reflects the modification year.

In `@tensorrt_llm/_torch/visual_gen/attention_backend/trtllm.py`:
- Line 1: The SPDX header at the top (the SPDX-FileCopyrightText line) lists the
copyright end year as 2025; update that year range to include 2026 (e.g., change
"2025" to "2025-2026" or the appropriate range) so the header reflects the file
modification year.
- Around line 244-255: The cross-attention fallback currently calls
self._concat_qkv(...) even when seq_len != kv_seq_len, causing mismatched row
counts and a failed torch.cat; guard that path by only using qkv concatenation
when k and v are None or when kv_seq_len == seq_len: if k is None and v is None
keep the existing flatten-to-(batch_size*seq_len) behavior, else if kv_seq_len
== seq_len use self._concat_qkv(q, k, v, batch_size, seq_len, kv_seq_len) and
pass the resulting qkv into super.forward(...), otherwise do not concatenate —
instead call super().forward with q=q, k=k, v=v (or otherwise choose an
appropriate unfused fallback) so the unequal-length cross-attention case avoids
torch.cat errors.
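The guarded fallback described in that prompt can be sketched as a small dispatch function (hypothetical helper names; plain Python lists stand in for the Q/K/V tensors and the fused/unfused forward paths):

```python
def dispatch_attention(q, k, v, seq_len, kv_seq_len,
                       concat_qkv, fused_forward, unfused_forward):
    """Sketch: only concatenate Q/K/V into one fused buffer when the row
    counts match; otherwise take an unfused path instead of letting the
    concatenation fail."""
    # Pre-fused qkv input (k and v absent): keep the existing behavior.
    if k is None and v is None:
        return fused_forward(q)
    # Equal lengths: Q, K, V have matching row counts, so row-wise
    # concatenation into a single qkv buffer is safe.
    if kv_seq_len == seq_len:
        return fused_forward(concat_qkv(q, k, v))
    # Unequal-length cross-attention: concatenation would mismatch row
    # counts (the torch.cat failure the comment flags), so fall back to
    # passing q, k, v separately.
    return unfused_forward(q, k, v)
```

The essential change is the middle branch's `kv_seq_len == seq_len` guard: without it, the unequal-length cross-attention case reaches the concatenation and fails.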


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 1294d1d5-fcbd-40b0-b5ea-dfa5a570b5f6

📥 Commits

Reviewing files that changed from the base of the PR and between eb091ac and a54b0a2.

📒 Files selected for processing (119)
  • cpp/tensorrt_llm/common/attentionOp.cpp
  • cpp/tensorrt_llm/common/attentionOp.h
  • cpp/tensorrt_llm/common/sageQuant.cu
  • cpp/tensorrt_llm/common/sageQuant.h
  • cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention/fused_multihead_attention_common.h
  • cpp/tensorrt_llm/kernels/fmhaDispatcher.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk256HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk256HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk256HV256SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk256HV256SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk256HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk256HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk256HV256SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk256HV256SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk256HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk256HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk256HV256SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk256HV256SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk256HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk256HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk256HV256SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk256HV256SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/kernelMetaInfoVisualGen.h
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaKernels.h
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunner.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunner.h
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaRunnerParams.h
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/kernelParamsVisualGen.h
  • cpp/tensorrt_llm/nanobind/thop/bindings.cpp
  • cpp/tensorrt_llm/thop/attentionOp.cpp
  • cpp/tensorrt_llm/thop/attentionOp.h
  • examples/visual_gen/README.md
  • examples/visual_gen/visual_gen_wan_i2v.py
  • examples/visual_gen/visual_gen_wan_t2v.py
  • tensorrt_llm/_torch/attention_backend/trtllm.py
  • tensorrt_llm/_torch/visual_gen/attention_backend/trtllm.py
  • tensorrt_llm/_torch/visual_gen/attention_backend/utils.py
  • tensorrt_llm/_torch/visual_gen/config.py
  • tensorrt_llm/_torch/visual_gen/models/ltx2/transformer_ltx2.py
  • tensorrt_llm/_torch/visual_gen/modules/attention.py
  • tests/integration/test_lists/test-db/l0_b200.yml
  • tests/unittest/_torch/visual_gen/multi_gpu/test_flux_ulysses.py
  • tests/unittest/_torch/visual_gen/test_attention_integration.py
  • tests/unittest/_torch/visual_gen/test_attention_perf.py
  • tests/unittest/_torch/visual_gen/test_attention_trtllm_sage.py
  • tests/unittest/_torch/visual_gen/test_flux_attention.py
  • tests/unittest/_torch/visual_gen/test_ltx2_attention.py
💤 Files with no reviewable changes (101)
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk256HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk256HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk256HV256SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk256HV256SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk256HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk256HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk256HV256SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • tensorrt_llm/_torch/visual_gen/modules/attention.py
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk256HV256SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • tests/unittest/_torch/visual_gen/multi_gpu/test_flux_ulysses.py
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • tests/integration/test_lists/test-db/l0_b200.yml
  • examples/visual_gen/README.md
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk256HV256SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk256HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk256HV256SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/common/sageQuant.h
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • tensorrt_llm/_torch/visual_gen/attention_backend/utils.py
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk256HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention/fused_multihead_attention_common.h
  • tests/unittest/_torch/visual_gen/test_attention_trtllm_sage.py
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/common/sageQuant.cu
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OE4m3HQk256HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/kernelMetaInfoVisualGen.h
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/kernelParamsVisualGen.h
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk256HV128SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk256HV256SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk128HV128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • tensorrt_llm/_torch/visual_gen/config.py
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk64HV64SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • tensorrt_llm/_torch/visual_gen/models/ltx2/transformer_ltx2.py
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H128SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk64HV64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK16SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1PersistentContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkvE4m3OBfloat16H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK1SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkvE4m3OE4m3H64SeparateQkvDenseVarSeqQ128Kv128SageQ1SageK4SageV1StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm100aKernel_QkInt8VE4m3OE4m3HQk256HV256SeparateQkvDenseVarSeqQ128Kv128StaticContext_cubin.cpp
  • cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/cubin_visual_gen/FmhaSm103aKernel_QkInt8VE4m3OBfloat16HQk128HV128SeparateQkvDenseVarSeqQ128Kv128PersistentContext_cubin.cpp

@tensorrt-cicd
Collaborator

PR_Github #41323 [ run ] triggered by Bot. Commit: a54b0a2 Link to invocation

@zhenhuaw-me
Member

/bot help

@github-actions

github-actions bot commented Apr 2, 2026

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous; skipping tests without careful validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous; reusing stale results without careful validation can break the top of tree.
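Putting the subcommands and flags documented above together, a few representative PR comment invocations might look like the following (illustrative combinations only; each is posted as a single comment on the pull request):

```
/bot run --disable-fail-fast
/bot run --stage-list "A10-PyTorch-1" --skip-test
/bot run --post-merge --add-multi-gpu-test
/bot kill
/bot reuse-pipeline
```

As the later comments in this thread show, `/bot run --disable-fail-fast` was the invocation actually used to launch the CI pipelines for this revert.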

@yunruis
Contributor Author

yunruis commented Apr 2, 2026

/bot run --disable-fail-fast

@yunruis
Contributor Author

yunruis commented Apr 2, 2026

/bot kill

@yunruis
Contributor Author

yunruis commented Apr 2, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #41398 [ run ] triggered by Bot. Commit: a54b0a2 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41323 [ run ] completed with state ABORTED. Commit: a54b0a2
LLM/main/L0_MergeRequest_PR #32272 (Blue Ocean) completed with status: ABORTED

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41400 [ kill ] triggered by Bot. Commit: a54b0a2 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41398 [ run ] completed with state ABORTED. Commit: a54b0a2

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41400 [ kill ] completed with state SUCCESS. Commit: a54b0a2
Successfully killed previous jobs for commit a54b0a2

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41403 [ run ] triggered by Bot. Commit: a54b0a2 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41403 [ run ] completed with state SUCCESS. Commit: a54b0a2
/LLM/main/L0_MergeRequest_PR pipeline #32338 completed with status: 'SUCCESS'

CI Report

Link to invocation

@yuxianq yuxianq merged commit de6200d into NVIDIA:main Apr 2, 2026
8 of 12 checks passed
karen-sy pushed a commit to karen-sy/TensorRT-LLM that referenced this pull request Apr 7, 2026
…Integrate into AttentionOp API (NVIDIA#11718)" (NVIDIA#12679)

Signed-off-by: yunruis <205571022+yunruis@users.noreply.github.com>
6 participants