Merged
Conversation
…for v if there is a possibility of confusion
Contributor
🏷️ CI Guide
Runs automatically on every PR:
Extended tests (opt-in via labels):
Contributor
Pull request overview
This PR updates SageAttention (vanilla + MXFP4) Triton paths to avoid int32 overflow in pointer arithmetic by promoting offsets to int64, and refactors MXFP4 quantization to use an unfused quantization path for better performance.
Changes:
- Cast key program ids / offsets to `tl.int64` in the Sage attention kernels to prevent overflow in pointer calculations (see the sketch after this list).
- Move Sage quantization logic into dedicated quant wrapper / kernel modules and switch the MXFP4 wrapper to the unfused quantization path.
- Update the MXFP4 benchmark to use CUDA-graph benchmarking and add a `-test` flag to optionally run accuracy checks.
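A minimal sketch of the int64-promotion pattern from the first bullet, using a hypothetical copy kernel rather than the PR's actual attention kernels: the program id and per-block offsets are cast to `tl.int64` before they are multiplied by strides, so the pointer arithmetic cannot wrap around int32.

```python
# Sketch only: a hypothetical kernel illustrating the int64 promotion pattern,
# not the PR's actual Sage attention code.
import torch
import triton
import triton.language as tl


@triton.jit
def copy_kernel(x_ptr, y_ptr, stride_row, n_cols, BLOCK: tl.constexpr):
    # Promote the program id to int64 *before* it is multiplied by a stride;
    # otherwise pid * stride_row is computed in int32 and can overflow for
    # large batch * heads * seq_len * head_dim layouts.
    pid = tl.program_id(0).to(tl.int64)
    offs = tl.arange(0, BLOCK).to(tl.int64)
    mask = offs < n_cols
    row = pid * stride_row  # int64 arithmetic from here on
    x = tl.load(x_ptr + row + offs, mask=mask)
    tl.store(y_ptr + row + offs, x, mask=mask)


x = torch.randn(1024, 128, device="cuda")
y = torch.empty_like(x)
copy_kernel[(x.shape[0],)](x, y, x.stride(0), x.shape[1], BLOCK=128)
assert torch.equal(x, y)
```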
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| op_tests/op_benchmarks/triton/bench_fav3_sage_mxfp4.py | Switches the benchmark timing method to CUDA graphs and gates correctness tests behind a new CLI flag (see the sketch below this table). |
| aiter/ops/triton/quant/sage_attention_quant_wrappers.py | Introduces Python-level quantization wrappers (rotation/smoothing/downcast) used by Sage attention. |
| aiter/ops/triton/attention/fav3_sage_attention_mxfp4_wrapper.py | Rewires MXFP4 forward wrapper to use the new unfused quantization wrapper. |
| aiter/ops/triton/attention/fav3_sage.py | Redirects Sage quant import to the new quant wrapper module. |
| aiter/ops/triton/_triton_kernels/quant/sage_attention_quant.py | Adds Triton kernels for Sage quantization (including int64 pid handling). |
| aiter/ops/triton/_triton_kernels/attention/fav3_sage_attention_mxfp4.py | Promotes program ids to int64 and removes in-file quantization helpers. |
| aiter/ops/triton/_triton_kernels/attention/fav3_sage_attention.py | Promotes program ids to int64 and removes in-file quantization helpers. |
| 3rdparty/composable_kernel | Updates the CK submodule revision. |
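The benchmark change in the first table row can be sketched roughly as below; `do_bench_cudagraph` is Triton's CUDA-graph-based timer, while the matmul stand-in, flag name, and tolerances are illustrative assumptions rather than the benchmark's actual contents.

```python
# Sketch only: stand-in op and checks are assumptions, not bench_fav3_sage_mxfp4.py.
import argparse

import torch
from triton.testing import do_bench_cudagraph  # available in recent Triton releases


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-test", action="store_true",
                        help="also run accuracy checks against a reference")
    args = parser.parse_args()

    a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)

    # Capture the op in a CUDA graph and replay it, so host-side launch
    # overhead is excluded from the measurement.
    ms = do_bench_cudagraph(lambda: a @ b)  # stand-in for the Sage attention call
    print(f"{ms:.3f} ms")

    if args.test:
        ref = (a.float() @ b.float()).half()
        torch.testing.assert_close(a @ b, ref, rtol=1e-2, atol=1e-2)


if __name__ == "__main__":
    main()
```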
Comments suppressed due to low confidence (1)
aiter/ops/triton/attention/fav3_sage_attention_mxfp4_wrapper.py:1
- The wrapper’s public flags (`hadamard_rotation`, `q_smooth`, `R`, `BLOCK_R`) are no longer passed into quantization, so toggling these options will not affect behavior as the API suggests. Consider either (a) plumbing these args through to a quant path that honors them (e.g., call `fused_sage_quant_mxfp4` when requested, or extend the unfused path to accept/implement them), or (b) explicitly disallowing these flags or raising when they are enabled, to avoid silent misconfiguration (a sketch of option (b) follows below).
# SPDX-License-Identifier: MIT
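One way to act on option (b) of the suppressed comment; the wrapper name and signature below are assumptions made for illustration, not the repository's actual interface.

```python
# Hypothetical guard: names and defaults are illustrative, not aiter's real API.
def sage_attention_mxfp4_fwd(q, k, v, *,
                             hadamard_rotation=False, q_smooth=False,
                             R=None, BLOCK_R=None):
    # The unfused quant path ignores these options, so fail loudly instead of
    # silently producing results that do not match the requested configuration.
    if hadamard_rotation or q_smooth or R is not None or BLOCK_R is not None:
        raise NotImplementedError(
            "hadamard_rotation/q_smooth/R/BLOCK_R are not honored by the "
            "unfused quantization path; use the fused quant path instead."
        )
    ...  # dispatch to the unfused quant + attention kernels
```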
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Chi-Chu319 previously approved these changes on Mar 10, 2026
jcaraban previously approved these changes on Mar 10, 2026
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
valarLip pushed a commit that referenced this pull request on Mar 18, 2026:
* revert to unfused quant kernels for perf
* int64 offsets to avoid bhsd overflow of int32
AMD-yanfeiwang pushed a commit to AMD-yanfeiwang/aiter that referenced this pull request on Mar 18, 2026:
* revert to unfused quant kernels for perf
* int64 offsets to avoid bhsd overflow of int32
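For context on the "bhsd overflow of int32" in the commit message above: linear element offsets of a contiguous (batch, heads, seq_len, head_dim) tensor exceed the int32 range at fairly ordinary sizes, which is why the kernels now compute them in int64. The shape below is illustrative, not taken from the PR.

```python
import numpy as np

B, H, S, D = 8, 32, 131072, 128              # illustrative bhsd shape
last_offset = B * H * S * D - 1              # element offset of the last element
print(last_offset)                           # 4294967295
print(last_offset > np.iinfo(np.int32).max)  # True: int32 pointer math would wrap
```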
This PR concerns the Sage attention kernels (the vanilla and the MXFP4 variants):