
add sage3 sparse and fp4 communication#917

Merged
helloyongyang merged 2 commits into main from yr/sage3_sparse
Mar 5, 2026

Conversation

@STwangyingrui (Contributor)

No description provided.

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily introduces and integrates Sparse SAGE Attention (spas_sage_attn3) and FP4 communication support into the system. These enhancements aim to improve the efficiency and flexibility of distributed attention mechanisms, particularly for models utilizing sequence parallelism. The changes involve adding new attention types, extending communication protocols to handle FP4 quantization, and updating relevant model configurations and inference logic to leverage these new capabilities.

Highlights

  • Sparse SAGE Attention Integration: Added support for Sparse SAGE Attention (spas_sage_attn3) by introducing a new attention weight class and integrating it into the system.
  • FP4 Communication Support: Implemented FP4 communication capabilities within the Ulysses attention mechanism, allowing for more efficient data transfer in distributed settings.
  • Generalized Quantization Communication: Generalized the communication handling in Ulysses attention to dynamically support either FP8 or FP4 quantization based on configuration, enhancing flexibility.
  • Configuration and Inference Module Updates: Updated model configurations and various transformer inference modules to enable and utilize the newly added Sparse SAGE Attention and FP4 communication options.
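
The generalized FP8/FP4 selection described in the highlights can be sketched as follows. This is a hypothetical illustration assuming only the flag names from the PR description (use_fp8_comm, use_fp4_comm); the dispatch helper itself is a stand-in, not code from the repository.

```python
# Hypothetical sketch of how Ulysses attention might pick a quantization
# scheme for its all-to-all communication. Only the flag names come from
# the PR description; the helper is illustrative.

def select_comm_quantizer(use_fp8_comm: bool, use_fp4_comm: bool):
    # The two schemes are mutually exclusive, mirroring the assertion
    # the PR adds in ulysses_attn.py.
    assert not (use_fp8_comm and use_fp4_comm), (
        "use_fp8_comm and use_fp4_comm are mutually exclusive"
    )
    if use_fp4_comm:
        return "fp4"  # would dispatch to the FP4 quant/dequant path
    if use_fp8_comm:
        return "fp8"  # would dispatch to the existing FP8 path
    return None       # no quantization: communicate full-precision QKV
```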


Changelog
  • configs/seko_talk/seko_talk_25_int8_dist_fp8_comm.json
    • Updated self-attention type to "spas_sage_attn3" and cross-attention types to "sage_attn2".
    • Disabled use_31_block and enabled cpu_offload.
    • Removed resize_mode and fixed_shape configurations.
    • Adjusted seq_p_size and added seq_p_fp4_comm and seq_p_tensor_fusion parameters.
  • lightx2v/common/ops/attn/ring_attn.py
    • Added use_fp4_comm parameter to the apply function.
    • Introduced assertions to prevent use_fp4_comm with RingAttn and ensure mutual exclusivity with use_fp8_comm.
  • lightx2v/common/ops/attn/sage_attn.py
    • Imported sageattn3_sparse_blackwell and added error handling.
    • Defined SparseSageAttn3Weight class for sparse SAGE attention.
  • lightx2v/common/ops/attn/ulysses_attn.py
    • Imported FP4 quantization and dequantization functions from sageattn3_sparse.
    • Added use_fp4_comm parameter to the apply function and an assertion for mutual exclusivity with use_fp8_comm.
    • Extended communication logic to support FP4 quantization for QKV fusion and non-fusion paths.
    • Refactored variable names from _fp8_ to _quant_ for generalized quantization handling.
  • lightx2v/models/networks/hunyuan_video/infer/transformer_infer.py
    • Initialized seq_p_fp4_comm from configuration.
    • Passed use_fp4_comm to attention modules.
  • lightx2v/models/networks/ltx2/infer/transformer_infer.py
    • Initialized seq_p_fp4_comm from configuration.
    • Passed use_fp4_comm to attention modules.
  • lightx2v/models/networks/qwen_image/infer/transformer_infer.py
    • Initialized seq_p_fp4_comm from configuration.
    • Passed use_fp4_comm to attention modules.
  • lightx2v/models/networks/wan/infer/transformer_infer.py
    • Initialized seq_p_fp4_comm from configuration.
    • Passed use_fp4_comm to attention modules.
  • lightx2v/models/networks/z_image/infer/transformer_infer.py
    • Initialized seq_p_fp4_comm.
    • Passed use_fp4_comm to attention modules.
  • lightx2v/models/runners/default_runner.py
    • Modified the scheduler.prepare method to include infer_steps.
  • lightx2v/models/schedulers/wan/audio/scheduler.py
    • Initialized infer_steps from configuration.
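
For orientation, the configuration changes listed above might look like the fragment below. The key names and values are illustrative guesses based on the changelog entries, not copied from the actual JSON diff.

```python
# Hypothetical config fragment (key names and values are illustrative,
# not taken from seko_talk_25_int8_dist_fp8_comm.json).
config = {
    "self_attn_type": "spas_sage_attn3",   # new sparse SAGE attention
    "cross_attn_type": "sage_attn2",
    "cpu_offload": True,                    # enabled by this PR
    "use_31_block": False,                  # disabled by this PR
    "seq_p_size": 2,                        # illustrative value
    "seq_p_fp4_comm": True,                 # new: FP4 communication
    "seq_p_tensor_fusion": True,            # new: fuse QKV before comm
}

# FP4 and FP8 communication are mutually exclusive per the PR's assertions.
assert not (config.get("seq_p_fp4_comm") and config.get("seq_p_fp8_comm"))
```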

@gemini-code-assist bot left a comment:

Code Review

This pull request introduces sparse attention (spas_sage_attn3) and FP4 communication capabilities, primarily affecting attention mechanisms and their configurations. The changes involve updating configuration files, adding new parameters to attention functions, and implementing the logic for FP4 quantization and dequantization within the ulysses_attn.py module. The addition of sageattn3_sparse_blackwell and related FP4 quantization/dequantization functions from sageattn3_sparse aligns with the PR's objective to enhance communication efficiency. However, some areas could benefit from improved clarity and consistency, particularly regarding the handling of FP4 communication in RingAttn and the use of magic numbers/strings in ulysses_attn.py and sage_attn.py.

torch.Tensor: the computed attention result
"""
assert not enable_head_parallel, "RingAttn can't support head parallel mode."
assert not use_fp4_comm, "RingAttn don't support use_fp4_comm now."
Severity: medium

The assertion assert not use_fp4_comm, "RingAttn don't support use_fp4_comm now." indicates that RingAttn does not support FP4 communication. However, use_fp4_comm was added as a parameter in this PR. This creates a confusing situation where a feature is introduced but immediately disabled. If RingAttn truly cannot support FP4 communication, consider removing the use_fp4_comm parameter from its apply method to avoid misleading usage or potential dead code. Alternatively, if support is planned, this assert should be removed once implemented.

@STwangyingrui (Contributor Author) replied:
RingAttn and Ulysses need to use the same call signature.

q, k, v = q.unsqueeze(0), k.unsqueeze(0), v.unsqueeze(0)
elif len(q.shape) == 4:
bs = q.shape[0]
x = (
Severity: medium

The parameters per_block_mean = False and topk=0.2 are magic numbers/boolean literals. It would improve readability and maintainability to define these as named constants or configurable parameters, especially topk which represents a specific threshold.

Suggested change
x = (
x = sageattn3_sparse_blackwell(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2), is_causal=False, is_sparse=True, per_block_mean=False, topk=0.2).transpose(1, 2).reshape(bs * max_seqlen_q, -1)
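
The reviewer's suggestion to replace the magic values with named constants could look like the minimal sketch below; the constant and helper names are illustrative, not part of the codebase.

```python
# Sketch of lifting the magic values into named module-level constants
# (names are illustrative).
SPARSE_TOPK = 0.2        # fraction of blocks kept by the sparse kernel
PER_BLOCK_MEAN = False   # per-block mean centering toggle

def sparse_attn_kwargs():
    # Centralizing the values makes them easy to audit or expose in config,
    # instead of scattering literals through the call sites.
    return {"per_block_mean": PER_BLOCK_MEAN, "topk": SPARSE_TOPK}
```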

img_qkv_quant = img_qkv_quant.reshape(world_size, img_qkv_len, shard_heads, 3, hidden_dims)
img_qkv_scale = img_qkv_scale.reshape(world_size, img_qkv_len, shard_heads, 3, 1)
else:
img_qkv_quant, img_qkv_scale = quant_fp4_sage3(img_qkv.reshape(world_size, -1, shard_heads, hidden_dims))
Severity: medium

The quant_fp4_sage3 call here is missing the in_tensor_layout and out_tensor_layout arguments, which were present in other quant_fp4_sage3 calls (e.g., lines 129 and 166-168). This inconsistency could lead to unexpected behavior or make the code harder to understand and maintain. Please ensure all calls to quant_fp4_sage3 are consistent with their arguments, or provide a clear reason for the difference.

Suggested change
img_qkv_quant, img_qkv_scale = quant_fp4_sage3(img_qkv.reshape(world_size, -1, shard_heads, hidden_dims))
img_qkv_quant, img_qkv_scale = quant_fp4_sage3(img_qkv.reshape(world_size, -1, shard_heads, hidden_dims), in_tensor_layout="HND", out_tensor_layout="HND")

@STwangyingrui (Contributor Author) replied:
in_tensor_layout and out_tensor_layout default to NHD.
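
The author's point can be illustrated with a stub: if both layout parameters default to "NHD", omitting them is equivalent to passing them explicitly. The stub below is a stand-in for illustration, not the real sageattn3_sparse signature.

```python
# Stand-in for quant_fp4_sage3 to illustrate the default-argument behavior
# the author describes; the real function quantizes, this just echoes.
def quant_fp4_sage3_stub(x, in_tensor_layout="NHD", out_tensor_layout="NHD"):
    return (x, in_tensor_layout, out_tensor_layout)

# With NHD defaults, these two calls are equivalent:
assert quant_fp4_sage3_stub("qkv") == quant_fp4_sage3_stub("qkv", "NHD", "NHD")
```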

Comment on lines +318 to +320
img_q_quant, img_q_scale = quant_fp4_sage3(img_q)
img_k_quant, img_k_scale = quant_fp4_sage3(img_k)
img_v_quant, img_v_scale = quant_fp4_sage3(img_v)
Severity: medium

Similar to the previous comment, these quant_fp4_sage3 calls are missing the in_tensor_layout and out_tensor_layout arguments. Please ensure consistency across all quant_fp4_sage3 calls or document the reason for any intentional differences.

Suggested change
img_q_quant, img_q_scale = quant_fp4_sage3(img_q)
img_k_quant, img_k_scale = quant_fp4_sage3(img_k)
img_v_quant, img_v_scale = quant_fp4_sage3(img_v)
img_q_quant, img_q_scale = quant_fp4_sage3(img_q, in_tensor_layout="HND", out_tensor_layout="HND")
img_k_quant, img_k_scale = quant_fp4_sage3(img_k, in_tensor_layout="HND", out_tensor_layout="HND")
img_v_quant, img_v_scale = quant_fp4_sage3(img_v, in_tensor_layout="HND", out_tensor_layout="HND")

@STwangyingrui (Contributor Author) replied:
in_tensor_layout and out_tensor_layout default to NHD.

@STwangyingrui STwangyingrui marked this pull request as draft March 5, 2026 03:45
@helloyongyang helloyongyang marked this pull request as ready for review March 5, 2026 06:54
@helloyongyang helloyongyang merged commit 1f1a9da into main Mar 5, 2026
2 checks passed
@helloyongyang helloyongyang deleted the yr/sage3_sparse branch March 5, 2026 06:54