
rename dcu to hygon_dcu #609

Merged
helloyongyang merged 6 commits into ModelTC:main from fuheaven:dcu
Dec 15, 2025

Conversation

@fuheaven
Contributor

  1. rename dcu to hygon_dcu
  2. fix flash attention bug

@gemini-code-assist
Contributor

Summary of Changes

Hello @fuheaven, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on enhancing the platform's support for Hygon DCU devices. It systematically renames all dcu-related components to hygon_dcu to accurately reflect the specific hardware, improving code readability and maintainability. Concurrently, it integrates an optimized Flash Attention mechanism for Hygon DCU, which dynamically utilizes the high-performance ROCm Flash Attention library if present, or gracefully falls back to PyTorch's native SDPA, ensuring efficient attention computations across different environments.

Highlights

  • DCU Renaming: All instances of "dcu" have been systematically renamed to "hygon_dcu" for improved clarity and specificity, affecting file names, class names, registry entries, and documentation across the platform.
  • Hygon DCU Flash Attention: A new Flash Attention implementation has been introduced specifically for Hygon DCU. This leverages the ROCm version of Flash Attention 2.6.1 when available, with a robust fallback to PyTorch's native Scaled Dot Product Attention (SDPA) if Flash Attention is not installed.
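The import-with-fallback dispatch described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the module and wrapper names are assumptions, though `flash_attn_varlen_func` and `torch.nn.functional.scaled_dot_product_attention` are the real entry points of the two libraries mentioned.

```python
# Hedged sketch: prefer the ROCm flash-attn kernel when the package is
# installed, otherwise fall back to PyTorch's native SDPA.
try:
    from flash_attn import flash_attn_varlen_func  # ROCm build, if present
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False

def attention(q, k, v, **kwargs):
    """Dispatch to flash-attn when available, else to torch SDPA."""
    if HAS_FLASH_ATTN:
        return flash_attn_varlen_func(q, k, v, **kwargs)
    import torch.nn.functional as F  # imported lazily in the fallback path
    return F.scaled_dot_product_attention(q, k, v)
```

The lazy `torch` import keeps the module importable even in environments where neither library is installed; only the call site pays the cost.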
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request primarily renames the dcu platform to hygon_dcu and updates the flash attention implementation for it. While the renaming changes are applied consistently, I've identified several critical issues with the new flash attention logic. The implementation incorrectly handles variable-length sequences, which will cause runtime errors in both the primary execution path and the SDPA fallback. Additionally, it appears that support for matrix multiplication (mm) operations for the hygon_dcu platform may have been unintentionally removed during the refactoring. These issues should be addressed before merging.

```python
# Reshape to [B*max_seqlen_q, num_heads * head_dim]
bs = cu_seqlens_q.shape[0] - 1
```

critical

The output tensor from flash_attn_varlen_func is being reshaped incorrectly. flash_attn_varlen_func returns a packed tensor of shape (total_tokens, num_heads, head_dim), where total_tokens is the sum of sequence lengths in the batch. Reshaping it to (bs * max_seqlen_q, -1) is only correct if all sequences have length max_seqlen_q (i.e., no padding). For variable length sequences, this will raise a RuntimeError because the number of elements will not match.

The output should be reshaped based on the total number of tokens, which is the first dimension of the output tensor.

Suggested change

```diff
-bs = cu_seqlens_q.shape[0] - 1
+output = output.reshape(-1, output.shape[-2] * output.shape[-1])
```
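A quick NumPy check, using made-up sequence lengths, shows concretely why the `bs * max_seqlen_q` reshape fails on a batch with padding while the `-1` form does not:

```python
import numpy as np

# Hypothetical packed batch: 3 sequences of lengths 3, 2, 4
cu_seqlens_q = np.array([0, 3, 5, 9])
total_tokens = int(cu_seqlens_q[-1])             # 9 packed rows
bs = cu_seqlens_q.shape[0] - 1                   # 3 sequences
max_seqlen_q = int(np.diff(cu_seqlens_q).max())  # longest sequence: 4
num_heads, head_dim = 2, 8

output = np.zeros((total_tokens, num_heads, head_dim))

# bs * max_seqlen_q = 12 rows expected, but only 9 rows exist, so a
# reshape(bs * max_seqlen_q, -1) would raise. The -1 form is safe:
flat = output.reshape(-1, output.shape[-2] * output.shape[-1])
assert flat.shape == (total_tokens, num_heads * head_dim)  # (9, 16)
```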

Comment on lines +144 to +146

```python
# Reshape q, k, v to [B, L, Nq, C]
q = q.reshape(bs, max_seqlen_q, q.shape[-2], q.shape[-1])
k = k.reshape(bs, max_seqlen_q, k.shape[-2], k.shape[-1])
```

critical

The reshape operation here is incorrect for variable-length sequences. The input tensors q, k, and v are packed (flattened), with a shape like (total_tokens, num_heads, head_dim). Reshaping them to (bs, max_seqlen_q, ...) will fail if there's any padding in the batch, because total_tokens will not be equal to bs * max_seqlen_q.

To use torch.nn.functional.scaled_dot_product_attention, you must first convert the packed tensors into padded, batched tensors. This requires more than a simple reshape. After the attention call, the resulting padded tensor must then be converted back to a packed format. The current implementation of this fallback function is broken for variable-length inputs.
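A minimal NumPy sketch of the pad/unpad round trip such a fallback needs. The helper names are hypothetical; the real fallback would call `torch.nn.functional.scaled_dot_product_attention` on the padded batch between the two steps:

```python
import numpy as np

def pad_packed(x, cu_seqlens, max_len):
    """Unpack a packed (total_tokens, H, D) array into a zero-padded
    (B, max_len, H, D) batch, one row group per sequence."""
    bs = len(cu_seqlens) - 1
    out = np.zeros((bs, max_len) + x.shape[1:], dtype=x.dtype)
    for i in range(bs):
        s, e = cu_seqlens[i], cu_seqlens[i + 1]
        out[i, : e - s] = x[s:e]
    return out

def repack(x, cu_seqlens):
    """Drop the padding again, returning (total_tokens, H, D)."""
    return np.concatenate(
        [x[i, : cu_seqlens[i + 1] - cu_seqlens[i]]
         for i in range(len(cu_seqlens) - 1)]
    )

cu = [0, 3, 5]  # two sequences of lengths 3 and 2
q = np.arange(5 * 2 * 4, dtype=np.float32).reshape(5, 2, 4)
padded = pad_packed(q, cu, max_len=3)   # shape (2, 3, 2, 4)
# ... scaled_dot_product_attention would run on the padded batch here ...
assert np.array_equal(repack(padded, cu), q)  # round trip is lossless
```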

```diff
-from .mm.dcu import *
 # Check if running on Hygon DCU platform
 if os.getenv("PLATFORM") == "hygon_dcu":
     from .attn.hygon_dcu import *
```

high

The refactoring from dcu to hygon_dcu appears to have removed the import for mm (matrix multiplication) operations. The previous dcu platform implementation imported from .mm.dcu, but this has been omitted for hygon_dcu.

If hygon_dcu is intended to support these mm operations, this change constitutes a feature regression. Was this removal intentional? If not, please ensure the necessary mm operations are implemented and imported for the hygon_dcu platform.
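One way to make such a regression harder is to enumerate every op family a platform must load in a single table, so a rename cannot silently drop one of them. A hypothetical sketch (the module paths are illustrative, not the repository's real layout):

```python
# Hypothetical platform -> op-module table; a rename edits one entry
# and every op family for that platform travels with it.
OP_MODULES = {
    "hygon_dcu": ["mm.hygon_dcu", "attn.hygon_dcu"],
    "default": ["mm.default", "attn.default"],
}

def select_op_modules(platform: str) -> list[str]:
    """Return every op module to import for the given platform."""
    return OP_MODULES.get(platform, OP_MODULES["default"])

assert select_op_modules("hygon_dcu") == ["mm.hygon_dcu", "attn.hygon_dcu"]
```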

@helloyongyang helloyongyang merged commit 3f5241a into ModelTC:main Dec 15, 2025
1 check passed
helloyongyang pushed a commit that referenced this pull request Mar 6, 2026
1. rename dcu to hygon_dcu
2. fix flash attention bug
