[FEATURE SUPPORT] Centralize dynamic mask creation for FDMA #197
Changes from all commits: 4417b16, a06bff1, 510ef4d, 7d4cf23, a2b5309, 0dbd673
New file added in this PR (the hunk header `@@ -0,0 +1,108 @@` indicates a 108-line file; the filename is not shown in this excerpt):

```python
# Copyright 2025 Jingze Shi and Liangdong Wang. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Optional

import torch


def dynamic_mask(
    attention_bias: torch.Tensor,
    attention_mask: Optional[torch.Tensor],
    window_size: int,
    min_dtype: float,
):
    r"""
    This function generates a dynamic mask based on the top-k attention bias.

    Args:
        attention_bias (torch.Tensor): The attention bias tensor of shape
            ({batch_size|1}, {num_heads|num_kv_heads|1}, {query_len|1}, key_len).
        attention_mask (Optional[torch.Tensor]): The attention mask boolean tensor of shape
            ({batch_size|1}, {num_heads|num_kv_heads|1}, {query_len|1}, key_len).
        window_size (int): The number of top elements to consider for the mask.
        min_dtype (float): The minimum value to use for masking.

    Returns:
        attention_mask (Tensor): The attention mask tensor of shape
            ({batch_size|1}, {num_heads|num_kv_heads|1}, {query_len|1}, key_len).
    """
    attention_bias = attention_bias.masked_fill(~attention_mask, min_dtype) if attention_mask is not None else attention_bias
    topk_values, topk_indices = torch.topk(
        attention_bias.detach(),
        window_size, dim=-1, largest=True, sorted=False
    )
    attention_mask = torch.zeros_like(
        attention_bias, dtype=torch.bool, device=attention_bias.device
    )
    # ... (diff excerpt is truncated here; the remaining lines of the 108-line file are not shown)
```
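The snippet above ends where the diff excerpt is cut off. As a hypothetical usage sketch (not part of the PR), and assuming the omitted tail of `dynamic_mask` scatters `True` at the top-k indices and returns the boolean mask, the helper could be exercised like this:

```python
import torch

batch_size, num_heads, query_len, key_len = 2, 4, 8, 128
window_size = 32

# Learned or computed attention bias over the keys.
attention_bias = torch.randn(batch_size, num_heads, query_len, key_len)

# Conventional boolean padding mask: True = attend, False = masked out.
attention_mask = torch.ones(batch_size, 1, 1, key_len, dtype=torch.bool)
attention_mask[..., -16:] = False  # pretend the last 16 keys are padding

min_dtype = torch.finfo(attention_bias.dtype).min

dynamic_attention_mask = dynamic_mask(attention_bias, attention_mask, window_size, min_dtype)

# Under the stated assumption, each query attends to at most `window_size` keys.
assert dynamic_attention_mask.shape == attention_bias.shape
assert dynamic_attention_mask.sum(dim=-1).max() <= window_size
```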
|
Comment on lines
+41
to
+47
|
||||||||||||||||||||||||||||||
| attention_bias = attention_bias.masked_fill(~attention_mask, min_dtype) if attention_mask is not None else attention_bias | |
| topk_values, topk_indices = torch.topk( | |
| attention_bias.detach(), | |
| window_size, dim=-1, largest=True, sorted=False | |
| ) | |
| attention_mask = torch.zeros_like( | |
| attention_bias, dtype=torch.bool, device=attention_bias.device | |
| masked_attention_bias = attention_bias.masked_fill(~attention_mask, min_dtype) if attention_mask is not None else attention_bias | |
| topk_values, topk_indices = torch.topk( | |
| masked_attention_bias.detach(), | |
| window_size, dim=-1, largest=True, sorted=False | |
| ) | |
| attention_mask = torch.zeros_like( | |
| masked_attention_bias, dtype=torch.bool, device=masked_attention_bias.device |
Copilot AI commented on Oct 23, 2025:

Padding mask is initialized with `torch.ones` (all True), which marks padded positions as valid. This contradicts the typical attention mask convention where False/0 indicates invalid positions. Consider using `torch.zeros` instead to mark padding as invalid.
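For context on the convention the comment refers to, here is a minimal sketch (not part of the diff; tensor names are illustrative) of how a boolean padding mask is typically built so that padded key positions come out False/invalid, rather than starting from an all-True `torch.ones` tensor:

```python
import torch

batch_size, key_len = 2, 10
# Actual (unpadded) sequence lengths for each batch element.
seq_lens = torch.tensor([7, 10])

# Build the mask from the real lengths: positions beyond the sequence length
# are padding and come out False (invalid), matching the usual convention.
positions = torch.arange(key_len).unsqueeze(0)      # shape (1, key_len)
padding_mask = positions < seq_lens.unsqueeze(-1)   # shape (batch_size, key_len), dtype bool

# padding_mask[0, 7:] is all False -> those keys are padding and must be masked out.
print(padding_mask)
```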
A second hunk touches the end of `upad_input` (`@@ -167,4 +167,4 @@`); the only visible change is to the final closing parenthesis, which appears to be a whitespace/trailing-newline fix:

```diff
@@ -167,4 +167,4 @@ def upad_input(
         indices_q,
         (cu_seqlens_q, cu_seqlens_k),
         (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
-    )
+    )
```
Review comment:

Remove the extra article 'the' before 'kwargs' in line 501. Should read 'all necessary kwargs' instead of 'all necessary the kwargs'.