Remove confusing warning message from SDPA about mask alignment (pytorch#114909)

# Summary
Users have reported that this warning message leads to confusion about the correctness of the mask, even though the warning concerns only performance.

Pull Request resolved: pytorch#114909
Approved by: https://github.com/Chillee
drisspg authored and dmenig committed Dec 21, 2023
1 parent 1311305 commit 345349c
Showing 1 changed file with 0 additions and 7 deletions.
aten/src/ATen/native/transformers/attention.cpp
@@ -548,13 +548,6 @@ at::Tensor preprocess_mask(
   constexpr int mem_eff_alignment = 8;
   at::Tensor result_mask = mask;
   if (!aligned_tensor<mem_eff_alignment>(mask)) {
-    TORCH_WARN_ONCE(
-        "Memory Efficient Attention requires the attn_mask to be aligned to, ",
-        mem_eff_alignment,
-        " elements. "
-        "Prior to calling SDPA, pad the last dimension of the attn_mask "
-        "to be at least a multiple of ", mem_eff_alignment,
-        " and then slice the attn_mask to the original size.");
     result_mask = pad_bias<mem_eff_alignment>(mask);
   }
   return result_mask.expand_symint(
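For reference, the padding-and-slicing recipe that the removed warning described looks roughly like this from the Python side. This is a minimal sketch: the tensor shapes, device handling, and rounding arithmetic are illustrative assumptions; only the 8-element alignment constant comes from `mem_eff_alignment` in the diff above.

```python
import torch
import torch.nn.functional as F

# Sketch of the workaround the removed warning suggested: pad the last
# dimension of attn_mask up to a multiple of 8, then slice back to the
# original size. The slice keeps the padded storage, so each row of the
# mask starts at an aligned offset.
align = 8  # mirrors mem_eff_alignment in attention.cpp
device = "cuda" if torch.cuda.is_available() else "cpu"

q = torch.randn(2, 4, 128, 64, device=device, dtype=torch.float16)
k = torch.randn(2, 4, 127, 64, device=device, dtype=torch.float16)
v = torch.randn(2, 4, 127, 64, device=device, dtype=torch.float16)

# The mask's last dimension (127, the key sequence length) is not a
# multiple of 8, so it would previously have triggered the warning
# before being padded internally by preprocess_mask.
attn_mask = torch.randn(2, 4, 128, 127, device=device, dtype=torch.float16)

last = attn_mask.size(-1)
padded = -(-last // align) * align  # round 127 up to 128
aligned_mask = F.pad(attn_mask, (0, padded - last))[..., :last]

out = F.scaled_dot_product_attention(q, k, v, attn_mask=aligned_mask)
```

Note that padding and then slicing leaves the mask's values unchanged; what changes is the underlying storage layout, which is what the alignment check cares about.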
