[x86][codegen] Poor AVX512VBMI codegen when using intrinsics #77459
Author: Guillaume Chatelet (gchatelet)

The following snippet byte-reverses two `zmm` registers and returns the matching elements as a mask. GCC 13.2 compiles it down to the vector form, whereas clang 17.0.1 compiles it down to a scalar form: InstCombine transforms the pre-comparison byte reversal into an `@llvm.bitreverse.i64` on the resulting mask, and that operation is then done using GPRs instead of vectors.

Godbolt link: https://godbolt.org/z/5PjzaTr51
llvm-mca latency for the vector version: https://godbolt.org/z/K36K4r16K
llvm-mca latency for the GPR version: https://godbolt.org/z/7qcceYvM8

Is there any way to prevent the transformation and stick to the vector intrinsics?

Benchmarked on a Sapphire Rapids machine, the GPR version is 70% slower than the vector version.
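The original source listing was lost in this capture; below is a minimal C sketch of the kind of snippet being described, assuming AVX512VBMI + AVX512BW. The function and helper names are illustrative, not the exact code from the report:

```c
#include <immintrin.h>
#include <stdint.h>

// Reverse the 64 bytes of a zmm register (requires AVX512VBMI).
static __m512i bytewise_reverse(__m512i v) {
  const __m512i idx = _mm512_set_epi8(
       0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15,
      16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
      32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
      48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63);
  return _mm512_permutexvar_epi8(idx, v);
}

// Byte-reverse both inputs, then compare bytewise into a 64-bit mask.
uint64_t reversed_eq_mask(__m512i max, __m512i value) {
  return _cvtmask64_u64(
      _mm512_cmpeq_epi8_mask(bytewise_reverse(max), bytewise_reverse(value)));
}
```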
We can get most of the way with a DAG peephole to fold the bitreverse back into a v64i1 shuffle:

```llvm
define i64 @reverse_v64i1(<8 x i64> %max, <8 x i64> %value) {
entry:
  %0 = bitcast <8 x i64> %max to <64 x i8>
  %1 = bitcast <8 x i64> %value to <64 x i8>
  %2 = icmp eq <64 x i8> %0, %1
  %3 = shufflevector <64 x i1> %2, <64 x i1> poison, <64 x i32> <i32 63, i32 62, i32 61, i32 60, i32 59, i32 58, i32 57, i32 56, i32 55, i32 54, i32 53, i32 52, i32 51, i32 50, i32 49, i32 48, i32 47, i32 46, i32 45, i32 44, i32 43, i32 42, i32 41, i32 40, i32 39, i32 38, i32 37, i32 36, i32 35, i32 34, i32 33, i32 32, i32 31, i32 30, i32 29, i32 28, i32 27, i32 26, i32 25, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 17, i32 16, i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8, i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
  %4 = bitcast <64 x i1> %3 to i64
  ret i64 %4
}
```
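For contrast, this is roughly the bitreverse form that InstCombine canonicalizes to (a hedged reconstruction from the description above, not copied from clang's actual output):

```llvm
define i64 @reverse_v64i1_bitreverse(<8 x i64> %max, <8 x i64> %value) {
entry:
  %0 = bitcast <8 x i64> %max to <64 x i8>
  %1 = bitcast <8 x i64> %value to <64 x i8>
  %2 = icmp eq <64 x i8> %0, %1
  ; the <64 x i1> reversal has been folded into a scalar bit-reverse of the mask
  %3 = bitcast <64 x i1> %2 to i64
  %4 = call i64 @llvm.bitreverse.i64(i64 %3)
  ret i64 %4
}

declare i64 @llvm.bitreverse.i64(i64)
```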
We may want to make sure that the 256-bit version is fixed as well, although it can be rewritten in a form where the bitreverse optimization doesn't kick in. AFAIK this version cannot be written with AVX512, but I might be wrong.
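For illustration, one way the 256-bit pattern can be expressed without AVX512, using AVX2 only. This is an assumed sketch: the helper name and shuffle strategy are mine, and whether the bitreverse fold fires on this form is not verified here:

```c
#include <immintrin.h>
#include <stdint.h>

// Reverse all 32 bytes of a ymm register using AVX2 only: PSHUFB reverses
// the bytes within each 128-bit lane, then VPERMQ swaps the two lanes.
static __m256i bytewise_reverse256(__m256i v) {
  const __m256i idx = _mm256_setr_epi8(
      15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0,
      15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0);
  return _mm256_permute4x64_epi64(_mm256_shuffle_epi8(v, idx),
                                  _MM_SHUFFLE(1, 0, 3, 2));
}

// Byte-reverse both inputs, then compare bytewise into a 32-bit mask.
uint32_t reversed_eq_mask256(__m256i max, __m256i value) {
  return (uint32_t)_mm256_movemask_epi8(
      _mm256_cmpeq_epi8(bytewise_reverse256(max), bytewise_reverse256(value)));
}
```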
Closing for now - please reopen (or create a new ticket) if you think it's worth trying to pre-shuffle the comparison arguments.
Thx a lot for the quick fix @RKSimon 🙏
Referenced commits:

- "…,permute(y)) for 32/64-bit element vectors": Noticed in #77459 - for wider element types, it's usually better to pre-shuffle the comparison arguments if we can, like we already do for broadcasts.
- X86 doesn't have a BITREVERSE instruction, so if we're working with a casted boolean vector, we're better off shuffling the vector instead if we have PSHUFB (SSSE3 or later). Fixes llvm#77459.
- Missed these in 3210ce2 for the llvm#77459 fix.