[DAG] foldABSToABD - support abs(*ext(x) - *ext(y)) -> zext(abd*(x, y)) from different extension source types #71670
Conversation
@llvm/pr-subscribers-backend-aarch64 @llvm/pr-subscribers-backend-x86

Author: Simon Pilgrim (RKSimon)

Changes

We currently limit the fold to cases where we're extending from the same source type, but we can safely perform this using the wider of the two mismatched source types (we're really just interested in having extension bits on both sources).

I've limited this to cases where the truncations to the wider type are free, to avoid introducing further extension/truncation instructions. In most cases the TRUNCATE nodes will never be created, as they will be folded inside the getNode() call.

Full diff: https://github.com/llvm/llvm-project/pull/71670.diff

3 Files Affected:
diff --git a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index bee50d58c73c32c..59ea0c1ca647fde 100644
--- a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -10923,11 +10923,12 @@ SDValue DAGCombiner::foldABSToABD(SDNode *N) {
// fold abs(sext(x) - sext(y)) -> zext(abds(x, y))
// fold abs(zext(x) - zext(y)) -> zext(abdu(x, y))
- // NOTE: Extensions must be equivalent.
- if (VT1 == VT2 && hasOperation(ABDOpcode, VT1)) {
- Op0 = Op0.getOperand(0);
- Op1 = Op1.getOperand(0);
- SDValue ABD = DAG.getNode(ABDOpcode, DL, VT1, Op0, Op1);
+ EVT MaxVT = VT1.bitsGT(VT2) ? VT1 : VT2;
+ if ((VT1 == VT2 || TLI.isTruncateFree(VT, MaxVT)) &&
+ hasOperation(ABDOpcode, MaxVT)) {
+ SDValue ABD = DAG.getNode(ABDOpcode, DL, MaxVT,
+ DAG.getNode(ISD::TRUNCATE, DL, MaxVT, Op0),
+ DAG.getNode(ISD::TRUNCATE, DL, MaxVT, Op1));
ABD = DAG.getNode(ISD::ZERO_EXTEND, DL, VT, ABD);
return DAG.getZExtOrTrunc(ABD, DL, SrcVT);
}
diff --git a/llvm/test/CodeGen/X86/abds.ll b/llvm/test/CodeGen/X86/abds.ll
index c5be2474c84d405..39ac47e99e6e989 100644
--- a/llvm/test/CodeGen/X86/abds.ll
+++ b/llvm/test/CodeGen/X86/abds.ll
@@ -50,16 +50,13 @@ define i8 @abd_ext_i8_i16(i8 %a, i16 %b) nounwind {
;
; X64-LABEL: abd_ext_i8_i16:
; X64: # %bb.0:
-; X64-NEXT: # kill: def $esi killed $esi def $rsi
-; X64-NEXT: # kill: def $edi killed $edi def $rdi
-; X64-NEXT: movsbq %dil, %rax
-; X64-NEXT: movswq %si, %rcx
-; X64-NEXT: movq %rax, %rdx
-; X64-NEXT: subq %rcx, %rdx
-; X64-NEXT: negq %rdx
-; X64-NEXT: subq %rcx, %rax
-; X64-NEXT: cmovleq %rdx, %rax
-; X64-NEXT: # kill: def $al killed $al killed $rax
+; X64-NEXT: movswl %si, %eax
+; X64-NEXT: movsbl %dil, %ecx
+; X64-NEXT: subl %eax, %ecx
+; X64-NEXT: movl %ecx, %eax
+; X64-NEXT: negl %eax
+; X64-NEXT: cmovsl %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
; X64-NEXT: retq
%aext = sext i8 %a to i64
%bext = sext i16 %b to i64
@@ -132,32 +129,25 @@ define i16 @abd_ext_i16(i16 %a, i16 %b) nounwind {
define i16 @abd_ext_i16_i32(i16 %a, i32 %b) nounwind {
; X86-LABEL: abd_ext_i16_i32:
; X86: # %bb.0:
-; X86-NEXT: pushl %esi
; X86-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X86-NEXT: movl %ecx, %edx
-; X86-NEXT: sarl $31, %edx
; X86-NEXT: movswl {{[0-9]+}}(%esp), %eax
-; X86-NEXT: movl %eax, %esi
-; X86-NEXT: sarl $31, %esi
+; X86-NEXT: movl %eax, %edx
+; X86-NEXT: subl %ecx, %edx
+; X86-NEXT: negl %edx
; X86-NEXT: subl %ecx, %eax
-; X86-NEXT: sbbl %edx, %esi
-; X86-NEXT: sarl $31, %esi
-; X86-NEXT: xorl %esi, %eax
-; X86-NEXT: subl %esi, %eax
+; X86-NEXT: cmovlel %edx, %eax
; X86-NEXT: # kill: def $ax killed $ax killed $eax
-; X86-NEXT: popl %esi
; X86-NEXT: retl
;
; X64-LABEL: abd_ext_i16_i32:
; X64: # %bb.0:
-; X64-NEXT: # kill: def $edi killed $edi def $rdi
-; X64-NEXT: movswq %di, %rax
-; X64-NEXT: movslq %esi, %rcx
-; X64-NEXT: movq %rax, %rdx
-; X64-NEXT: subq %rcx, %rdx
-; X64-NEXT: negq %rdx
-; X64-NEXT: subq %rcx, %rax
-; X64-NEXT: cmovleq %rdx, %rax
+; X64-NEXT: movslq %esi, %rax
+; X64-NEXT: movswl %di, %ecx
+; X64-NEXT: movslq %ecx, %rcx
+; X64-NEXT: subq %rax, %rcx
+; X64-NEXT: movq %rcx, %rax
+; X64-NEXT: negq %rax
+; X64-NEXT: cmovsq %rcx, %rax
; X64-NEXT: # kill: def $ax killed $ax killed $rax
; X64-NEXT: retq
%aext = sext i16 %a to i64
@@ -231,31 +221,24 @@ define i32 @abd_ext_i32(i32 %a, i32 %b) nounwind {
define i32 @abd_ext_i32_i16(i32 %a, i16 %b) nounwind {
; X86-LABEL: abd_ext_i32_i16:
; X86: # %bb.0:
-; X86-NEXT: pushl %esi
; X86-NEXT: movswl {{[0-9]+}}(%esp), %ecx
-; X86-NEXT: movl %ecx, %edx
-; X86-NEXT: sarl $31, %edx
; X86-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X86-NEXT: movl %eax, %esi
-; X86-NEXT: sarl $31, %esi
+; X86-NEXT: movl %eax, %edx
+; X86-NEXT: subl %ecx, %edx
+; X86-NEXT: negl %edx
; X86-NEXT: subl %ecx, %eax
-; X86-NEXT: sbbl %edx, %esi
-; X86-NEXT: sarl $31, %esi
-; X86-NEXT: xorl %esi, %eax
-; X86-NEXT: subl %esi, %eax
-; X86-NEXT: popl %esi
+; X86-NEXT: cmovlel %edx, %eax
; X86-NEXT: retl
;
; X64-LABEL: abd_ext_i32_i16:
; X64: # %bb.0:
-; X64-NEXT: # kill: def $esi killed $esi def $rsi
-; X64-NEXT: movslq %edi, %rax
-; X64-NEXT: movswq %si, %rcx
-; X64-NEXT: movq %rax, %rdx
-; X64-NEXT: subq %rcx, %rdx
-; X64-NEXT: negq %rdx
-; X64-NEXT: subq %rcx, %rax
-; X64-NEXT: cmovleq %rdx, %rax
+; X64-NEXT: movslq %edi, %rcx
+; X64-NEXT: movswl %si, %eax
+; X64-NEXT: cltq
+; X64-NEXT: subq %rax, %rcx
+; X64-NEXT: movq %rcx, %rax
+; X64-NEXT: negq %rax
+; X64-NEXT: cmovsq %rcx, %rax
; X64-NEXT: # kill: def $eax killed $eax killed $rax
; X64-NEXT: retq
%aext = sext i32 %a to i64
diff --git a/llvm/test/CodeGen/X86/abdu.ll b/llvm/test/CodeGen/X86/abdu.ll
index fe805528c435a52..11719be4ab5cd0c 100644
--- a/llvm/test/CodeGen/X86/abdu.ll
+++ b/llvm/test/CodeGen/X86/abdu.ll
@@ -50,14 +50,13 @@ define i8 @abd_ext_i8_i16(i8 %a, i16 %b) nounwind {
;
; X64-LABEL: abd_ext_i8_i16:
; X64: # %bb.0:
-; X64-NEXT: movzbl %dil, %eax
-; X64-NEXT: movzwl %si, %ecx
-; X64-NEXT: movq %rax, %rdx
-; X64-NEXT: subq %rcx, %rdx
-; X64-NEXT: negq %rdx
-; X64-NEXT: subq %rcx, %rax
-; X64-NEXT: cmovbeq %rdx, %rax
-; X64-NEXT: # kill: def $al killed $al killed $rax
+; X64-NEXT: movzwl %si, %eax
+; X64-NEXT: movzbl %dil, %ecx
+; X64-NEXT: subl %eax, %ecx
+; X64-NEXT: movl %ecx, %eax
+; X64-NEXT: negl %eax
+; X64-NEXT: cmovsl %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
; X64-NEXT: retq
%aext = zext i8 %a to i64
%bext = zext i16 %b to i64
@@ -130,25 +129,24 @@ define i16 @abd_ext_i16(i16 %a, i16 %b) nounwind {
define i16 @abd_ext_i16_i32(i16 %a, i32 %b) nounwind {
; X86-LABEL: abd_ext_i16_i32:
; X86: # %bb.0:
+; X86-NEXT: movl {{[0-9]+}}(%esp), %ecx
; X86-NEXT: movzwl {{[0-9]+}}(%esp), %eax
-; X86-NEXT: xorl %ecx, %ecx
-; X86-NEXT: subl {{[0-9]+}}(%esp), %eax
-; X86-NEXT: sbbl %ecx, %ecx
-; X86-NEXT: sarl $31, %ecx
-; X86-NEXT: xorl %ecx, %eax
+; X86-NEXT: movl %eax, %edx
+; X86-NEXT: subl %ecx, %edx
+; X86-NEXT: negl %edx
; X86-NEXT: subl %ecx, %eax
+; X86-NEXT: cmovbel %edx, %eax
; X86-NEXT: # kill: def $ax killed $ax killed $eax
; X86-NEXT: retl
;
; X64-LABEL: abd_ext_i16_i32:
; X64: # %bb.0:
-; X64-NEXT: movzwl %di, %eax
-; X64-NEXT: movl %esi, %ecx
-; X64-NEXT: movq %rax, %rdx
-; X64-NEXT: subq %rcx, %rdx
-; X64-NEXT: negq %rdx
-; X64-NEXT: subq %rcx, %rax
-; X64-NEXT: cmovbeq %rdx, %rax
+; X64-NEXT: movl %esi, %eax
+; X64-NEXT: movzwl %di, %ecx
+; X64-NEXT: subq %rax, %rcx
+; X64-NEXT: movq %rcx, %rax
+; X64-NEXT: negq %rax
+; X64-NEXT: cmovsq %rcx, %rax
; X64-NEXT: # kill: def $ax killed $ax killed $rax
; X64-NEXT: retq
%aext = zext i16 %a to i64
@@ -224,23 +222,21 @@ define i32 @abd_ext_i32_i16(i32 %a, i16 %b) nounwind {
; X86: # %bb.0:
; X86-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
; X86-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X86-NEXT: xorl %edx, %edx
+; X86-NEXT: movl %eax, %edx
+; X86-NEXT: subl %ecx, %edx
+; X86-NEXT: negl %edx
; X86-NEXT: subl %ecx, %eax
-; X86-NEXT: sbbl %edx, %edx
-; X86-NEXT: sarl $31, %edx
-; X86-NEXT: xorl %edx, %eax
-; X86-NEXT: subl %edx, %eax
+; X86-NEXT: cmovbel %edx, %eax
; X86-NEXT: retl
;
; X64-LABEL: abd_ext_i32_i16:
; X64: # %bb.0:
-; X64-NEXT: movl %edi, %eax
-; X64-NEXT: movzwl %si, %ecx
-; X64-NEXT: movq %rax, %rdx
-; X64-NEXT: subq %rcx, %rdx
-; X64-NEXT: negq %rdx
-; X64-NEXT: subq %rcx, %rax
-; X64-NEXT: cmovbeq %rdx, %rax
+; X64-NEXT: movl %edi, %ecx
+; X64-NEXT: movzwl %si, %eax
+; X64-NEXT: subq %rax, %rcx
+; X64-NEXT: movq %rcx, %rax
+; X64-NEXT: negq %rax
+; X64-NEXT: cmovsq %rcx, %rax
; X64-NEXT: # kill: def $eax killed $eax killed $rax
; X64-NEXT: retq
%aext = zext i32 %a to i64
Would this replace a
I'm happy to try that. I was being conservative, as initially I only needed to cover a few cases (it came up when looking at D152928) - maybe we just need a oneuse check?
Ah, is that where this came from. It looks like there might be an extra SVE test that needs updating. With that this LGTM.