[CGP] Avoid replacing a free ext with multiple other exts. #77094

Merged
merged 3 commits into llvm:main on Jan 18, 2024

Conversation

fhahn
Contributor

@fhahn fhahn commented Jan 5, 2024

Replacing a free extension with 2 or more extensions unnecessarily increases the number of IR instructions without providing any benefit. It also causes operations to be performed on wider types than necessary.

In some cases, the extra extensions also pessimize codegen (see bfis-in-loop.ll).

The changes in arm64-codegen-prepare-extload.ll also show that we avoid promotions that should only be performed in stress mode.
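To illustrate the pattern being avoided, here is a minimal IR sketch (the function and value names are illustrative, not taken from the tests in this patch). On AArch64 the final `zext` below is free because it folds into the extended-register addressing mode of the load:

```llvm
; Hypothetical input: the trailing zext is free (folded into the
; address computation), so promoting the add to i64 gains nothing.
define i16 @load_elt(ptr %base, i8 %a, i32 %b) {
  %ext.a  = zext i8 %a to i32
  %idx    = add nuw i32 %ext.a, %b
  %idx.64 = zext i32 %idx to i64        ; free extension
  %gep    = getelementptr inbounds i16, ptr %base, i64 %idx.64
  %v      = load i16, ptr %gep
  ret i16 %v
}
```

Previously, CodeGenPrepare could promote the `add` to `i64`, replacing the single free `zext` with two `zext`s of the operands (plus a possible `trunc`); with this patch that promotion only happens under `-stress-cgp-ext-ld-promotion`, and the default pipeline keeps the narrower form shown above.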

@llvmbot
Collaborator

llvmbot commented Jan 5, 2024

@llvm/pr-subscribers-backend-x86

Author: Florian Hahn (fhahn)

Changes

Replacing a free extension with 2 or more extensions unnecessarily increases the number of IR instructions without providing any benefits. It also unnecessarily causes operations to be performed on wider types than necessary.

In some cases, the extra extensions also pessimize codegen (see bfis-in-loop.ll).

The changes in arm64-codegen-prepare-extload.ll also show that we avoid promotions that should only be performed in stress mode.


Full diff: https://github.com/llvm/llvm-project/pull/77094.diff

5 Files Affected:

  • (modified) llvm/lib/CodeGen/CodeGenPrepare.cpp (+5-2)
  • (modified) llvm/test/CodeGen/AArch64/arm64-codegen-prepare-extload.ll (+15-7)
  • (modified) llvm/test/CodeGen/AArch64/avoid-free-ext-promotion.ll (+6-6)
  • (modified) llvm/test/CodeGen/AArch64/bfis-in-loop.ll (+24-26)
  • (modified) llvm/test/CodeGen/X86/inline-spiller-impdef-on-implicit-def-regression.ll (+31-32)
diff --git a/llvm/lib/CodeGen/CodeGenPrepare.cpp b/llvm/lib/CodeGen/CodeGenPrepare.cpp
index 5bd4c6b067d796..606946ceffd4f3 100644
--- a/llvm/lib/CodeGen/CodeGenPrepare.cpp
+++ b/llvm/lib/CodeGen/CodeGenPrepare.cpp
@@ -5965,7 +5965,9 @@ bool CodeGenPrepare::tryToPromoteExts(
     // cut this search path, because it means we degrade the code quality.
     // With exactly 2, the transformation is neutral, because we will merge
     // one extension but leave one. However, we optimistically keep going,
-    // because the new extension may be removed too.
+    // because the new extension may be removed too. Also avoid replacing a
+    // single free extension with multiple extensions, as this increases the
+    // number of IR instructions without providing any savings.
     long long TotalCreatedInstsCost = CreatedInstsCost + NewCreatedInstsCost;
     // FIXME: It would be possible to propagate a negative value instead of
     // conservatively ceiling it to 0.
@@ -5973,7 +5975,8 @@ bool CodeGenPrepare::tryToPromoteExts(
         std::max((long long)0, (TotalCreatedInstsCost - ExtCost));
     if (!StressExtLdPromotion &&
         (TotalCreatedInstsCost > 1 ||
-         !isPromotedInstructionLegal(*TLI, *DL, PromotedVal))) {
+         !isPromotedInstructionLegal(*TLI, *DL, PromotedVal) ||
+         (ExtCost == 0 && NewExts.size() > 1))) {
       // This promotion is not profitable, rollback to the previous state, and
       // save the current extension in ProfitablyMovedExts as the latest
       // speculative promotion turned out to be unprofitable.
diff --git a/llvm/test/CodeGen/AArch64/arm64-codegen-prepare-extload.ll b/llvm/test/CodeGen/AArch64/arm64-codegen-prepare-extload.ll
index 23cbad0d15b4c1..646f988f574813 100644
--- a/llvm/test/CodeGen/AArch64/arm64-codegen-prepare-extload.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-codegen-prepare-extload.ll
@@ -528,10 +528,14 @@ entry:
 ; OPTALL: [[LD:%[a-zA-Z_0-9-]+]] = load i8, ptr %p
 ;
 ; This transformation should really happen only for stress mode.
-; OPT-NEXT: [[ZEXT64:%[a-zA-Z_0-9-]+]] = zext i8 [[LD]] to i64
-; OPT-NEXT: [[ZEXTB:%[a-zA-Z_0-9-]+]] = zext i32 %b to i64
-; OPT-NEXT: [[IDX64:%[a-zA-Z_0-9-]+]] = add nuw i64 [[ZEXT64]], [[ZEXTB]]
-; OPT-NEXT: [[RES32:%[a-zA-Z_0-9-]+]] = trunc i64 [[IDX64]] to i32
+; STRESS-NEXT: [[ZEXT64:%[a-zA-Z_0-9-]+]] = zext i8 [[LD]] to i64
+; STRESS-NEXT: [[ZEXTB:%[a-zA-Z_0-9-]+]] = zext i32 %b to i64
+; STRESS-NEXT: [[IDX64:%[a-zA-Z_0-9-]+]] = add nuw i64 [[ZEXT64]], [[ZEXTB]]
+; STRESS-NEXT: [[RES32:%[a-zA-Z_0-9-]+]] = trunc i64 [[IDX64]] to i32
+;
+; NONSTRESS-NEXT: [[ZEXT32:%[a-zA-Z_0-9-]+]] = zext i8 [[LD]] to i32
+; NONSTRESS-NEXT: [[RES32:%[a-zA-Z_0-9-]+]] = add nuw i32 [[ZEXT32]], %b
+; NONSTRESS-NEXT: [[IDX64:%[a-zA-Z_0-9-]+]] = zext i32 [[RES32]] to i64
 ;
 ; DISABLE-NEXT: [[ZEXT32:%[a-zA-Z_0-9-]+]] = zext i8 [[LD]] to i32
 ; DISABLE-NEXT: [[RES32:%[a-zA-Z_0-9-]+]] = add nuw i32 [[ZEXT32]], %b
@@ -583,9 +587,13 @@ entry:
 ; OPTALL: [[LD:%[a-zA-Z_0-9-]+]] = load i8, ptr %p
 ;
 ; This transformation should really happen only for stress mode.
-; OPT-NEXT: [[ZEXT64:%[a-zA-Z_0-9-]+]] = zext i8 [[LD]] to i64
-; OPT-NEXT: [[ZEXTB:%[a-zA-Z_0-9-]+]] = zext i32 %b to i64
-; OPT-NEXT: [[IDX64:%[a-zA-Z_0-9-]+]] = add nuw i64 [[ZEXT64]], [[ZEXTB]]
+; STRESS-NEXT: [[ZEXT64:%[a-zA-Z_0-9-]+]] = zext i8 [[LD]] to i64
+; STRESS-NEXT: [[ZEXTB:%[a-zA-Z_0-9-]+]] = zext i32 %b to i64
+; STRESS-NEXT: [[IDX64:%[a-zA-Z_0-9-]+]] = add nuw i64 [[ZEXT64]], [[ZEXTB]]
+;
+; NONSTRESS-NEXT: [[ZEXT32:%[a-zA-Z_0-9-]+]] = zext i8 [[LD]] to i32
+; NONSTRESS-NEXT: [[RES32:%[a-zA-Z_0-9-]+]] = add nuw i32 [[ZEXT32]], %b
+; NONSTRESS-NEXT: [[IDX64:%[a-zA-Z_0-9-]+]] = zext i32 [[RES32]] to i64
 ;
 ; DISABLE-NEXT: [[ZEXT32:%[a-zA-Z_0-9-]+]] = zext i8 [[LD]] to i32
 ; DISABLE-NEXT: [[RES32:%[a-zA-Z_0-9-]+]] = add nuw i32 [[ZEXT32]], %b
diff --git a/llvm/test/CodeGen/AArch64/avoid-free-ext-promotion.ll b/llvm/test/CodeGen/AArch64/avoid-free-ext-promotion.ll
index 35f871e504ca82..8f195531927e06 100644
--- a/llvm/test/CodeGen/AArch64/avoid-free-ext-promotion.ll
+++ b/llvm/test/CodeGen/AArch64/avoid-free-ext-promotion.ll
@@ -24,9 +24,9 @@ define void @avoid_promotion_1_and(ptr nocapture noundef %arg, ptr %p) {
 ; CHECK-NEXT:    ldr w11, [x1, #76]
 ; CHECK-NEXT:    ldr w12, [x1]
 ; CHECK-NEXT:    eor w10, w10, w11
-; CHECK-NEXT:    and x10, x10, x12
+; CHECK-NEXT:    and w10, w10, w12
 ; CHECK-NEXT:    str w10, [x0, #32]
-; CHECK-NEXT:    strh w9, [x1, x10, lsl #1]
+; CHECK-NEXT:    strh w9, [x1, w10, uxtw #1]
 ; CHECK-NEXT:    b LBB0_1
 bb:
   %gep = getelementptr inbounds %struct.zot, ptr %arg, i64 0, i32 9
@@ -81,13 +81,13 @@ define void @avoid_promotion_2_and(ptr nocapture noundef %arg) {
 ; CHECK-NEXT:    ldrb w11, [x11, x12]
 ; CHECK-NEXT:    eor w10, w10, w11
 ; CHECK-NEXT:    ldur w11, [x8, #-24]
-; CHECK-NEXT:    and x10, x10, x14
+; CHECK-NEXT:    and w10, w10, w14
 ; CHECK-NEXT:    ldp x15, x14, [x8, #-16]
-; CHECK-NEXT:    lsl x13, x10, #1
+; CHECK-NEXT:    ubfiz x13, x10, #1, #32
 ; CHECK-NEXT:    str w10, [x8]
-; CHECK-NEXT:    and x10, x11, x12
+; CHECK-NEXT:    and w10, w11, w12
 ; CHECK-NEXT:    ldrh w11, [x14, x13]
-; CHECK-NEXT:    strh w11, [x15, x10, lsl #1]
+; CHECK-NEXT:    strh w11, [x15, w10, uxtw #1]
 ; CHECK-NEXT:    strh w12, [x14, x13]
 ; CHECK-NEXT:    b LBB1_1
 ; CHECK-NEXT:  LBB1_4: ; %exit
diff --git a/llvm/test/CodeGen/AArch64/bfis-in-loop.ll b/llvm/test/CodeGen/AArch64/bfis-in-loop.ll
index b66b149bd643fa..6b12d954b9d1ca 100644
--- a/llvm/test/CodeGen/AArch64/bfis-in-loop.ll
+++ b/llvm/test/CodeGen/AArch64/bfis-in-loop.ll
@@ -20,19 +20,18 @@ define i64 @bfis_in_loop_zero() {
 ; CHECK-NEXT: 	ldr	x8, [x8]
 ; CHECK-NEXT: .LBB0_1:                                // %midblock
 ; CHECK-NEXT:   // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: 	ldrh	w10, [x8, #72]
-; CHECK-NEXT: 	ldr	x13, [x8, #8]
-; CHECK-NEXT: 	ubfx	x11, x10, #8, #24
-; CHECK-NEXT: 	cmp	w10, #0
-; CHECK-NEXT: 	and	x10, x10, #0xff
-; CHECK-NEXT: 	cset	w12, ne
-; CHECK-NEXT: 	ldr	x8, [x13, #16]
-; CHECK-NEXT: 	csel	w9, w9, w11, eq
-; CHECK-NEXT: 	and	x11, x0, #0xffffffff00000000
-; CHECK-NEXT: 	orr	x10, x10, x9, lsl #8
-; CHECK-NEXT: 	orr	x11, x11, x12, lsl #16
-; CHECK-NEXT: 	orr	x0, x11, x10
-; CHECK-NEXT: 	cbnz	x13, .LBB0_1
+; CHECK-NEXT:	ldrh	w10, [x8, #72]
+; CHECK-NEXT:	ldr	x13, [x8, #8]
+; CHECK-NEXT:	lsr	w11, w10, #8
+; CHECK-NEXT:	cmp	w10, #0
+; CHECK-NEXT:	ldr	x8, [x13, #16]
+; CHECK-NEXT:	cset	w12, ne
+; CHECK-NEXT:	csel	w9, w9, w11, eq
+; CHECK-NEXT:	and	x11, x0, #0xffffffff00000000
+; CHECK-NEXT:	bfi	w10, w9, #8, #24
+; CHECK-NEXT:	orr	x11, x11, x12, lsl #16
+; CHECK-NEXT:	orr	x0, x11, x10
+; CHECK-NEXT:	cbnz	x13, .LBB0_1
 ; CHECK-NEXT:  // %bb.2: // %exit
 ; CHECK-NEXT:    ret
 entry:
@@ -88,19 +87,18 @@ define i64 @bfis_in_loop_undef() {
 ; CHECK-NEXT: 	ldr	x9, [x9]
 ; CHECK-NEXT: .LBB1_1:                                // %midblock
 ; CHECK-NEXT:                                         // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: 	ldrh	w10, [x9, #72]
-; CHECK-NEXT: 	ldr	x13, [x9, #8]
-; CHECK-NEXT: 	ubfx	x11, x10, #8, #24
-; CHECK-NEXT: 	cmp	w10, #0
-; CHECK-NEXT: 	and	x10, x10, #0xff
-; CHECK-NEXT: 	cset	w12, ne
-; CHECK-NEXT: 	ldr	x9, [x13, #16]
-; CHECK-NEXT: 	csel	w8, w8, w11, eq
-; CHECK-NEXT: 	and	x11, x0, #0xffffffff00000000
-; CHECK-NEXT: 	orr	x10, x10, x8, lsl #8
-; CHECK-NEXT: 	orr	x11, x11, x12, lsl #16
-; CHECK-NEXT: 	orr	x0, x11, x10
-; CHECK-NEXT: 	cbnz	x13, .LBB1_1
+; CHECK-NEXT:	ldrh	w10, [x9, #72]
+; CHECK-NEXT:	ldr	x13, [x9, #8]
+; CHECK-NEXT:	lsr	w11, w10, #8
+; CHECK-NEXT:	cmp	w10, #0
+; CHECK-NEXT:	ldr	x9, [x13, #16]
+; CHECK-NEXT:	cset	w12, ne
+; CHECK-NEXT:	csel	w8, w8, w11, eq
+; CHECK-NEXT:	and	x11, x0, #0xffffffff00000000
+; CHECK-NEXT:	bfi	w10, w8, #8, #24
+; CHECK-NEXT:	orr	x11, x11, x12, lsl #16
+; CHECK-NEXT:	orr	x0, x11, x10
+; CHECK-NEXT:	cbnz	x13, .LBB1_1
 ; CHECK-NEXT:  // %bb.2: // %exit
 ; CHECK-NEXT:    ret
 entry:
diff --git a/llvm/test/CodeGen/X86/inline-spiller-impdef-on-implicit-def-regression.ll b/llvm/test/CodeGen/X86/inline-spiller-impdef-on-implicit-def-regression.ll
index 8b8500ef724866..0250b1b4a7f861 100644
--- a/llvm/test/CodeGen/X86/inline-spiller-impdef-on-implicit-def-regression.ll
+++ b/llvm/test/CodeGen/X86/inline-spiller-impdef-on-implicit-def-regression.ll
@@ -20,57 +20,56 @@ define i32 @decode_sb(ptr %t, i32 %bl, i32 %_msprop1966, i32 %sub.i, i64 %idxpro
 ; CHECK-NEXT:    pushq %r13
 ; CHECK-NEXT:    pushq %r12
 ; CHECK-NEXT:    pushq %rbx
-; CHECK-NEXT:    subq $24, %rsp
+; CHECK-NEXT:    pushq %rax
 ; CHECK-NEXT:    .cfi_offset %rbx, -56
 ; CHECK-NEXT:    .cfi_offset %r12, -48
 ; CHECK-NEXT:    .cfi_offset %r13, -40
 ; CHECK-NEXT:    .cfi_offset %r14, -32
 ; CHECK-NEXT:    .cfi_offset %r15, -24
 ; CHECK-NEXT:    movl %r9d, %ebx
+; CHECK-NEXT:    # kill: def $edx killed $edx def $rdx
 ; CHECK-NEXT:    movabsq $87960930222080, %r15 # imm = 0x500000000000
-; CHECK-NEXT:    movl 0, %r13d
+; CHECK-NEXT:    movl 0, %r11d
 ; CHECK-NEXT:    movl %esi, %r12d
-; CHECK-NEXT:    # implicit-def: $eax
-; CHECK-NEXT:    movq %rax, {{[-0-9]+}}(%r{{[sb]}}p) # 8-byte Spill
+; CHECK-NEXT:    # implicit-def: $r13d
 ; CHECK-NEXT:    testb $1, %bl
 ; CHECK-NEXT:    jne .LBB0_7
 ; CHECK-NEXT:  # %bb.1: # %if.else
 ; CHECK-NEXT:    movq %r8, %r14
-; CHECK-NEXT:    movl %ecx, %eax
-; CHECK-NEXT:    andl $1, %eax
-; CHECK-NEXT:    movq %rax, {{[-0-9]+}}(%r{{[sb]}}p) # 8-byte Spill
-; CHECK-NEXT:    movzbl 544(%rax), %eax
-; CHECK-NEXT:    andl $1, %eax
+; CHECK-NEXT:    movl %ecx, %r13d
+; CHECK-NEXT:    andl $1, %r13d
+; CHECK-NEXT:    movzbl 544(%r13), %r8d
+; CHECK-NEXT:    andl $1, %r8d
 ; CHECK-NEXT:    movl %r15d, %r9d
 ; CHECK-NEXT:    andl $1, %r9d
 ; CHECK-NEXT:    movl %r14d, %r10d
 ; CHECK-NEXT:    andl $1, %r10d
-; CHECK-NEXT:    movl %esi, %r11d
+; CHECK-NEXT:    movabsq $17592186044416, %rax # imm = 0x100000000000
+; CHECK-NEXT:    orq %r10, %rax
+; CHECK-NEXT:    movl %esi, %r10d
 ; CHECK-NEXT:    # kill: def $cl killed $cl killed $ecx
-; CHECK-NEXT:    shrl %cl, %r11d
-; CHECK-NEXT:    movabsq $17592186044416, %r8 # imm = 0x100000000000
-; CHECK-NEXT:    orq %r10, %r8
-; CHECK-NEXT:    andl $2, %r11d
+; CHECK-NEXT:    shrl %cl, %r10d
+; CHECK-NEXT:    andl $2, %r10d
 ; CHECK-NEXT:    testb $1, %bl
-; CHECK-NEXT:    cmoveq %r9, %r8
-; CHECK-NEXT:    movl %edx, %ecx
-; CHECK-NEXT:    orq %rax, %rcx
-; CHECK-NEXT:    movq %r13, {{[-0-9]+}}(%r{{[sb]}}p) # 8-byte Spill
-; CHECK-NEXT:    orq $1, %r13
-; CHECK-NEXT:    orl %esi, %r11d
-; CHECK-NEXT:    movl $1, %edx
+; CHECK-NEXT:    cmoveq %r9, %rax
+; CHECK-NEXT:    orl %r8d, %edx
+; CHECK-NEXT:    movq %r11, {{[-0-9]+}}(%r{{[sb]}}p) # 8-byte Spill
+; CHECK-NEXT:    movq %r11, %rcx
+; CHECK-NEXT:    orq $1, %rcx
+; CHECK-NEXT:    orl %esi, %r10d
+; CHECK-NEXT:    movl $1, %r8d
 ; CHECK-NEXT:    je .LBB0_3
 ; CHECK-NEXT:  # %bb.2: # %if.else
-; CHECK-NEXT:    movl (%r8), %edx
+; CHECK-NEXT:    movl (%rax), %r8d
 ; CHECK-NEXT:  .LBB0_3: # %if.else
-; CHECK-NEXT:    shlq $5, %rcx
-; CHECK-NEXT:    movq %r12, %rsi
-; CHECK-NEXT:    shlq $7, %rsi
-; CHECK-NEXT:    addq %rcx, %rsi
+; CHECK-NEXT:    shlq $5, %rdx
+; CHECK-NEXT:    movq %r12, %rax
+; CHECK-NEXT:    shlq $7, %rax
+; CHECK-NEXT:    leaq (%rax,%rdx), %rsi
 ; CHECK-NEXT:    addq $1248, %rsi # imm = 0x4E0
-; CHECK-NEXT:    movq %r13, 0
+; CHECK-NEXT:    movq %rcx, 0
 ; CHECK-NEXT:    movq %rdi, %r15
-; CHECK-NEXT:    movl %edx, (%rdi)
+; CHECK-NEXT:    movl %r8d, (%rdi)
 ; CHECK-NEXT:    xorl %eax, %eax
 ; CHECK-NEXT:    xorl %edi, %edi
 ; CHECK-NEXT:    xorl %edx, %edx
@@ -86,10 +85,10 @@ define i32 @decode_sb(ptr %t, i32 %bl, i32 %_msprop1966, i32 %sub.i, i64 %idxpro
 ; CHECK-NEXT:    testb $1, %bl
 ; CHECK-NEXT:    movq %r15, %rdi
 ; CHECK-NEXT:    movabsq $87960930222080, %r15 # imm = 0x500000000000
-; CHECK-NEXT:    movq {{[-0-9]+}}(%r{{[sb]}}p), %r13 # 8-byte Reload
+; CHECK-NEXT:    movq {{[-0-9]+}}(%r{{[sb]}}p), %r11 # 8-byte Reload
 ; CHECK-NEXT:    jne .LBB0_8
 ; CHECK-NEXT:  .LBB0_7: # %if.end69
-; CHECK-NEXT:    movl %r13d, 0
+; CHECK-NEXT:    movl %r11d, 0
 ; CHECK-NEXT:    xorl %eax, %eax
 ; CHECK-NEXT:    xorl %esi, %esi
 ; CHECK-NEXT:    xorl %edx, %edx
@@ -97,12 +96,12 @@ define i32 @decode_sb(ptr %t, i32 %bl, i32 %_msprop1966, i32 %sub.i, i64 %idxpro
 ; CHECK-NEXT:    xorl %r8d, %r8d
 ; CHECK-NEXT:    callq *%rax
 ; CHECK-NEXT:    xorq %r15, %r12
-; CHECK-NEXT:    movslq {{[-0-9]+}}(%r{{[sb]}}p), %rax # 4-byte Folded Reload
+; CHECK-NEXT:    movslq %r13d, %rax
 ; CHECK-NEXT:    movzbl (%r12), %ecx
 ; CHECK-NEXT:    movb %cl, 544(%rax)
 ; CHECK-NEXT:  .LBB0_8: # %land.lhs.true56
 ; CHECK-NEXT:    xorl %eax, %eax
-; CHECK-NEXT:    addq $24, %rsp
+; CHECK-NEXT:    addq $8, %rsp
 ; CHECK-NEXT:    popq %rbx
 ; CHECK-NEXT:    popq %r12
 ; CHECK-NEXT:    popq %r13

@llvmbot
Collaborator

llvmbot commented Jan 5, 2024

@llvm/pr-subscribers-backend-aarch64

@fhahn fhahn requested a review from RKSimon January 5, 2024 13:40
Collaborator

@RKSimon RKSimon left a comment


LGTM - cheers

@fhahn fhahn merged commit 40d952b into llvm:main Jan 18, 2024
3 of 4 checks passed
@fhahn fhahn deleted the cgp-free-exts branch January 18, 2024 10:48
fhahn added a commit to fhahn/llvm-project that referenced this pull request Jan 18, 2024
PR: llvm#77094

(cherry-picked from 40d952b)
ampandey-1995 pushed a commit to ampandey-1995/llvm-project that referenced this pull request Jan 19, 2024
PR: llvm#77094