
[AMDGPU] Use correct number of bits needed for div/rem shrinking #80622

Merged: 1 commit into llvm:main on Feb 6, 2024

Conversation

@choikwa (Contributor) commented Feb 4, 2024

There was an error where a dividend of type i64 whose actual number of used bits was 32 fell into the path that assumes only 24 bits are in use. Pass the correct AtLeast value when using computeNumSignBits, and add the necessary extend/trunc for the 32-bit path.

Regolden and update test cases.

@jrbyrnes @bcahoon @arsenm @rampitec
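
For context, a minimal standalone sketch of the width check being fixed (not the PR's code; the helper name and printed diagnostics are illustrative). An i64 dividend with 32 used bits clears the old hardcoded threshold of 9 sign bits but is far too wide for a 24-bit expansion:

```cpp
#include <cstdio>

// Hypothetical mirror of the patched threshold in expandDivRem24: a value
// of width SSBits fits the 24-bit path only if at least SSBits - 24 of its
// top bits are known copies of the sign bit (or known zeros).
unsigned minSignBitsFor24BitPath(unsigned SSBits) {
  return SSBits <= 24 ? 0 : SSBits - 24; // 8 for i32, 40 for i64
}

int main() {
  // An i64 masked to its low 32 bits has 32 known sign bits: enough for
  // the old fixed threshold of 9, but short of the 40 that a genuine
  // 24-bit value needs, so the 24-bit expansion produced wrong results.
  unsigned SSBits = 64, KnownSignBits = 32;
  printf("old threshold 9: %s; fixed threshold %u: %s\n",
         KnownSignBits >= 9 ? "24-bit path taken (miscompile)" : "rejected",
         minSignBitsFor24BitPath(SSBits),
         KnownSignBits >= minSignBitsFor24BitPath(SSBits) ? "taken"
                                                          : "rejected");
  return 0;
}
```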

github-actions bot commented Feb 4, 2024

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be
notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write
permissions for the repository; in that case, you can instead tag reviewers by
name in a comment using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review
by pinging the PR with a comment saying "Ping". The common courtesy ping rate
is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@llvmbot (Collaborator) commented Feb 4, 2024

@llvm/pr-subscribers-llvm-globalisel

@llvm/pr-subscribers-backend-amdgpu

Author: choikwa (choikwa)

Changes

There was an error where a dividend of type i64 whose actual number of used bits was 32 fell into the path that assumes only 24 bits are in use. Pass the correct AtLeast value when using computeNumSignBits, and add the necessary extend/trunc for the 32-bit path.

Regolden and update test cases.

@jrbyrnes @bcahoon @arsenm @rampitec


Patch is 97.13 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/80622.diff

13 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp (+12-9)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/sdiv.i32.ll (+39-57)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/srem.i32.ll (+45-51)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/udiv.i32.ll (+31-55)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/urem.i32.ll (+39-51)
  • (modified) llvm/test/CodeGen/AMDGPU/bypass-div.ll (+36-17)
  • (modified) llvm/test/CodeGen/AMDGPU/sdiv.ll (+74-123)
  • (modified) llvm/test/CodeGen/AMDGPU/sdiv64.ll (+156-86)
  • (modified) llvm/test/CodeGen/AMDGPU/sdivrem24.ll (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/srem64.ll (+142-98)
  • (modified) llvm/test/CodeGen/AMDGPU/udiv.ll (+81-127)
  • (modified) llvm/test/CodeGen/AMDGPU/udiv64.ll (+102-48)
  • (modified) llvm/test/CodeGen/AMDGPU/urem64.ll (+232-156)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp b/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp
index 87b1957c799e2..cfe996121bf26 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp
@@ -1213,7 +1213,10 @@ Value *AMDGPUCodeGenPrepareImpl::expandDivRem24(IRBuilder<> &Builder,
                                                 BinaryOperator &I, Value *Num,
                                                 Value *Den, bool IsDiv,
                                                 bool IsSigned) const {
-  int DivBits = getDivNumBits(I, Num, Den, 9, IsSigned);
+  unsigned SSBits = Num->getType()->getScalarSizeInBits();
+  // If Num bits <= 24, assume 0 signbits.
+  unsigned AtLeast = (SSBits <= 24) ? 0 : (SSBits - 24);
+  int DivBits = getDivNumBits(I, Num, Den, AtLeast, IsSigned);
   if (DivBits == -1)
     return nullptr;
   return expandDivRem24Impl(Builder, I, Num, Den, DivBits, IsDiv, IsSigned);
@@ -1385,13 +1388,13 @@ Value *AMDGPUCodeGenPrepareImpl::expandDivRem32(IRBuilder<> &Builder,
   Type *I32Ty = Builder.getInt32Ty();
   Type *F32Ty = Builder.getFloatTy();
 
-  if (Ty->getScalarSizeInBits() < 32) {
+  if (Ty->getScalarSizeInBits() != 32) {
     if (IsSigned) {
-      X = Builder.CreateSExt(X, I32Ty);
-      Y = Builder.CreateSExt(Y, I32Ty);
+      X = Builder.CreateSExtOrTrunc(X, I32Ty);
+      Y = Builder.CreateSExtOrTrunc(Y, I32Ty);
     } else {
-      X = Builder.CreateZExt(X, I32Ty);
-      Y = Builder.CreateZExt(Y, I32Ty);
+      X = Builder.CreateZExtOrTrunc(X, I32Ty);
+      Y = Builder.CreateZExtOrTrunc(Y, I32Ty);
     }
   }
 
@@ -1482,10 +1485,10 @@ Value *AMDGPUCodeGenPrepareImpl::expandDivRem32(IRBuilder<> &Builder,
   if (IsSigned) {
     Res = Builder.CreateXor(Res, Sign);
     Res = Builder.CreateSub(Res, Sign);
+    Res = Builder.CreateSExtOrTrunc(Res, Ty);
+  } else {
+    Res = Builder.CreateZExtOrTrunc(Res, Ty);
   }
-
-  Res = Builder.CreateTrunc(Res, Ty);
-
   return Res;
 }
 
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/sdiv.i32.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/sdiv.i32.ll
index 1061f0003bd48..ffb3239806728 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/sdiv.i32.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/sdiv.i32.ll
@@ -741,25 +741,17 @@ define i32 @v_sdiv_i32_24bit(i32 %num, i32 %den) {
 ; CGP-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; CGP-NEXT:    v_and_b32_e32 v0, 0xffffff, v0
 ; CGP-NEXT:    v_and_b32_e32 v1, 0xffffff, v1
-; CGP-NEXT:    v_cvt_f32_u32_e32 v2, v1
-; CGP-NEXT:    v_sub_i32_e32 v3, vcc, 0, v1
-; CGP-NEXT:    v_rcp_f32_e32 v2, v2
-; CGP-NEXT:    v_mul_f32_e32 v2, 0x4f7ffffe, v2
-; CGP-NEXT:    v_cvt_u32_f32_e32 v2, v2
-; CGP-NEXT:    v_mul_lo_u32 v3, v3, v2
-; CGP-NEXT:    v_mul_hi_u32 v3, v2, v3
-; CGP-NEXT:    v_add_i32_e32 v2, vcc, v2, v3
-; CGP-NEXT:    v_mul_hi_u32 v2, v0, v2
-; CGP-NEXT:    v_mul_lo_u32 v3, v2, v1
-; CGP-NEXT:    v_add_i32_e32 v4, vcc, 1, v2
-; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v3
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v2, v2, v4, vcc
-; CGP-NEXT:    v_sub_i32_e64 v3, s[4:5], v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v3, vcc
-; CGP-NEXT:    v_add_i32_e32 v3, vcc, 1, v2
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v2, v3, vcc
+; CGP-NEXT:    v_cvt_f32_i32_e32 v0, v0
+; CGP-NEXT:    v_cvt_f32_i32_e32 v1, v1
+; CGP-NEXT:    v_rcp_f32_e32 v2, v1
+; CGP-NEXT:    v_mul_f32_e32 v2, v0, v2
+; CGP-NEXT:    v_trunc_f32_e32 v2, v2
+; CGP-NEXT:    v_fma_f32 v0, -v2, v1, v0
+; CGP-NEXT:    v_cvt_i32_f32_e32 v2, v2
+; CGP-NEXT:    v_cmp_ge_f32_e64 s[4:5], |v0|, |v1|
+; CGP-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s[4:5]
+; CGP-NEXT:    v_add_i32_e32 v0, vcc, v2, v0
+; CGP-NEXT:    v_bfe_i32 v0, v0, 0, 25
 ; CGP-NEXT:    s_setpc_b64 s[30:31]
   %num.mask = and i32 %num, 16777215
   %den.mask = and i32 %den, 16777215
@@ -840,44 +832,34 @@ define <2 x i32> @v_sdiv_v2i32_24bit(<2 x i32> %num, <2 x i32> %den) {
 ; CGP-NEXT:    v_and_b32_e32 v1, 0xffffff, v1
 ; CGP-NEXT:    v_and_b32_e32 v2, 0xffffff, v2
 ; CGP-NEXT:    v_and_b32_e32 v3, 0xffffff, v3
-; CGP-NEXT:    v_cvt_f32_u32_e32 v4, v2
-; CGP-NEXT:    v_sub_i32_e32 v5, vcc, 0, v2
-; CGP-NEXT:    v_cvt_f32_u32_e32 v6, v3
-; CGP-NEXT:    v_sub_i32_e32 v7, vcc, 0, v3
-; CGP-NEXT:    v_rcp_f32_e32 v4, v4
-; CGP-NEXT:    v_rcp_f32_e32 v6, v6
-; CGP-NEXT:    v_mul_f32_e32 v4, 0x4f7ffffe, v4
-; CGP-NEXT:    v_mul_f32_e32 v6, 0x4f7ffffe, v6
-; CGP-NEXT:    v_cvt_u32_f32_e32 v4, v4
-; CGP-NEXT:    v_cvt_u32_f32_e32 v6, v6
-; CGP-NEXT:    v_mul_lo_u32 v5, v5, v4
-; CGP-NEXT:    v_mul_lo_u32 v7, v7, v6
-; CGP-NEXT:    v_mul_hi_u32 v5, v4, v5
-; CGP-NEXT:    v_mul_hi_u32 v7, v6, v7
-; CGP-NEXT:    v_add_i32_e32 v4, vcc, v4, v5
-; CGP-NEXT:    v_add_i32_e32 v5, vcc, v6, v7
-; CGP-NEXT:    v_mul_hi_u32 v4, v0, v4
-; CGP-NEXT:    v_mul_hi_u32 v5, v1, v5
-; CGP-NEXT:    v_mul_lo_u32 v6, v4, v2
-; CGP-NEXT:    v_add_i32_e32 v7, vcc, 1, v4
-; CGP-NEXT:    v_mul_lo_u32 v8, v5, v3
-; CGP-NEXT:    v_add_i32_e32 v9, vcc, 1, v5
-; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v6
-; CGP-NEXT:    v_sub_i32_e32 v1, vcc, v1, v8
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v2
-; CGP-NEXT:    v_cndmask_b32_e32 v4, v4, v7, vcc
-; CGP-NEXT:    v_sub_i32_e64 v6, s[4:5], v0, v2
-; CGP-NEXT:    v_cmp_ge_u32_e64 s[4:5], v1, v3
-; CGP-NEXT:    v_cndmask_b32_e64 v5, v5, v9, s[4:5]
-; CGP-NEXT:    v_sub_i32_e64 v7, s[6:7], v1, v3
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v6, vcc
-; CGP-NEXT:    v_add_i32_e32 v6, vcc, 1, v4
-; CGP-NEXT:    v_cndmask_b32_e64 v1, v1, v7, s[4:5]
-; CGP-NEXT:    v_add_i32_e32 v7, vcc, 1, v5
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v2
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v4, v6, vcc
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v1, v3
-; CGP-NEXT:    v_cndmask_b32_e32 v1, v5, v7, vcc
+; CGP-NEXT:    v_xor_b32_e32 v4, v0, v2
+; CGP-NEXT:    v_cvt_f32_i32_e32 v0, v0
+; CGP-NEXT:    v_cvt_f32_i32_e32 v2, v2
+; CGP-NEXT:    v_xor_b32_e32 v5, v1, v3
+; CGP-NEXT:    v_cvt_f32_i32_e32 v1, v1
+; CGP-NEXT:    v_cvt_f32_i32_e32 v3, v3
+; CGP-NEXT:    v_ashrrev_i32_e32 v4, 30, v4
+; CGP-NEXT:    v_rcp_f32_e32 v6, v2
+; CGP-NEXT:    v_ashrrev_i32_e32 v5, 30, v5
+; CGP-NEXT:    v_rcp_f32_e32 v7, v3
+; CGP-NEXT:    v_or_b32_e32 v4, 1, v4
+; CGP-NEXT:    v_mul_f32_e32 v6, v0, v6
+; CGP-NEXT:    v_or_b32_e32 v5, 1, v5
+; CGP-NEXT:    v_mul_f32_e32 v7, v1, v7
+; CGP-NEXT:    v_trunc_f32_e32 v6, v6
+; CGP-NEXT:    v_trunc_f32_e32 v7, v7
+; CGP-NEXT:    v_fma_f32 v0, -v6, v2, v0
+; CGP-NEXT:    v_cvt_i32_f32_e32 v6, v6
+; CGP-NEXT:    v_fma_f32 v1, -v7, v3, v1
+; CGP-NEXT:    v_cvt_i32_f32_e32 v7, v7
+; CGP-NEXT:    v_cmp_ge_f32_e64 vcc, |v0|, |v2|
+; CGP-NEXT:    v_cndmask_b32_e32 v0, 0, v4, vcc
+; CGP-NEXT:    v_cmp_ge_f32_e64 vcc, |v1|, |v3|
+; CGP-NEXT:    v_cndmask_b32_e32 v1, 0, v5, vcc
+; CGP-NEXT:    v_add_i32_e32 v0, vcc, v6, v0
+; CGP-NEXT:    v_add_i32_e32 v1, vcc, v7, v1
+; CGP-NEXT:    v_bfe_i32 v0, v0, 0, 25
+; CGP-NEXT:    v_bfe_i32 v1, v1, 0, 25
 ; CGP-NEXT:    s_setpc_b64 s[30:31]
   %num.mask = and <2 x i32> %num, <i32 16777215, i32 16777215>
   %den.mask = and <2 x i32> %den, <i32 16777215, i32 16777215>
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/srem.i32.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/srem.i32.ll
index 1bb606f36e48d..783809a11a5f9 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/srem.i32.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/srem.i32.ll
@@ -679,23 +679,19 @@ define i32 @v_srem_i32_24bit(i32 %num, i32 %den) {
 ; CGP-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; CGP-NEXT:    v_and_b32_e32 v0, 0xffffff, v0
 ; CGP-NEXT:    v_and_b32_e32 v1, 0xffffff, v1
-; CGP-NEXT:    v_cvt_f32_u32_e32 v2, v1
-; CGP-NEXT:    v_sub_i32_e32 v3, vcc, 0, v1
-; CGP-NEXT:    v_rcp_f32_e32 v2, v2
-; CGP-NEXT:    v_mul_f32_e32 v2, 0x4f7ffffe, v2
-; CGP-NEXT:    v_cvt_u32_f32_e32 v2, v2
-; CGP-NEXT:    v_mul_lo_u32 v3, v3, v2
-; CGP-NEXT:    v_mul_hi_u32 v3, v2, v3
-; CGP-NEXT:    v_add_i32_e32 v2, vcc, v2, v3
-; CGP-NEXT:    v_mul_hi_u32 v2, v0, v2
-; CGP-NEXT:    v_mul_lo_u32 v2, v2, v1
-; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v2
-; CGP-NEXT:    v_sub_i32_e32 v2, vcc, v0, v1
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v2, vcc
-; CGP-NEXT:    v_sub_i32_e32 v2, vcc, v0, v1
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v2, vcc
+; CGP-NEXT:    v_cvt_f32_i32_e32 v2, v0
+; CGP-NEXT:    v_cvt_f32_i32_e32 v3, v1
+; CGP-NEXT:    v_rcp_f32_e32 v4, v3
+; CGP-NEXT:    v_mul_f32_e32 v4, v2, v4
+; CGP-NEXT:    v_trunc_f32_e32 v4, v4
+; CGP-NEXT:    v_fma_f32 v2, -v4, v3, v2
+; CGP-NEXT:    v_cvt_i32_f32_e32 v4, v4
+; CGP-NEXT:    v_cmp_ge_f32_e64 s[4:5], |v2|, |v3|
+; CGP-NEXT:    v_cndmask_b32_e64 v2, 0, 1, s[4:5]
+; CGP-NEXT:    v_add_i32_e32 v2, vcc, v4, v2
+; CGP-NEXT:    v_mul_lo_u32 v1, v2, v1
+; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v1
+; CGP-NEXT:    v_bfe_i32 v0, v0, 0, 25
 ; CGP-NEXT:    s_setpc_b64 s[30:31]
   %num.mask = and i32 %num, 16777215
   %den.mask = and i32 %den, 16777215
@@ -770,40 +766,38 @@ define <2 x i32> @v_srem_v2i32_24bit(<2 x i32> %num, <2 x i32> %den) {
 ; CGP-NEXT:    v_and_b32_e32 v1, 0xffffff, v1
 ; CGP-NEXT:    v_and_b32_e32 v2, 0xffffff, v2
 ; CGP-NEXT:    v_and_b32_e32 v3, 0xffffff, v3
-; CGP-NEXT:    v_cvt_f32_u32_e32 v4, v2
-; CGP-NEXT:    v_sub_i32_e32 v5, vcc, 0, v2
-; CGP-NEXT:    v_cvt_f32_u32_e32 v6, v3
-; CGP-NEXT:    v_sub_i32_e32 v7, vcc, 0, v3
-; CGP-NEXT:    v_rcp_f32_e32 v4, v4
-; CGP-NEXT:    v_rcp_f32_e32 v6, v6
-; CGP-NEXT:    v_mul_f32_e32 v4, 0x4f7ffffe, v4
-; CGP-NEXT:    v_mul_f32_e32 v6, 0x4f7ffffe, v6
-; CGP-NEXT:    v_cvt_u32_f32_e32 v4, v4
-; CGP-NEXT:    v_cvt_u32_f32_e32 v6, v6
-; CGP-NEXT:    v_mul_lo_u32 v5, v5, v4
-; CGP-NEXT:    v_mul_lo_u32 v7, v7, v6
-; CGP-NEXT:    v_mul_hi_u32 v5, v4, v5
-; CGP-NEXT:    v_mul_hi_u32 v7, v6, v7
-; CGP-NEXT:    v_add_i32_e32 v4, vcc, v4, v5
-; CGP-NEXT:    v_add_i32_e32 v5, vcc, v6, v7
-; CGP-NEXT:    v_mul_hi_u32 v4, v0, v4
-; CGP-NEXT:    v_mul_hi_u32 v5, v1, v5
-; CGP-NEXT:    v_mul_lo_u32 v4, v4, v2
-; CGP-NEXT:    v_mul_lo_u32 v5, v5, v3
-; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v4
-; CGP-NEXT:    v_sub_i32_e32 v1, vcc, v1, v5
-; CGP-NEXT:    v_sub_i32_e32 v4, vcc, v0, v2
-; CGP-NEXT:    v_sub_i32_e32 v5, vcc, v1, v3
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v2
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v4, vcc
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v1, v3
-; CGP-NEXT:    v_cndmask_b32_e32 v1, v1, v5, vcc
-; CGP-NEXT:    v_sub_i32_e32 v4, vcc, v0, v2
-; CGP-NEXT:    v_sub_i32_e32 v5, vcc, v1, v3
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v2
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v4, vcc
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v1, v3
-; CGP-NEXT:    v_cndmask_b32_e32 v1, v1, v5, vcc
+; CGP-NEXT:    v_xor_b32_e32 v4, v0, v2
+; CGP-NEXT:    v_cvt_f32_i32_e32 v5, v0
+; CGP-NEXT:    v_cvt_f32_i32_e32 v6, v2
+; CGP-NEXT:    v_xor_b32_e32 v7, v1, v3
+; CGP-NEXT:    v_cvt_f32_i32_e32 v8, v1
+; CGP-NEXT:    v_cvt_f32_i32_e32 v9, v3
+; CGP-NEXT:    v_ashrrev_i32_e32 v4, 30, v4
+; CGP-NEXT:    v_rcp_f32_e32 v10, v6
+; CGP-NEXT:    v_ashrrev_i32_e32 v7, 30, v7
+; CGP-NEXT:    v_rcp_f32_e32 v11, v9
+; CGP-NEXT:    v_or_b32_e32 v4, 1, v4
+; CGP-NEXT:    v_mul_f32_e32 v10, v5, v10
+; CGP-NEXT:    v_or_b32_e32 v7, 1, v7
+; CGP-NEXT:    v_mul_f32_e32 v11, v8, v11
+; CGP-NEXT:    v_trunc_f32_e32 v10, v10
+; CGP-NEXT:    v_trunc_f32_e32 v11, v11
+; CGP-NEXT:    v_fma_f32 v5, -v10, v6, v5
+; CGP-NEXT:    v_cvt_i32_f32_e32 v10, v10
+; CGP-NEXT:    v_fma_f32 v8, -v11, v9, v8
+; CGP-NEXT:    v_cvt_i32_f32_e32 v11, v11
+; CGP-NEXT:    v_cmp_ge_f32_e64 vcc, |v5|, |v6|
+; CGP-NEXT:    v_cndmask_b32_e32 v4, 0, v4, vcc
+; CGP-NEXT:    v_cmp_ge_f32_e64 vcc, |v8|, |v9|
+; CGP-NEXT:    v_cndmask_b32_e32 v5, 0, v7, vcc
+; CGP-NEXT:    v_add_i32_e32 v4, vcc, v10, v4
+; CGP-NEXT:    v_add_i32_e32 v5, vcc, v11, v5
+; CGP-NEXT:    v_mul_lo_u32 v2, v4, v2
+; CGP-NEXT:    v_mul_lo_u32 v3, v5, v3
+; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v2
+; CGP-NEXT:    v_sub_i32_e32 v1, vcc, v1, v3
+; CGP-NEXT:    v_bfe_i32 v0, v0, 0, 25
+; CGP-NEXT:    v_bfe_i32 v1, v1, 0, 25
 ; CGP-NEXT:    s_setpc_b64 s[30:31]
   %num.mask = and <2 x i32> %num, <i32 16777215, i32 16777215>
   %den.mask = and <2 x i32> %den, <i32 16777215, i32 16777215>
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/udiv.i32.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/udiv.i32.ll
index 6588112973f4c..cd01148fa7dd7 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/udiv.i32.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/udiv.i32.ll
@@ -415,25 +415,17 @@ define i32 @v_udiv_i32_24bit(i32 %num, i32 %den) {
 ; CGP-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; CGP-NEXT:    v_and_b32_e32 v0, 0xffffff, v0
 ; CGP-NEXT:    v_and_b32_e32 v1, 0xffffff, v1
-; CGP-NEXT:    v_cvt_f32_u32_e32 v2, v1
-; CGP-NEXT:    v_sub_i32_e32 v3, vcc, 0, v1
-; CGP-NEXT:    v_rcp_f32_e32 v2, v2
-; CGP-NEXT:    v_mul_f32_e32 v2, 0x4f7ffffe, v2
+; CGP-NEXT:    v_cvt_f32_u32_e32 v0, v0
+; CGP-NEXT:    v_cvt_f32_u32_e32 v1, v1
+; CGP-NEXT:    v_rcp_f32_e32 v2, v1
+; CGP-NEXT:    v_mul_f32_e32 v2, v0, v2
+; CGP-NEXT:    v_trunc_f32_e32 v2, v2
+; CGP-NEXT:    v_fma_f32 v0, -v2, v1, v0
 ; CGP-NEXT:    v_cvt_u32_f32_e32 v2, v2
-; CGP-NEXT:    v_mul_lo_u32 v3, v3, v2
-; CGP-NEXT:    v_mul_hi_u32 v3, v2, v3
-; CGP-NEXT:    v_add_i32_e32 v2, vcc, v2, v3
-; CGP-NEXT:    v_mul_hi_u32 v2, v0, v2
-; CGP-NEXT:    v_mul_lo_u32 v3, v2, v1
-; CGP-NEXT:    v_add_i32_e32 v4, vcc, 1, v2
-; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v3
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v2, v2, v4, vcc
-; CGP-NEXT:    v_sub_i32_e64 v3, s[4:5], v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v3, vcc
-; CGP-NEXT:    v_add_i32_e32 v3, vcc, 1, v2
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v2, v3, vcc
+; CGP-NEXT:    v_cmp_ge_f32_e64 s[4:5], |v0|, v1
+; CGP-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s[4:5]
+; CGP-NEXT:    v_add_i32_e32 v0, vcc, v2, v0
+; CGP-NEXT:    v_and_b32_e32 v0, 0xffffff, v0
 ; CGP-NEXT:    s_setpc_b64 s[30:31]
   %num.mask = and i32 %num, 16777215
   %den.mask = and i32 %den, 16777215
@@ -496,44 +488,28 @@ define <2 x i32> @v_udiv_v2i32_24bit(<2 x i32> %num, <2 x i32> %den) {
 ; CGP-NEXT:    v_and_b32_e32 v1, 0xffffff, v1
 ; CGP-NEXT:    v_and_b32_e32 v2, 0xffffff, v2
 ; CGP-NEXT:    v_and_b32_e32 v3, 0xffffff, v3
-; CGP-NEXT:    v_cvt_f32_u32_e32 v4, v2
-; CGP-NEXT:    v_sub_i32_e32 v5, vcc, 0, v2
-; CGP-NEXT:    v_cvt_f32_u32_e32 v6, v3
-; CGP-NEXT:    v_sub_i32_e32 v7, vcc, 0, v3
-; CGP-NEXT:    v_rcp_f32_e32 v4, v4
-; CGP-NEXT:    v_rcp_f32_e32 v6, v6
-; CGP-NEXT:    v_mul_f32_e32 v4, 0x4f7ffffe, v4
-; CGP-NEXT:    v_mul_f32_e32 v6, 0x4f7ffffe, v6
+; CGP-NEXT:    v_cvt_f32_u32_e32 v0, v0
+; CGP-NEXT:    v_cvt_f32_u32_e32 v2, v2
+; CGP-NEXT:    v_cvt_f32_u32_e32 v1, v1
+; CGP-NEXT:    v_cvt_f32_u32_e32 v3, v3
+; CGP-NEXT:    v_rcp_f32_e32 v4, v2
+; CGP-NEXT:    v_rcp_f32_e32 v5, v3
+; CGP-NEXT:    v_mul_f32_e32 v4, v0, v4
+; CGP-NEXT:    v_mul_f32_e32 v5, v1, v5
+; CGP-NEXT:    v_trunc_f32_e32 v4, v4
+; CGP-NEXT:    v_trunc_f32_e32 v5, v5
+; CGP-NEXT:    v_fma_f32 v0, -v4, v2, v0
 ; CGP-NEXT:    v_cvt_u32_f32_e32 v4, v4
-; CGP-NEXT:    v_cvt_u32_f32_e32 v6, v6
-; CGP-NEXT:    v_mul_lo_u32 v5, v5, v4
-; CGP-NEXT:    v_mul_lo_u32 v7, v7, v6
-; CGP-NEXT:    v_mul_hi_u32 v5, v4, v5
-; CGP-NEXT:    v_mul_hi_u32 v7, v6, v7
-; CGP-NEXT:    v_add_i32_e32 v4, vcc, v4, v5
-; CGP-NEXT:    v_add_i32_e32 v5, vcc, v6, v7
-; CGP-NEXT:    v_mul_hi_u32 v4, v0, v4
-; CGP-NEXT:    v_mul_hi_u32 v5, v1, v5
-; CGP-NEXT:    v_mul_lo_u32 v6, v4, v2
-; CGP-NEXT:    v_add_i32_e32 v7, vcc, 1, v4
-; CGP-NEXT:    v_mul_lo_u32 v8, v5, v3
-; CGP-NEXT:    v_add_i32_e32 v9, vcc, 1, v5
-; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v6
-; CGP-NEXT:    v_sub_i32_e32 v1, vcc, v1, v8
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v2
-; CGP-NEXT:    v_cndmask_b32_e32 v4, v4, v7, vcc
-; CGP-NEXT:    v_sub_i32_e64 v6, s[4:5], v0, v2
-; CGP-NEXT:    v_cmp_ge_u32_e64 s[4:5], v1, v3
-; CGP-NEXT:    v_cndmask_b32_e64 v5, v5, v9, s[4:5]
-; CGP-NEXT:    v_sub_i32_e64 v7, s[6:7], v1, v3
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v6, vcc
-; CGP-NEXT:    v_add_i32_e32 v6, vcc, 1, v4
-; CGP-NEXT:    v_cndmask_b32_e64 v1, v1, v7, s[4:5]
-; CGP-NEXT:    v_add_i32_e32 v7, vcc, 1, v5
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v2
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v4, v6, vcc
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v1, v3
-; CGP-NEXT:    v_cndmask_b32_e32 v1, v5, v7, vcc
+; CGP-NEXT:    v_fma_f32 v1, -v5, v3, v1
+; CGP-NEXT:    v_cvt_u32_f32_e32 v5, v5
+; CGP-NEXT:    v_cmp_ge_f32_e64 s[4:5], |v0|, v2
+; CGP-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s[4:5]
+; CGP-NEXT:    v_cmp_ge_f32_e64 s[4:5], |v1|, v3
+; CGP-NEXT:    v_cndmask_b32_e64 v1, 0, 1, s[4:5]
+; CGP-NEXT:    v_add_i32_e32 v0, vcc, v4, v0
+; CGP-NEXT:    v_add_i32_e32 v1, vcc, v5, v1
+; CGP-NEXT:    v_and_b32_e32 v0, 0xffffff, v0
+; CGP-NEXT:    v_and_b32_e32 v1, 0xffffff, v1
 ; CGP-NEXT:    s_setpc_b64 s[30:31]
   %num.mask = and <2 x i32> %num, <i32 16777215, i32 16777215>
   %den.mask = and <2 x i32> %den, <i32 16777215, i32 16777215>
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/urem.i32.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/urem.i32.ll
index 158403644607a..31f61b9968b8b 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/urem.i32.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/urem.i32.ll
@@ -445,23 +445,19 @@ define i32 @v_urem_i32_24bit(i32 %num, i32 %den) {
 ; CGP-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; CGP-NEXT:    v_and_b32_e32 v0, 0xffffff, v0
 ; CGP-NEXT:    v_and_b32_e32 v1, 0xffffff, v1
-; CGP-NEXT:    v_cvt_f32_u32_e32 v2, v1
-; CGP-NEXT:    v_sub_i32_e32 v3, vcc, 0, v1
-; CGP-NEXT:    v_rcp_f32_e32 v2, v2
-; CGP-NEXT:    v_mul_f32_e32 v2, 0x4f7ffffe, v2
-; CGP-NEXT:    v_cvt_u32_f32_e32 v2, v2
-; CGP-NEXT:    v_mul_lo_u32 v3, v3, v2
-; CGP-NEXT:    v_mul_hi_u32 v3, v2, v3
-; CGP-NEXT:    v_add_i32_e32 v2, vcc, v2, v3
-; CGP-NEXT:    v_mul_hi_u32 v2, v0, v2
-; CGP-NEXT:    v_mul_lo_u32 v2, v2, v1
-; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v2
-; CGP-NEXT:    v_sub_i32_e32 v2, vcc, v0, v1
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v2, vcc
-; CGP-NEXT:    v_sub_i32_e32 v2, vcc, v0, v1
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v1
-; CGP-NEXT:    v_cndmask_b32_e32 v0, v0, v2, vcc
+; CGP-NEXT:    v_cvt_f32_u32_e32 v2, v0
+; CGP-NEXT:    v_cvt_f32_u32_e32 v3, v1
+; CGP-NEXT:    v_rcp_f32_e32 v4, v3
+; CGP-NEXT:    v_mul_f32_e32 v4, v2, v4
+; CGP-NEXT:    v_trunc_f32_e32 v4, v4
+; CGP-NEXT:    v_fma_f32 v2, -v4, v3, v2
+; CGP-NEXT:    v_cvt_u32_f32_e32 v4, v4
+; CGP-NEXT:    v_cmp_ge_f32_e64 s[4:5], |v2|, v3
+; CGP-NEXT:    v_cndmask_b32_e64 v2, 0, 1, s[4:5]
+; CGP-NEXT:    v_add_i32_e32 v2, vcc, v4, v2
+; CGP-NEXT:    v_mul_lo_u32 v1, v2, v1
+; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v1
+; CGP-NEXT:    v_and_b32_e32 v0, 0xffffff, v0
 ; CGP-NEXT:    s_setpc_b64 s[30:31]
   %num.mask = and i32 %num, 16777215
   %den.mask = and i32 %den, 16777215
@@ -520,40 +516,32 @@ define <2 x i32> @v_urem_v2i32_24bit(<2 x i32> %num, <2 x i32> %den) {
 ; CGP-NEXT:    v_and_b32_e32 v1, 0xffffff, v1
 ; CGP-NEXT:    v_and_b32_e32 v2, 0xffffff, v2
 ; CGP-NEXT:    v_and_b32_e32 v3, 0xffffff, v3
-; CGP-NEXT:    v_cvt_f32_u32_e32 v4, v2
-; CGP-NEXT:    v_sub_i32_e32 v5, vcc, 0, v2
-; CGP-NEXT:    v_cvt_f32_u32_e32 v6, v3
-; CGP-NEXT:    v_sub_i32_e32 v7, vcc, 0, v3
-; CGP-NEXT:    v_rcp_f32_e32 v4, v4
-; CGP-NEXT:    v_rcp_f32_e32 v6, v6
-; CGP-NEXT:    v_mul_f32_e32 v4, 0x4f7ffffe, v4
-; CGP-NEXT:    v_mul_f32_e32 v6, 0x4f7ffffe, v6
-; CGP-NEXT:    v_cvt_u32_f32_e32 v4, v4
-; CGP-NEXT:    v_cvt_u32_f32_e32 v6, v6
-; CGP-NEXT:    v_mul_lo_u32 v5, v5, v4
-; CGP-NEXT:    v_mul_lo_u32 v7, v7, v6
-; CGP-NEXT:    v_mul_hi_u32 v5, v4, v5
-; CGP-NEXT:    v_mul_hi_u32 v7, v6, v7
-; CGP-NEXT:    v_add_i32_e32 v4, vcc, v4, v5
-; CGP-NEXT:    v_add_i32_e32 v5, vcc, v6, v7
-; CGP-NEXT:    v_mul_hi_u32 v4, v0, v4
-; CGP-NEXT:    v_mul_hi_u32 v5, v1, v5
-; CGP-NEXT:    v_mul_lo_u32 v4, v4, v2
-; CGP-NEXT:    v_mul_lo_u32 v5, v5, v3
-; CGP-NEXT:    v_sub_i32_e32 v0, vcc, v0, v4
-; CGP-NEXT:    v_sub_i32_e32 v1, vcc, v1, v5
-; CGP-NEXT:    v_sub_i32_e32 v4, vcc, v0, v2
-; CGP-NEXT:    v_sub_i32_e32 v5, vcc, v1, v3
-; CGP-NEXT:    v_cmp_ge_u32_e32 vcc, v0, v2
-; CGP-NEXT:    v_c...
[truncated]
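
The expandDivRem32 hunk near the top of the patch also relaxes the type guard from `< 32` to `!= 32` and switches to the SExtOrTrunc/ZExtOrTrunc builder helpers, so a narrowed i64 dividend can enter the 32-bit path and have its result widened back. Below is a scalar model of that helper's behavior, as an illustration assuming IRBuilder's documented sext-or-trunc semantics; it is not LLVM code:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical scalar model of IRBuilder::CreateSExtOrTrunc feeding the
// 32-bit division path: sign-extend when the source is narrower than 32
// bits, truncate when it is wider, and pass through at exactly 32.
int32_t sextOrTruncTo32(int64_t V, unsigned FromBits) {
  if (FromBits >= 32)
    return static_cast<int32_t>(V); // trunc (a no-op when FromBits == 32)
  const int64_t SignBit = int64_t{1} << (FromBits - 1);
  const int64_t Mask = (int64_t{1} << FromBits) - 1;
  return static_cast<int32_t>(((V & Mask) ^ SignBit) - SignBit); // sext
}

int main() {
  // A 16-bit -5 is sign-extended; an i64 holding -5 in its low 32 bits is
  // truncated losslessly -- both reach the divider as the same i32 value.
  printf("%d %d\n", sextOrTruncTo32(0xFFFB, 16),
         sextOrTruncTo32(INT64_C(0xFFFFFFFB), 64));
  return 0;
}
```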

@arsenm (Contributor) left a comment


lgtm with nit

Inline review comment on llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp (outdated, resolved)
There was an error where a dividend of type i64 whose actual number
of used bits was 32 fell into the path that assumes only 24 bits are
in use. Pass the correct AtLeast value when using computeNumSignBits,
and add the necessary extend/trunc for the 32-bit path.

Increment AtLeast if IsSigned.

Regolden and update test cases.
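
A compile-time restatement of that threshold (a sketch derived from the commit message; the exact shape of the merged code may differ):

```cpp
// Minimum number of known sign bits a value of width SSBits must have to
// qualify for the 24-bit expansion; signed values spend one of the 24 bits
// on the sign, so the requirement goes up by one.
constexpr unsigned atLeast(unsigned SSBits, bool IsSigned) {
  unsigned A = SSBits <= 24 ? 0 : SSBits - 24;
  return IsSigned ? A + 1 : A;
}
static_assert(atLeast(64, false) == 40, "unsigned i64");
static_assert(atLeast(64, true) == 41, "signed i64: +1 for the sign bit");
static_assert(atLeast(32, false) == 8, "unsigned i32");
```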
@arsenm merged commit e5638c5 into llvm:main on Feb 6, 2024
3 of 4 checks passed

github-actions bot commented Feb 6, 2024

@choikwa Congratulations on having your first Pull Request (PR) merged into the LLVM Project!

Your changes will be combined with recent changes from other authors, then tested
by our build bots. If there is a problem with a build, you may receive a report in an email or a comment on this PR.

Please check whether problems have been caused by your change specifically, as
the builds can include changes from many authors. It is not uncommon for your
change to be included in a build that fails due to someone else's changes, or
infrastructure issues.

How to do this, and the rest of the post-merge process, is covered in detail here.

If your change does cause a problem, it may be reverted, or you can revert it yourself.
This is a normal part of LLVM development. You can fix your changes and open a new PR to merge them again.

If you don't get any reports, no action is required from you. Your changes are working as expected, well done!

rocm-ci pushed a commit to ROCm/llvm-project that referenced this pull request Apr 16, 2024
[AMDGPU] Use correct number of bits needed for div/rem shrinking (llvm#80622)

There was an error where a dividend of type i64 whose actual number of
used bits was 32 fell into the path that assumes only 24 bits are in
use. Pass the correct AtLeast value when using computeNumSignBits, and
add the necessary extend/trunc for the 32-bit path.

Regolden and update test cases.

@jrbyrnes @bcahoon @arsenm @rampitec

Change-Id: I07b0fe8a27b4107242121d66d9536683bcac1cc0
searlmc1 pushed a commit to ROCm/llvm-project that referenced this pull request Apr 23, 2024
[AMDGPU] Use correct number of bits needed for div/rem shrinking (llvm#80622)

There was an error where a dividend of type i64 whose actual number of
used bits was 32 fell into the path that assumes only 24 bits are in
use. Pass the correct AtLeast value when using computeNumSignBits, and
add the necessary extend/trunc for the 32-bit path.

Regolden and update test cases.

@jrbyrnes @bcahoon @arsenm @rampitec

Change-Id: Ib9ba7b1520aa20c09d9b619fd10f58a64dad7f04
Projects: none
Linked issues: none
3 participants