[InstCombine] Infer disjoint flag on Or instructions. #72912

Merged: 3 commits into llvm:main on Dec 2, 2023

Conversation

topperc (Collaborator) commented Nov 20, 2023

Stacked on #72702 and #72583
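
For context, a minimal illustrative IR example of the inference this patch enables (the function and value names are invented for illustration; the real cases are in the updated tests below):

  ; Input: the two or operands provably have no common set bits.
  define i32 @combine_halves(i32 %x, i32 %y) {
    %lo = and i32 %x, 65535      ; only the low 16 bits can be set
    %hi = and i32 %y, -65536     ; only the high 16 bits can be set
    %r = or i32 %lo, %hi
    ret i32 %r
  }
  ; After this patch, KnownBits shows %lo and %hi share no set bits, so
  ; InstCombine rewrites the or in place to:
  ;   %r = or disjoint i32 %lo, %hi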

@topperc topperc requested a review from nikic as a code owner November 20, 2023 20:01

github-actions bot commented Nov 20, 2023

✅ With the latest revision this PR passed the C/C++ code formatter.

@topperc topperc force-pushed the pr/instcombine-infer-disjoint branch from 3eb1a2a to ebcd021 on November 20, 2023 22:33
nikic (Contributor) commented Nov 27, 2023

Needs a rebase. Also you'll want to do this inside SimplifyDemanded to reuse existing KnownBits information.

topperc (Collaborator, Author) commented Nov 27, 2023

Also you'll want to do this inside SimplifyDemanded to reuse existing KnownBits information.

Do we do that for other flags already? I based this off Add/Sub wrap flags.

@topperc topperc force-pushed the pr/instcombine-infer-disjoint branch from ebcd021 to 4b87774 on November 28, 2023 06:42
@llvmbot llvmbot added the clang (Clang issues not falling into any other category) and llvm:transforms labels Nov 28, 2023
llvmbot (Collaborator) commented Nov 28, 2023

@llvm/pr-subscribers-llvm-transforms

Author: Craig Topper (topperc)

Changes

Stacked on #72702 and #72583


Patch is 153.23 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/72912.diff

60 Files Affected:

  • (modified) clang/test/Headers/__clang_hip_math.hip (+2-2)
  • (modified) llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp (+9)
  • (modified) llvm/test/Transforms/InstCombine/2010-11-01-lshr-mask.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/add.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/and-or-not.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/and-or.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/and.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/apint-shift.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/binop-and-shifts.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/binop-of-displaced-shifts.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/bitcast-inselt-bitcast.ll (+5-5)
  • (modified) llvm/test/Transforms/InstCombine/bitreverse.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/bswap.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/cast-mul-select.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/cast.ll (+12-12)
  • (modified) llvm/test/Transforms/InstCombine/funnel.ll (+7-7)
  • (modified) llvm/test/Transforms/InstCombine/icmp-mul-and.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/icmp-of-xor-x.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/icmp.ll (+16-16)
  • (modified) llvm/test/Transforms/InstCombine/logical-select.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/masked-merge-or.ll (+17-17)
  • (modified) llvm/test/Transforms/InstCombine/masked-merge-xor.ll (+17-17)
  • (modified) llvm/test/Transforms/InstCombine/memcpy-from-global.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/mul-masked-bits.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/mul_full_32.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/mul_full_64.ll (+8-8)
  • (modified) llvm/test/Transforms/InstCombine/or-concat.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/or-shifted-masks.ll (+15-15)
  • (modified) llvm/test/Transforms/InstCombine/or.ll (+7-7)
  • (modified) llvm/test/Transforms/InstCombine/pr32686.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/select-ctlz-to-cttz.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/select-icmp-and.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/select-with-bitwise-ops.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/select.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/shift-shift.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/shift.ll (+6-6)
  • (modified) llvm/test/Transforms/InstCombine/sub-of-negatible-inseltpoison.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/sub-of-negatible.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/trunc-demand.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/trunc-inseltpoison.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/trunc.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/unfold-masked-merge-with-const-mask-scalar.ll (+9-9)
  • (modified) llvm/test/Transforms/InstCombine/unfold-masked-merge-with-const-mask-vector.ll (+10-10)
  • (modified) llvm/test/Transforms/InstCombine/xor.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/xor2.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/zext-or-icmp.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-interleaved-accesses.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-interleaved-masked-accesses.ll (+9-9)
  • (modified) llvm/test/Transforms/LoopVectorize/ARM/mve-reductions.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/SystemZ/addressing.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/interleaving.ll (+8-8)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/small-size.ll (+12-12)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/x86-interleaved-accesses-masked-group.ll (+8-8)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/x86-interleaved-store-accesses-with-gaps.ll (+4-4)
  • (modified) llvm/test/Transforms/LoopVectorize/consecutive-ptr-uniforms.ll (+6-6)
  • (modified) llvm/test/Transforms/LoopVectorize/interleaved-accesses.ll (+2-2)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/SROA-after-final-loop-unrolling-2.ll (+1-1)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/loadcombine.ll (+44-44)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/pixel-splat.ll (+3-3)
  • (modified) llvm/test/Transforms/SLPVectorizer/AArch64/loadorder.ll (+2-2)
diff --git a/clang/test/Headers/__clang_hip_math.hip b/clang/test/Headers/__clang_hip_math.hip
index b57c38d70b14c79..c82b7bce060f617 100644
--- a/clang/test/Headers/__clang_hip_math.hip
+++ b/clang/test/Headers/__clang_hip_math.hip
@@ -2451,7 +2451,7 @@ extern "C" __device__ double test_modf(double x, double* y) {
 // CHECK-NEXT:    [[RETVAL_0_I_I:%.*]] = phi i64 [ 0, [[CLEANUP_I_I_I]] ], [ [[__R_0_I_I_I]], [[WHILE_COND_I_I_I]] ], [ 0, [[CLEANUP_I36_I_I]] ], [ [[__R_0_I32_I_I]], [[WHILE_COND_I30_I_I]] ], [ 0, [[CLEANUP_I20_I_I]] ], [ [[__R_0_I16_I_I]], [[WHILE_COND_I14_I_I]] ]
 // CHECK-NEXT:    [[CONV_I:%.*]] = trunc i64 [[RETVAL_0_I_I]] to i32
 // CHECK-NEXT:    [[BF_VALUE_I:%.*]] = and i32 [[CONV_I]], 4194303
-// CHECK-NEXT:    [[BF_SET9_I:%.*]] = or i32 [[BF_VALUE_I]], 2143289344
+// CHECK-NEXT:    [[BF_SET9_I:%.*]] = or disjoint i32 [[BF_VALUE_I]], 2143289344
 // CHECK-NEXT:    [[TMP10:%.*]] = bitcast i32 [[BF_SET9_I]] to float
 // CHECK-NEXT:    ret float [[TMP10]]
 //
@@ -2549,7 +2549,7 @@ extern "C" __device__ float test_nanf(const char *tag) {
 // CHECK:       _ZL3nanPKc.exit:
 // CHECK-NEXT:    [[RETVAL_0_I_I:%.*]] = phi i64 [ 0, [[CLEANUP_I_I_I]] ], [ [[__R_0_I_I_I]], [[WHILE_COND_I_I_I]] ], [ 0, [[CLEANUP_I36_I_I]] ], [ [[__R_0_I32_I_I]], [[WHILE_COND_I30_I_I]] ], [ 0, [[CLEANUP_I20_I_I]] ], [ [[__R_0_I16_I_I]], [[WHILE_COND_I14_I_I]] ]
 // CHECK-NEXT:    [[BF_VALUE_I:%.*]] = and i64 [[RETVAL_0_I_I]], 2251799813685247
-// CHECK-NEXT:    [[BF_SET9_I:%.*]] = or i64 [[BF_VALUE_I]], 9221120237041090560
+// CHECK-NEXT:    [[BF_SET9_I:%.*]] = or disjoint i64 [[BF_VALUE_I]], 9221120237041090560
 // CHECK-NEXT:    [[TMP10:%.*]] = bitcast i64 [[BF_SET9_I]] to double
 // CHECK-NEXT:    ret double [[TMP10]]
 //
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp b/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
index 02881109f17d29f..e60e65e1eddea68 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
@@ -3844,6 +3844,15 @@ Instruction *InstCombinerImpl::visitOr(BinaryOperator &I) {
     }
   }
 
+  // Try to infer the disjoint flag.
+  if (!cast<PossiblyDisjointInst>(I).isDisjoint()) {
+    WithCache<const Value *> LHSCache(Op0), RHSCache(Op1);
+    if (haveNoCommonBitsSet(LHSCache, RHSCache, SQ.getWithInstruction(&I))) {
+      cast<PossiblyDisjointInst>(I).setIsDisjoint(true);
+      return &I;
+    }
+  }
+
   return nullptr;
 }
 
diff --git a/llvm/test/Transforms/InstCombine/2010-11-01-lshr-mask.ll b/llvm/test/Transforms/InstCombine/2010-11-01-lshr-mask.ll
index 3081baa2db281e4..ccbafbb197b6661 100644
--- a/llvm/test/Transforms/InstCombine/2010-11-01-lshr-mask.ll
+++ b/llvm/test/Transforms/InstCombine/2010-11-01-lshr-mask.ll
@@ -33,9 +33,9 @@ define i8 @foo(i8 %arg, i8 %arg1) {
 ; CHECK-NEXT:    [[T4:%.*]] = and i8 [[ARG1]], 33
 ; CHECK-NEXT:    [[T5:%.*]] = sub nsw i8 40, [[T2]]
 ; CHECK-NEXT:    [[T6:%.*]] = and i8 [[T5]], 84
-; CHECK-NEXT:    [[T7:%.*]] = or i8 [[T4]], [[T6]]
+; CHECK-NEXT:    [[T7:%.*]] = or disjoint i8 [[T4]], [[T6]]
 ; CHECK-NEXT:    [[T8:%.*]] = xor i8 [[T]], [[T3]]
-; CHECK-NEXT:    [[T9:%.*]] = or i8 [[T7]], [[T8]]
+; CHECK-NEXT:    [[T9:%.*]] = or disjoint i8 [[T7]], [[T8]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = lshr i8 [[T8]], 2
 ; CHECK-NEXT:    [[T11:%.*]] = and i8 [[TMP1]], 32
 ; CHECK-NEXT:    [[T12:%.*]] = xor i8 [[T11]], [[T9]]
diff --git a/llvm/test/Transforms/InstCombine/add.ll b/llvm/test/Transforms/InstCombine/add.ll
index c35d2af42a4beae..35f540ab4471639 100644
--- a/llvm/test/Transforms/InstCombine/add.ll
+++ b/llvm/test/Transforms/InstCombine/add.ll
@@ -764,7 +764,7 @@ define i32 @test29(i32 %x, i32 %y) {
 ; CHECK-NEXT:    [[TMP_2:%.*]] = sub i32 [[X:%.*]], [[Y:%.*]]
 ; CHECK-NEXT:    [[TMP_7:%.*]] = and i32 [[X]], 63
 ; CHECK-NEXT:    [[TMP_9:%.*]] = and i32 [[TMP_2]], -64
-; CHECK-NEXT:    [[TMP_10:%.*]] = or i32 [[TMP_7]], [[TMP_9]]
+; CHECK-NEXT:    [[TMP_10:%.*]] = or disjoint i32 [[TMP_7]], [[TMP_9]]
 ; CHECK-NEXT:    ret i32 [[TMP_10]]
 ;
   %tmp.2 = sub i32 %x, %y
@@ -1499,7 +1499,7 @@ define i8 @add_like_or_n1(i8 %x) {
 define i8 @add_like_or_t2_extrause(i8 %x) {
 ; CHECK-LABEL: @add_like_or_t2_extrause(
 ; CHECK-NEXT:    [[I0:%.*]] = shl i8 [[X:%.*]], 4
-; CHECK-NEXT:    [[I1:%.*]] = or i8 [[I0]], 15
+; CHECK-NEXT:    [[I1:%.*]] = or disjoint i8 [[I0]], 15
 ; CHECK-NEXT:    call void @use(i8 [[I1]])
 ; CHECK-NEXT:    [[R:%.*]] = add i8 [[I0]], 57
 ; CHECK-NEXT:    ret i8 [[R]]
@@ -2361,7 +2361,7 @@ define { i64, i64 } @PR57576(i64 noundef %x, i64 noundef %y, i64 noundef %z, i64
 ; CHECK-NEXT:    [[ZY:%.*]] = zext i64 [[Y:%.*]] to i128
 ; CHECK-NEXT:    [[ZZ:%.*]] = zext i64 [[Z:%.*]] to i128
 ; CHECK-NEXT:    [[SHY:%.*]] = shl nuw i128 [[ZY]], 64
-; CHECK-NEXT:    [[XY:%.*]] = or i128 [[SHY]], [[ZX]]
+; CHECK-NEXT:    [[XY:%.*]] = or disjoint i128 [[SHY]], [[ZX]]
 ; CHECK-NEXT:    [[SUB:%.*]] = sub i128 [[XY]], [[ZZ]]
 ; CHECK-NEXT:    [[T:%.*]] = trunc i128 [[SUB]] to i64
 ; CHECK-NEXT:    [[TMP1:%.*]] = lshr i128 [[SUB]], 64
diff --git a/llvm/test/Transforms/InstCombine/and-or-not.ll b/llvm/test/Transforms/InstCombine/and-or-not.ll
index 32a12199020f0cf..c896c8f100380fc 100644
--- a/llvm/test/Transforms/InstCombine/and-or-not.ll
+++ b/llvm/test/Transforms/InstCombine/and-or-not.ll
@@ -553,7 +553,7 @@ define i32 @or_to_nxor_multiuse(i32 %a, i32 %b) {
 ; CHECK-NEXT:    [[AND:%.*]] = and i32 [[A:%.*]], [[B:%.*]]
 ; CHECK-NEXT:    [[OR:%.*]] = or i32 [[A]], [[B]]
 ; CHECK-NEXT:    [[NOTOR:%.*]] = xor i32 [[OR]], -1
-; CHECK-NEXT:    [[OR2:%.*]] = or i32 [[AND]], [[NOTOR]]
+; CHECK-NEXT:    [[OR2:%.*]] = or disjoint i32 [[AND]], [[NOTOR]]
 ; CHECK-NEXT:    [[MUL1:%.*]] = mul i32 [[AND]], [[NOTOR]]
 ; CHECK-NEXT:    [[MUL2:%.*]] = mul i32 [[MUL1]], [[OR2]]
 ; CHECK-NEXT:    ret i32 [[MUL2]]
diff --git a/llvm/test/Transforms/InstCombine/and-or.ll b/llvm/test/Transforms/InstCombine/and-or.ll
index 631da498e6644ef..b4ef27607121d29 100644
--- a/llvm/test/Transforms/InstCombine/and-or.ll
+++ b/llvm/test/Transforms/InstCombine/and-or.ll
@@ -217,7 +217,7 @@ define i8 @or_and2_or2(i8 %x) {
 ; CHECK-NEXT:    [[X2:%.*]] = and i8 [[O2]], 66
 ; CHECK-NEXT:    call void @use(i8 [[X2]])
 ; CHECK-NEXT:    [[BITFIELD:%.*]] = and i8 [[X]], -8
-; CHECK-NEXT:    [[R:%.*]] = or i8 [[BITFIELD]], 3
+; CHECK-NEXT:    [[R:%.*]] = or disjoint i8 [[BITFIELD]], 3
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
   %o1 = or i8 %x, 1
@@ -243,7 +243,7 @@ define <2 x i8> @or_and2_or2_splat(<2 x i8> %x) {
 ; CHECK-NEXT:    [[X2:%.*]] = and <2 x i8> [[O2]], <i8 66, i8 66>
 ; CHECK-NEXT:    call void @use_vec(<2 x i8> [[X2]])
 ; CHECK-NEXT:    [[BITFIELD:%.*]] = and <2 x i8> [[X]], <i8 -8, i8 -8>
-; CHECK-NEXT:    [[R:%.*]] = or <2 x i8> [[BITFIELD]], <i8 3, i8 3>
+; CHECK-NEXT:    [[R:%.*]] = or disjoint <2 x i8> [[BITFIELD]], <i8 3, i8 3>
 ; CHECK-NEXT:    ret <2 x i8> [[R]]
 ;
   %o1 = or <2 x i8> %x, <i8 1, i8 1>
@@ -355,7 +355,7 @@ define i64 @or_or_and_complex(i64 %i) {
 ; CHECK-NEXT:    [[TMP2:%.*]] = shl i64 [[I]], 8
 ; CHECK-NEXT:    [[TMP3:%.*]] = and i64 [[TMP1]], 71777214294589695
 ; CHECK-NEXT:    [[TMP4:%.*]] = and i64 [[TMP2]], -71777214294589696
-; CHECK-NEXT:    [[OR27:%.*]] = or i64 [[TMP3]], [[TMP4]]
+; CHECK-NEXT:    [[OR27:%.*]] = or disjoint i64 [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    ret i64 [[OR27]]
 ;
   %1 = lshr i64 %i, 8
diff --git a/llvm/test/Transforms/InstCombine/and.ll b/llvm/test/Transforms/InstCombine/and.ll
index 386ee3807050140..79857f3efbc18bc 100644
--- a/llvm/test/Transforms/InstCombine/and.ll
+++ b/llvm/test/Transforms/InstCombine/and.ll
@@ -2433,7 +2433,7 @@ define i8 @negate_lowbitmask_use2(i8 %x, i8 %y) {
 define i64 @test_and_or_constexpr_infloop() {
 ; CHECK-LABEL: @test_and_or_constexpr_infloop(
 ; CHECK-NEXT:    [[AND:%.*]] = and i64 ptrtoint (ptr @g to i64), -8
-; CHECK-NEXT:    [[OR:%.*]] = or i64 [[AND]], 1
+; CHECK-NEXT:    [[OR:%.*]] = or disjoint i64 [[AND]], 1
 ; CHECK-NEXT:    ret i64 [[OR]]
 ;
   %and = and i64 ptrtoint (ptr @g to i64), -8
diff --git a/llvm/test/Transforms/InstCombine/apint-shift.ll b/llvm/test/Transforms/InstCombine/apint-shift.ll
index 377cc9978c5b766..05c3db70ce1ca91 100644
--- a/llvm/test/Transforms/InstCombine/apint-shift.ll
+++ b/llvm/test/Transforms/InstCombine/apint-shift.ll
@@ -273,7 +273,7 @@ define i18 @test13(i18 %x) {
 define i35 @test14(i35 %A) {
 ; CHECK-LABEL: @test14(
 ; CHECK-NEXT:    [[B:%.*]] = and i35 [[A:%.*]], -19760
-; CHECK-NEXT:    [[C:%.*]] = or i35 [[B]], 19744
+; CHECK-NEXT:    [[C:%.*]] = or disjoint i35 [[B]], 19744
 ; CHECK-NEXT:    ret i35 [[C]]
 ;
   %B = lshr i35 %A, 4
diff --git a/llvm/test/Transforms/InstCombine/binop-and-shifts.ll b/llvm/test/Transforms/InstCombine/binop-and-shifts.ll
index 45fd87be3c33189..148963894b89fb2 100644
--- a/llvm/test/Transforms/InstCombine/binop-and-shifts.ll
+++ b/llvm/test/Transforms/InstCombine/binop-and-shifts.ll
@@ -365,7 +365,7 @@ define i8 @lshr_xor_or_good_mask(i8 %x, i8 %y) {
 ; CHECK-LABEL: @lshr_xor_or_good_mask(
 ; CHECK-NEXT:    [[TMP1:%.*]] = or i8 [[Y:%.*]], [[X:%.*]]
 ; CHECK-NEXT:    [[TMP2:%.*]] = lshr i8 [[TMP1]], 4
-; CHECK-NEXT:    [[BW1:%.*]] = or i8 [[TMP2]], 48
+; CHECK-NEXT:    [[BW1:%.*]] = or disjoint i8 [[TMP2]], 48
 ; CHECK-NEXT:    ret i8 [[BW1]]
 ;
   %shift1 = lshr i8 %x, 4
diff --git a/llvm/test/Transforms/InstCombine/binop-of-displaced-shifts.ll b/llvm/test/Transforms/InstCombine/binop-of-displaced-shifts.ll
index 78f4550464681e5..c86dfde6ddece99 100644
--- a/llvm/test/Transforms/InstCombine/binop-of-displaced-shifts.ll
+++ b/llvm/test/Transforms/InstCombine/binop-of-displaced-shifts.ll
@@ -271,7 +271,7 @@ define i8 @mismatched_shifts(i8 %x) {
 ; CHECK-NEXT:    [[SHIFT:%.*]] = shl i8 16, [[X]]
 ; CHECK-NEXT:    [[ADD:%.*]] = add i8 [[X]], 1
 ; CHECK-NEXT:    [[SHIFT2:%.*]] = lshr i8 3, [[ADD]]
-; CHECK-NEXT:    [[BINOP:%.*]] = or i8 [[SHIFT]], [[SHIFT2]]
+; CHECK-NEXT:    [[BINOP:%.*]] = or disjoint i8 [[SHIFT]], [[SHIFT2]]
 ; CHECK-NEXT:    ret i8 [[BINOP]]
 ;
   %shift = shl i8 16, %x
diff --git a/llvm/test/Transforms/InstCombine/bitcast-inselt-bitcast.ll b/llvm/test/Transforms/InstCombine/bitcast-inselt-bitcast.ll
index b99111580277d6a..410a441f7778e66 100644
--- a/llvm/test/Transforms/InstCombine/bitcast-inselt-bitcast.ll
+++ b/llvm/test/Transforms/InstCombine/bitcast-inselt-bitcast.ll
@@ -17,7 +17,7 @@ define i16 @insert0_v2i8(i16 %x, i8 %y) {
 ; LE-LABEL: @insert0_v2i8(
 ; LE-NEXT:    [[TMP1:%.*]] = and i16 [[X:%.*]], -256
 ; LE-NEXT:    [[TMP2:%.*]] = zext i8 [[Y:%.*]] to i16
-; LE-NEXT:    [[R:%.*]] = or i16 [[TMP1]], [[TMP2]]
+; LE-NEXT:    [[R:%.*]] = or disjoint i16 [[TMP1]], [[TMP2]]
 ; LE-NEXT:    ret i16 [[R]]
 ;
   %v = bitcast i16 %x to <2 x i8>
@@ -33,7 +33,7 @@ define i16 @insert1_v2i8(i16 %x, i8 %y) {
 ; BE-LABEL: @insert1_v2i8(
 ; BE-NEXT:    [[TMP1:%.*]] = and i16 [[X:%.*]], -256
 ; BE-NEXT:    [[TMP2:%.*]] = zext i8 [[Y:%.*]] to i16
-; BE-NEXT:    [[R:%.*]] = or i16 [[TMP1]], [[TMP2]]
+; BE-NEXT:    [[R:%.*]] = or disjoint i16 [[TMP1]], [[TMP2]]
 ; BE-NEXT:    ret i16 [[R]]
 ;
 ; LE-LABEL: @insert1_v2i8(
@@ -61,7 +61,7 @@ define i32 @insert0_v4i8(i32 %x, i8 %y) {
 ; LE-LABEL: @insert0_v4i8(
 ; LE-NEXT:    [[TMP1:%.*]] = and i32 [[X:%.*]], -256
 ; LE-NEXT:    [[TMP2:%.*]] = zext i8 [[Y:%.*]] to i32
-; LE-NEXT:    [[R:%.*]] = or i32 [[TMP1]], [[TMP2]]
+; LE-NEXT:    [[R:%.*]] = or disjoint i32 [[TMP1]], [[TMP2]]
 ; LE-NEXT:    ret i32 [[R]]
 ;
   %v = bitcast i32 %x to <4 x i8>
@@ -100,7 +100,7 @@ define i64 @insert0_v4i16(i64 %x, i16 %y) {
 ; LE-LABEL: @insert0_v4i16(
 ; LE-NEXT:    [[TMP1:%.*]] = and i64 [[X:%.*]], -65536
 ; LE-NEXT:    [[TMP2:%.*]] = zext i16 [[Y:%.*]] to i64
-; LE-NEXT:    [[R:%.*]] = or i64 [[TMP1]], [[TMP2]]
+; LE-NEXT:    [[R:%.*]] = or disjoint i64 [[TMP1]], [[TMP2]]
 ; LE-NEXT:    ret i64 [[R]]
 ;
   %v = bitcast i64 %x to <4 x i16>
@@ -131,7 +131,7 @@ define i64 @insert3_v4i16(i64 %x, i16 %y) {
 ; BE-LABEL: @insert3_v4i16(
 ; BE-NEXT:    [[TMP1:%.*]] = and i64 [[X:%.*]], -65536
 ; BE-NEXT:    [[TMP2:%.*]] = zext i16 [[Y:%.*]] to i64
-; BE-NEXT:    [[R:%.*]] = or i64 [[TMP1]], [[TMP2]]
+; BE-NEXT:    [[R:%.*]] = or disjoint i64 [[TMP1]], [[TMP2]]
 ; BE-NEXT:    ret i64 [[R]]
 ;
 ; LE-LABEL: @insert3_v4i16(
diff --git a/llvm/test/Transforms/InstCombine/bitreverse.ll b/llvm/test/Transforms/InstCombine/bitreverse.ll
index dca52e2c545e1e1..bf09ffe14101242 100644
--- a/llvm/test/Transforms/InstCombine/bitreverse.ll
+++ b/llvm/test/Transforms/InstCombine/bitreverse.ll
@@ -243,7 +243,7 @@ define i8 @rev8_mul_and_lshr(i8 %0) {
 ; CHECK-NEXT:    [[TMP4:%.*]] = and i64 [[TMP3]], 139536
 ; CHECK-NEXT:    [[TMP5:%.*]] = mul nuw nsw i64 [[TMP2]], 32800
 ; CHECK-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], 558144
-; CHECK-NEXT:    [[TMP7:%.*]] = or i64 [[TMP4]], [[TMP6]]
+; CHECK-NEXT:    [[TMP7:%.*]] = or disjoint i64 [[TMP4]], [[TMP6]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = mul nuw nsw i64 [[TMP7]], 65793
 ; CHECK-NEXT:    [[TMP9:%.*]] = lshr i64 [[TMP8]], 16
 ; CHECK-NEXT:    [[TMP10:%.*]] = trunc i64 [[TMP9]] to i8
diff --git a/llvm/test/Transforms/InstCombine/bswap.ll b/llvm/test/Transforms/InstCombine/bswap.ll
index 631d02ad8d806c3..756e898b18ebac9 100644
--- a/llvm/test/Transforms/InstCombine/bswap.ll
+++ b/llvm/test/Transforms/InstCombine/bswap.ll
@@ -42,7 +42,7 @@ define i16 @test1_trunc(i32 %i) {
 ; CHECK-NEXT:    [[T1:%.*]] = lshr i32 [[I:%.*]], 24
 ; CHECK-NEXT:    [[T3:%.*]] = lshr i32 [[I]], 8
 ; CHECK-NEXT:    [[T4:%.*]] = and i32 [[T3]], 65280
-; CHECK-NEXT:    [[T5:%.*]] = or i32 [[T1]], [[T4]]
+; CHECK-NEXT:    [[T5:%.*]] = or disjoint i32 [[T1]], [[T4]]
 ; CHECK-NEXT:    [[T13:%.*]] = trunc i32 [[T5]] to i16
 ; CHECK-NEXT:    ret i16 [[T13]]
 ;
@@ -59,7 +59,7 @@ define i16 @test1_trunc_extra_use(i32 %i) {
 ; CHECK-NEXT:    [[T1:%.*]] = lshr i32 [[I:%.*]], 24
 ; CHECK-NEXT:    [[T3:%.*]] = lshr i32 [[I]], 8
 ; CHECK-NEXT:    [[T4:%.*]] = and i32 [[T3]], 65280
-; CHECK-NEXT:    [[T5:%.*]] = or i32 [[T1]], [[T4]]
+; CHECK-NEXT:    [[T5:%.*]] = or disjoint i32 [[T1]], [[T4]]
 ; CHECK-NEXT:    call void @extra_use(i32 [[T5]])
 ; CHECK-NEXT:    [[T13:%.*]] = trunc i32 [[T5]] to i16
 ; CHECK-NEXT:    ret i16 [[T13]]
@@ -605,7 +605,7 @@ define i64 @bswap_and_mask_1(i64 %0) {
 ; CHECK-NEXT:    [[TMP2:%.*]] = lshr i64 [[TMP0:%.*]], 56
 ; CHECK-NEXT:    [[TMP3:%.*]] = lshr i64 [[TMP0]], 40
 ; CHECK-NEXT:    [[TMP4:%.*]] = and i64 [[TMP3]], 65280
-; CHECK-NEXT:    [[TMP5:%.*]] = or i64 [[TMP4]], [[TMP2]]
+; CHECK-NEXT:    [[TMP5:%.*]] = or disjoint i64 [[TMP4]], [[TMP2]]
 ; CHECK-NEXT:    ret i64 [[TMP5]]
 ;
   %2 = lshr i64 %0, 56
@@ -781,7 +781,7 @@ define i16 @trunc_bswap_i160(ptr %a0) {
 ; CHECK-NEXT:    [[SH_DIFF:%.*]] = lshr i160 [[LOAD]], 120
 ; CHECK-NEXT:    [[TR_SH_DIFF:%.*]] = trunc i160 [[SH_DIFF]] to i16
 ; CHECK-NEXT:    [[SHL:%.*]] = and i16 [[TR_SH_DIFF]], -256
-; CHECK-NEXT:    [[OR:%.*]] = or i16 [[AND1]], [[SHL]]
+; CHECK-NEXT:    [[OR:%.*]] = or disjoint i16 [[AND1]], [[SHL]]
 ; CHECK-NEXT:    ret i16 [[OR]]
 ;
   %load = load i160, ptr %a0, align 4
diff --git a/llvm/test/Transforms/InstCombine/cast-mul-select.ll b/llvm/test/Transforms/InstCombine/cast-mul-select.ll
index 454522b85a1e843..975d7a34db36c1b 100644
--- a/llvm/test/Transforms/InstCombine/cast-mul-select.ll
+++ b/llvm/test/Transforms/InstCombine/cast-mul-select.ll
@@ -145,7 +145,7 @@ define i32 @eval_sext_multi_use_in_one_inst(i32 %x) {
 ; CHECK-NEXT:    [[T:%.*]] = trunc i32 [[X:%.*]] to i16
 ; CHECK-NEXT:    [[A:%.*]] = and i16 [[T]], 14
 ; CHECK-NEXT:    [[M:%.*]] = mul nuw nsw i16 [[A]], [[A]]
-; CHECK-NEXT:    [[O:%.*]] = or i16 [[M]], -32768
+; CHECK-NEXT:    [[O:%.*]] = or disjoint i16 [[M]], -32768
 ; CHECK-NEXT:    [[R:%.*]] = sext i16 [[O]] to i32
 ; CHECK-NEXT:    ret i32 [[R]]
 ;
@@ -156,7 +156,7 @@ define i32 @eval_sext_multi_use_in_one_inst(i32 %x) {
 ; DBGINFO-NEXT:    call void @llvm.dbg.value(metadata i16 [[A]], metadata [[META77:![0-9]+]], metadata !DIExpression()), !dbg [[DBG82]]
 ; DBGINFO-NEXT:    [[M:%.*]] = mul nuw nsw i16 [[A]], [[A]], !dbg [[DBG83:![0-9]+]]
 ; DBGINFO-NEXT:    call void @llvm.dbg.value(metadata i16 [[M]], metadata [[META78:![0-9]+]], metadata !DIExpression()), !dbg [[DBG83]]
-; DBGINFO-NEXT:    [[O:%.*]] = or i16 [[M]], -32768, !dbg [[DBG84:![0-9]+]]
+; DBGINFO-NEXT:    [[O:%.*]] = or disjoint i16 [[M]], -32768, !dbg [[DBG84:![0-9]+]]
 ; DBGINFO-NEXT:    call void @llvm.dbg.value(metadata i16 [[O]], metadata [[META79:![0-9]+]], metadata !DIExpression()), !dbg [[DBG84]]
 ; DBGINFO-NEXT:    [[R:%.*]] = sext i16 [[O]] to i32, !dbg [[DBG85:![0-9]+]]
 ; DBGINFO-NEXT:    call void @llvm.dbg.value(metadata i32 [[R]], metadata [[META80:![0-9]+]], metadata !DIExpression()), !dbg [[DBG85]]
diff --git a/llvm/test/Transforms/InstCombine/cast.ll b/llvm/test/Transforms/InstCombine/cast.ll
index afa7ac45e96dcb4..1cda0e503ee9393 100644
--- a/llvm/test/Transforms/InstCombine/cast.ll
+++ b/llvm/test/Transforms/InstCombine/cast.ll
@@ -467,7 +467,7 @@ define i16 @test40(i16 %a) {
 ; ALL-LABEL: @test40(
 ; ALL-NEXT:    [[T21:%.*]] = lshr i16 [[A:%.*]], 9
 ; ALL-NEXT:    [[T5:%.*]] = shl i16 [[A]], 8
-; ALL-NEXT:    [[T32:%.*]] = or i16 [[T21]], [[T5]]
+; ALL-NEXT:    [[T32:%.*]] = or disjoint i16 [[T21]], [[T5]]
 ; ALL-NEXT:    ret i16 [[T32]]
 ;
   %t = zext i16 %a to i32
@@ -482,7 +482,7 @@ define <2 x i16> @test40vec(<2 x i16> %a) {
 ; ALL-LABEL: @test40vec(
 ; ALL-NEXT:    [[T21:%.*]] = lshr <2 x i16> [[A:%.*]], <i16 9, i16 9>
 ; ALL-NEXT:    [[T5:%.*]] = shl <2 x i16> [[A]], <i16 8, i16 8>
-; ALL-NEXT:    [[T32:%.*]] = or <2 x i16> [[T21]], [[T5]]
+; ALL-NEXT:    [[T32:%.*]] = or disjoint <2 x i16> [[T21]], [[T5]]
 ; ALL-NEXT:    ret <2 x i16> [[T32]]
 ;
   %t = zext <2 x i16> %a to <2 x i32>
@@ -497,7 +497,7 @@ define <2 x i16> @test40vec_nonuniform(<2 x i16> %a) {
 ; ALL-LABEL: @test40vec_nonuniform(
 ; ALL-NEXT:    [[T21:%.*]] = lshr <2 x i16> [[A:%.*]], <i16 9, i16 10>
 ; ALL-NEXT:    [[T5:%.*]] = shl <2 x i16> [[A]], <i16 8, i16 9>
-; ALL-NEXT:    [[T32:%.*]] = or <2 x i16> [[T21]], [[T5]]
+; ALL-NEXT:    [[T32:%.*]] = or disjoint <2 x i16> [[T21]], [[T5]]
 ; ALL-NEXT:    ret <2 x i16> [[T32]]
 ;
   %t = zext <2 x i16> %a to <2 x i32>
@@ -646,7 +646,7 @@ define i64 @test48(i8 %A1, i8 %a2) {
 ; ALL-LABEL: @test48(
 ; ALL-NEXT:    [[Z2:%.*]] = zext i8 [[A1:%.*]] to i32
 ; ALL-NEXT:    [[C:%.*]] = shl nuw nsw i32 [[Z2]], 8
-; ALL-NEXT:    [[D:%.*]] = or i32 [[C]], [[Z2]]
+; ALL-NEXT:    [[D:%.*]] = or disjoint i32 [[C]], [[Z2]]
 ; ALL-NEXT:    [[E:%.*]] = zext nneg i32 [[D]] to i64
 ; ALL-NEXT:    ret i64 [[E]]
 ;
@@ -690,7 +690,7 @@ define i64 @test51(i64 %A, i1 %cond) {
 ; ALL-NEXT:    [[C:%.*]] = and i64 [[A:%.*]], 4294967294
 ; ALL-NEXT:    [[NOT_COND:%.*]] = xor i1 [[COND:%.*]], true
 ; ALL-NEXT:    [[MASKSEL:%.*]] = zext i1 [[NOT_COND]] to i64
-; ALL-NEXT:    [[E:%.*]] = or i64 [[C]], [[MASKSEL]]
+; ALL-NEXT:    [[E:%.*]] = or disjoint i64 [[C]], [[MASKSEL]]
 ; ALL-NEXT:    [[SEXT:%.*]] = shl nuw i64 [[E]], 32
 ; ALL-NEXT:    [[F:%.*]] = ashr exact i64 [[SEXT]], 32
 ; ALL-NEXT:    ret i64 [[F]]
@@ -707,7 +707,7 @@ define i32 @test52(i64 %A) {
 ; ALL-LABEL: @test52(
 ; ALL-NEXT:    [[B:%.*]] = trunc i64 [[A:%.*]] to i32
 ; ALL-NEXT:    [[C:%.*]] = and i32 [[B]], 7224
-; ALL-NEXT:    [[D:%.*]] = or i32 [[C]], 32962
+; ALL-NEXT:    [[D:%.*]] = or disjoint i32 [[C]], 32962
 ; ALL-NEXT:    ret i32 [[D]]
 ;
   %B = trunc i64 %A to i16
@@ -720,7 +720,7 @@ define i32 @test52(i64 %A) {
 define i64 @test53(i32 %A) {
 ; ALL-LABEL: @test53(
 ; ALL-NEXT:    [[TMP1:%.*]] = and i32 [[A:%.*]], 7224
-; ALL-NEXT:    [[TMP2:%.*]] = or i32 [[TMP1]], 32962
+; ALL-NEXT:    [[TMP2:%.*]] = or disjoint i32 [[TMP1]], 32962
 ; ALL-NEXT:    [[D:%.*]] = zext nneg i32 [[TMP2]] to i64
 ; ALL-NEXT:    ret i64 [[D]]
 ;
@@ -735,7 +735,7 @@ define i32 @test54(i64 %A) {
 ; ALL-LABEL: @test54(
 ; ALL-NEXT:    [[B:%.*]] = trunc i64 [[A:%.*]] to i32
 ; ALL-NEXT:    [[C:%.*]] = and i32 [[B]], 7224
-; ALL-NEXT:    [[D:%.*]] = or i32 [[C]], -32574
+; ALL-NEXT:    [[D:%.*]] = or disjoint i32 [[C]], -32574
 ; ALL-NEXT:    ret i32 [[D]]
 ;
   %B = trunc i64 %A to i16
@@ -749,7 +749,7 @@ define i64 @test55(i32 %A) {
 ; ALL-LABEL: @test55(
 ; ALL-NEXT:    [[TMP1:%.*]] = and i32 [[A:%.*]], 7224
 ; ALL-NEXT:    [[C:%.*]] = zext nneg i32 [[TMP1]] to i64
-; ALL-NEXT:    [[D:%.*]] = or i64 [[C]], -32574
+; ALL-NEXT:    [[D:%.*]] = or disjoint i64 [[C]], -32574
 ; ALL-NEXT:    ret i64 [[D]]
 ;
   %B = trunc i32 %A to i16
@@ ...
[truncated]

llvmbot (Collaborator) commented Nov 28, 2023

@llvm/pr-subscribers-clang

(Same patch summary, file list, and truncated diff as in the llvm-transforms subscriber comment above.)

topperc (Collaborator, Author) commented Nov 28, 2023

Are the KnownBits in SimplifyDemandedBits usable? We have this code:

    if (SimplifyDemandedBits(I, 1, DemandedMask, RHSKnown, Depth + 1) ||         
        SimplifyDemandedBits(I, 0, DemandedMask & ~RHSKnown.One, LHSKnown,       
                             Depth + 1)) { 

@nikic Can we trust the known bits for the LHS if we didn't demand them due to known 1s on the right-hand side?

nikic added a commit that referenced this pull request Nov 30, 2023
In practice this is already true, and having this as an explicit
guarantee is useful for #72912. I don't think there is any good
reason why we would want to produce incorrect KnownBits results
for non-demanded bits.
nikic (Contributor) commented Nov 30, 2023

Do we do that for other flags already? I based this off Add/Sub wrap flags.

Add/Sub are not considered roots for demanded bits simplification, so we can't (reliably) perform this there. An or, on the other hand, is a simplification root.

Are the KnownBits in SimplifyDemandedBits usable? We have this code:

    if (SimplifyDemandedBits(I, 1, DemandedMask, RHSKnown, Depth + 1) ||         
        SimplifyDemandedBits(I, 0, DemandedMask & ~RHSKnown.One, LHSKnown,       
                             Depth + 1)) { 

@nikic Can we trust the known bits for the LHS if we didn't demand them due to known 1s on the right-hand side?

Good question. I don't think there is any good reason why we would return incorrect KnownBits for non-demanded bits. I've run some tests to verify that we don't do that currently and updated the documentation to guarantee this (2031e72).
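
To make this concrete, a small hypothetical fragment (not taken from the patch or its tests) showing why KnownBits for non-demanded bits matter when inferring disjoint inside SimplifyDemandedBits:

  ; When SimplifyDemandedBits visits %r, the low 8 bits of %hi are not
  ; demanded from the LHS, because the constant RHS already has them set
  ; (RHSKnown.One covers 0xff). Inferring "disjoint" from LHSKnown therefore
  ; relies on KnownBits being correct even for those non-demanded bits,
  ; which is the guarantee documented in 2031e72. Here %hi really does have
  ; its low 8 bits known zero, so the flag is safe to set.
  %hi = shl i32 %x, 8
  %r  = or i32 %hi, 255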

@topperc topperc force-pushed the pr/instcombine-infer-disjoint branch from 4b87774 to 8108f67 on December 1, 2023 18:54
nikic (Contributor) left a comment

LGTM

@topperc topperc force-pushed the pr/instcombine-infer-disjoint branch from 8108f67 to 7a04a7f on December 2, 2023 22:04
@topperc topperc merged commit 7ec4f60 into llvm:main Dec 2, 2023
2 of 3 checks passed
@topperc topperc deleted the pr/instcombine-infer-disjoint branch December 2, 2023 22:11
nunoplopes (Member)

FWIW, Alive2 is complaining about this commit. These patches are not safe w.r.t. undef.

topperc (Collaborator, Author) commented Dec 4, 2023

FWIW, Alive2 is complaining about this commit. These patches are not safe w.r.t. undef.

Why not?

nunoplopes (Member)

Here's a simple-ish example:

; Transforms/InstCombine/add.ll

define i5 @zext_sext_not(i4 %x) {
  %zx = zext i4 %x to i5
  %notx = xor i4 %x, 15
  %snotx = sext i4 %notx to i5
  %r = add i5 %zx, %snotx
  ret i5 %r
}
=>
define i5 @zext_sext_not(i4 %x) {
  %zx = zext i4 %x to i5
  %notx = xor i4 %x, 15
  %snotx = sext i4 %notx to i5
  %r = or disjoint i5 %zx, %snotx
  ret i5 %r
}
Transformation doesn't verify! (unsound)
ERROR: Target is more poisonous than source

Example:
i4 %x = undef

Source:
i5 %zx = #x00 (0)	[based on undef value]
i4 %notx = #xf (15, -1)	[based on undef value]
i5 %snotx = #x1f (31, -1)
i5 %r = #x1f (31, -1)

Target:
i5 %zx = #x0f (15)
i4 %notx = #xf (15, -1)
i5 %snotx = #x1f (31, -1)
i5 %r = poison
Source value: #x1f (31, -1)
Target value: poison

Essentially the code doesn't take into consideration that the same register may have different values when it is undef.
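
An annotated restatement of the hazard, reusing the values from the Alive2 counterexample above (illustrative only):

  ; With %x = undef, each textual use of %x may resolve to a different value.
  %zx    = zext i4 %x to i5      ; this use may read %x as 15 -> %zx = 15
  %notx  = xor i4 %x, 15         ; this use may read %x as 0  -> %notx = 15 (-1)
  %snotx = sext i4 %notx to i5   ; %snotx = 31 (-1), all five bits set
  ; For any single, consistent value of %x, %zx and %snotx never share a set
  ; bit, so rewriting the add into "or disjoint" looks safe. But with the
  ; per-use choices above the low four bits overlap, making "or disjoint"
  ; poison, while the original add produced a defined value for every choice
  ; of the undef, so the target is more poisonous than the source.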

topperc (Collaborator, Author) commented Dec 4, 2023

Here's a simple-ish example: [...] Essentially the code doesn't take into consideration that the same register may have different values when it is undef.

Looks like this is caused by the checks in haveNoCommonBitsSetSpecialCases.

topperc (Collaborator, Author) commented Dec 4, 2023

@nikic is something like this the right fix?

diff --git a/llvm/lib/Analysis/ValueTracking.cpp b/llvm/lib/Analysis/ValueTracking.cpp
index 8c29c242215d..b03a56c922de 100644
--- a/llvm/lib/Analysis/ValueTracking.cpp
+++ b/llvm/lib/Analysis/ValueTracking.cpp
@@ -235,8 +235,11 @@ bool llvm::haveNoCommonBitsSet(const WithCache<const Value *> &LHSCache,
          "LHS and RHS should be integers");
 
   if (haveNoCommonBitsSetSpecialCases(LHS, RHS) ||
-      haveNoCommonBitsSetSpecialCases(RHS, LHS))
-    return true;
+      haveNoCommonBitsSetSpecialCases(RHS, LHS)) {
+    if (isGuaranteedNotToBeUndefOrPoison(LHS, SQ.AC, SQ.CxtI, SQ.DT) &&
+        isGuaranteedNotToBeUndefOrPoison(RHS, SQ.AC, SQ.CxtI, SQ.DT))
+      return true;
+  }
 
   return KnownBits::haveNoCommonBitsSet(LHSCache.getKnownBits(SQ),
                                         RHSCache.getKnownBits(SQ));

nunoplopes (Member)

We don't have an isGuaranteedNotToBeUndef-only function, so that's the only way. I would leave a FIXME, since this call can be removed if we ever manage to kill undef.
Thanks for your help! 🙂

nikic (Contributor) commented Dec 4, 2023

We don't have an isGuaranteedNotToBeUndef-only function, so that's the only way. I would leave a FIXME, since this call can be removed if we ever manage to kill undef.

I actually added this function earlier today.

@nikic is something like this the right fix?

I'd move the calls to isGuaranteedNotToBeUndef into haveNoCommonBitsSetSpecialCases, so you can check the correct values.
