
Conversation

@cofibrant
Contributor

Use LLT::changeElementType() instead of LLT::changeElementSize() in LegalizerHelper::lowerMinMax() to avoid a crash when the destination type is a vector of pointers.

Fixes #166556
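
For context, a minimal standalone sketch of the LLT behaviour involved (not part of this patch; it assumes the in-tree LLT API, a 64-bit address-space-0 pointer, and a header path that may vary across LLVM versions):

// Sketch only: contrasts changeElementSize() and changeElementType() on a
// pointer-vector LLT, mirroring the CmpType computation in lowerMinMax().
#include "llvm/CodeGenTypes/LowLevelType.h" // LLT; the path differs in older trees
#include <cassert>

using namespace llvm;

int main() {
  // Destination type from the crashing case: <4 x p0> with 64-bit pointers.
  LLT DstTy = LLT::fixed_vector(4, LLT::pointer(/*AddressSpace=*/0, /*SizeInBits=*/64));

  // Old code: changeElementSize(1) asserts on pointer element types with
  // "invalid to directly change element size for pointers".
  // LLT Bad = DstTy.changeElementSize(1);

  // New code: swap the element type for s1, keeping the element count.
  LLT CmpTy = DstTy.changeElementType(LLT::scalar(1));
  assert(CmpTy == LLT::fixed_vector(4, LLT::scalar(1)) && "expected <4 x s1>");
  return 0;
}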

@llvmbot
Member

llvmbot commented Nov 20, 2025

@llvm/pr-subscribers-backend-aarch64, @llvm/pr-subscribers-llvm-globalisel

Author: Nathan Corbyn (cofibrant)

Changes

Use LLT::changeElementType() instead of LLT::changeElementSize() in LegalizerHelper::lowerMinMax() to avoid a crash when the destination type is a vector of pointers.

Fixes #166556


Full diff: https://github.com/llvm/llvm-project/pull/168872.diff

2 Files Affected:

  • (modified) llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp (+1-1)
  • (added) llvm/test/CodeGen/AArch64/GlobalISel/legalize-min-max-crash.mir (+174)
diff --git a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
index 120c38ab8404c..e19f14b863ee3 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
@@ -8657,7 +8657,7 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerMinMax(MachineInstr &MI) {
   auto [Dst, Src0, Src1] = MI.getFirst3Regs();
 
   const CmpInst::Predicate Pred = minMaxToCompare(MI.getOpcode());
-  LLT CmpType = MRI.getType(Dst).changeElementSize(1);
+  LLT CmpType = MRI.getType(Dst).changeElementType(LLT::scalar(1));
 
   auto Cmp = MIRBuilder.buildICmp(Pred, CmpType, Src0, Src1);
   MIRBuilder.buildSelect(Dst, Cmp, Src0, Src1);
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-min-max-crash.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-min-max-crash.mir
new file mode 100644
index 0000000000000..23d8f0bab4487
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-min-max-crash.mir
@@ -0,0 +1,174 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 6
+# RUN: llc -o - -mtriple=aarch64 -run-pass=legalizer -verify-machineinstrs %s | FileCheck %s
+---
+name:            g_umax
+legalized:       false
+body:             |
+  bb.0:
+    liveins: $q0, $q1
+
+    ; CHECK-LABEL: name: g_umax
+    ; CHECK: liveins: $q0, $q1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<2 x s64>) = COPY $q0
+    ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(<2 x s64>) = COPY $q1
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(p0) = G_CONSTANT i64 0
+    ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x p0>) = G_BUILD_VECTOR [[C]](p0), [[C]](p0)
+    ; CHECK-NEXT: [[BITCAST:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY]](<2 x s64>)
+    ; CHECK-NEXT: [[BITCAST1:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY1]](<2 x s64>)
+    ; CHECK-NEXT: [[ICMP:%[0-9]+]]:_(<2 x s64>) = G_ICMP intpred(ugt), [[BITCAST]](<2 x p0>), [[BUILD_VECTOR]]
+    ; CHECK-NEXT: [[ICMP1:%[0-9]+]]:_(<2 x s64>) = G_ICMP intpred(ugt), [[BITCAST1]](<2 x p0>), [[BUILD_VECTOR]]
+    ; CHECK-NEXT: [[BITCAST2:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY]](<2 x s64>)
+    ; CHECK-NEXT: [[BITCAST3:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY1]](<2 x s64>)
+    ; CHECK-NEXT: [[PTRTOINT:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BITCAST2]](<2 x p0>)
+    ; CHECK-NEXT: [[PTRTOINT1:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BITCAST3]](<2 x p0>)
+    ; CHECK-NEXT: [[PTRTOINT2:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BUILD_VECTOR]](<2 x p0>)
+    ; CHECK-NEXT: [[PTRTOINT3:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BUILD_VECTOR]](<2 x p0>)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 -1
+    ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<2 x s64>) = G_BUILD_VECTOR [[C1]](s64), [[C1]](s64)
+    ; CHECK-NEXT: [[XOR:%[0-9]+]]:_(<2 x s64>) = G_XOR [[ICMP]], [[BUILD_VECTOR1]]
+    ; CHECK-NEXT: [[XOR1:%[0-9]+]]:_(<2 x s64>) = G_XOR [[ICMP1]], [[BUILD_VECTOR1]]
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT]], [[ICMP]]
+    ; CHECK-NEXT: [[AND1:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT1]], [[ICMP1]]
+    ; CHECK-NEXT: [[AND2:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT2]], [[XOR]]
+    ; CHECK-NEXT: [[AND3:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT3]], [[XOR1]]
+    ; CHECK-NEXT: [[OR:%[0-9]+]]:_(<2 x s64>) = G_OR [[AND]], [[AND2]]
+    ; CHECK-NEXT: [[OR1:%[0-9]+]]:_(<2 x s64>) = G_OR [[AND1]], [[AND3]]
+    ; CHECK-NEXT: [[INTTOPTR:%[0-9]+]]:_(<2 x p0>) = G_INTTOPTR [[OR]](<2 x s64>)
+    ; CHECK-NEXT: [[INTTOPTR1:%[0-9]+]]:_(<2 x p0>) = G_INTTOPTR [[OR1]](<2 x s64>)
+    ; CHECK-NEXT: [[BITCAST4:%[0-9]+]]:_(<2 x s64>) = G_BITCAST [[INTTOPTR]](<2 x p0>)
+    ; CHECK-NEXT: [[BITCAST5:%[0-9]+]]:_(<2 x s64>) = G_BITCAST [[INTTOPTR1]](<2 x p0>)
+    ; CHECK-NEXT: $q0 = COPY [[BITCAST4]](<2 x s64>)
+    ; CHECK-NEXT: $q1 = COPY [[BITCAST5]](<2 x s64>)
+    ; CHECK-NEXT: RET_ReallyLR implicit $q0, implicit $q1
+    %1:_(<2 x s64>) = COPY $q0
+    %2:_(<2 x s64>) = COPY $q1
+    %0:_(<4 x p0>) = G_CONCAT_VECTORS %1(<2 x s64>), %2(<2 x s64>)
+    %4:_(p0) = G_CONSTANT i64 0
+    %3:_(<4 x p0>) = G_BUILD_VECTOR %4(p0), %4(p0), %4(p0), %4(p0)
+    %6:_(<4 x p0>) = G_UMAX %0, %3
+    %7:_(<2 x s64>), %8:_(<2 x s64>) = G_UNMERGE_VALUES %6(<4 x p0>)
+    $q0 = COPY %7(<2 x s64>)
+    $q1 = COPY %8(<2 x s64>)
+    RET_ReallyLR implicit $q0, implicit $q1
+...
+---
+name:            g_umin
+legalized:       false
+body:             |
+  bb.0:
+    liveins: $q0, $q1
+
+    ; CHECK-LABEL: name: g_umin
+    ; CHECK: liveins: $q0, $q1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<2 x s64>) = COPY $q0
+    ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(<2 x s64>) = COPY $q1
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(p0) = G_CONSTANT i64 0
+    ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x p0>) = G_BUILD_VECTOR [[C]](p0), [[C]](p0)
+    ; CHECK-NEXT: [[BITCAST:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY]](<2 x s64>)
+    ; CHECK-NEXT: [[BITCAST1:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY1]](<2 x s64>)
+    ; CHECK-NEXT: [[ICMP:%[0-9]+]]:_(<2 x s64>) = G_ICMP intpred(ult), [[BITCAST]](<2 x p0>), [[BUILD_VECTOR]]
+    ; CHECK-NEXT: [[ICMP1:%[0-9]+]]:_(<2 x s64>) = G_ICMP intpred(ult), [[BITCAST1]](<2 x p0>), [[BUILD_VECTOR]]
+    ; CHECK-NEXT: [[BITCAST2:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY]](<2 x s64>)
+    ; CHECK-NEXT: [[BITCAST3:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY1]](<2 x s64>)
+    ; CHECK-NEXT: [[PTRTOINT:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BITCAST2]](<2 x p0>)
+    ; CHECK-NEXT: [[PTRTOINT1:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BITCAST3]](<2 x p0>)
+    ; CHECK-NEXT: [[PTRTOINT2:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BUILD_VECTOR]](<2 x p0>)
+    ; CHECK-NEXT: [[PTRTOINT3:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BUILD_VECTOR]](<2 x p0>)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 -1
+    ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<2 x s64>) = G_BUILD_VECTOR [[C1]](s64), [[C1]](s64)
+    ; CHECK-NEXT: [[XOR:%[0-9]+]]:_(<2 x s64>) = G_XOR [[ICMP]], [[BUILD_VECTOR1]]
+    ; CHECK-NEXT: [[XOR1:%[0-9]+]]:_(<2 x s64>) = G_XOR [[ICMP1]], [[BUILD_VECTOR1]]
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT]], [[ICMP]]
+    ; CHECK-NEXT: [[AND1:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT1]], [[ICMP1]]
+    ; CHECK-NEXT: [[AND2:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT2]], [[XOR]]
+    ; CHECK-NEXT: [[AND3:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT3]], [[XOR1]]
+    ; CHECK-NEXT: [[OR:%[0-9]+]]:_(<2 x s64>) = G_OR [[AND]], [[AND2]]
+    ; CHECK-NEXT: [[OR1:%[0-9]+]]:_(<2 x s64>) = G_OR [[AND1]], [[AND3]]
+    ; CHECK-NEXT: [[INTTOPTR:%[0-9]+]]:_(<2 x p0>) = G_INTTOPTR [[OR]](<2 x s64>)
+    ; CHECK-NEXT: [[INTTOPTR1:%[0-9]+]]:_(<2 x p0>) = G_INTTOPTR [[OR1]](<2 x s64>)
+    ; CHECK-NEXT: [[BITCAST4:%[0-9]+]]:_(<2 x s64>) = G_BITCAST [[INTTOPTR]](<2 x p0>)
+    ; CHECK-NEXT: [[BITCAST5:%[0-9]+]]:_(<2 x s64>) = G_BITCAST [[INTTOPTR1]](<2 x p0>)
+    ; CHECK-NEXT: $q0 = COPY [[BITCAST4]](<2 x s64>)
+    ; CHECK-NEXT: $q1 = COPY [[BITCAST5]](<2 x s64>)
+    ; CHECK-NEXT: RET_ReallyLR implicit $q0, implicit $q1
+    %1:_(<2 x s64>) = COPY $q0
+    %2:_(<2 x s64>) = COPY $q1
+    %0:_(<4 x p0>) = G_CONCAT_VECTORS %1(<2 x s64>), %2(<2 x s64>)
+    %4:_(p0) = G_CONSTANT i64 0
+    %3:_(<4 x p0>) = G_BUILD_VECTOR %4(p0), %4(p0), %4(p0), %4(p0)
+    %6:_(<4 x p0>) = G_UMIN %0, %3
+    %7:_(<2 x s64>), %8:_(<2 x s64>) = G_UNMERGE_VALUES %6(<4 x p0>)
+    $q0 = COPY %7(<2 x s64>)
+    $q1 = COPY %8(<2 x s64>)
+    RET_ReallyLR implicit $q0, implicit $q1
+...
+---
+name:            g_smax
+legalized:       false
+body:             |
+  bb.0:
+    liveins: $q0, $q1
+
+    ; CHECK-LABEL: name: g_smax
+    ; CHECK: liveins: $q0, $q1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<2 x s64>) = COPY $q0
+    ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(<2 x s64>) = COPY $q1
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(p0) = G_CONSTANT i64 0
+    ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x p0>) = G_BUILD_VECTOR [[C]](p0), [[C]](p0)
+    ; CHECK-NEXT: [[BITCAST:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY]](<2 x s64>)
+    ; CHECK-NEXT: [[BITCAST1:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY1]](<2 x s64>)
+    ; CHECK-NEXT: [[ICMP:%[0-9]+]]:_(<2 x s64>) = G_ICMP intpred(sgt), [[BITCAST]](<2 x p0>), [[BUILD_VECTOR]]
+    ; CHECK-NEXT: [[ICMP1:%[0-9]+]]:_(<2 x s64>) = G_ICMP intpred(sgt), [[BITCAST1]](<2 x p0>), [[BUILD_VECTOR]]
+    ; CHECK-NEXT: [[BITCAST2:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY]](<2 x s64>)
+    ; CHECK-NEXT: [[BITCAST3:%[0-9]+]]:_(<2 x p0>) = G_BITCAST [[COPY1]](<2 x s64>)
+    ; CHECK-NEXT: [[PTRTOINT:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BITCAST2]](<2 x p0>)
+    ; CHECK-NEXT: [[PTRTOINT1:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BITCAST3]](<2 x p0>)
+    ; CHECK-NEXT: [[PTRTOINT2:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BUILD_VECTOR]](<2 x p0>)
+    ; CHECK-NEXT: [[PTRTOINT3:%[0-9]+]]:_(<2 x s64>) = G_PTRTOINT [[BUILD_VECTOR]](<2 x p0>)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 -1
+    ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<2 x s64>) = G_BUILD_VECTOR [[C1]](s64), [[C1]](s64)
+    ; CHECK-NEXT: [[XOR:%[0-9]+]]:_(<2 x s64>) = G_XOR [[ICMP]], [[BUILD_VECTOR1]]
+    ; CHECK-NEXT: [[XOR1:%[0-9]+]]:_(<2 x s64>) = G_XOR [[ICMP1]], [[BUILD_VECTOR1]]
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT]], [[ICMP]]
+    ; CHECK-NEXT: [[AND1:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT1]], [[ICMP1]]
+    ; CHECK-NEXT: [[AND2:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT2]], [[XOR]]
+    ; CHECK-NEXT: [[AND3:%[0-9]+]]:_(<2 x s64>) = G_AND [[PTRTOINT3]], [[XOR1]]
+    ; CHECK-NEXT: [[OR:%[0-9]+]]:_(<2 x s64>) = G_OR [[AND]], [[AND2]]
+    ; CHECK-NEXT: [[OR1:%[0-9]+]]:_(<2 x s64>) = G_OR [[AND1]], [[AND3]]
+    ; CHECK-NEXT: [[INTTOPTR:%[0-9]+]]:_(<2 x p0>) = G_INTTOPTR [[OR]](<2 x s64>)
+    ; CHECK-NEXT: [[INTTOPTR1:%[0-9]+]]:_(<2 x p0>) = G_INTTOPTR [[OR1]](<2 x s64>)
+    ; CHECK-NEXT: [[BITCAST4:%[0-9]+]]:_(<2 x s64>) = G_BITCAST [[INTTOPTR]](<2 x p0>)
+    ; CHECK-NEXT: [[BITCAST5:%[0-9]+]]:_(<2 x s64>) = G_BITCAST [[INTTOPTR1]](<2 x p0>)
+    ; CHECK-NEXT: $q0 = COPY [[BITCAST4]](<2 x s64>)
+    ; CHECK-NEXT: $q1 = COPY [[BITCAST5]](<2 x s64>)
+    ; CHECK-NEXT: RET_ReallyLR implicit $q0, implicit $q1
+    %1:_(<2 x s64>) = COPY $q0
+    %2:_(<2 x s64>) = COPY $q1
+    %0:_(<4 x p0>) = G_CONCAT_VECTORS %1(<2 x s64>), %2(<2 x s64>)
+    %4:_(p0) = G_CONSTANT i64 0
+    %3:_(<4 x p0>) = G_BUILD_VECTOR %4(p0), %4(p0), %4(p0), %4(p0)
+    %6:_(<4 x p0>) = G_SMAX %0, %3
+    %7:_(<2 x s64>), %8:_(<2 x s64>) = G_UNMERGE_VALUES %6(<4 x p0>)
+    $q0 = COPY %7(<2 x s64>)
+    $q1 = COPY %8(<2 x s64>)
+    RET_ReallyLR implicit $q0, implicit $q1
+...
+---
+name:            g_smin
+legalized:       false
+body:             |
+  bb.0:
+    liveins: $q0, $q1
+
+    ; CHECK-LABEL: name: g_smin
+    ; CHECK: liveins: $q0, $q1
+    %1:_(<2 x s64>) = COPY $q0
+    %2:_(<2 x s64>) = COPY $q1
+    %0:_(<4 x p0>) = G_CONCAT_VECTORS %1(<2 x s64>), %2(<2 x s64>)
+    %4:_(p0) = G_CONSTANT i64 0
+    %3:_(<4 x p0>) = G_BUILD_VECTOR %4(p0), %4(p0), %4(p0), %4(p0)
+    %6:_(<4 x p0>) = G_SMIN %0, %3
+    %7:_(<2 x s64>), %8:_(<2 x s64>) = G_UNMERGE_VALUES %6(<4 x p0>)

@github-actions

🐧 Linux x64 Test Results

  • 186424 tests passed
  • 4863 tests skipped

@arsenm
Contributor

arsenm left a comment

End to end IR tests would be better

@@ -0,0 +1,174 @@
# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 6
# RUN: llc -o - -mtriple=aarch64 -run-pass=legalizer -verify-machineinstrs %s | FileCheck %s

Suggested change
# RUN: llc -o - -mtriple=aarch64 -run-pass=legalizer -verify-machineinstrs %s | FileCheck %s
# RUN: llc -o - -mtriple=aarch64 -run-pass=legalizer %s | FileCheck %s

%4:_(p0) = G_CONSTANT i64 0
%3:_(<4 x p0>) = G_BUILD_VECTOR %4(p0), %4(p0), %4(p0), %4(p0)
%6:_(<4 x p0>) = G_SMIN %0, %3
%7:_(<2 x s64>), %8:_(<2 x s64>) = G_UNMERGE_VALUES %6(<4 x p0>)

Missing separators at end of function

@cofibrant
Contributor Author

End to end IR tests would be better

I had planned to use an end-to-end IR test, but unfortunately we still fail to select here, so I thought a MIR test would do the job. What would you recommend?

@arsenm
Contributor

arsenm commented Nov 20, 2025

End to end IR tests would be better

I had planned to use an end-to-end IR test, but unfortunately we still fail to select here, so I thought a MIR test would do the job. What would you recommend?

I guess keep it as MIR for now

Linked issue: #166556, [AArch64][GISel] Assert "invalid to directly change element size for pointers"