50 changes: 50 additions & 0 deletions llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -15654,6 +15654,56 @@ SDValue AArch64TargetLowering::LowerBUILD_VECTOR(SDValue Op,
}
}

// 128-bit NEON integer vectors:
// If BUILD_VECTOR has low half == splat(lane 0) and high half == zero,
Contributor:
This comment is wrong. I think you want to say that data is contained only in the lower half of the register and the upper half is zeroed.

Contributor:
I also think this comment should be less detailed and explain at a high level that we can exploit the fact that the 64-bit dup and fmov instructions zero the other 64 bits, to simplify the generated code.

// build the low half and return SUBREG_TO_REG(0, Lo, dsub).
// This avoids INSERT_VECTOR_ELT chains and lets later passes assume the
// other lanes are zero.
if (VT.isFixedLengthVector() && VT.getSizeInBits() == 128) {
Contributor:
Will this work if it is a 64-bit vector? Is it valid to do the same for a 64-bit vector?

EVT LaneVT = VT.getVectorElementType();
if (LaneVT.isInteger()) {
Contributor:
I think this should work on floating-point types as well, since we don't do any integer-specific operations here.

Contributor:
Does it need to always be integer? What happens if this is a float type? It looks like nothing breaks if I remove this statement.

const unsigned HalfElts = NumElts >> 1;
SDValue FirstVal = Op.getOperand(0);

auto IsZero = [&](SDValue V) { return isNullConstant(V); };
Contributor:
You probably want your check here to be: isNullConstantOrUndef || isNullFPConstant


bool IsLoSplatHiZero = true;
for (unsigned i = 0; i < NumElts; ++i) {
Contributor:
I think this for loop could be merged into the big one at the beginning. We don't need to add anything to check the lower half; we can just check whether NumDifferentLanes == 2, and for the upper half we can introduce a new variable that tracks whether it is zero or undef. This would reduce the guard for this optimization to a single if check.

SDValue Vi = Op.getOperand(i);
bool violates = (i < HalfElts) ? (Vi != FirstVal) : !IsZero(Vi);
if (violates) {
IsLoSplatHiZero = false;
break;
}
}

if (IsLoSplatHiZero) {
EVT HalfVT = VT.getHalfNumVectorElementsVT(*DAG.getContext());
unsigned LaneBits = LaneVT.getSizeInBits();

auto buildSubregToReg = [&](SDValue LoHalf) -> SDValue {
Contributor:
I am not sure this needs to be its own lambda function.

SDValue ZeroImm = DAG.getTargetConstant(0, DL, MVT::i32);
Contributor:
Do we need target constants here? I think getConstant should be sufficient.

SDValue SubIdx = DAG.getTargetConstant(AArch64::dsub, DL, MVT::i32);
SDNode *N = DAG.getMachineNode(TargetOpcode::SUBREG_TO_REG, DL, VT,
Collaborator:
Hi - we should ideally not be introducing MachineNodes this early in the pipeline; it is a layering violation. It looked like it would be possible to use CONCAT(Dup, Zeroes) and get the same result in the simpler tests, though some of the others didn't work so well.

We could also think of doing this for other BUILD_VECTORs too, not just dups. CONCAT(BV, Zero) could potentially lower more efficiently than a lot of zero inserts.

Contributor:
Yeah, we have tried different nodes here, but we always ended up with redundant moves in some cases. That's why we went with SUBREG_TO_REG. We could create a new ISD node to use here to address the layering concern, if you would like.

{ZeroImm, LoHalf, SubIdx});
return SDValue(N, 0);
};

if (LaneBits == 64) {
Contributor:
You repeat a lot of code in this if statement. I think the common parts should be pulled out of it; then it can be simplified.

Contributor:
Suggested change:
-      if (LaneBits == 64) {
+      if (FirstVal.getValueSizeInBits() == 64) {

Contributor:
You should also add a quick comment explaining why you use a different node for this case.

// v2i64
SDValue First64 = DAG.getZExtOrTrunc(FirstVal, DL, MVT::i64);
Contributor:
There is no need to cast here. I think you can safely assume that the first element is 64-bit already.

SDValue Lo = DAG.getNode(ISD::SCALAR_TO_VECTOR, DL, HalfVT, First64);
return buildSubregToReg(Lo);
} else {
// v4i32/v8i16/v16i8
SDValue FirstW = DAG.getZExtOrTrunc(FirstVal, DL, MVT::i32);
Contributor:
Why are you always casting to i32 here?

SDValue DupLo = DAG.getNode(AArch64ISD::DUP, DL, HalfVT, FirstW);
return buildSubregToReg(DupLo);
}
}
}
}

// Use DUP for non-constant splats. For f32 constant splats, reduce to
// i32 and try again.
if (usesOnlyOneValue) {
9 changes: 3 additions & 6 deletions llvm/test/CodeGen/AArch64/aarch64-addv.ll
@@ -553,9 +553,8 @@ define i8 @addv_zero_lanes_negative_v8i8(ptr %arr) {
define i8 @addv_zero_lanes_v16i8(ptr %arr) {
; CHECK-SD-LABEL: addv_zero_lanes_v16i8:
; CHECK-SD: // %bb.0:
; CHECK-SD-NEXT: movi v0.2d, #0000000000000000
; CHECK-SD-NEXT: ldrb w8, [x0]
; CHECK-SD-NEXT: mov v0.d[0], x8
; CHECK-SD-NEXT: fmov d0, x8
; CHECK-SD-NEXT: addv b0, v0.16b
; CHECK-SD-NEXT: fmov w0, s0
; CHECK-SD-NEXT: ret
@@ -578,9 +577,8 @@ define i8 @addv_zero_lanes_v16i8(ptr %arr) {
define i16 @addv_zero_lanes_v8i16(ptr %arr) {
; CHECK-SD-LABEL: addv_zero_lanes_v8i16:
; CHECK-SD: // %bb.0:
; CHECK-SD-NEXT: movi v0.2d, #0000000000000000
; CHECK-SD-NEXT: ldrh w8, [x0]
; CHECK-SD-NEXT: mov v0.d[0], x8
; CHECK-SD-NEXT: fmov d0, x8
; CHECK-SD-NEXT: addv h0, v0.8h
; CHECK-SD-NEXT: fmov w0, s0
; CHECK-SD-NEXT: ret
@@ -603,9 +601,8 @@ define i16 @addv_zero_lanes_v8i16(ptr %arr) {
define i32 @addv_zero_lanes_v4i32(ptr %arr) {
; CHECK-SD-LABEL: addv_zero_lanes_v4i32:
; CHECK-SD: // %bb.0:
; CHECK-SD-NEXT: movi v0.2d, #0000000000000000
; CHECK-SD-NEXT: ldr w8, [x0]
; CHECK-SD-NEXT: mov v0.d[0], x8
; CHECK-SD-NEXT: fmov d0, x8
; CHECK-SD-NEXT: addv s0, v0.4s
; CHECK-SD-NEXT: fmov w0, s0
; CHECK-SD-NEXT: ret
19 changes: 9 additions & 10 deletions llvm/test/CodeGen/AArch64/aarch64-matrix-umull-smull.ll
@@ -823,15 +823,14 @@ define i64 @red_mla_dup_ext_u8_s8_s64(ptr noalias noundef readonly captures(none
; CHECK-SD-NEXT: // %bb.9: // %vec.epilog.iter.check
; CHECK-SD-NEXT: cbz x11, .LBB6_13
; CHECK-SD-NEXT: .LBB6_10: // %vec.epilog.ph
; CHECK-SD-NEXT: movi v0.2d, #0000000000000000
; CHECK-SD-NEXT: mov w11, w1
; CHECK-SD-NEXT: movi v1.2d, #0000000000000000
; CHECK-SD-NEXT: movi v0.2d, #0000000000000000
; CHECK-SD-NEXT: movi v2.2d, #0x000000000000ff
; CHECK-SD-NEXT: sxtb x11, w11
; CHECK-SD-NEXT: movi v3.2d, #0x000000000000ff
; CHECK-SD-NEXT: dup v2.2s, w11
; CHECK-SD-NEXT: fmov d3, x8
; CHECK-SD-NEXT: dup v1.2s, w11
; CHECK-SD-NEXT: mov x11, x10
; CHECK-SD-NEXT: and x10, x9, #0xfffffffc
; CHECK-SD-NEXT: mov v0.d[0], x8
; CHECK-SD-NEXT: sub x8, x11, x10
; CHECK-SD-NEXT: add x11, x0, x11
; CHECK-SD-NEXT: .LBB6_11: // %vec.epilog.vector.body
@@ -842,15 +841,15 @@
; CHECK-SD-NEXT: ushll v4.4s, v4.4h, #0
; CHECK-SD-NEXT: ushll v5.2d, v4.2s, #0
; CHECK-SD-NEXT: ushll2 v4.2d, v4.4s, #0
; CHECK-SD-NEXT: and v5.16b, v5.16b, v3.16b
; CHECK-SD-NEXT: and v4.16b, v4.16b, v3.16b
; CHECK-SD-NEXT: and v5.16b, v5.16b, v2.16b
; CHECK-SD-NEXT: and v4.16b, v4.16b, v2.16b
; CHECK-SD-NEXT: xtn v5.2s, v5.2d
; CHECK-SD-NEXT: xtn v4.2s, v4.2d
; CHECK-SD-NEXT: smlal v1.2d, v2.2s, v4.2s
; CHECK-SD-NEXT: smlal v0.2d, v2.2s, v5.2s
; CHECK-SD-NEXT: smlal v0.2d, v1.2s, v4.2s
; CHECK-SD-NEXT: smlal v3.2d, v1.2s, v5.2s
; CHECK-SD-NEXT: b.ne .LBB6_11
; CHECK-SD-NEXT: // %bb.12: // %vec.epilog.middle.block
; CHECK-SD-NEXT: add v0.2d, v0.2d, v1.2d
; CHECK-SD-NEXT: add v0.2d, v3.2d, v0.2d
; CHECK-SD-NEXT: cmp x10, x9
; CHECK-SD-NEXT: addp d0, v0.2d
; CHECK-SD-NEXT: fmov x8, d0
3 changes: 1 addition & 2 deletions llvm/test/CodeGen/AArch64/bitcast-extend.ll
@@ -339,9 +339,8 @@ define <8 x i8> @load_sext_i32_v8i8(ptr %p) {
define <16 x i8> @load_zext_v16i8(ptr %p) {
; CHECK-SD-LABEL: load_zext_v16i8:
; CHECK-SD: // %bb.0:
; CHECK-SD-NEXT: movi v0.2d, #0000000000000000
; CHECK-SD-NEXT: ldr w8, [x0]
; CHECK-SD-NEXT: mov v0.d[0], x8
; CHECK-SD-NEXT: fmov d0, x8
; CHECK-SD-NEXT: ret
;
; CHECK-GI-LABEL: load_zext_v16i8:
56 changes: 28 additions & 28 deletions llvm/test/CodeGen/AArch64/combine-sdiv.ll
@@ -578,10 +578,10 @@ define <2 x i64> @combine_vec_sdiv_by_pow2b_v2i64(<2 x i64> %x) {
; CHECK-SD-NEXT: adrp x8, .LCPI21_1
; CHECK-SD-NEXT: ushl v1.2d, v1.2d, v2.2d
; CHECK-SD-NEXT: ldr q2, [x8, :lo12:.LCPI21_1]
; CHECK-SD-NEXT: adrp x8, .LCPI21_2
; CHECK-SD-NEXT: mov x8, #-1 // =0xffffffffffffffff
; CHECK-SD-NEXT: add v1.2d, v0.2d, v1.2d
; CHECK-SD-NEXT: sshl v1.2d, v1.2d, v2.2d
; CHECK-SD-NEXT: ldr q2, [x8, :lo12:.LCPI21_2]
; CHECK-SD-NEXT: fmov d2, x8
; CHECK-SD-NEXT: bif v0.16b, v1.16b, v2.16b
; CHECK-SD-NEXT: ret
;
@@ -612,23 +612,23 @@ define <4 x i64> @combine_vec_sdiv_by_pow2b_v4i64(<4 x i64> %x) {
; CHECK-SD: // %bb.0:
; CHECK-SD-NEXT: adrp x8, .LCPI22_0
; CHECK-SD-NEXT: cmlt v2.2d, v0.2d, #0
; CHECK-SD-NEXT: adrp x9, .LCPI22_3
; CHECK-SD-NEXT: ldr q3, [x8, :lo12:.LCPI22_0]
; CHECK-SD-NEXT: adrp x8, .LCPI22_3
; CHECK-SD-NEXT: ldr q4, [x8, :lo12:.LCPI22_3]
; CHECK-SD-NEXT: adrp x8, .LCPI22_2
; CHECK-SD-NEXT: ldr q4, [x8, :lo12:.LCPI22_2]
; CHECK-SD-NEXT: adrp x8, .LCPI22_1
; CHECK-SD-NEXT: ushl v2.2d, v2.2d, v3.2d
; CHECK-SD-NEXT: cmlt v3.2d, v1.2d, #0
; CHECK-SD-NEXT: add v2.2d, v0.2d, v2.2d
; CHECK-SD-NEXT: ushl v3.2d, v3.2d, v4.2d
; CHECK-SD-NEXT: ldr q4, [x8, :lo12:.LCPI22_1]
; CHECK-SD-NEXT: adrp x8, .LCPI22_2
; CHECK-SD-NEXT: mov x8, #-1 // =0xffffffffffffffff
; CHECK-SD-NEXT: sshl v2.2d, v2.2d, v4.2d
; CHECK-SD-NEXT: ldr q4, [x8, :lo12:.LCPI22_2]
; CHECK-SD-NEXT: add v1.2d, v1.2d, v3.2d
; CHECK-SD-NEXT: adrp x8, .LCPI22_4
; CHECK-SD-NEXT: ldr q3, [x8, :lo12:.LCPI22_4]
; CHECK-SD-NEXT: bif v0.16b, v2.16b, v4.16b
; CHECK-SD-NEXT: sshl v1.2d, v1.2d, v3.2d
; CHECK-SD-NEXT: fmov d3, x8
; CHECK-SD-NEXT: ldr q4, [x9, :lo12:.LCPI22_3]
; CHECK-SD-NEXT: bif v0.16b, v2.16b, v3.16b
; CHECK-SD-NEXT: sshl v1.2d, v1.2d, v4.2d
; CHECK-SD-NEXT: ret
;
; CHECK-GI-LABEL: combine_vec_sdiv_by_pow2b_v4i64:
@@ -670,28 +670,28 @@ define <8 x i64> @combine_vec_sdiv_by_pow2b_v8i64(<8 x i64> %x) {
; CHECK-SD-NEXT: cmlt v4.2d, v0.2d, #0
; CHECK-SD-NEXT: cmlt v6.2d, v2.2d, #0
; CHECK-SD-NEXT: ldr q5, [x8, :lo12:.LCPI23_0]
; CHECK-SD-NEXT: adrp x8, .LCPI23_3
; CHECK-SD-NEXT: cmlt v7.2d, v3.2d, #0
; CHECK-SD-NEXT: ldr q16, [x8, :lo12:.LCPI23_3]
; CHECK-SD-NEXT: adrp x8, .LCPI23_1
; CHECK-SD-NEXT: adrp x8, .LCPI23_2
; CHECK-SD-NEXT: cmlt v7.2d, v1.2d, #0
; CHECK-SD-NEXT: cmlt v16.2d, v3.2d, #0
; CHECK-SD-NEXT: ushl v4.2d, v4.2d, v5.2d
; CHECK-SD-NEXT: ushl v5.2d, v6.2d, v5.2d
; CHECK-SD-NEXT: cmlt v6.2d, v1.2d, #0
; CHECK-SD-NEXT: ldr q6, [x8, :lo12:.LCPI23_2]
; CHECK-SD-NEXT: adrp x8, .LCPI23_1
; CHECK-SD-NEXT: ushl v7.2d, v7.2d, v6.2d
; CHECK-SD-NEXT: ldr q17, [x8, :lo12:.LCPI23_1]
; CHECK-SD-NEXT: ushl v7.2d, v7.2d, v16.2d
; CHECK-SD-NEXT: adrp x8, .LCPI23_2
; CHECK-SD-NEXT: ushl v6.2d, v16.2d, v6.2d
; CHECK-SD-NEXT: add v4.2d, v0.2d, v4.2d
; CHECK-SD-NEXT: add v5.2d, v2.2d, v5.2d
; CHECK-SD-NEXT: ushl v6.2d, v6.2d, v16.2d
; CHECK-SD-NEXT: ldr q16, [x8, :lo12:.LCPI23_2]
; CHECK-SD-NEXT: adrp x8, .LCPI23_4
; CHECK-SD-NEXT: add v3.2d, v3.2d, v7.2d
; CHECK-SD-NEXT: mov x8, #-1 // =0xffffffffffffffff
; CHECK-SD-NEXT: add v1.2d, v1.2d, v7.2d
; CHECK-SD-NEXT: fmov d7, x8
; CHECK-SD-NEXT: adrp x8, .LCPI23_3
; CHECK-SD-NEXT: sshl v4.2d, v4.2d, v17.2d
; CHECK-SD-NEXT: sshl v5.2d, v5.2d, v17.2d
; CHECK-SD-NEXT: add v1.2d, v1.2d, v6.2d
; CHECK-SD-NEXT: ldr q6, [x8, :lo12:.LCPI23_4]
; CHECK-SD-NEXT: bif v0.16b, v4.16b, v16.16b
; CHECK-SD-NEXT: bif v2.16b, v5.16b, v16.16b
; CHECK-SD-NEXT: add v3.2d, v3.2d, v6.2d
; CHECK-SD-NEXT: ldr q6, [x8, :lo12:.LCPI23_3]
; CHECK-SD-NEXT: bif v0.16b, v4.16b, v7.16b
; CHECK-SD-NEXT: bif v2.16b, v5.16b, v7.16b
; CHECK-SD-NEXT: sshl v1.2d, v1.2d, v6.2d
; CHECK-SD-NEXT: sshl v3.2d, v3.2d, v6.2d
; CHECK-SD-NEXT: ret
@@ -920,13 +920,13 @@ define <4 x i32> @non_splat_minus_one_divisor_2(<4 x i32> %A) {
; CHECK-SD-NEXT: adrp x8, .LCPI27_1
; CHECK-SD-NEXT: ushl v1.4s, v1.4s, v2.4s
; CHECK-SD-NEXT: ldr q2, [x8, :lo12:.LCPI27_1]
; CHECK-SD-NEXT: mov w8, #-1 // =0xffffffff
; CHECK-SD-NEXT: dup v3.2s, w8
; CHECK-SD-NEXT: adrp x8, .LCPI27_2
; CHECK-SD-NEXT: add v1.4s, v0.4s, v1.4s
; CHECK-SD-NEXT: sshl v1.4s, v1.4s, v2.4s
; CHECK-SD-NEXT: ldr q2, [x8, :lo12:.LCPI27_2]
; CHECK-SD-NEXT: adrp x8, .LCPI27_3
; CHECK-SD-NEXT: bif v0.16b, v1.16b, v2.16b
; CHECK-SD-NEXT: ldr q2, [x8, :lo12:.LCPI27_3]
; CHECK-SD-NEXT: bif v0.16b, v1.16b, v3.16b
; CHECK-SD-NEXT: neg v1.4s, v0.4s
; CHECK-SD-NEXT: bit v0.16b, v1.16b, v2.16b
; CHECK-SD-NEXT: ret
3 changes: 1 addition & 2 deletions llvm/test/CodeGen/AArch64/ctpop.ll
@@ -603,10 +603,9 @@ entry:
define i128 @i128_mask(i128 %x) {
; CHECK-SD-LABEL: i128_mask:
; CHECK-SD: // %bb.0: // %entry
; CHECK-SD-NEXT: movi v0.2d, #0000000000000000
; CHECK-SD-NEXT: and x8, x0, #0xff
; CHECK-SD-NEXT: mov x1, xzr
; CHECK-SD-NEXT: mov v0.d[0], x8
; CHECK-SD-NEXT: fmov d0, x8
; CHECK-SD-NEXT: cnt v0.16b, v0.16b
; CHECK-SD-NEXT: addv b0, v0.16b
; CHECK-SD-NEXT: fmov x0, d0
45 changes: 45 additions & 0 deletions llvm/test/CodeGen/AArch64/neon-lowhalf128-optimisation.ll
@@ -0,0 +1,45 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
; RUN: llc -mtriple=aarch64-linux-gnu -o - %s | FileCheck %s

define <2 x i64> @low_vector_splat_v2i64_from_i64(i64 %0){
Contributor:
Can you add tests for the floating-point types as well:
<2 x double>, <4 x float>, <8 x half>, <8 x bfloat>?

; CHECK-LABEL: low_vector_splat_v2i64_from_i64:
; CHECK: // %bb.0:
; CHECK-NEXT: fmov d0, x0
; CHECK-NEXT: ret
%2 = insertelement <1 x i64> poison, i64 %0, i64 0
%3 = shufflevector <1 x i64> %2, <1 x i64> zeroinitializer, <2 x i32> <i32 0, i32 1>
ret <2 x i64> %3
}

define <4 x i32> @low_vector_splat_v4i32_from_i32(i32 %0) {
; CHECK-LABEL: low_vector_splat_v4i32_from_i32:
; CHECK: // %bb.0:
; CHECK-NEXT: dup v0.2s, w0
; CHECK-NEXT: ret
%2 = insertelement <2 x i32> poison, i32 %0, i64 0
%3 = shufflevector <2 x i32> %2, <2 x i32> poison, <2 x i32> zeroinitializer
%4 = shufflevector <2 x i32> %3, <2 x i32> zeroinitializer, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
ret <4 x i32> %4
}

define <8 x i16> @low_vector_splat_v8i16_from_i16(i16 %0) {
; CHECK-LABEL: low_vector_splat_v8i16_from_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: dup v0.4h, w0
; CHECK-NEXT: ret
%2 = insertelement <4 x i16> poison, i16 %0, i64 0
%3 = shufflevector <4 x i16> %2, <4 x i16> poison, <4 x i32> zeroinitializer
%4 = shufflevector <4 x i16> %3, <4 x i16> zeroinitializer, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
ret <8 x i16> %4
}

define <16 x i8> @low_vector_splat_v16i8_from_i8(i8 %0) {
; CHECK-LABEL: low_vector_splat_v16i8_from_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: dup v0.8b, w0
; CHECK-NEXT: ret
%2 = insertelement <8 x i8> poison, i8 %0, i64 0
%3 = shufflevector <8 x i8> %2, <8 x i8> poison, <8 x i32> zeroinitializer
%4 = shufflevector <8 x i8> %3, <8 x i8> zeroinitializer, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
ret <16 x i8> %4
}
4 changes: 2 additions & 2 deletions llvm/test/CodeGen/AArch64/srem-seteq-illegal-types.ll
@@ -90,9 +90,9 @@ define <3 x i1> @test_srem_vec(<3 x i33> %X) nounwind {
; CHECK-NEXT: add x8, x12, x8
; CHECK-NEXT: and v0.16b, v0.16b, v1.16b
; CHECK-NEXT: fmov d3, x8
; CHECK-NEXT: adrp x8, .LCPI3_1
; CHECK-NEXT: mov w8, #3 // =0x3
; CHECK-NEXT: cmeq v0.2d, v0.2d, v2.2d
; CHECK-NEXT: ldr q2, [x8, :lo12:.LCPI3_1]
; CHECK-NEXT: fmov d2, x8
; CHECK-NEXT: and v1.16b, v3.16b, v1.16b
; CHECK-NEXT: mvn v0.16b, v0.16b
; CHECK-NEXT: cmeq v1.2d, v1.2d, v2.2d