[TTI][X86] Add SSE2 sub-128bit vXi16/32 and v2i64 stride 2 interleaved load costs

These cases use the same codegen as AVX2 (pshuflw/pshufd) for the sub-128bit vector deinterleaving, and unpcklqdq for v2i64.

It's going to take a while to add full interleaved cost coverage, but since these are the same for SSE2 -> AVX2 it should be an easy win.

Fixes PR47437

Differential Revision: https://reviews.llvm.org/D111938
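As context for what a "stride 2 interleaved load" is, a scalar C++ sketch of the pattern follows (names are illustrative, not from the commit): the loop splits one interleaved stream into its even and odd lanes, and the new cost entries model vectorizing it with one wide load plus shuffles (pshuflw/pshufd, or unpcklqdq for v2i64) rather than scalar extracts.

```cpp
#include <cstddef>

// Hypothetical sketch of a stride-2 (factor 2) deinterleaved load:
// even-indexed input elements go to one output, odd-indexed to the other.
static void Deinterleave2(const int *In, int *Even, int *Odd, std::size_t N) {
  for (std::size_t I = 0; I != N; ++I) {
    Even[I] = In[2 * I];     // lanes 0, 2, 4, ...
    Odd[I]  = In[2 * I + 1]; // lanes 1, 3, 5, ...
  }
}
```

For example, with `In = {10, 11, 20, 21}` and `N = 2`, `Even` becomes `{10, 20}` and `Odd` becomes `{11, 21}`.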
RKSimon committed Oct 16, 2021
1 parent bfe5b1b commit 6ec644e
Showing 9 changed files with 392 additions and 153 deletions.
18 changes: 14 additions & 4 deletions llvm/lib/Target/X86/X86TargetTransformInfo.cpp
@@ -5220,19 +5220,15 @@ InstructionCost X86TTIImpl::getInterleavedMemoryOpCost(
{2, MVT::v16i8, 4}, // (load 32i8 and) deinterleave into 2 x 16i8
{2, MVT::v32i8, 6}, // (load 64i8 and) deinterleave into 2 x 32i8

- {2, MVT::v2i16, 2}, // (load 4i16 and) deinterleave into 2 x 2i16
{2, MVT::v4i16, 2}, // (load 8i16 and) deinterleave into 2 x 4i16
{2, MVT::v8i16, 6}, // (load 16i16 and) deinterleave into 2 x 8i16
{2, MVT::v16i16, 9}, // (load 32i16 and) deinterleave into 2 x 16i16
{2, MVT::v32i16, 18}, // (load 64i16 and) deinterleave into 2 x 32i16

- {2, MVT::v2i32, 2}, // (load 4i32 and) deinterleave into 2 x 2i32
- {2, MVT::v4i32, 2}, // (load 8i32 and) deinterleave into 2 x 4i32
{2, MVT::v8i32, 4}, // (load 16i32 and) deinterleave into 2 x 8i32
{2, MVT::v16i32, 8}, // (load 32i32 and) deinterleave into 2 x 16i32
{2, MVT::v32i32, 16}, // (load 64i32 and) deinterleave into 2 x 32i32

- {2, MVT::v2i64, 2}, // (load 4i64 and) deinterleave into 2 x 2i64
{2, MVT::v4i64, 4}, // (load 8i64 and) deinterleave into 2 x 4i64
{2, MVT::v8i64, 8}, // (load 16i64 and) deinterleave into 2 x 8i64
{2, MVT::v16i64, 16}, // (load 32i64 and) deinterleave into 2 x 16i64
@@ -5303,6 +5299,15 @@ InstructionCost X86TTIImpl::getInterleavedMemoryOpCost(
{8, MVT::v8i32, 40} // (load 64i32 and) deinterleave into 8 x 8i32
};

+ static const CostTblEntry SSE2InterleavedLoadTbl[] = {
+   {2, MVT::v2i16, 2}, // (load 4i16 and) deinterleave into 2 x 2i16
+
+   {2, MVT::v2i32, 2}, // (load 4i32 and) deinterleave into 2 x 2i32
+   {2, MVT::v4i32, 2}, // (load 8i32 and) deinterleave into 2 x 4i32
+
+   {2, MVT::v2i64, 2}, // (load 4i64 and) deinterleave into 2 x 2i64
+ };

static const CostTblEntry AVX2InterleavedStoreTbl[] = {
{2, MVT::v2i8, 1}, // interleave 2 x 2i8 into 4i8 (and store)
{2, MVT::v4i8, 1}, // interleave 2 x 4i8 into 8i8 (and store)
@@ -5398,6 +5403,11 @@ InstructionCost X86TTIImpl::getInterleavedMemoryOpCost(
if (const auto *Entry = CostTableLookup(AVX2InterleavedLoadTbl, Factor,
ETy.getSimpleVT()))
return MemOpCosts + Entry->Cost;

+ if (ST->hasSSE2())
+   if (const auto *Entry = CostTableLookup(SSE2InterleavedLoadTbl, Factor,
+                                           ETy.getSimpleVT()))
+     return MemOpCosts + Entry->Cost;
} else {
assert(Opcode == Instruction::Store &&
"Expected Store Instruction at this point");
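For context, the lookup chain above adds the first matching table entry's cost to the base memory-op cost, and a miss falls through to the next table. A minimal self-contained sketch of that first-match semantics, using illustrative stand-in types rather than LLVM's actual `CostTblEntry`/`CostTableLookup`, might look like:

```cpp
#include <cassert>
#include <cstddef>
#include <optional>

// Stand-in types (illustrative only, not LLVM's): each row keys a cost
// on the interleave factor and the vector type.
enum class VT { v2i16, v2i32, v4i32, v2i64 };
struct Entry { unsigned Factor; VT Type; unsigned Cost; };

// Mirrors the entries this commit adds to SSE2InterleavedLoadTbl.
static const Entry SSE2Tbl[] = {
    {2, VT::v2i16, 2}, {2, VT::v2i32, 2}, {2, VT::v4i32, 2}, {2, VT::v2i64, 2},
};

// First-match linear scan, analogous in shape to a CostTableLookup call.
static std::optional<unsigned> Lookup(const Entry *Tbl, std::size_t N,
                                      unsigned Factor, VT Type) {
  for (std::size_t I = 0; I != N; ++I)
    if (Tbl[I].Factor == Factor && Tbl[I].Type == Type)
      return Tbl[I].Cost;
  return std::nullopt; // miss: caller falls through to later cost logic
}
```

A miss here corresponds to falling through to the next table, just as a failed AVX2 lookup above falls through to the new SSE2 table.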
@@ -13,14 +13,14 @@ target triple = "x86_64-unknown-linux-gnu"
; CHECK: LV: Checking a loop in "test"
;
; SSE2: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load float, float* %in0, align 4
- ; SSE2: LV: Found an estimated cost of 6 for VF 2 For instruction: %v0 = load float, float* %in0, align 4
- ; SSE2: LV: Found an estimated cost of 14 for VF 4 For instruction: %v0 = load float, float* %in0, align 4
+ ; SSE2: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load float, float* %in0, align 4
+ ; SSE2: LV: Found an estimated cost of 4 for VF 4 For instruction: %v0 = load float, float* %in0, align 4
; SSE2: LV: Found an estimated cost of 28 for VF 8 For instruction: %v0 = load float, float* %in0, align 4
; SSE2: LV: Found an estimated cost of 56 for VF 16 For instruction: %v0 = load float, float* %in0, align 4
;
; AVX1: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load float, float* %in0, align 4
- ; AVX1: LV: Found an estimated cost of 6 for VF 2 For instruction: %v0 = load float, float* %in0, align 4
- ; AVX1: LV: Found an estimated cost of 17 for VF 4 For instruction: %v0 = load float, float* %in0, align 4
+ ; AVX1: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load float, float* %in0, align 4
+ ; AVX1: LV: Found an estimated cost of 3 for VF 4 For instruction: %v0 = load float, float* %in0, align 4
; AVX1: LV: Found an estimated cost of 38 for VF 8 For instruction: %v0 = load float, float* %in0, align 4
; AVX1: LV: Found an estimated cost of 76 for VF 16 For instruction: %v0 = load float, float* %in0, align 4
; AVX1: LV: Found an estimated cost of 152 for VF 32 For instruction: %v0 = load float, float* %in0, align 4
@@ -13,13 +13,13 @@ target triple = "x86_64-unknown-linux-gnu"
; CHECK: LV: Checking a loop in "test"
;
; SSE2: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load double, double* %in0, align 8
- ; SSE2: LV: Found an estimated cost of 6 for VF 2 For instruction: %v0 = load double, double* %in0, align 8
+ ; SSE2: LV: Found an estimated cost of 4 for VF 2 For instruction: %v0 = load double, double* %in0, align 8
; SSE2: LV: Found an estimated cost of 12 for VF 4 For instruction: %v0 = load double, double* %in0, align 8
; SSE2: LV: Found an estimated cost of 24 for VF 8 For instruction: %v0 = load double, double* %in0, align 8
; SSE2: LV: Found an estimated cost of 48 for VF 16 For instruction: %v0 = load double, double* %in0, align 8
;
; AVX1: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load double, double* %in0, align 8
- ; AVX1: LV: Found an estimated cost of 7 for VF 2 For instruction: %v0 = load double, double* %in0, align 8
+ ; AVX1: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load double, double* %in0, align 8
; AVX1: LV: Found an estimated cost of 16 for VF 4 For instruction: %v0 = load double, double* %in0, align 8
; AVX1: LV: Found an estimated cost of 32 for VF 8 For instruction: %v0 = load double, double* %in0, align 8
; AVX1: LV: Found an estimated cost of 64 for VF 16 For instruction: %v0 = load double, double* %in0, align 8
@@ -13,13 +13,13 @@ target triple = "x86_64-unknown-linux-gnu"
; CHECK: LV: Checking a loop in "test"
;
; SSE2: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load i16, i16* %in0, align 2
- ; SSE2: LV: Found an estimated cost of 9 for VF 2 For instruction: %v0 = load i16, i16* %in0, align 2
+ ; SSE2: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load i16, i16* %in0, align 2
; SSE2: LV: Found an estimated cost of 17 for VF 4 For instruction: %v0 = load i16, i16* %in0, align 2
; SSE2: LV: Found an estimated cost of 34 for VF 8 For instruction: %v0 = load i16, i16* %in0, align 2
; SSE2: LV: Found an estimated cost of 68 for VF 16 For instruction: %v0 = load i16, i16* %in0, align 2
;
; AVX1: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load i16, i16* %in0, align 2
- ; AVX1: LV: Found an estimated cost of 9 for VF 2 For instruction: %v0 = load i16, i16* %in0, align 2
+ ; AVX1: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load i16, i16* %in0, align 2
; AVX1: LV: Found an estimated cost of 17 for VF 4 For instruction: %v0 = load i16, i16* %in0, align 2
; AVX1: LV: Found an estimated cost of 41 for VF 8 For instruction: %v0 = load i16, i16* %in0, align 2
; AVX1: LV: Found an estimated cost of 86 for VF 16 For instruction: %v0 = load i16, i16* %in0, align 2
@@ -13,14 +13,14 @@ target triple = "x86_64-unknown-linux-gnu"
; CHECK: LV: Checking a loop in "test"
;
; SSE2: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load i32, i32* %in0, align 4
- ; SSE2: LV: Found an estimated cost of 7 for VF 2 For instruction: %v0 = load i32, i32* %in0, align 4
- ; SSE2: LV: Found an estimated cost of 15 for VF 4 For instruction: %v0 = load i32, i32* %in0, align 4
+ ; SSE2: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load i32, i32* %in0, align 4
+ ; SSE2: LV: Found an estimated cost of 4 for VF 4 For instruction: %v0 = load i32, i32* %in0, align 4
; SSE2: LV: Found an estimated cost of 30 for VF 8 For instruction: %v0 = load i32, i32* %in0, align 4
; SSE2: LV: Found an estimated cost of 60 for VF 16 For instruction: %v0 = load i32, i32* %in0, align 4
;
; AVX1: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load i32, i32* %in0, align 4
- ; AVX1: LV: Found an estimated cost of 5 for VF 2 For instruction: %v0 = load i32, i32* %in0, align 4
- ; AVX1: LV: Found an estimated cost of 11 for VF 4 For instruction: %v0 = load i32, i32* %in0, align 4
+ ; AVX1: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load i32, i32* %in0, align 4
+ ; AVX1: LV: Found an estimated cost of 3 for VF 4 For instruction: %v0 = load i32, i32* %in0, align 4
; AVX1: LV: Found an estimated cost of 24 for VF 8 For instruction: %v0 = load i32, i32* %in0, align 4
; AVX1: LV: Found an estimated cost of 48 for VF 16 For instruction: %v0 = load i32, i32* %in0, align 4
; AVX1: LV: Found an estimated cost of 96 for VF 32 For instruction: %v0 = load i32, i32* %in0, align 4
@@ -13,14 +13,14 @@ target triple = "x86_64-unknown-linux-gnu"
; CHECK: LV: Checking a loop in "test"
;
; SSE2: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load i32, i32* %in0, align 4
- ; SSE2: LV: Found an estimated cost of 14 for VF 2 For instruction: %v0 = load i32, i32* %in0, align 4
- ; SSE2: LV: Found an estimated cost of 30 for VF 4 For instruction: %v0 = load i32, i32* %in0, align 4
+ ; SSE2: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load i32, i32* %in0, align 4
+ ; SSE2: LV: Found an estimated cost of 4 for VF 4 For instruction: %v0 = load i32, i32* %in0, align 4
; SSE2: LV: Found an estimated cost of 60 for VF 8 For instruction: %v0 = load i32, i32* %in0, align 4
; SSE2: LV: Found an estimated cost of 120 for VF 16 For instruction: %v0 = load i32, i32* %in0, align 4
;
; AVX1: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load i32, i32* %in0, align 4
- ; AVX1: LV: Found an estimated cost of 9 for VF 2 For instruction: %v0 = load i32, i32* %in0, align 4
- ; AVX1: LV: Found an estimated cost of 21 for VF 4 For instruction: %v0 = load i32, i32* %in0, align 4
+ ; AVX1: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load i32, i32* %in0, align 4
+ ; AVX1: LV: Found an estimated cost of 3 for VF 4 For instruction: %v0 = load i32, i32* %in0, align 4
; AVX1: LV: Found an estimated cost of 46 for VF 8 For instruction: %v0 = load i32, i32* %in0, align 4
; AVX1: LV: Found an estimated cost of 92 for VF 16 For instruction: %v0 = load i32, i32* %in0, align 4
; AVX1: LV: Found an estimated cost of 184 for VF 32 For instruction: %v0 = load i32, i32* %in0, align 4
@@ -13,13 +13,13 @@ target triple = "x86_64-unknown-linux-gnu"
; CHECK: LV: Checking a loop in "test"
;
; SSE2: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load i64, i64* %in0, align 8
- ; SSE2: LV: Found an estimated cost of 14 for VF 2 For instruction: %v0 = load i64, i64* %in0, align 8
+ ; SSE2: LV: Found an estimated cost of 4 for VF 2 For instruction: %v0 = load i64, i64* %in0, align 8
; SSE2: LV: Found an estimated cost of 28 for VF 4 For instruction: %v0 = load i64, i64* %in0, align 8
; SSE2: LV: Found an estimated cost of 56 for VF 8 For instruction: %v0 = load i64, i64* %in0, align 8
; SSE2: LV: Found an estimated cost of 112 for VF 16 For instruction: %v0 = load i64, i64* %in0, align 8
;
; AVX1: LV: Found an estimated cost of 1 for VF 1 For instruction: %v0 = load i64, i64* %in0, align 8
- ; AVX1: LV: Found an estimated cost of 11 for VF 2 For instruction: %v0 = load i64, i64* %in0, align 8
+ ; AVX1: LV: Found an estimated cost of 3 for VF 2 For instruction: %v0 = load i64, i64* %in0, align 8
; AVX1: LV: Found an estimated cost of 26 for VF 4 For instruction: %v0 = load i64, i64* %in0, align 8
; AVX1: LV: Found an estimated cost of 52 for VF 8 For instruction: %v0 = load i64, i64* %in0, align 8
; AVX1: LV: Found an estimated cost of 104 for VF 16 For instruction: %v0 = load i64, i64* %in0, align 8
