[GEMM codegen] Distribute Shared memory copy #303
Merged
Conversation
Result:

After step 0, generic linalg copy ops:

scf.forall (%arg2, %arg3) in (42, 42) {
%alloca = memref.alloca() {__byteir_alloca_accumulator__} : memref<128x128xf16, #gpu.address_space<workgroup>>
%alloca_0 = memref.alloca() {__byteir_alloca_matrix_b__} : memref<32x128xf16, #gpu.address_space<workgroup>>
%alloca_1 = memref.alloca() {__byteir_alloca_matrix_a__} : memref<128x32xf16, #gpu.address_space<workgroup>>
%0 = affine.apply affine_map<(d0) -> (d0 * 128)>(%arg2)
%1 = affine.apply affine_map<(d0) -> (d0 * 128)>(%arg3)
%subview = memref.subview %alloc[%0, %1] [128, 128] [1, 1] : memref<5376x5376xf16> to memref<128x128xf16, strided<[5376, 1], offset: ?>>
linalg.fill ins(%cst : f16) outs(%alloca : memref<128x128xf16, #gpu.address_space<workgroup>>)
scf.for %arg4 = %c0 to %c2048 step %c32 {
%subview_2 = memref.subview %arg0[%0, %arg4] [128, 32] [1, 1] : memref<5376x2048xf16> to memref<128x32xf16, strided<[2048, 1], offset: ?>>
%subview_3 = memref.subview %arg1[%arg4, %1] [32, 128] [1, 1] : memref<2048x5376xf16> to memref<32x128xf16, strided<[5376, 1], offset: ?>>
linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%subview_2 : memref<128x32xf16, strided<[2048, 1], offset: ?>>) outs(%alloca_1 : memref<128x32xf16, #gpu.address_space<workgroup>>) attrs = {__byteir_load_matrix_a__, __internal_linalg_transform__ = "__byteir_copy_related_to_workgroup_memory__"} {
^bb0(%in: f16, %out: f16):
linalg.yield %in : f16
}
linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%subview_3 : memref<32x128xf16, strided<[5376, 1], offset: ?>>) outs(%alloca_0 : memref<32x128xf16, #gpu.address_space<workgroup>>) attrs = {__byteir_load_matrix_b__, __internal_linalg_transform__ = "__byteir_copy_related_to_workgroup_memory__"} {
^bb0(%in: f16, %out: f16):
linalg.yield %in : f16
}
linalg.matmul {__byteir_gpu_tile_gemm_0, __byteir_mma__, __byteir_mma_level__ = "Threadblock", __byteir_target__ = "nv_sm_80"} ins(%alloca_1, %alloca_0 : memref<128x32xf16, #gpu.address_space<workgroup>>, memref<32x128xf16, #gpu.address_space<workgroup>>) outs(%alloca : memref<128x128xf16, #gpu.address_space<workgroup>>)
}
linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%alloca : memref<128x128xf16, #gpu.address_space<workgroup>>) outs(%subview : memref<128x128xf16, strided<[5376, 1], offset: ?>>) attrs = {__byteir_store_matrix_c__, __internal_linalg_transform__ = "__byteir_copy_related_to_workgroup_memory__"} {
^bb0(%in: f16, %out: f16):
linalg.yield %in : f16
}
} {mapping = [#gpu.block<y>, #gpu.block<x>]}

After step 1, tiling:

scf.forall (%arg2, %arg3) in (42, 42) {
%alloca = memref.alloca() {__byteir_alloca_accumulator__} : memref<128x128xf16, #gpu.address_space<workgroup>>
%alloca_0 = memref.alloca() {__byteir_alloca_matrix_b__} : memref<32x128xf16, #gpu.address_space<workgroup>>
%alloca_1 = memref.alloca() {__byteir_alloca_matrix_a__} : memref<128x32xf16, #gpu.address_space<workgroup>>
%0 = affine.apply affine_map<(d0) -> (d0 * 128)>(%arg2)
%1 = affine.apply affine_map<(d0) -> (d0 * 128)>(%arg3)
%subview = memref.subview %alloc[%0, %1] [128, 128] [1, 1] : memref<5376x5376xf16> to memref<128x128xf16, strided<[5376, 1], offset: ?>>
linalg.fill ins(%cst : f16) outs(%alloca : memref<128x128xf16, #gpu.address_space<workgroup>>)
scf.for %arg4 = %c0 to %c2048 step %c32 {
%subview_8 = memref.subview %arg0[%0, %arg4] [128, 32] [1, 1] : memref<5376x2048xf16> to memref<128x32xf16, strided<[2048, 1], offset: ?>>
%subview_9 = memref.subview %arg1[%arg4, %1] [32, 128] [1, 1] : memref<2048x5376xf16> to memref<32x128xf16, strided<[5376, 1], offset: ?>>
%c32_10 = arith.constant 32 : index
%c32_11 = arith.constant 32 : index
%c0_12 = arith.constant 0 : index
%c128_13 = arith.constant 128 : index
%c32_14 = arith.constant 32 : index
%c0_15 = arith.constant 0 : index
%c32_16 = arith.constant 32 : index
%c32_17 = arith.constant 32 : index
scf.for %arg5 = %c0_12 to %c128_13 step %c32_14 {
scf.for %arg6 = %c0_15 to %c32_16 step %c32_17 {
%subview_26 = memref.subview %subview_8[%arg5, %arg6] [32, 32] [1, 1] : memref<128x32xf16, strided<[2048, 1], offset: ?>> to memref<32x32xf16, strided<[2048, 1], offset: ?>>
%subview_27 = memref.subview %alloca_1[%arg5, %arg6] [32, 32] [1, 1] : memref<128x32xf16, #gpu.address_space<workgroup>> to memref<32x32xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>>
linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%subview_26 : memref<32x32xf16, strided<[2048, 1], offset: ?>>) outs(%subview_27 : memref<32x32xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>>) attrs = {__byteir_load_matrix_a__, __internal_linalg_transform__ = "copy_to_distribute"} {
^bb0(%in: f16, %out: f16):
linalg.yield %in : f16
}
}
}
%c8_18 = arith.constant 8 : index
%c128_19 = arith.constant 128 : index
%c0_20 = arith.constant 0 : index
%c32_21 = arith.constant 32 : index
%c8_22 = arith.constant 8 : index
%c0_23 = arith.constant 0 : index
%c128_24 = arith.constant 128 : index
%c128_25 = arith.constant 128 : index
scf.for %arg5 = %c0_20 to %c32_21 step %c8_22 {
scf.for %arg6 = %c0_23 to %c128_24 step %c128_25 {
%subview_26 = memref.subview %subview_9[%arg5, %arg6] [8, 128] [1, 1] : memref<32x128xf16, strided<[5376, 1], offset: ?>> to memref<8x128xf16, strided<[5376, 1], offset: ?>>
%subview_27 = memref.subview %alloca_0[%arg5, %arg6] [8, 128] [1, 1] : memref<32x128xf16, #gpu.address_space<workgroup>> to memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%subview_26 : memref<8x128xf16, strided<[5376, 1], offset: ?>>) outs(%subview_27 : memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>) attrs = {__byteir_load_matrix_b__, __internal_linalg_transform__ = "copy_to_distribute"} {
^bb0(%in: f16, %out: f16):
linalg.yield %in : f16
}
}
}
linalg.matmul {__byteir_gpu_tile_gemm_0, __byteir_mma__, __byteir_mma_level__ = "Threadblock", __byteir_target__ = "nv_sm_80"} ins(%alloca_1, %alloca_0 : memref<128x32xf16, #gpu.address_space<workgroup>>, memref<32x128xf16, #gpu.address_space<workgroup>>) outs(%alloca : memref<128x128xf16, #gpu.address_space<workgroup>>)
}
%c8 = arith.constant 8 : index
%c128 = arith.constant 128 : index
%c0_2 = arith.constant 0 : index
%c128_3 = arith.constant 128 : index
%c8_4 = arith.constant 8 : index
%c0_5 = arith.constant 0 : index
%c128_6 = arith.constant 128 : index
%c128_7 = arith.constant 128 : index
scf.for %arg4 = %c0_2 to %c128_3 step %c8_4 {
scf.for %arg5 = %c0_5 to %c128_6 step %c128_7 {
%subview_8 = memref.subview %alloca[%arg4, %arg5] [8, 128] [1, 1] : memref<128x128xf16, #gpu.address_space<workgroup>> to memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
%subview_9 = memref.subview %subview[%arg4, %arg5] [8, 128] [1, 1] : memref<128x128xf16, strided<[5376, 1], offset: ?>> to memref<8x128xf16, strided<[5376, 1], offset: ?>>
linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%subview_8 : memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>) outs(%subview_9 : memref<8x128xf16, strided<[5376, 1], offset: ?>>) attrs = {__byteir_store_matrix_c__, __internal_linalg_transform__ = "copy_to_distribute"} {
^bb0(%in: f16, %out: f16):
linalg.yield %in : f16
}
}
}
} {mapping = [#gpu.block<y>, #gpu.block<x>]}
After step 2, thread distribution:

scf.forall (%arg2, %arg3) in (42, 42) {
%0 = gpu.thread_id x
%1 = gpu.thread_id y
%2 = gpu.thread_id z
%3 = affine.apply affine_map<(d0, d1, d2) -> (d0 + d1 * 64 + d2 * 128)>(%0, %1, %2)
%alloca = memref.alloca() {__byteir_alloca_accumulator__} : memref<128x128xf16, #gpu.address_space<workgroup>>
%alloca_0 = memref.alloca() {__byteir_alloca_matrix_b__} : memref<32x128xf16, #gpu.address_space<workgroup>>
%alloca_1 = memref.alloca() {__byteir_alloca_matrix_a__} : memref<128x32xf16, #gpu.address_space<workgroup>>
%4 = affine.apply affine_map<(d0) -> (d0 * 128)>(%arg2)
%5 = affine.apply affine_map<(d0) -> (d0 * 128)>(%arg3)
%subview = memref.subview %alloc[%4, %5] [128, 128] [1, 1] : memref<5376x5376xf16> to memref<128x128xf16, strided<[5376, 1], offset: ?>>
linalg.fill ins(%cst : f16) outs(%alloca : memref<128x128xf16, #gpu.address_space<workgroup>>)
scf.for %arg4 = %c0 to %c2048 step %c32 {
%subview_8 = memref.subview %arg0[%4, %arg4] [128, 32] [1, 1] : memref<5376x2048xf16> to memref<128x32xf16, strided<[2048, 1], offset: ?>>
%subview_9 = memref.subview %arg1[%arg4, %5] [32, 128] [1, 1] : memref<2048x5376xf16> to memref<32x128xf16, strided<[5376, 1], offset: ?>>
%c32_10 = arith.constant 32 : index
%c32_11 = arith.constant 32 : index
%c0_12 = arith.constant 0 : index
%c128_13 = arith.constant 128 : index
%c32_14 = arith.constant 32 : index
%c0_15 = arith.constant 0 : index
%c32_16 = arith.constant 32 : index
%c32_17 = arith.constant 32 : index
scf.for %arg5 = %c0_12 to %c128_13 step %c32_14 {
scf.for %arg6 = %c0_15 to %c32_16 step %c32_17 {
%subview_26 = memref.subview %subview_8[%arg5, %arg6] [32, 32] [1, 1] : memref<128x32xf16, strided<[2048, 1], offset: ?>> to memref<32x32xf16, strided<[2048, 1], offset: ?>>
%subview_27 = memref.subview %alloca_1[%arg5, %arg6] [32, 32] [1, 1] : memref<128x32xf16, #gpu.address_space<workgroup>> to memref<32x32xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>>
%c1 = arith.constant 1 : index
%c8_28 = arith.constant 8 : index
%6 = affine.apply affine_map<(d0) -> (d0 mod 4)>(%0)
%c4 = arith.constant 4 : index
%7 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 16 + d2 * 32 + d0 floordiv 4)>(%0, %1, %2)
%c32_29 = arith.constant 32 : index
%8 = affine.apply affine_map<(d0, d1, d2) -> ((d1 * 16 + d2 * 32 + d0 floordiv 4) floordiv 32)>(%0, %1, %2)
%c0_30 = arith.constant 0 : index
%c32_31 = arith.constant 32 : index
%c1_32 = arith.constant 1 : index
%c0_33 = arith.constant 0 : index
%c32_34 = arith.constant 32 : index
%c8_35 = arith.constant 8 : index
%9 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 16 + d2 * 32 + d0 floordiv 4)>(%0, %1, %2)
%10 = affine.apply affine_map<() -> (32)>()
%11 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 4) * 32)>(%0)
%12 = affine.apply affine_map<() -> (32)>()
%13 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 16 + d2 * 32 + d0 floordiv 4)>(%0, %1, %2)
%14 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 4) * 32)>(%0)
%15 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 16 + d2 * 32 + d0 floordiv 4)>(%0, %1, %2)
%16 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 4) * 32)>(%0)
%subview_36 = memref.subview %subview_26[%13, %14] [1, 8] [1, 1] : memref<32x32xf16, strided<[2048, 1], offset: ?>> to memref<1x8xf16, strided<[2048, 1], offset: ?>>
%subview_37 = memref.subview %subview_27[%15, %16] [1, 8] [1, 1] : memref<32x32xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>> to memref<1x8xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>>
linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%subview_36 : memref<1x8xf16, strided<[2048, 1], offset: ?>>) outs(%subview_37 : memref<1x8xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>>) attrs = {__byteir_load_matrix_a__, __internal_linalg_transform__ = "copy_distributed"} {
^bb0(%in: f16, %out: f16):
linalg.yield %in : f16
}
}
}
%c8_18 = arith.constant 8 : index
%c128_19 = arith.constant 128 : index
%c0_20 = arith.constant 0 : index
%c32_21 = arith.constant 32 : index
%c8_22 = arith.constant 8 : index
%c0_23 = arith.constant 0 : index
%c128_24 = arith.constant 128 : index
%c128_25 = arith.constant 128 : index
scf.for %arg5 = %c0_20 to %c32_21 step %c8_22 {
scf.for %arg6 = %c0_23 to %c128_24 step %c128_25 {
%subview_26 = memref.subview %subview_9[%arg5, %arg6] [8, 128] [1, 1] : memref<32x128xf16, strided<[5376, 1], offset: ?>> to memref<8x128xf16, strided<[5376, 1], offset: ?>>
%subview_27 = memref.subview %alloca_0[%arg5, %arg6] [8, 128] [1, 1] : memref<32x128xf16, #gpu.address_space<workgroup>> to memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
%c1 = arith.constant 1 : index
%c8_28 = arith.constant 8 : index
%6 = affine.apply affine_map<(d0) -> (d0 mod 16)>(%0)
%c16 = arith.constant 16 : index
%7 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%c8_29 = arith.constant 8 : index
%8 = affine.apply affine_map<(d0, d1, d2) -> ((d1 * 4 + d2 * 8 + d0 floordiv 16) floordiv 8)>(%0, %1, %2)
%c0_30 = arith.constant 0 : index
%c8_31 = arith.constant 8 : index
%c1_32 = arith.constant 1 : index
%c0_33 = arith.constant 0 : index
%c128_34 = arith.constant 128 : index
%c8_35 = arith.constant 8 : index
%9 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%10 = affine.apply affine_map<() -> (8)>()
%11 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%12 = affine.apply affine_map<() -> (128)>()
%13 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%14 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%15 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%16 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%subview_36 = memref.subview %subview_26[%13, %14] [1, 8] [1, 1] : memref<8x128xf16, strided<[5376, 1], offset: ?>> to memref<1x8xf16, strided<[5376, 1], offset: ?>>
%subview_37 = memref.subview %subview_27[%15, %16] [1, 8] [1, 1] : memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>> to memref<1x8xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%subview_36 : memref<1x8xf16, strided<[5376, 1], offset: ?>>) outs(%subview_37 : memref<1x8xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>) attrs = {__byteir_load_matrix_b__, __internal_linalg_transform__ = "copy_distributed"} {
^bb0(%in: f16, %out: f16):
linalg.yield %in : f16
}
}
}
linalg.matmul {__byteir_gpu_tile_gemm_0, __byteir_mma__, __byteir_mma_level__ = "Threadblock", __byteir_target__ = "nv_sm_80"} ins(%alloca_1, %alloca_0 : memref<128x32xf16, #gpu.address_space<workgroup>>, memref<32x128xf16, #gpu.address_space<workgroup>>) outs(%alloca : memref<128x128xf16, #gpu.address_space<workgroup>>)
}
%c8 = arith.constant 8 : index
%c128 = arith.constant 128 : index
%c0_2 = arith.constant 0 : index
%c128_3 = arith.constant 128 : index
%c8_4 = arith.constant 8 : index
%c0_5 = arith.constant 0 : index
%c128_6 = arith.constant 128 : index
%c128_7 = arith.constant 128 : index
scf.for %arg4 = %c0_2 to %c128_3 step %c8_4 {
scf.for %arg5 = %c0_5 to %c128_6 step %c128_7 {
%subview_8 = memref.subview %alloca[%arg4, %arg5] [8, 128] [1, 1] : memref<128x128xf16, #gpu.address_space<workgroup>> to memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
%subview_9 = memref.subview %subview[%arg4, %arg5] [8, 128] [1, 1] : memref<128x128xf16, strided<[5376, 1], offset: ?>> to memref<8x128xf16, strided<[5376, 1], offset: ?>>
%c1 = arith.constant 1 : index
%c8_10 = arith.constant 8 : index
%6 = affine.apply affine_map<(d0) -> (d0 mod 16)>(%0)
%c16 = arith.constant 16 : index
%7 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%c8_11 = arith.constant 8 : index
%8 = affine.apply affine_map<(d0, d1, d2) -> ((d1 * 4 + d2 * 8 + d0 floordiv 16) floordiv 8)>(%0, %1, %2)
%c0_12 = arith.constant 0 : index
%c8_13 = arith.constant 8 : index
%c1_14 = arith.constant 1 : index
%c0_15 = arith.constant 0 : index
%c128_16 = arith.constant 128 : index
%c8_17 = arith.constant 8 : index
%9 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%10 = affine.apply affine_map<() -> (8)>()
%11 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%12 = affine.apply affine_map<() -> (128)>()
%13 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%14 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%15 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%16 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%subview_18 = memref.subview %subview_8[%13, %14] [1, 8] [1, 1] : memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>> to memref<1x8xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
%subview_19 = memref.subview %subview_9[%15, %16] [1, 8] [1, 1] : memref<8x128xf16, strided<[5376, 1], offset: ?>> to memref<1x8xf16, strided<[5376, 1], offset: ?>>
linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%subview_18 : memref<1x8xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>) outs(%subview_19 : memref<1x8xf16, strided<[5376, 1], offset: ?>>) attrs = {__byteir_store_matrix_c__, __internal_linalg_transform__ = "copy_distributed"} {
^bb0(%in: f16, %out: f16):
linalg.yield %in : f16
}
}
}
} {mapping = [#gpu.block<y>, #gpu.block<x>]}

After step 3, vectorization:

scf.forall (%arg2, %arg3) in (42, 42) {
%0 = gpu.thread_id x
%1 = gpu.thread_id y
%2 = gpu.thread_id z
%3 = affine.apply affine_map<(d0, d1, d2) -> (d0 + d1 * 64 + d2 * 128)>(%0, %1, %2)
%alloca = memref.alloca() {__byteir_alloca_accumulator__} : memref<128x128xf16, #gpu.address_space<workgroup>>
%alloca_0 = memref.alloca() {__byteir_alloca_matrix_b__} : memref<32x128xf16, #gpu.address_space<workgroup>>
%alloca_1 = memref.alloca() {__byteir_alloca_matrix_a__} : memref<128x32xf16, #gpu.address_space<workgroup>>
%4 = affine.apply affine_map<(d0) -> (d0 * 128)>(%arg2)
%5 = affine.apply affine_map<(d0) -> (d0 * 128)>(%arg3)
%subview = memref.subview %alloc[%4, %5] [128, 128] [1, 1] : memref<5376x5376xf16> to memref<128x128xf16, strided<[5376, 1], offset: ?>>
linalg.fill ins(%cst : f16) outs(%alloca : memref<128x128xf16, #gpu.address_space<workgroup>>)
scf.for %arg4 = %c0 to %c2048 step %c32 {
%subview_8 = memref.subview %arg0[%4, %arg4] [128, 32] [1, 1] : memref<5376x2048xf16> to memref<128x32xf16, strided<[2048, 1], offset: ?>>
%subview_9 = memref.subview %arg1[%arg4, %5] [32, 128] [1, 1] : memref<2048x5376xf16> to memref<32x128xf16, strided<[5376, 1], offset: ?>>
%c32_10 = arith.constant 32 : index
%c32_11 = arith.constant 32 : index
%c0_12 = arith.constant 0 : index
%c128_13 = arith.constant 128 : index
%c32_14 = arith.constant 32 : index
%c0_15 = arith.constant 0 : index
%c32_16 = arith.constant 32 : index
%c32_17 = arith.constant 32 : index
scf.for %arg5 = %c0_12 to %c128_13 step %c32_14 {
scf.for %arg6 = %c0_15 to %c32_16 step %c32_17 {
%subview_26 = memref.subview %subview_8[%arg5, %arg6] [32, 32] [1, 1] : memref<128x32xf16, strided<[2048, 1], offset: ?>> to memref<32x32xf16, strided<[2048, 1], offset: ?>>
%subview_27 = memref.subview %alloca_1[%arg5, %arg6] [32, 32] [1, 1] : memref<128x32xf16, #gpu.address_space<workgroup>> to memref<32x32xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>>
%c1 = arith.constant 1 : index
%c8_28 = arith.constant 8 : index
%6 = affine.apply affine_map<(d0) -> (d0 mod 4)>(%0)
%c4 = arith.constant 4 : index
%7 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 16 + d2 * 32 + d0 floordiv 4)>(%0, %1, %2)
%c32_29 = arith.constant 32 : index
%8 = affine.apply affine_map<(d0, d1, d2) -> ((d1 * 16 + d2 * 32 + d0 floordiv 4) floordiv 32)>(%0, %1, %2)
%c0_30 = arith.constant 0 : index
%c32_31 = arith.constant 32 : index
%c1_32 = arith.constant 1 : index
%c0_33 = arith.constant 0 : index
%c32_34 = arith.constant 32 : index
%c8_35 = arith.constant 8 : index
%9 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 16 + d2 * 32 + d0 floordiv 4)>(%0, %1, %2)
%10 = affine.apply affine_map<() -> (32)>()
%11 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 4) * 32)>(%0)
%12 = affine.apply affine_map<() -> (32)>()
%13 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 16 + d2 * 32 + d0 floordiv 4)>(%0, %1, %2)
%14 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 4) * 32)>(%0)
%15 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 16 + d2 * 32 + d0 floordiv 4)>(%0, %1, %2)
%16 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 4) * 32)>(%0)
%subview_36 = memref.subview %subview_26[%13, %14] [1, 8] [1, 1] : memref<32x32xf16, strided<[2048, 1], offset: ?>> to memref<1x8xf16, strided<[2048, 1], offset: ?>>
%subview_37 = memref.subview %subview_27[%15, %16] [1, 8] [1, 1] : memref<32x32xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>> to memref<1x8xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>>
%c1_38 = arith.constant 1 : index
%c8_39 = arith.constant 8 : index
%c0_40 = arith.constant 0 : index
%cst_41 = arith.constant 0.000000e+00 : f16
%17 = vector.transfer_read %subview_36[%c0_40, %c0_40], %cst_41 : memref<1x8xf16, strided<[2048, 1], offset: ?>>, vector<1x8xf16>
%cst_42 = arith.constant 0.000000e+00 : f16
%18 = vector.transfer_read %subview_37[%c0_40, %c0_40], %cst_42 : memref<1x8xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>>, vector<1x8xf16>
%c0_43 = arith.constant 0 : index
vector.transfer_write %17, %subview_37[%c0_43, %c0_43] : vector<1x8xf16>, memref<1x8xf16, strided<[32, 1], offset: ?>, #gpu.address_space<workgroup>>
}
}
%c8_18 = arith.constant 8 : index
%c128_19 = arith.constant 128 : index
%c0_20 = arith.constant 0 : index
%c32_21 = arith.constant 32 : index
%c8_22 = arith.constant 8 : index
%c0_23 = arith.constant 0 : index
%c128_24 = arith.constant 128 : index
%c128_25 = arith.constant 128 : index
scf.for %arg5 = %c0_20 to %c32_21 step %c8_22 {
scf.for %arg6 = %c0_23 to %c128_24 step %c128_25 {
%subview_26 = memref.subview %subview_9[%arg5, %arg6] [8, 128] [1, 1] : memref<32x128xf16, strided<[5376, 1], offset: ?>> to memref<8x128xf16, strided<[5376, 1], offset: ?>>
%subview_27 = memref.subview %alloca_0[%arg5, %arg6] [8, 128] [1, 1] : memref<32x128xf16, #gpu.address_space<workgroup>> to memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
%c1 = arith.constant 1 : index
%c8_28 = arith.constant 8 : index
%6 = affine.apply affine_map<(d0) -> (d0 mod 16)>(%0)
%c16 = arith.constant 16 : index
%7 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%c8_29 = arith.constant 8 : index
%8 = affine.apply affine_map<(d0, d1, d2) -> ((d1 * 4 + d2 * 8 + d0 floordiv 16) floordiv 8)>(%0, %1, %2)
%c0_30 = arith.constant 0 : index
%c8_31 = arith.constant 8 : index
%c1_32 = arith.constant 1 : index
%c0_33 = arith.constant 0 : index
%c128_34 = arith.constant 128 : index
%c8_35 = arith.constant 8 : index
%9 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%10 = affine.apply affine_map<() -> (8)>()
%11 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%12 = affine.apply affine_map<() -> (128)>()
%13 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%14 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%15 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%16 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%subview_36 = memref.subview %subview_26[%13, %14] [1, 8] [1, 1] : memref<8x128xf16, strided<[5376, 1], offset: ?>> to memref<1x8xf16, strided<[5376, 1], offset: ?>>
%subview_37 = memref.subview %subview_27[%15, %16] [1, 8] [1, 1] : memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>> to memref<1x8xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
%c1_38 = arith.constant 1 : index
%c8_39 = arith.constant 8 : index
%c0_40 = arith.constant 0 : index
%cst_41 = arith.constant 0.000000e+00 : f16
%17 = vector.transfer_read %subview_36[%c0_40, %c0_40], %cst_41 : memref<1x8xf16, strided<[5376, 1], offset: ?>>, vector<1x8xf16>
%cst_42 = arith.constant 0.000000e+00 : f16
%18 = vector.transfer_read %subview_37[%c0_40, %c0_40], %cst_42 : memref<1x8xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>, vector<1x8xf16>
%c0_43 = arith.constant 0 : index
vector.transfer_write %17, %subview_37[%c0_43, %c0_43] : vector<1x8xf16>, memref<1x8xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
}
}
linalg.matmul {__byteir_gpu_tile_gemm_0, __byteir_mma__, __byteir_mma_level__ = "Threadblock", __byteir_target__ = "nv_sm_80"} ins(%alloca_1, %alloca_0 : memref<128x32xf16, #gpu.address_space<workgroup>>, memref<32x128xf16, #gpu.address_space<workgroup>>) outs(%alloca : memref<128x128xf16, #gpu.address_space<workgroup>>)
}
%c8 = arith.constant 8 : index
%c128 = arith.constant 128 : index
%c0_2 = arith.constant 0 : index
%c128_3 = arith.constant 128 : index
%c8_4 = arith.constant 8 : index
%c0_5 = arith.constant 0 : index
%c128_6 = arith.constant 128 : index
%c128_7 = arith.constant 128 : index
scf.for %arg4 = %c0_2 to %c128_3 step %c8_4 {
scf.for %arg5 = %c0_5 to %c128_6 step %c128_7 {
%subview_8 = memref.subview %alloca[%arg4, %arg5] [8, 128] [1, 1] : memref<128x128xf16, #gpu.address_space<workgroup>> to memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
%subview_9 = memref.subview %subview[%arg4, %arg5] [8, 128] [1, 1] : memref<128x128xf16, strided<[5376, 1], offset: ?>> to memref<8x128xf16, strided<[5376, 1], offset: ?>>
%c1 = arith.constant 1 : index
%c8_10 = arith.constant 8 : index
%6 = affine.apply affine_map<(d0) -> (d0 mod 16)>(%0)
%c16 = arith.constant 16 : index
%7 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%c8_11 = arith.constant 8 : index
%8 = affine.apply affine_map<(d0, d1, d2) -> ((d1 * 4 + d2 * 8 + d0 floordiv 16) floordiv 8)>(%0, %1, %2)
%c0_12 = arith.constant 0 : index
%c8_13 = arith.constant 8 : index
%c1_14 = arith.constant 1 : index
%c0_15 = arith.constant 0 : index
%c128_16 = arith.constant 128 : index
%c8_17 = arith.constant 8 : index
%9 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%10 = affine.apply affine_map<() -> (8)>()
%11 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%12 = affine.apply affine_map<() -> (128)>()
%13 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%14 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%15 = affine.apply affine_map<(d0, d1, d2) -> (d1 * 4 + d2 * 8 + d0 floordiv 16)>(%0, %1, %2)
%16 = affine.apply affine_map<(d0) -> (d0 * 8 - (d0 floordiv 16) * 128)>(%0)
%subview_18 = memref.subview %subview_8[%13, %14] [1, 8] [1, 1] : memref<8x128xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>> to memref<1x8xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>
%subview_19 = memref.subview %subview_9[%15, %16] [1, 8] [1, 1] : memref<8x128xf16, strided<[5376, 1], offset: ?>> to memref<1x8xf16, strided<[5376, 1], offset: ?>>
%c1_20 = arith.constant 1 : index
%c8_21 = arith.constant 8 : index
%c0_22 = arith.constant 0 : index
%cst_23 = arith.constant 0.000000e+00 : f16
%17 = vector.transfer_read %subview_18[%c0_22, %c0_22], %cst_23 : memref<1x8xf16, strided<[128, 1], offset: ?>, #gpu.address_space<workgroup>>, vector<1x8xf16>
%cst_24 = arith.constant 0.000000e+00 : f16
%18 = vector.transfer_read %subview_19[%c0_22, %c0_22], %cst_24 : memref<1x8xf16, strided<[5376, 1], offset: ?>>, vector<1x8xf16>
%c0_25 = arith.constant 0 : index
vector.transfer_write %17, %subview_19[%c0_25, %c0_25] : vector<1x8xf16>, memref<1x8xf16, strided<[5376, 1], offset: ?>>
}
}
} {mapping = [#gpu.block<y>, #gpu.block<x>]}
XG-zheng approved these changes on Jun 18, 2024.

Vremold added a commit that referenced this pull request on Jul 4, 2024:
- 59c2bbb [compiler] fix DeviceGraphCluster for if op (#362)
- 93b7671 [runtime] support bmm with crr rcr ccr layout (#350)
- f24924f [compiler] remove cat aggressive mode (#361)
- f187897 [torch-frontend] change setupBackendTypeConversion to set...
- e5248af [GEMM codegen] Distribute Shared memory copy (#303)
- d1d1fa9 [torch-frontend] update torch-mlir to c7d52f63b482b2c30f4...
- 01098f8 [compiler] fix compilation on AArch64 platform (#358)
- c576d6f [Gemm Codegen] add optimize-vector-transfer (#301)
- 7fa4807 [e2e] add profiler entry for single stablehlo/mhlo file (...

GitOrigin-RevId: 59c2bbb
The pass proceeds as follows:

1. Precondition: check whether the copy can be vectorized. If it cannot, fall back to the non-vectorized algorithm.
2. Convert linalg.copy to linalg.generic.
3. Tile the copy. With N total elements, V elements moved per vectorized access, and T threads per block, the loop count is N / (V × T), so each tile covers V × T elements; in the current case that is an 8×128 tile (see the worked arithmetic after this list). Once the thread block finishes one tile, it moves on to the next.
4. Within each tile, distribute the copy across threads: introduce threadIdx.x and threadIdx.y, flatten them into a 1D coordinate, and assign each thread a 1×8 slice (see the index sketch after this list).
5. Lower the per-thread copies to vector.transfer_read and vector.transfer_write.
6. Unroll the outer loops.
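For concreteness, here is a worked version of the tile-size arithmetic in step 3. This is a hypothetical sketch, not ByteIR code: it assumes 128 threads per block (blockDim = (64, 2, 1), as the thread-id affine map in the step 2 dump suggests), 8 f16 elements per vectorized access, and that each tile keeps full rows so the innermost dimension stays contiguous.

```python
# Hypothetical sketch (not ByteIR code): reproduce the tile shapes chosen in step 1.
THREADS_PER_BLOCK = 64 * 2   # blockDim = (64, 2, 1), per the thread-id affine map in step 2
VEC_WIDTH = 8                # f16 elements moved by one vectorized load/store

def copy_tile(rows, cols):
    """Tile shape and iteration count for a rows x cols workgroup-memory copy."""
    elems_per_tile = THREADS_PER_BLOCK * VEC_WIDTH   # 1024 elements handled per tile
    tile_rows = elems_per_tile // cols               # keep full rows for contiguous access
    iterations = (rows * cols) // elems_per_tile     # N / (V * T)
    return tile_rows, cols, iterations

print(copy_tile(128, 32))    # matrix A staging copy -> (32, 32, 4)
print(copy_tile(32, 128))    # matrix B staging copy -> (8, 128, 4)
print(copy_tile(128, 128))   # accumulator store     -> (8, 128, 16)
```

The resulting tile shapes and trip counts match the scf.for nests in the step 1 dump.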
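And a sketch of the per-thread index computation from step 4. Again hypothetical Python, not ByteIR code; it assumes the same blockDim = (64, 2, 1) and a 1×8 slice per thread, and reproduces the affine maps visible in the step 2 dump.

```python
# Hypothetical sketch (not ByteIR code): per-thread slice coordinates inside one tile.
VEC_WIDTH = 8

def thread_slice(tx, ty, tz, tile_cols):
    """Row/col of the 1x8 slice copied by thread (tx, ty, tz) inside a tile."""
    linear_tid = tx + ty * 64 + tz * 128          # flattened 1D thread id
    threads_per_row = tile_cols // VEC_WIDTH      # threads needed to cover one tile row
    row = linear_tid // threads_per_row
    col = (linear_tid % threads_per_row) * VEC_WIDTH
    return row, col

# For the matrix B copy (8x128 tile) this reduces to the maps in the IR:
#   row = ty*4 + tz*8 + tx floordiv 16,  col = (tx mod 16) * 8
print(thread_slice(tx=17, ty=1, tz=0, tile_cols=128))   # -> (5, 8)
```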