[SDag] Notify listeners when deleting a node #66991
Conversation
@llvm/pr-subscribers-backend-x86 @llvm/pr-subscribers-llvm-selectiondag

Changes

SelectionDAG::DeleteNode does not notify listeners about node deletion. As a result, SelectionDAG::Legalize may skip legalization of some nodes, producing a "Legalized selection DAG" that still contains illegal nodes. These nodes will be legalized during the subsequent DAG combining phase, but that may be too late for some patterns to match.

Patch is 122.75 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/66991.diff

11 Files Affected:
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index 7fcd1f4f898911a..fd244f33e56c39b 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -1036,6 +1036,9 @@ void SelectionDAG::RemoveDeadNode(SDNode *N){
}
void SelectionDAG::DeleteNode(SDNode *N) {
+ for (DAGUpdateListener *DUL = UpdateListeners; DUL; DUL = DUL->Next)
+ DUL->NodeDeleted(N, nullptr);
+
// First take this out of the appropriate CSE map.
RemoveNodeFromCSEMaps(N);
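For context, SelectionDAG update listeners form an intrusive singly linked list headed by `UpdateListeners`, which is exactly the list the added loop walks. The following is a standalone sketch of that pattern with simplified stand-in types, not the actual LLVM classes:

```cpp
#include <cassert>
#include <vector>

struct Node { int id; };

// Simplified stand-in for llvm::SelectionDAG::DAGUpdateListener:
// each listener links itself to the head of an intrusive list.
struct Listener {
  Listener *Next;
  std::vector<int> Deleted; // ids observed via NodeDeleted
  explicit Listener(Listener *&Head) : Next(Head) { Head = this; }
  void NodeDeleted(Node *N) { Deleted.push_back(N->id); }
};

struct MiniDAG {
  Listener *UpdateListeners = nullptr;

  // Mirrors the patch: notify every listener before freeing the node.
  void DeleteNode(Node *N) {
    for (Listener *L = UpdateListeners; L; L = L->Next)
      L->NodeDeleted(N);
    delete N;
  }
};
```

RemoveDeadNodes performs a similar notification walk before freeing nodes; the patch brings DeleteNode in line with it.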
diff --git a/llvm/test/CodeGen/AMDGPU/half.ll b/llvm/test/CodeGen/AMDGPU/half.ll
index e2d55990473c09c..e55c7bd3ac5e628 100644
--- a/llvm/test/CodeGen/AMDGPU/half.ll
+++ b/llvm/test/CodeGen/AMDGPU/half.ll
@@ -1969,19 +1969,19 @@ define amdgpu_kernel void @global_extload_v16f16_to_v16f64(ptr addrspace(1) %out
; VI-NEXT: s_addc_u32 s3, s1, 0
; VI-NEXT: v_mov_b32_e32 v15, s3
; VI-NEXT: v_mov_b32_e32 v14, s2
-; VI-NEXT: s_add_u32 s2, s0, 0x50
+; VI-NEXT: s_add_u32 s2, s0, 0x70
; VI-NEXT: s_addc_u32 s3, s1, 0
; VI-NEXT: v_mov_b32_e32 v17, s3
; VI-NEXT: v_mov_b32_e32 v16, s2
-; VI-NEXT: s_add_u32 s2, s0, 64
+; VI-NEXT: s_add_u32 s2, s0, 0x60
; VI-NEXT: s_addc_u32 s3, s1, 0
; VI-NEXT: v_mov_b32_e32 v19, s3
; VI-NEXT: v_mov_b32_e32 v11, s1
; VI-NEXT: v_mov_b32_e32 v18, s2
-; VI-NEXT: s_add_u32 s2, s0, 0x70
+; VI-NEXT: s_add_u32 s2, s0, 0x50
; VI-NEXT: v_mov_b32_e32 v10, s0
; VI-NEXT: s_addc_u32 s3, s1, 0
-; VI-NEXT: s_add_u32 s0, s0, 0x60
+; VI-NEXT: s_add_u32 s0, s0, 64
; VI-NEXT: s_addc_u32 s1, s1, 0
; VI-NEXT: s_waitcnt vmcnt(1)
; VI-NEXT: v_cvt_f32_f16_e32 v22, v4
@@ -1995,15 +1995,15 @@ define amdgpu_kernel void @global_extload_v16f16_to_v16f64(ptr addrspace(1) %out
; VI-NEXT: v_cvt_f64_f32_e32 v[4:5], v4
; VI-NEXT: v_cvt_f64_f32_e32 v[6:7], v7
; VI-NEXT: s_waitcnt vmcnt(0)
-; VI-NEXT: v_cvt_f32_f16_e32 v26, v2
-; VI-NEXT: v_cvt_f32_f16_sdwa v27, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
-; VI-NEXT: v_cvt_f32_f16_sdwa v28, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
+; VI-NEXT: v_cvt_f32_f16_e32 v26, v0
+; VI-NEXT: v_cvt_f32_f16_sdwa v27, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
+; VI-NEXT: v_cvt_f32_f16_sdwa v28, v1 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
; VI-NEXT: flat_store_dwordx4 v[8:9], v[4:7]
-; VI-NEXT: v_cvt_f32_f16_e32 v8, v3
-; VI-NEXT: v_cvt_f32_f16_e32 v29, v0
-; VI-NEXT: v_cvt_f32_f16_sdwa v30, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
-; VI-NEXT: v_cvt_f32_f16_e32 v31, v1
-; VI-NEXT: v_cvt_f32_f16_sdwa v32, v1 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
+; VI-NEXT: v_cvt_f32_f16_e32 v8, v1
+; VI-NEXT: v_cvt_f32_f16_e32 v29, v2
+; VI-NEXT: v_cvt_f32_f16_sdwa v30, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
+; VI-NEXT: v_cvt_f32_f16_e32 v31, v3
+; VI-NEXT: v_cvt_f32_f16_sdwa v32, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
; VI-NEXT: v_cvt_f64_f32_e32 v[0:1], v20
; VI-NEXT: v_cvt_f64_f32_e32 v[2:3], v21
; VI-NEXT: v_cvt_f64_f32_e32 v[4:5], v22
@@ -2040,40 +2040,40 @@ define amdgpu_kernel void @global_extload_v16f16_to_v16f64(ptr addrspace(1) %out
; GFX11-NEXT: global_load_b128 v[0:3], v32, s[2:3]
; GFX11-NEXT: global_load_b128 v[4:7], v32, s[2:3] offset:16
; GFX11-NEXT: s_waitcnt vmcnt(1)
-; GFX11-NEXT: v_cvt_f32_f16_e32 v10, v1
-; GFX11-NEXT: s_waitcnt vmcnt(0)
-; GFX11-NEXT: v_lshrrev_b32_e32 v23, 16, v5
; GFX11-NEXT: v_lshrrev_b32_e32 v11, 16, v1
-; GFX11-NEXT: v_lshrrev_b32_e32 v19, 16, v4
-; GFX11-NEXT: v_cvt_f32_f16_e32 v15, v7
+; GFX11-NEXT: s_waitcnt vmcnt(0)
+; GFX11-NEXT: v_cvt_f32_f16_e32 v23, v7
; GFX11-NEXT: v_lshrrev_b32_e32 v7, 16, v7
-; GFX11-NEXT: v_cvt_f32_f16_e32 v14, v6
+; GFX11-NEXT: v_cvt_f32_f16_e32 v22, v6
; GFX11-NEXT: v_lshrrev_b32_e32 v6, 16, v6
+; GFX11-NEXT: v_lshrrev_b32_e32 v19, 16, v5
+; GFX11-NEXT: v_cvt_f32_f16_e32 v10, v1
+; GFX11-NEXT: v_lshrrev_b32_e32 v15, 16, v4
; GFX11-NEXT: v_cvt_f32_f16_e32 v13, v3
; GFX11-NEXT: v_lshrrev_b32_e32 v3, 16, v3
; GFX11-NEXT: v_cvt_f32_f16_e32 v12, v2
; GFX11-NEXT: v_lshrrev_b32_e32 v2, 16, v2
-; GFX11-NEXT: v_cvt_f32_f16_e32 v18, v4
-; GFX11-NEXT: v_cvt_f32_f16_e32 v22, v5
-; GFX11-NEXT: v_cvt_f64_f32_e32 v[4:5], v10
-; GFX11-NEXT: v_cvt_f32_f16_e32 v10, v23
-; GFX11-NEXT: v_cvt_f32_f16_e32 v34, v11
-; GFX11-NEXT: v_cvt_f32_f16_e32 v11, v19
-; GFX11-NEXT: v_lshrrev_b32_e32 v9, 16, v0
; GFX11-NEXT: v_cvt_f32_f16_e32 v7, v7
; GFX11-NEXT: v_cvt_f32_f16_e32 v6, v6
+; GFX11-NEXT: v_lshrrev_b32_e32 v9, 16, v0
+; GFX11-NEXT: v_cvt_f32_f16_e32 v18, v5
+; GFX11-NEXT: v_cvt_f32_f16_e32 v34, v11
+; GFX11-NEXT: v_cvt_f32_f16_e32 v11, v19
+; GFX11-NEXT: v_cvt_f32_f16_e32 v14, v4
+; GFX11-NEXT: v_cvt_f64_f32_e32 v[4:5], v10
+; GFX11-NEXT: v_cvt_f32_f16_e32 v10, v15
; GFX11-NEXT: v_cvt_f32_f16_e32 v8, v0
; GFX11-NEXT: v_cvt_f32_f16_e32 v3, v3
; GFX11-NEXT: v_cvt_f32_f16_e32 v2, v2
-; GFX11-NEXT: v_cvt_f64_f32_e32 v[28:29], v22
-; GFX11-NEXT: v_cvt_f64_f32_e32 v[30:31], v10
-; GFX11-NEXT: v_cvt_f64_f32_e32 v[24:25], v18
-; GFX11-NEXT: v_cvt_f64_f32_e32 v[26:27], v11
+; GFX11-NEXT: v_cvt_f64_f32_e32 v[28:29], v23
+; GFX11-NEXT: v_cvt_f64_f32_e32 v[30:31], v7
+; GFX11-NEXT: v_cvt_f64_f32_e32 v[24:25], v22
+; GFX11-NEXT: v_cvt_f64_f32_e32 v[26:27], v6
; GFX11-NEXT: v_cvt_f32_f16_e32 v33, v9
-; GFX11-NEXT: v_cvt_f64_f32_e32 v[20:21], v15
-; GFX11-NEXT: v_cvt_f64_f32_e32 v[22:23], v7
+; GFX11-NEXT: v_cvt_f64_f32_e32 v[20:21], v18
+; GFX11-NEXT: v_cvt_f64_f32_e32 v[22:23], v11
; GFX11-NEXT: v_cvt_f64_f32_e32 v[16:17], v14
-; GFX11-NEXT: v_cvt_f64_f32_e32 v[18:19], v6
+; GFX11-NEXT: v_cvt_f64_f32_e32 v[18:19], v10
; GFX11-NEXT: v_cvt_f64_f32_e32 v[0:1], v8
; GFX11-NEXT: v_cvt_f64_f32_e32 v[8:9], v12
; GFX11-NEXT: v_cvt_f64_f32_e32 v[12:13], v13
@@ -2082,10 +2082,10 @@ define amdgpu_kernel void @global_extload_v16f16_to_v16f64(ptr addrspace(1) %out
; GFX11-NEXT: v_cvt_f64_f32_e32 v[6:7], v34
; GFX11-NEXT: v_cvt_f64_f32_e32 v[2:3], v33
; GFX11-NEXT: s_clause 0x7
-; GFX11-NEXT: global_store_b128 v32, v[28:31], s[0:1] offset:80
-; GFX11-NEXT: global_store_b128 v32, v[24:27], s[0:1] offset:64
-; GFX11-NEXT: global_store_b128 v32, v[20:23], s[0:1] offset:112
-; GFX11-NEXT: global_store_b128 v32, v[16:19], s[0:1] offset:96
+; GFX11-NEXT: global_store_b128 v32, v[28:31], s[0:1] offset:112
+; GFX11-NEXT: global_store_b128 v32, v[24:27], s[0:1] offset:96
+; GFX11-NEXT: global_store_b128 v32, v[20:23], s[0:1] offset:80
+; GFX11-NEXT: global_store_b128 v32, v[16:19], s[0:1] offset:64
; GFX11-NEXT: global_store_b128 v32, v[12:15], s[0:1] offset:48
; GFX11-NEXT: global_store_b128 v32, v[8:11], s[0:1] offset:32
; GFX11-NEXT: global_store_b128 v32, v[4:7], s[0:1] offset:16
diff --git a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
index 438b1bfe319a04e..f433bd86023645b 100644
--- a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
+++ b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
@@ -4526,33 +4526,33 @@ define amdgpu_kernel void @constant_zextload_v16i1_to_v16i64(ptr addrspace(1) %o
; GFX6-NEXT: s_mov_b32 s0, s4
; GFX6-NEXT: s_mov_b32 s1, s5
; GFX6-NEXT: s_waitcnt vmcnt(0)
-; GFX6-NEXT: v_bfe_u32 v2, v29, 11, 1
-; GFX6-NEXT: v_bfe_u32 v0, v29, 10, 1
-; GFX6-NEXT: buffer_store_dwordx4 v[0:3], off, s[0:3], 0 offset:80
-; GFX6-NEXT: v_bfe_u32 v5, v29, 9, 1
+; GFX6-NEXT: v_lshrrev_b32_e32 v2, 15, v29
+; GFX6-NEXT: v_bfe_u32 v0, v29, 14, 1
+; GFX6-NEXT: buffer_store_dwordx4 v[0:3], off, s[0:3], 0 offset:112
+; GFX6-NEXT: v_bfe_u32 v5, v29, 13, 1
; GFX6-NEXT: s_waitcnt expcnt(0)
-; GFX6-NEXT: v_bfe_u32 v3, v29, 8, 1
-; GFX6-NEXT: buffer_store_dwordx4 v[3:6], off, s[0:3], 0 offset:64
-; GFX6-NEXT: v_lshrrev_b32_e32 v8, 15, v29
+; GFX6-NEXT: v_bfe_u32 v3, v29, 12, 1
+; GFX6-NEXT: buffer_store_dwordx4 v[3:6], off, s[0:3], 0 offset:96
+; GFX6-NEXT: v_bfe_u32 v8, v29, 11, 1
; GFX6-NEXT: s_waitcnt expcnt(0)
-; GFX6-NEXT: v_bfe_u32 v6, v29, 14, 1
-; GFX6-NEXT: buffer_store_dwordx4 v[6:9], off, s[0:3], 0 offset:112
-; GFX6-NEXT: v_bfe_u32 v27, v29, 5, 1
-; GFX6-NEXT: v_bfe_u32 v23, v29, 7, 1
-; GFX6-NEXT: v_bfe_u32 v19, v29, 1, 1
-; GFX6-NEXT: v_bfe_u32 v15, v29, 3, 1
-; GFX6-NEXT: v_bfe_u32 v11, v29, 13, 1
-; GFX6-NEXT: v_bfe_u32 v25, v29, 4, 1
-; GFX6-NEXT: v_bfe_u32 v21, v29, 6, 1
-; GFX6-NEXT: v_and_b32_e32 v17, 1, v29
-; GFX6-NEXT: v_bfe_u32 v13, v29, 2, 1
+; GFX6-NEXT: v_bfe_u32 v6, v29, 10, 1
+; GFX6-NEXT: buffer_store_dwordx4 v[6:9], off, s[0:3], 0 offset:80
+; GFX6-NEXT: v_bfe_u32 v27, v29, 1, 1
+; GFX6-NEXT: v_bfe_u32 v23, v29, 3, 1
+; GFX6-NEXT: v_bfe_u32 v19, v29, 5, 1
+; GFX6-NEXT: v_bfe_u32 v15, v29, 7, 1
+; GFX6-NEXT: v_bfe_u32 v11, v29, 9, 1
+; GFX6-NEXT: v_and_b32_e32 v25, 1, v29
+; GFX6-NEXT: v_bfe_u32 v21, v29, 2, 1
+; GFX6-NEXT: v_bfe_u32 v17, v29, 4, 1
+; GFX6-NEXT: v_bfe_u32 v13, v29, 6, 1
; GFX6-NEXT: s_waitcnt expcnt(0)
-; GFX6-NEXT: v_bfe_u32 v9, v29, 12, 1
-; GFX6-NEXT: buffer_store_dwordx4 v[9:12], off, s[0:3], 0 offset:96
-; GFX6-NEXT: buffer_store_dwordx4 v[13:16], off, s[0:3], 0 offset:16
-; GFX6-NEXT: buffer_store_dwordx4 v[17:20], off, s[0:3], 0
-; GFX6-NEXT: buffer_store_dwordx4 v[21:24], off, s[0:3], 0 offset:48
-; GFX6-NEXT: buffer_store_dwordx4 v[25:28], off, s[0:3], 0 offset:32
+; GFX6-NEXT: v_bfe_u32 v9, v29, 8, 1
+; GFX6-NEXT: buffer_store_dwordx4 v[9:12], off, s[0:3], 0 offset:64
+; GFX6-NEXT: buffer_store_dwordx4 v[13:16], off, s[0:3], 0 offset:48
+; GFX6-NEXT: buffer_store_dwordx4 v[17:20], off, s[0:3], 0 offset:32
+; GFX6-NEXT: buffer_store_dwordx4 v[21:24], off, s[0:3], 0 offset:16
+; GFX6-NEXT: buffer_store_dwordx4 v[25:28], off, s[0:3], 0
; GFX6-NEXT: s_endpgm
;
; GFX8-LABEL: constant_zextload_v16i1_to_v16i64:
@@ -4561,21 +4561,20 @@ define amdgpu_kernel void @constant_zextload_v16i1_to_v16i64(ptr addrspace(1) %o
; GFX8-NEXT: v_mov_b32_e32 v17, 0
; GFX8-NEXT: v_mov_b32_e32 v21, 0
; GFX8-NEXT: v_mov_b32_e32 v19, v17
-; GFX8-NEXT: v_mov_b32_e32 v13, v17
+; GFX8-NEXT: v_mov_b32_e32 v4, 1
; GFX8-NEXT: s_waitcnt lgkmcnt(0)
; GFX8-NEXT: v_mov_b32_e32 v0, s2
; GFX8-NEXT: v_mov_b32_e32 v1, s3
; GFX8-NEXT: flat_load_ushort v2, v[0:1]
; GFX8-NEXT: s_add_u32 s2, s0, 0x70
; GFX8-NEXT: s_addc_u32 s3, s1, 0
-; GFX8-NEXT: s_add_u32 s4, s0, 0x50
+; GFX8-NEXT: s_add_u32 s4, s0, 0x60
; GFX8-NEXT: s_addc_u32 s5, s1, 0
-; GFX8-NEXT: v_mov_b32_e32 v0, s4
; GFX8-NEXT: v_mov_b32_e32 v24, s3
-; GFX8-NEXT: v_mov_b32_e32 v1, s5
; GFX8-NEXT: v_mov_b32_e32 v23, s2
-; GFX8-NEXT: s_add_u32 s2, s0, 64
+; GFX8-NEXT: s_add_u32 s2, s0, 0x50
; GFX8-NEXT: s_addc_u32 s3, s1, 0
+; GFX8-NEXT: v_mov_b32_e32 v13, v17
; GFX8-NEXT: v_mov_b32_e32 v9, v17
; GFX8-NEXT: v_mov_b32_e32 v5, v17
; GFX8-NEXT: v_mov_b32_e32 v22, 0
@@ -4584,24 +4583,26 @@ define amdgpu_kernel void @constant_zextload_v16i1_to_v16i64(ptr addrspace(1) %o
; GFX8-NEXT: v_mov_b32_e32 v7, 0
; GFX8-NEXT: v_mov_b32_e32 v11, 0
; GFX8-NEXT: s_waitcnt vmcnt(0)
-; GFX8-NEXT: v_lshrrev_b16_e32 v4, 10, v2
-; GFX8-NEXT: v_and_b32_e32 v18, 1, v4
-; GFX8-NEXT: v_lshrrev_b16_e32 v4, 11, v2
-; GFX8-NEXT: v_and_b32_e32 v4, 1, v4
-; GFX8-NEXT: v_and_b32_e32 v20, 0xffff, v4
-; GFX8-NEXT: v_lshrrev_b16_e32 v4, 14, v2
+; GFX8-NEXT: v_lshrrev_b16_e32 v0, 13, v2
+; GFX8-NEXT: v_lshrrev_b16_e32 v1, 12, v2
+; GFX8-NEXT: v_and_b32_e32 v0, 1, v0
+; GFX8-NEXT: v_and_b32_e32 v18, 1, v1
+; GFX8-NEXT: v_and_b32_e32 v20, 0xffff, v0
+; GFX8-NEXT: v_mov_b32_e32 v0, s4
+; GFX8-NEXT: v_mov_b32_e32 v1, s5
; GFX8-NEXT: flat_store_dwordx4 v[0:1], v[18:21]
-; GFX8-NEXT: v_mov_b32_e32 v0, 1
-; GFX8-NEXT: v_and_b32_e32 v16, 1, v4
+; GFX8-NEXT: v_lshrrev_b16_e32 v0, 14, v2
+; GFX8-NEXT: v_and_b32_e32 v16, 1, v0
; GFX8-NEXT: v_lshrrev_b16_e32 v18, 15, v2
; GFX8-NEXT: flat_store_dwordx4 v[23:24], v[16:19]
+; GFX8-NEXT: v_lshrrev_b16_e32 v0, 11, v2
; GFX8-NEXT: v_mov_b32_e32 v24, s3
-; GFX8-NEXT: v_and_b32_sdwa v16, v2, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1 src1_sel:DWORD
-; GFX8-NEXT: v_lshrrev_b16_e32 v0, 9, v2
-; GFX8-NEXT: v_mov_b32_e32 v23, s2
+; GFX8-NEXT: v_lshrrev_b16_e32 v6, 10, v2
; GFX8-NEXT: v_and_b32_e32 v0, 1, v0
-; GFX8-NEXT: s_add_u32 s2, s0, 0x60
+; GFX8-NEXT: v_mov_b32_e32 v23, s2
+; GFX8-NEXT: s_add_u32 s2, s0, 64
; GFX8-NEXT: v_mov_b32_e32 v19, 0
+; GFX8-NEXT: v_and_b32_e32 v16, 1, v6
; GFX8-NEXT: v_and_b32_e32 v18, 0xffff, v0
; GFX8-NEXT: s_addc_u32 s3, s1, 0
; GFX8-NEXT: flat_store_dwordx4 v[23:24], v[16:19]
@@ -4610,17 +4611,16 @@ define amdgpu_kernel void @constant_zextload_v16i1_to_v16i64(ptr addrspace(1) %o
; GFX8-NEXT: s_add_u32 s2, s0, 48
; GFX8-NEXT: s_addc_u32 s3, s1, 0
; GFX8-NEXT: v_mov_b32_e32 v26, s3
-; GFX8-NEXT: v_lshrrev_b16_e32 v0, 12, v2
; GFX8-NEXT: v_mov_b32_e32 v25, s2
; GFX8-NEXT: s_add_u32 s2, s0, 32
-; GFX8-NEXT: v_and_b32_e32 v19, 1, v0
-; GFX8-NEXT: v_lshrrev_b16_e32 v0, 13, v2
+; GFX8-NEXT: v_lshrrev_b16_e32 v0, 9, v2
; GFX8-NEXT: v_mov_b32_e32 v20, v17
; GFX8-NEXT: v_mov_b32_e32 v1, v17
; GFX8-NEXT: v_mov_b32_e32 v17, s1
; GFX8-NEXT: s_addc_u32 s3, s1, 0
; GFX8-NEXT: v_and_b32_e32 v0, 1, v0
; GFX8-NEXT: v_mov_b32_e32 v16, s0
+; GFX8-NEXT: v_and_b32_sdwa v19, v2, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1 src1_sel:DWORD
; GFX8-NEXT: v_and_b32_e32 v21, 0xffff, v0
; GFX8-NEXT: s_add_u32 s0, s0, 16
; GFX8-NEXT: v_lshrrev_b16_e32 v6, 7, v2
diff --git a/llvm/test/CodeGen/AMDGPU/load-constant-i16.ll b/llvm/test/CodeGen/AMDGPU/load-constant-i16.ll
index bee3d455187ca78..ace5eafc9aa8478 100644
--- a/llvm/test/CodeGen/AMDGPU/load-constant-i16.ll
+++ b/llvm/test/CodeGen/AMDGPU/load-constant-i16.ll
@@ -5650,27 +5650,27 @@ define amdgpu_kernel void @constant_zextload_v16i16_to_v16i64(ptr addrspace(1) %
; GCN-NOHSA-SI-NEXT: s_waitcnt lgkmcnt(0)
; GCN-NOHSA-SI-NEXT: s_lshr_b32 s12, s1, 16
; GCN-NOHSA-SI-NEXT: s_lshr_b32 s13, s3, 16
-; GCN-NOHSA-SI-NEXT: s_lshr_b32 s14, s7, 16
-; GCN-NOHSA-SI-NEXT: s_lshr_b32 s15, s5, 16
-; GCN-NOHSA-SI-NEXT: s_lshr_b32 s16, s4, 16
-; GCN-NOHSA-SI-NEXT: s_lshr_b32 s17, s6, 16
+; GCN-NOHSA-SI-NEXT: s_lshr_b32 s14, s5, 16
+; GCN-NOHSA-SI-NEXT: s_lshr_b32 s15, s7, 16
+; GCN-NOHSA-SI-NEXT: s_lshr_b32 s16, s6, 16
+; GCN-NOHSA-SI-NEXT: s_lshr_b32 s17, s4, 16
; GCN-NOHSA-SI-NEXT: s_lshr_b32 s18, s2, 16
; GCN-NOHSA-SI-NEXT: s_lshr_b32 s19, s0, 16
; GCN-NOHSA-SI-NEXT: s_and_b32 s0, s0, 0xffff
; GCN-NOHSA-SI-NEXT: s_and_b32 s2, s2, 0xffff
-; GCN-NOHSA-SI-NEXT: s_and_b32 s6, s6, 0xffff
; GCN-NOHSA-SI-NEXT: s_and_b32 s4, s4, 0xffff
+; GCN-NOHSA-SI-NEXT: s_and_b32 s6, s6, 0xffff
; GCN-NOHSA-SI-NEXT: s_and_b32 s1, s1, 0xffff
; GCN-NOHSA-SI-NEXT: s_and_b32 s3, s3, 0xffff
-; GCN-NOHSA-SI-NEXT: s_and_b32 s5, s5, 0xffff
; GCN-NOHSA-SI-NEXT: s_and_b32 s7, s7, 0xffff
-; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s5
+; GCN-NOHSA-SI-NEXT: s_and_b32 s5, s5, 0xffff
+; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s7
; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v2, s15
-; GCN-NOHSA-SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[8:11], 0 offset:80
+; GCN-NOHSA-SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[8:11], 0 offset:112
; GCN-NOHSA-SI-NEXT: s_waitcnt expcnt(0)
-; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s7
+; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s5
; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v2, s14
-; GCN-NOHSA-SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[8:11], 0 offset:112
+; GCN-NOHSA-SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[8:11], 0 offset:80
; GCN-NOHSA-SI-NEXT: s_waitcnt expcnt(0)
; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s3
; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v2, s13
@@ -5680,13 +5680,13 @@ define amdgpu_kernel void @constant_zextload_v16i16_to_v16i64(ptr addrspace(1) %
; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v2, s12
; GCN-NOHSA-SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[8:11], 0 offset:16
; GCN-NOHSA-SI-NEXT: s_waitcnt expcnt(0)
-; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s4
+; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s6
; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v2, s16
-; GCN-NOHSA-SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[8:11], 0 offset:64
+; GCN-NOHSA-SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[8:11], 0 offset:96
; GCN-NOHSA-SI-NEXT: s_waitcnt expcnt(0)
-; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s6
+; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s4
; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v2, s17
-; GCN-NOHSA-SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[8:11], 0 offset:96
+; GCN-NOHSA-SI-NEXT: buffer_store_dwordx4 v[0:3], off, s[8:11], 0 offset:64
; GCN-NOHSA-SI-NEXT: s_waitcnt expcnt(0)
; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v0, s2
; GCN-NOHSA-SI-NEXT: v_mov_b32_e32 v2, s18
@@ -5707,33 +5707,33 @@ define amdgpu_kernel void @constant_zextload_v16i16_to_v16i64(ptr addrspace(1) %
; GCN-HSA-NEXT: s_waitcnt lgkmcnt(0)
; GCN-HSA-NEXT: s_lshr_b32 s12, s5, 16
; GCN-HSA-NEXT: s_lshr_b32 s13, s7, 16
-; GCN-HSA-NEXT: s_lshr_b32 s14, s11, 16
-; GCN-HSA-NEXT: s_lshr_b32 s2, s9, 16
-; GCN-HSA-NEXT: s_lshr_b32 s15, s8, 16
-; GCN-HSA-NEXT: s_lshr_b32 s16, s10, 16
+; GCN-HSA-NEXT: s_lshr_b32 s14, s9, 16
+; GCN-HSA-NEXT: s_lshr_b32 s2, s11, 16
+; GCN-HSA-NEXT: s_lshr_b32 s15, s10, 16
+; GCN-HSA-NEXT: s_lshr_b32 s16, s8, 16
; GCN-HSA-NEXT: s_lshr_b32 s17, s6, 16
; GCN-HSA-NEXT: s_lshr_b32 s18, s4, 16
; GCN-HSA-NEXT: s_and_b32 s4, s4, 0xffff
; GCN-HSA-NEXT: s_and_b32 s6, s6, 0xffff
-; GCN-HSA-NEXT: s_and_b32 s10, s10, 0xffff
; GCN-HSA-NEXT: s_and_b32 s8, s8, 0xffff
+; GCN-HSA-NEXT: s_and_b32 s10, s10, 0xffff
; GCN-HSA-NEXT: s_and_b32 s5, s5, 0xffff
; GCN-HSA-NEXT: s_and_b32 s7, s7, 0xffff
-; GCN-HSA-NEXT: s_and_b32 s11, s11, 0xffff
-; GCN-HSA-NEXT: s_and_b32 s3, s9, 0xffff
+; GCN-HSA-NEXT: s_and_b32 s9, s9, 0xffff
+; GCN-HSA-NEXT: s_and_b32 s3, s11, 0xffff
; GCN-HSA-NEXT: v_mov_b32_e32 v2, s2
-; GCN-HSA-NEXT: s_add_u32 s2, s0, 0x50
+; GCN-HSA-NEXT: s_add_u32 s2, s0, 0x70
; GCN-HSA-NEXT: v_mov_b32_e32 v0, s3
; GCN-HSA-NEXT: s_addc_u32 s3, s1, 0
; GCN-HSA-NEXT: v_mov_b32_e32 v5, s3
; GCN-HSA-NEXT: v_mov_b32_e32 v4, s2
-; GCN-HSA-NEXT: s_add_u32 s2, s0, 0x70
+; GCN-HSA-NEXT: s_add_u32 s2, s0, 0x50
; GCN-HSA-NEXT: s_addc_u32 s3, s1, 0
; GCN-HSA-NEXT: flat_store_dwordx4 v[4:5], v[0:3]
; GCN-HSA-NEXT: v_mov_b32_e32 v5, s3
; GCN-HSA-NEXT: v_mov_b32_e32 v4, s2
; GCN-HSA-NEXT: s_add_u32 s2, s0, 48
-; GCN-HSA-NEXT: v_mov_b32_e32 v0, s11
+; GCN-HSA-NEXT: v_mov_b32_e32 v0, s9
; GCN-HSA-NEXT: v_mov_b32_e32 v2, s14
; GCN-HSA-NEXT: s_addc_u32 s3, s1, 0
; GCN-HSA-NEXT: flat_store_dwordx4 v[4:5], v[0:3]
@@ -5746,22 +5746,22 @@ define amdgpu_kernel void @constant_zextload_v16i16_to_v16i64(ptr addrspace(1) %
; GCN-HSA-NEXT: flat_store_dwordx4 v[4:5], v[0:3]
; GCN-HSA-NEXT: v_mov_b32_e32 v5, s3
; GCN-HSA-NEXT: v_mov_b32_e32 v4, s2
-; GCN-HSA-NEXT: s_add_u32 s2, s0, 64
+; GCN-HSA-NEXT: s_add_u32 s2, s0, 0x60
; GCN-HSA-NEXT: v_mov_b32_e32 v0, s5
; GCN-HSA-NEXT: v_mov_b32_e32 v2, s12
; GCN-HSA-NEXT: s_addc_u32 s3, s1, 0
; GCN-HSA-NEXT: flat_store_dwordx4 v[4:5], v[0:3]
; GCN-HSA-NEXT: v_mov_b32_e32 v5, s3
; GCN-HSA-NEXT: v_mov_b32_e32 v4, s2
-; GCN-HSA-NEXT: s_add_u32 s2, s0, 0x60
-; GCN-HSA-NEXT: v_mov_b32_e32 v0, s8
+; GCN-HSA-NEXT: s_add_u32 s2, s0, 64
+; GCN-HSA-NEXT: v_mov_b32_e32 v0, s10
; GCN-HSA-NEXT: v_mov_b32_e32 v2, s15
; GCN-HSA-NEXT: s_addc_u32 s3, s1, 0
; GCN-HSA-NEXT: flat_store_dwordx4 v[4:5], v[0:3]
; GCN-HSA-NEXT: v_mov_b32_e32 v5, s3
; GCN-HSA-NEXT: v_mov_b32_e32 v4, s2
; GCN-HSA-NEXT: s_add_u32 s2, s0, 32
-; GCN-HSA-NEXT: v_mov_b32_e32 v0, s10
+; GCN-HSA-NEXT: v_mov_b32_e32 v...
[truncated]
; CHECK-NEXT: vblendps {{.*#+}} ymm0 = ymm0[0],mem[1,2,3],ymm0[4],mem[5],ymm0[6,7]
; CHECK-NEXT: vbroadcastf128 {{.*#+}} ymm1 = mem[0,1,0,1]
; CHECK-NEXT: vshufpd {{.*#+}} ymm1 = ymm1[0,0,3,2]
; CHECK-NEXT: vblendps {{.*#+}} ymm0 = ymm0[0],ymm1[1,2,3],ymm0[4],ymm1[5],ymm0[6,7]
This looks like a regression.
Legalizing: t50: v4f64 = vector_shuffle<2,2,3,u> t48, undef:v4f64
Trying custom legalization
Successfully custom legalized node
... replacing: t50: v4f64 = vector_shuffle<2,2,3,u> t48, undef:v4f64
with: t53: v4f64 = vector_shuffle<0,0,3,u> t52, undef:v4f64
Before this patch, t52 and t53 are not getting legalized during DAG legalization, and the "legalized" DAG is:
SelectionDAG has 32 nodes:
t0: ch,glue = EntryToken
t2: v8i32,ch = CopyFromReg t0, Register:v8i32 %0
t28: i64 = X86ISD::Wrapper TargetConstantPool:i64<<8 x i32> <i32 undef, i32 undef, i32 undef, i32 undef, i32 13, i32 14, i32 undef, i32 16>> 0
t26: v8i32,ch = load<(load (s256) from constant-pool)> t0, t28, undef:i64
t48: v4f64 = bitcast t26
t52: v4f64 = vector_shuffle<2,3,2,3> t48, undef:v4f64
t53: v4f64 = vector_shuffle<0,0,3,u> t52, undef:v4f64
t51: v8f32 = bitcast t53
...
t53 is legalized by DAG combiner into VPERMILPI, which is immediately folded with t52 and t26 into:
v4f64 = BUILD_VECTOR ConstantFP:f64<2.970794e-313>, ConstantFP:f64<2.970794e-313>, ConstantFP:f64<3.395193e-313>, ConstantFP:f64<2.970794e-313>
After that t52 is deleted as dead without being legalized.
After this patch, t52 and t53 are legalized during DAG legalization phase, and the resulting DAG is:
SelectionDAG has 31 nodes:
t0: ch,glue = EntryToken
t2: v8i32,ch = CopyFromReg t0, Register:v8i32 %0
t28: i64 = X86ISD::Wrapper TargetConstantPool:i64<<8 x i32> <i32 undef, i32 undef, i32 undef, i32 undef, i32 13, i32 14, i32 undef, i32 16>> 0
t66: i64 = add t28, Constant:i64<16>
t67: v4f64,ch = X86ISD::SUBV_BROADCAST_LOAD<(load (s128) from constant-pool + 16, basealign 32)> t0, t66
t64: v4f64 = X86ISD::VPERMILPI t67, TargetConstant:i8<4>
t51: v8f32 = bitcast t64
...
VPERMILPI is not combined with SUBV_BROADCAST_LOAD, presumably because the optimizer cannot see through the t66 add to extract the shuffle mask stored in the constant pool.
I don't know if this is a serious regression. The test seems to be testing something different.
getTargetConstantFromBasePtr currently only works for constant pool pointers with no offset - I'll add a TODO for now
@@ -6519,7 +6519,7 @@ define <16 x i16> @shuffle_v16i16_03_04_05_06_07_16_17_26_11_12_13_14_15_24_25_2
; AVX2-LABEL: shuffle_v16i16_03_04_05_06_07_16_17_26_11_12_13_14_15_24_25_26:
; AVX2: # %bb.0:
; AVX2-NEXT: vpblendw {{.*#+}} ymm0 = ymm1[0,1,2],ymm0[3,4,5,6,7],ymm1[8,9,10],ymm0[11,12,13,14,15]
; AVX2-NEXT: vpermq {{.*#+}} ymm1 = ymm1[2,3,2,3]
; AVX2-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,2,3] |
regression - this is now dependent on the VPBLENDW
Thanks. This one looks difficult though (ISTM it worked by chance).
https://www.diffchecker.com/tG5Ss94y/
I suspect #90219 can help with this
Unfortunately, it doesn't. Here is the DAG:
t2: v16i16,ch = CopyFromReg t0, Register:v16i16 %0
t4: v16i16,ch = CopyFromReg t0, Register:v16i16 %1
t31: v16i16 = X86ISD::BLENDI t2, t4, TargetConstant:i8<7>
t25: v4i64 = bitcast t31
t70: v4i64 = X86ISD::VPERMI t25, TargetConstant:i8<-18>
t74: v16i16 = bitcast t70
t65: v16i16 = X86ISD::BLENDI t31, t74, TargetConstant:i8<4>
t65: (BLENDI (BLENDI t2, t4, 7), (VPERMI (BLENDI t2, t4, 7), -18), 4)
I guess the main issue here is that t31 = (BLENDI t2, t4, 7)
has more than one use?
AMDGPU changes look fine.
@@ -5099,8 +5099,8 @@ define <16 x i16> @shuffle_v16i16_03_07_01_00_02_07_03_13_11_15_09_08_10_15_11_1
;
; AVX2-SLOW-LABEL: shuffle_v16i16_03_07_01_00_02_07_03_13_11_15_09_08_10_15_11_13:
; AVX2-SLOW: # %bb.0:
; AVX2-SLOW-NEXT: vmovdqa {{.*#+}} ymm1 = [0,1,6,3,4,5,6,7]
; AVX2-SLOW-NEXT: vpermd %ymm0, %ymm1, %ymm0 |
These four are all minorly regressed.
Please see the comment above.
This is good - it's precisely what we'd expect on AVX2-SLOW/AVX2-FAST-PERLANE targets - as long as AVX2-FAST still merges to VPERMD.
x86 shuffle combining and SimplifyDemandedVectorElts are very sensitive to one-use limits - my guess is that's the cause of most of the regressions.
Is this because a new node gets allocated in the same memory that the deleted node occupied?
Yes, exactly.
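The mechanism in question is a stale-pointer hazard: the legalizer tracks already-processed nodes by address, and deleting a node without notification leaves the stale entry behind, so a later allocation recycled into the same address is wrongly treated as done. A minimal standalone sketch of the failure mode (hypothetical names, not LLVM's actual allocator):

```cpp
#include <cassert>
#include <set>

struct Node { int opcode; };

// The legalizer remembers which nodes it has already processed,
// keyed by pointer value (stand-in for visited-node tracking).
std::set<Node *> LegalizedNodes;

// Tiny recycling allocator: freeing then allocating can hand back
// the same address, as a node recycling allocator may do.
Node *FreeSlot = nullptr;

Node *allocateNode(int Opcode) {
  Node *N = FreeSlot ? FreeSlot : new Node;
  FreeSlot = nullptr;
  N->opcode = Opcode;
  return N;
}

// Deletion without notifying anyone - the set keeps the stale pointer.
void deleteNode(Node *N) { FreeSlot = N; }
```

With listener notification, the deletion callback would be the point at which `LegalizedNodes` erases the stale entry.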
I'm afraid I won't be able to nail this with zero knowledge of X86 vectors. I spent a long time trying to debug it, with no success.
; AVX2-SLOW-NEXT: vpmovsxbd {{.*#+}} ymm1 = [0,5,2,3,4,5,6,7]
; AVX2-SLOW-NEXT: vpermd %ymm0, %ymm1, %ymm0
; AVX2-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,2,3]
; AVX2-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0],ymm1[1],ymm0[2,3,4],ymm1[5],ymm0[6,7]
Before this patch we have:
t32: v4i64 = X86ISD::VPERMI t12, TargetConstant:i8<78>
t39: v4i64 = vector_shuffle<4,5,2,3> t12, t32
t40: v8i32 = bitcast t39
t57: v8i32 = X86ISD::BLENDI t53, t40, TargetConstant:i8<34>
which gets combined into
t67: v8i32 = X86ISD::VPERMV t66, t53
After this patch we have:
t62: v4i64 = X86ISD::VPERMI t12, TargetConstant:i8<-18>
t40: v8i32 = bitcast t62
t57: v8i32 = X86ISD::BLENDI t53, t40, TargetConstant:i8<34>
This doesn't get combined because the chain isn't "deep enough" (Depth = 1) according to this check in combineX86ShuffleChain:
// Depth threshold above which we can efficiently use variable mask shuffles.
int VariableCrossLaneShuffleDepth =
Subtarget.hasFastVariableCrossLaneShuffle() ? 1 : 2;
...
AllowVariableCrossLaneMask &=
(Depth >= VariableCrossLaneShuffleDepth) || HasVariableMask;
...
if (is128BitLaneCrossingShuffleMask(MaskVT, Mask)) {
// If we have a single input lane-crossing shuffle then lower to VPERMV.
if (UnaryShuffle && AllowVariableCrossLaneMask && !MaskContainsZeros) {
if (Subtarget.hasAVX2() &&
(MaskVT == MVT::v8f32 || MaskVT == MVT::v8i32)) {
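Reduced to a predicate, the gate quoted above refuses the VPERMV lowering for a Depth-1 chain on targets without fast variable cross-lane shuffles. A simplified model (the real code has additional conditions):

```cpp
#include <cassert>

// Simplified model of the AllowVariableCrossLaneMask gating in
// combineX86ShuffleChain: variable-mask shuffles are only worth it
// once the combined chain is deep enough, unless the chain already
// contains a variable mask.
bool allowVariableCrossLaneMask(int Depth, bool FastVariableCrossLaneShuffle,
                                bool HasVariableMask) {
  int VariableCrossLaneShuffleDepth = FastVariableCrossLaneShuffle ? 1 : 2;
  return Depth >= VariableCrossLaneShuffleDepth || HasVariableMask;
}
```

This matches the behavior described: the deeper pre-patch chain passed the gate, while the shorter post-patch chain only passes on fast-shuffle targets.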
…ffsets. As noted on #66991 - we sometimes share vector constant pool entries, referencing subvectors within them via pointer offsets
Just a random idea: could we add an (expensive-checks?) check after legalization that a second round of legalization has no effect?
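Generically, that proposal is an idempotence assertion: legalization run on an already-legalized DAG should change nothing. A toy sketch of such a check (hypothetical helper, not an existing LLVM API; a string stands in for the IR):

```cpp
#include <cassert>
#include <functional>
#include <string>

// Fixpoint check: a pass that claims to fully normalize its input
// should be idempotent, i.e. a second application is the identity.
bool isIdempotent(const std::function<std::string(const std::string &)> &Pass,
                  const std::string &Input) {
  std::string Once = Pass(Input);
  return Pass(Once) == Once;
}
```

Under expensive checks, the analogous DAG-level assertion would re-run Legalize and verify the node list is unchanged.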
Maybe worth checking if this has any compile-time impact?
I've created https://github.com/RKSimon/llvm-project/tree/perf/pr66991 - it should appear on https://llvm-compile-time-tracker.com shortly.
That's a shame, but (a) it doesn't look too bad and (b) this does feel like a reasonably serious bug, if running the legalizer does not result in a fully legalized DAG. Incidentally, I understand that the DAG gets fixed up by later combine passes, but I was a bit surprised that all the combine passes are run even at -O0. Was there ever a plan to not run them at -O0? Would that be feasible?
Every time we encounter this kind of situation I come back to fixing topological ordering in the DAG once and for all…
I am a big fan of D152928, but would it completely eliminate this deleted-nodes bug?
No, but I expect the impact will be reduced considerably.