[ASAN][AMDGPU] Make address sanitizer checks more efficient for the divergent target. #72247

Merged: 1 commit merged into llvm:main on Jan 4, 2024

Conversation

vpykhtin
Contributor

Address sanitizer checks for the AMDGPU target in non-recovery mode aren't very efficient at the moment, which can be illustrated with the following program:

instr_before; 
load ptr1; 
instr_in_the_middle; 
load ptr2; 
instr_after; 

ASAN generates the following instrumentation:

instr_before; 
if (sanity_check_passed(ptr1)) 
  load ptr1; 
  instr_in_the_middle; 
  if (sanity_check_passed(ptr2)) 
     load ptr2; 
     instr_after; 
  else 
     // ASAN report block 2 
     __asan_report(ptr2); // wave terminates   
     unreachable; 
else 
   // ASAN report block 1 
  __asan_report(ptr1); // wave terminates 
  unreachable; 

Each sanitizer check is treated as a non-uniform condition (which is correct: some lanes may pass the check while others fail). This leads to the structure above: normal program flow continues in the then blocks, so lanes that pass all sanity checks run to the end of the program, and only then does the wave terminate at the first reporting else block. For each else block the compiler has to keep the exec mask and the pointer value alive to report the error, consuming an enormous number of registers that stay live until the end of the program.

This patch changes the behavior on a failing sanity check: instead of waiting until the passing lanes reach the end of the program, the error is reported and the wave terminates as soon as any lane has violated the sanity check. With this approach the sanity-check condition is treated as uniform, and the resulting program looks much like ordinary CPU code:

instr_before; 
if (any_lane_violated(sanity_check_passed(ptr1)))
  // ASAN report block 1 
  __asan_report(ptr1); // abort the program 
  unreachable; 
load ptr1; 
instr_in_the_middle; 
if (any_lane_violated(sanity_check_passed(ptr2))) 
  // ASAN report block 2   
  __asan_report(ptr2); // abort the program 
  unreachable; 
load ptr2; 
instr_after; 

However, it has to use a trick to get through the structurizer and some later passes: the ASAN check is generated as in recovery mode, but the reporting function aborts; that is, the standard unreachable instruction isn't used:

...
if (any_lane_violated(sanity_check_passed(ptr1)))
  // ASAN report block 1 
  __asan_report(ptr1); // abort the program 
  // pretend we're going to continue the program
load ptr1; 
...
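
For reference, the emitted check ends up with roughly the following IR shape (a minimal sketch distilled from the OPT checks in the tests below; %fault, %addr and the block names are illustrative):

  %ballot = call i64 @llvm.amdgcn.ballot.i64(i1 %fault)  ; %fault = per-lane "access is bad" condition
  %any = icmp ne i64 %ballot, 0                          ; uniform: did any lane fail the check?
  br i1 %any, label %asan.report, label %cont, !prof !0  ; branch weights mark the report path as cold

asan.report:
  br i1 %fault, label %report, label %join               ; only the faulting lanes make the call

report:
  call void @__asan_report_load4(i64 %addr)              ; aborts at run time
  call void @llvm.amdgcn.unreachable()                   ; used instead of an IR-level unreachable
  br label %join

join:
  br label %cont

cont:
  ; the instrumented load/store continues here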

This may create some undesirable effects:

  1. The register allocator generates a lot of register save/restore code around the __asan_report call. This may bloat the code, since there is a report block for every accessed pointer.
  2. Loop-invariant code in the report blocks is hoisted into the loop preheader. This could probably be addressed using block frequency information, but most likely it isn't a problem at all.

These problems are to be addressed later.

Flattening the address sanitizer check

To simplify the divergent CFG, this patch also changes the instrumentation code from:

  uint64_t address = ptr; 
  sbyte *shadow_address = MemToShadow(address); 
  sbyte shadow_value = *shadow_address; 
  if (shadow_value) { 
    sbyte last_accessed_byte = (address & 7) + kAccessSize - 1; 
    if (last_accessed_byte >= shadow_value) { 
      ReportError(address, kAccessSize, kIsWrite); 
      abort(); 
    } 
  } 

to

  uint64_t address = ptr; 
  sbyte *shadow_address = MemToShadow(address); 
  sbyte shadow_value = *shadow_address; 

  sbyte last_accessed_byte = (address & 7) + kAccessSize - 1; 
  if (shadow_value && last_accessed_byte >= shadow_value) { 
    ReportError(address, kAccessSize, kIsWrite); 
    abort(); 
  } 

This saves one if; the removed branch only skipped a handful of instructions anyway, and their latency can be hidden by the load from shadow memory.
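
In IR terms the flattened check boils down to a single i1 that feeds the ballot shown earlier; a minimal sketch for a 4-byte access (value names are illustrative):

  %shadow_nonzero = icmp ne i8 %shadow_value, 0
  %addr_low = and i64 %addr, 7
  %last_byte_wide = add i64 %addr_low, 3                 ; (address & 7) + kAccessSize - 1 for kAccessSize = 4
  %last_byte = trunc i64 %last_byte_wide to i8
  %in_redzone = icmp sge i8 %last_byte, %shadow_value
  %fault = and i1 %shadow_nonzero, %in_redzone           ; one combined condition, no second branch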

@llvmbot
Collaborator

llvmbot commented Nov 14, 2023

@llvm/pr-subscribers-llvm-transforms

@llvm/pr-subscribers-backend-amdgpu

Author: Valery Pykhtin (vpykhtin)

Patch is 76.07 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/72247.diff

6 Files Affected:

  • (modified) llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp (+37-1)
  • (added) llvm/test/CodeGen/AMDGPU/asan_loop.ll (+231)
  • (added) llvm/test/CodeGen/AMDGPU/asan_trivial.ll (+610)
  • (modified) llvm/test/Instrumentation/AddressSanitizer/AMDGPU/asan_instrument_constant_address_space.ll (+92-11)
  • (modified) llvm/test/Instrumentation/AddressSanitizer/AMDGPU/asan_instrument_generic_address_space.ll (+234-26)
  • (modified) llvm/test/Instrumentation/AddressSanitizer/AMDGPU/asan_instrument_global_address_space.ll (+173-22)
diff --git a/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp b/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
index 5c2763850ac6540..a88b271ed8e7325 100644
--- a/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
@@ -174,6 +174,8 @@ const char kAsanAllocasUnpoison[] = "__asan_allocas_unpoison";
 
 const char kAMDGPUAddressSharedName[] = "llvm.amdgcn.is.shared";
 const char kAMDGPUAddressPrivateName[] = "llvm.amdgcn.is.private";
+const char kAMDGPUBallotName[] = "llvm.amdgcn.ballot.i64";
+const char kAMDGPUUnreachableName[] = "llvm.amdgcn.unreachable";
 
 // Accesses sizes are powers of two: 1, 2, 4, 8, 16.
 static const size_t kNumberOfAccessSizes = 5;
@@ -692,6 +694,8 @@ struct AddressSanitizer {
                                        Instruction *InsertBefore, Value *Addr,
                                        uint32_t TypeStoreSize, bool IsWrite,
                                        Value *SizeArgument);
+  Instruction *genAMDGPUReportBlock(IRBuilder<> &IRB, Value *Cond,
+                                    bool Recover);
   void instrumentUnusualSizeOrAlignment(Instruction *I,
                                         Instruction *InsertBefore, Value *Addr,
                                         TypeSize TypeStoreSize, bool IsWrite,
@@ -1707,6 +1711,30 @@ Instruction *AddressSanitizer::instrumentAMDGPUAddress(
   return InsertBefore;
 }
 
+Instruction *AddressSanitizer::genAMDGPUReportBlock(IRBuilder<> &IRB,
+                                                    Value *Cond, bool Recover) {
+  Module &M = *IRB.GetInsertBlock()->getModule();
+  Value *ReportCond = Cond;
+  if (!Recover) {
+    auto Ballot = M.getOrInsertFunction(kAMDGPUBallotName, IRB.getInt64Ty(),
+                                        IRB.getInt1Ty());
+    ReportCond = IRB.CreateIsNotNull(IRB.CreateCall(Ballot, {Cond}));
+  }
+
+  auto *Trm =
+      SplitBlockAndInsertIfThen(ReportCond, &*IRB.GetInsertPoint(), false,
+                                MDBuilder(*C).createBranchWeights(1, 100000));
+  Trm->getParent()->setName("asan.report");
+
+  if (Recover)
+    return Trm;
+
+  Trm = SplitBlockAndInsertIfThen(Cond, Trm, false);
+  IRB.SetInsertPoint(Trm);
+  return IRB.CreateCall(
+      M.getOrInsertFunction(kAMDGPUUnreachableName, IRB.getVoidTy()), {});
+}
+
 void AddressSanitizer::instrumentAddress(Instruction *OrigIns,
                                          Instruction *InsertBefore, Value *Addr,
                                          MaybeAlign Alignment,
@@ -1758,7 +1786,15 @@ void AddressSanitizer::instrumentAddress(Instruction *OrigIns,
   size_t Granularity = 1ULL << Mapping.Scale;
   Instruction *CrashTerm = nullptr;
 
-  if (ClAlwaysSlowPath || (TypeStoreSize < 8 * Granularity)) {
+  bool GenSlowPath = (ClAlwaysSlowPath || (TypeStoreSize < 8 * Granularity));
+
+  if (TargetTriple.isAMDGPU()) {
+    if (GenSlowPath) {
+      auto *Cmp2 = createSlowPathCmp(IRB, AddrLong, ShadowValue, TypeStoreSize);
+      Cmp = IRB.CreateAnd(Cmp, Cmp2);
+    }
+    CrashTerm = genAMDGPUReportBlock(IRB, Cmp, Recover);
+  } else if (GenSlowPath) {
     // We use branch weights for the slow path check, to indicate that the slow
     // path is rarely taken. This seems to be the case for SPEC benchmarks.
     Instruction *CheckTerm = SplitBlockAndInsertIfThen(
diff --git a/llvm/test/CodeGen/AMDGPU/asan_loop.ll b/llvm/test/CodeGen/AMDGPU/asan_loop.ll
new file mode 100644
index 000000000000000..44561b6f4b9a134
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/asan_loop.ll
@@ -0,0 +1,231 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 3
+; RUN: opt  -passes=asan -S < %s | FileCheck %s --check-prefix=OPT
+; RUN: opt < %s -passes='asan,default<O3>' -o - | llc -O3 -mtriple=amdgcn-hsa-amdhsa -mcpu=gfx90a -o - | FileCheck %s --check-prefix=LLC
+
+; This test contains checks for opt and llc, to update use:
+;   utils/update_test_checks.py --force-update
+;   utils/update_llc_test_checks.py --force-update
+;
+; --force-update allows to override "Assertions have been autogenerated by" guard
+target datalayout = "e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5-G1-ni:7"
+target triple = "amdgcn-amd-amdhsa"
+
+declare i32 @llvm.amdgcn.workitem.id.x() #0
+
+define protected amdgpu_kernel void @uniform_loop_global(i32 %num, ptr addrspace(1) %ptr1, ptr addrspace(1) %ptr2) sanitize_address {
+; OPT-LABEL: define protected amdgpu_kernel void @uniform_loop_global(
+; OPT-SAME: i32 [[NUM:%.*]], ptr addrspace(1) [[PTR1:%.*]], ptr addrspace(1) [[PTR2:%.*]]) #[[ATTR1:[0-9]+]] {
+; OPT-NEXT:  entry:
+; OPT-NEXT:    [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x()
+; OPT-NEXT:    br label [[WHILE_COND:%.*]]
+; OPT:       while.cond:
+; OPT-NEXT:    [[C:%.*]] = phi i32 [ [[NUM]], [[ENTRY:%.*]] ], [ [[NEXT_C:%.*]], [[TMP31:%.*]] ]
+; OPT-NEXT:    [[CMP:%.*]] = icmp eq i32 [[C]], 0
+; OPT-NEXT:    br i1 [[CMP]], label [[EXIT:%.*]], label [[WHILE_BODY:%.*]]
+; OPT:       while.body:
+; OPT-NEXT:    [[OFFS32:%.*]] = add i32 [[TID]], [[C]]
+; OPT-NEXT:    [[OFFS:%.*]] = zext i32 [[OFFS32]] to i64
+; OPT-NEXT:    [[PP1:%.*]] = getelementptr inbounds i64, ptr addrspace(1) [[PTR1]], i64 [[OFFS]]
+; OPT-NEXT:    [[TMP0:%.*]] = ptrtoint ptr addrspace(1) [[PP1]] to i64
+; OPT-NEXT:    [[TMP1:%.*]] = lshr i64 [[TMP0]], 3
+; OPT-NEXT:    [[TMP2:%.*]] = add i64 [[TMP1]], 2147450880
+; OPT-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; OPT-NEXT:    [[TMP4:%.*]] = load i8, ptr [[TMP3]], align 1
+; OPT-NEXT:    [[TMP5:%.*]] = icmp ne i8 [[TMP4]], 0
+; OPT-NEXT:    [[TMP6:%.*]] = and i64 [[TMP0]], 7
+; OPT-NEXT:    [[TMP7:%.*]] = add i64 [[TMP6]], 3
+; OPT-NEXT:    [[TMP8:%.*]] = trunc i64 [[TMP7]] to i8
+; OPT-NEXT:    [[TMP9:%.*]] = icmp sge i8 [[TMP8]], [[TMP4]]
+; OPT-NEXT:    [[TMP10:%.*]] = and i1 [[TMP5]], [[TMP9]]
+; OPT-NEXT:    [[TMP11:%.*]] = call i64 @llvm.amdgcn.ballot.i64(i1 [[TMP10]])
+; OPT-NEXT:    [[TMP12:%.*]] = icmp ne i64 [[TMP11]], 0
+; OPT-NEXT:    br i1 [[TMP12]], label [[ASAN_REPORT:%.*]], label [[TMP15:%.*]], !prof [[PROF0:![0-9]+]]
+; OPT:       asan.report:
+; OPT-NEXT:    br i1 [[TMP10]], label [[TMP13:%.*]], label [[TMP14:%.*]]
+; OPT:       13:
+; OPT-NEXT:    call void @__asan_report_load4(i64 [[TMP0]]) #[[ATTR5:[0-9]+]]
+; OPT-NEXT:    call void @llvm.amdgcn.unreachable()
+; OPT-NEXT:    br label [[TMP14]]
+; OPT:       14:
+; OPT-NEXT:    br label [[TMP15]]
+; OPT:       15:
+; OPT-NEXT:    [[VAL:%.*]] = load i32, ptr addrspace(1) [[PP1]], align 4
+; OPT-NEXT:    [[SUM:%.*]] = add i32 [[VAL]], 42
+; OPT-NEXT:    [[PP2:%.*]] = getelementptr inbounds i64, ptr addrspace(1) [[PTR2]], i64 [[OFFS]]
+; OPT-NEXT:    [[TMP16:%.*]] = ptrtoint ptr addrspace(1) [[PP2]] to i64
+; OPT-NEXT:    [[TMP17:%.*]] = lshr i64 [[TMP16]], 3
+; OPT-NEXT:    [[TMP18:%.*]] = add i64 [[TMP17]], 2147450880
+; OPT-NEXT:    [[TMP19:%.*]] = inttoptr i64 [[TMP18]] to ptr
+; OPT-NEXT:    [[TMP20:%.*]] = load i8, ptr [[TMP19]], align 1
+; OPT-NEXT:    [[TMP21:%.*]] = icmp ne i8 [[TMP20]], 0
+; OPT-NEXT:    [[TMP22:%.*]] = and i64 [[TMP16]], 7
+; OPT-NEXT:    [[TMP23:%.*]] = add i64 [[TMP22]], 3
+; OPT-NEXT:    [[TMP24:%.*]] = trunc i64 [[TMP23]] to i8
+; OPT-NEXT:    [[TMP25:%.*]] = icmp sge i8 [[TMP24]], [[TMP20]]
+; OPT-NEXT:    [[TMP26:%.*]] = and i1 [[TMP21]], [[TMP25]]
+; OPT-NEXT:    [[TMP27:%.*]] = call i64 @llvm.amdgcn.ballot.i64(i1 [[TMP26]])
+; OPT-NEXT:    [[TMP28:%.*]] = icmp ne i64 [[TMP27]], 0
+; OPT-NEXT:    br i1 [[TMP28]], label [[ASAN_REPORT1:%.*]], label [[TMP31]], !prof [[PROF0]]
+; OPT:       asan.report1:
+; OPT-NEXT:    br i1 [[TMP26]], label [[TMP29:%.*]], label [[TMP30:%.*]]
+; OPT:       29:
+; OPT-NEXT:    call void @__asan_report_store4(i64 [[TMP16]]) #[[ATTR5]]
+; OPT-NEXT:    call void @llvm.amdgcn.unreachable()
+; OPT-NEXT:    br label [[TMP30]]
+; OPT:       30:
+; OPT-NEXT:    br label [[TMP31]]
+; OPT:       31:
+; OPT-NEXT:    store i32 [[SUM]], ptr addrspace(1) [[PP2]], align 4
+; OPT-NEXT:    [[NEXT_C]] = sub i32 [[C]], 1
+; OPT-NEXT:    br label [[WHILE_COND]]
+; OPT:       exit:
+; OPT-NEXT:    ret void
+;
+; LLC-LABEL: uniform_loop_global:
+; LLC:       ; %bb.0: ; %entry
+; LLC-NEXT:    s_load_dword s54, s[8:9], 0x0
+; LLC-NEXT:    s_add_u32 flat_scratch_lo, s12, s17
+; LLC-NEXT:    s_addc_u32 flat_scratch_hi, s13, 0
+; LLC-NEXT:    s_add_u32 s0, s0, s17
+; LLC-NEXT:    s_addc_u32 s1, s1, 0
+; LLC-NEXT:    s_waitcnt lgkmcnt(0)
+; LLC-NEXT:    s_cmp_eq_u32 s54, 0
+; LLC-NEXT:    s_mov_b32 s32, 0
+; LLC-NEXT:    s_cbranch_scc1 .LBB0_11
+; LLC-NEXT:  ; %bb.1: ; %while.body.preheader
+; LLC-NEXT:    s_mov_b64 s[40:41], s[4:5]
+; LLC-NEXT:    s_getpc_b64 s[4:5]
+; LLC-NEXT:    s_add_u32 s4, s4, __asan_report_load4@gotpcrel32@lo+4
+; LLC-NEXT:    s_addc_u32 s5, s5, __asan_report_load4@gotpcrel32@hi+12
+; LLC-NEXT:    s_load_dwordx2 s[48:49], s[4:5], 0x0
+; LLC-NEXT:    s_getpc_b64 s[4:5]
+; LLC-NEXT:    s_add_u32 s4, s4, __asan_report_store4@gotpcrel32@lo+4
+; LLC-NEXT:    s_addc_u32 s5, s5, __asan_report_store4@gotpcrel32@hi+12
+; LLC-NEXT:    s_load_dwordx4 s[44:47], s[8:9], 0x8
+; LLC-NEXT:    s_load_dwordx2 s[50:51], s[4:5], 0x0
+; LLC-NEXT:    v_mov_b32_e32 v44, v0
+; LLC-NEXT:    s_mov_b32 s33, s16
+; LLC-NEXT:    s_mov_b32 s42, s15
+; LLC-NEXT:    s_mov_b64 s[34:35], s[8:9]
+; LLC-NEXT:    s_mov_b32 s43, s14
+; LLC-NEXT:    s_mov_b64 s[36:37], s[10:11]
+; LLC-NEXT:    s_mov_b64 s[38:39], s[6:7]
+; LLC-NEXT:    v_and_b32_e32 v45, 0x3ff, v44
+; LLC-NEXT:    v_mov_b32_e32 v47, 0
+; LLC-NEXT:    s_branch .LBB0_4
+; LLC-NEXT:  .LBB0_2: ; %Flow
+; LLC-NEXT:    ; in Loop: Header=BB0_4 Depth=1
+; LLC-NEXT:    s_or_b64 exec, exec, s[52:53]
+; LLC-NEXT:  .LBB0_3: ; %while.cond
+; LLC-NEXT:    ; in Loop: Header=BB0_4 Depth=1
+; LLC-NEXT:    s_add_i32 s54, s54, -1
+; LLC-NEXT:    v_add_u32_e32 v0, 42, v46
+; LLC-NEXT:    s_cmp_lg_u32 s54, 0
+; LLC-NEXT:    global_store_dword v[42:43], v0, off
+; LLC-NEXT:    s_cbranch_scc0 .LBB0_11
+; LLC-NEXT:  .LBB0_4: ; %while.body
+; LLC-NEXT:    ; =>This Inner Loop Header: Depth=1
+; LLC-NEXT:    v_add_u32_e32 v46, s54, v45
+; LLC-NEXT:    v_lshlrev_b64 v[42:43], 3, v[46:47]
+; LLC-NEXT:    s_waitcnt lgkmcnt(0)
+; LLC-NEXT:    v_mov_b32_e32 v0, s45
+; LLC-NEXT:    v_add_co_u32_e32 v40, vcc, s44, v42
+; LLC-NEXT:    v_addc_co_u32_e32 v41, vcc, v0, v43, vcc
+; LLC-NEXT:    v_lshrrev_b64 v[0:1], 3, v[40:41]
+; LLC-NEXT:    v_add_co_u32_e32 v0, vcc, 0x7fff8000, v0
+; LLC-NEXT:    v_addc_co_u32_e32 v1, vcc, 0, v1, vcc
+; LLC-NEXT:    flat_load_sbyte v0, v[0:1]
+; LLC-NEXT:    v_and_b32_e32 v1, 7, v40
+; LLC-NEXT:    v_add_u16_e32 v1, 3, v1
+; LLC-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; LLC-NEXT:    v_cmp_ne_u16_sdwa s[4:5], v0, v47 src0_sel:BYTE_0 src1_sel:DWORD
+; LLC-NEXT:    v_cmp_ge_i16_e32 vcc, v1, v0
+; LLC-NEXT:    s_and_b64 vcc, s[4:5], vcc
+; LLC-NEXT:    s_cbranch_vccz .LBB0_8
+; LLC-NEXT:  ; %bb.5: ; %asan.report
+; LLC-NEXT:    ; in Loop: Header=BB0_4 Depth=1
+; LLC-NEXT:    s_and_saveexec_b64 s[52:53], vcc
+; LLC-NEXT:    s_cbranch_execz .LBB0_7
+; LLC-NEXT:  ; %bb.6: ; in Loop: Header=BB0_4 Depth=1
+; LLC-NEXT:    s_add_u32 s8, s34, 24
+; LLC-NEXT:    s_addc_u32 s9, s35, 0
+; LLC-NEXT:    s_mov_b64 s[4:5], s[40:41]
+; LLC-NEXT:    s_mov_b64 s[6:7], s[38:39]
+; LLC-NEXT:    s_mov_b64 s[10:11], s[36:37]
+; LLC-NEXT:    s_mov_b32 s12, s43
+; LLC-NEXT:    s_mov_b32 s13, s42
+; LLC-NEXT:    s_mov_b32 s14, s33
+; LLC-NEXT:    v_mov_b32_e32 v31, v44
+; LLC-NEXT:    v_mov_b32_e32 v0, v40
+; LLC-NEXT:    v_mov_b32_e32 v1, v41
+; LLC-NEXT:    s_swappc_b64 s[30:31], s[48:49]
+; LLC-NEXT:    ; divergent unreachable
+; LLC-NEXT:  .LBB0_7: ; %Flow4
+; LLC-NEXT:    ; in Loop: Header=BB0_4 Depth=1
+; LLC-NEXT:    s_or_b64 exec, exec, s[52:53]
+; LLC-NEXT:  .LBB0_8: ; in Loop: Header=BB0_4 Depth=1
+; LLC-NEXT:    v_mov_b32_e32 v0, s47
+; LLC-NEXT:    v_add_co_u32_e32 v42, vcc, s46, v42
+; LLC-NEXT:    v_addc_co_u32_e32 v43, vcc, v0, v43, vcc
+; LLC-NEXT:    v_lshrrev_b64 v[0:1], 3, v[42:43]
+; LLC-NEXT:    v_add_co_u32_e32 v0, vcc, 0x7fff8000, v0
+; LLC-NEXT:    v_addc_co_u32_e32 v1, vcc, 0, v1, vcc
+; LLC-NEXT:    flat_load_sbyte v2, v[0:1]
+; LLC-NEXT:    global_load_dword v46, v[40:41], off
+; LLC-NEXT:    v_and_b32_e32 v0, 7, v42
+; LLC-NEXT:    v_add_u16_e32 v0, 3, v0
+; LLC-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; LLC-NEXT:    v_cmp_ne_u16_sdwa s[4:5], v2, v47 src0_sel:BYTE_0 src1_sel:DWORD
+; LLC-NEXT:    v_cmp_ge_i16_e32 vcc, v0, v2
+; LLC-NEXT:    s_and_b64 vcc, s[4:5], vcc
+; LLC-NEXT:    s_cbranch_vccz .LBB0_3
+; LLC-NEXT:  ; %bb.9: ; %asan.report1
+; LLC-NEXT:    ; in Loop: Header=BB0_4 Depth=1
+; LLC-NEXT:    s_and_saveexec_b64 s[52:53], vcc
+; LLC-NEXT:    s_cbranch_execz .LBB0_2
+; LLC-NEXT:  ; %bb.10: ; in Loop: Header=BB0_4 Depth=1
+; LLC-NEXT:    s_add_u32 s8, s34, 24
+; LLC-NEXT:    s_addc_u32 s9, s35, 0
+; LLC-NEXT:    s_mov_b64 s[4:5], s[40:41]
+; LLC-NEXT:    s_mov_b64 s[6:7], s[38:39]
+; LLC-NEXT:    s_mov_b64 s[10:11], s[36:37]
+; LLC-NEXT:    s_mov_b32 s12, s43
+; LLC-NEXT:    s_mov_b32 s13, s42
+; LLC-NEXT:    s_mov_b32 s14, s33
+; LLC-NEXT:    v_mov_b32_e32 v31, v44
+; LLC-NEXT:    v_mov_b32_e32 v0, v42
+; LLC-NEXT:    v_mov_b32_e32 v1, v43
+; LLC-NEXT:    s_swappc_b64 s[30:31], s[50:51]
+; LLC-NEXT:    ; divergent unreachable
+; LLC-NEXT:    s_branch .LBB0_2
+; LLC-NEXT:  .LBB0_11: ; %exit
+; LLC-NEXT:    s_endpgm
+entry:
+  %tid = call i32 @llvm.amdgcn.workitem.id.x()
+  br label %while.cond
+
+while.cond:
+  %c = phi i32 [%num, %entry], [%next_c, %while.body]
+  %cmp = icmp eq i32 %c, 0
+  br i1 %cmp, label %exit, label %while.body
+
+while.body:
+  %offs32 = add i32 %tid, %c
+  %offs = zext i32 %offs32 to i64
+
+  %pp1 = getelementptr inbounds i64, ptr addrspace(1) %ptr1, i64 %offs
+  %val = load i32, ptr addrspace(1) %pp1, align 4
+
+  %sum = add i32 %val, 42
+
+  %pp2 = getelementptr inbounds i64, ptr addrspace(1) %ptr2, i64 %offs
+  store i32 %sum, ptr addrspace(1) %pp2, align 4
+
+  %next_c = sub i32 %c, 1
+  br label %while.cond
+
+exit:
+  ret void
+}
+
+attributes #0 = { nounwind readnone }
diff --git a/llvm/test/CodeGen/AMDGPU/asan_trivial.ll b/llvm/test/CodeGen/AMDGPU/asan_trivial.ll
new file mode 100644
index 000000000000000..c3b191b405dba97
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/asan_trivial.ll
@@ -0,0 +1,610 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 3
+; RUN: opt  -passes=asan -S < %s | FileCheck %s --check-prefix=OPT
+; RUN: opt < %s -passes='asan,default<O3>' -o - | llc -O3 -mtriple=amdgcn-hsa-amdhsa -mcpu=gfx90a -o - | FileCheck %s --check-prefix=LLC-W64
+; RUN: opt < %s -passes='asan,default<O3>' -o - | llc -mtriple=amdgcn-hsa-amdhsa -mcpu=gfx1100 -amdgpu-enable-delay-alu=0 -mattr=+wavefrontsize32,-wavefrontsize64 -o - | FileCheck %s --check-prefix=LLC-W32
+
+; This test contains checks for opt and llc, to update use:
+;   utils/update_test_checks.py --force-update
+;   utils/update_llc_test_checks.py --force-update
+;
+; --force-update allows to override "Assertions have been autogenerated by" guard
+target triple = "amdgcn-amd-amdhsa"
+
+declare i32 @llvm.amdgcn.workitem.id.x() #0
+
+define protected amdgpu_kernel void @global_loadstore_uniform(ptr addrspace(1) %ptr) sanitize_address {
+; OPT-LABEL: define protected amdgpu_kernel void @global_loadstore_uniform(
+; OPT-SAME: ptr addrspace(1) [[PTR:%.*]]) #[[ATTR1:[0-9]+]] {
+; OPT-NEXT:  entry:
+; OPT-NEXT:    [[TMP0:%.*]] = ptrtoint ptr addrspace(1) [[PTR]] to i64
+; OPT-NEXT:    [[TMP1:%.*]] = lshr i64 [[TMP0]], 3
+; OPT-NEXT:    [[TMP2:%.*]] = add i64 [[TMP1]], 2147450880
+; OPT-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; OPT-NEXT:    [[TMP4:%.*]] = load i8, ptr [[TMP3]], align 1
+; OPT-NEXT:    [[TMP5:%.*]] = icmp ne i8 [[TMP4]], 0
+; OPT-NEXT:    [[TMP6:%.*]] = and i64 [[TMP0]], 7
+; OPT-NEXT:    [[TMP7:%.*]] = add i64 [[TMP6]], 3
+; OPT-NEXT:    [[TMP8:%.*]] = trunc i64 [[TMP7]] to i8
+; OPT-NEXT:    [[TMP9:%.*]] = icmp sge i8 [[TMP8]], [[TMP4]]
+; OPT-NEXT:    [[TMP10:%.*]] = and i1 [[TMP5]], [[TMP9]]
+; OPT-NEXT:    [[TMP11:%.*]] = call i64 @llvm.amdgcn.ballot.i64(i1 [[TMP10]])
+; OPT-NEXT:    [[TMP12:%.*]] = icmp ne i64 [[TMP11]], 0
+; OPT-NEXT:    br i1 [[TMP12]], label [[ASAN_REPORT:%.*]], label [[TMP15:%.*]], !prof [[PROF0:![0-9]+]]
+; OPT:       asan.report:
+; OPT-NEXT:    br i1 [[TMP10]], label [[TMP13:%.*]], label [[TMP14:%.*]]
+; OPT:       13:
+; OPT-NEXT:    call void @__asan_report_load4(i64 [[TMP0]]) #[[ATTR5:[0-9]+]]
+; OPT-NEXT:    call void @llvm.amdgcn.unreachable()
+; OPT-NEXT:    br label [[TMP14]]
+; OPT:       14:
+; OPT-NEXT:    br label [[TMP15]]
+; OPT:       15:
+; OPT-NEXT:    [[VAL:%.*]] = load volatile i32, ptr addrspace(1) [[PTR]], align 4
+; OPT-NEXT:    store volatile i32 [[VAL]], ptr addrspace(1) [[PTR]], align 4
+; OPT-NEXT:    ret void
+;
+; LLC-W64-LABEL: global_loadstore_uniform:
+; LLC-W64:       ; %bb.0: ; %entry
+; LLC-W64-NEXT:    s_load_dwordx2 s[34:35], s[8:9], 0x0
+; LLC-W64-NEXT:    s_add_u32 flat_scratch_lo, s12, s17
+; LLC-W64-NEXT:    s_addc_u32 flat_scratch_hi, s13, 0
+; LLC-W64-NEXT:    s_add_u32 s0, s0, s17
+; LLC-W64-NEXT:    s_addc_u32 s1, s1, 0
+; LLC-W64-NEXT:    s_waitcnt lgkmcnt(0)
+; LLC-W64-NEXT:    s_lshr_b64 s[12:13], s[34:35], 3
+; LLC-W64-NEXT:    v_mov_b32_e32 v1, s12
+; LLC-W64-NEXT:    v_add_co_u32_e32 v2, vcc, 0x7fff8000, v1
+; LLC-W64-NEXT:    v_mov_b32_e32 v1, s13
+; LLC-W64-NEXT:    v_addc_co_u32_e32 v3, vcc, 0, v1, vcc
+; LLC-W64-NEXT:    flat_load_sbyte v1, v[2:3]
+; LLC-W64-NEXT:    v_and_b32_e64 v2, s34, 7
+; LLC-W64-NEXT:    v_mov_b32_e32 v40, 0
+; LLC-W64-NEXT:    v_add_u16_e32 v2, 3, v2
+; LLC-W64-NEXT:    s_mov_b32 s32, 0
+; LLC-W64-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; LLC-W64-NEXT:    v_cmp_ne_u16_sdwa s[12:13], v1, v40 src0_sel:BYTE_0 src1_sel:DWORD
+; LLC-W64-NEXT:    v_cmp_ge_i16_e32 vcc, v2, v1
+; LLC-W64-NEXT:    s_and_b64 vcc, s[12:13], vcc
+; LLC-W64-NEXT:    s_cbranch_vccz .LBB0_4
+; LLC-W64-NEXT:  ; %bb.1: ; %asan.report
+; LLC-W64-NEXT:    s_and_saveexec_b64 s[36:37], vcc
+; LLC-W64-NEXT:    s_cbranch_execz .LBB0_3
+; LLC-W64-NEXT:  ; %bb.2:
+; LLC-W64-NEXT:    s_add_u32 s8, s8, 8
+; LLC-W64-NEXT:    s_addc_u32 s9, s9, 0
+; LLC-W64-NEXT:    s_getpc_b64 s[12:13]
+; LLC-W64-NEXT:    s_add_u32 s12, s12, __asan_report_load4@gotpcrel32@lo+4
+; LLC-W64-NEXT:    s_addc_u32 s13, s13, __asan_report_load4@gotpcrel32@hi+12
+; LLC-W64-NEXT:    s_load_dwordx2 s[18:19], s[12:13], 0x0
+; LLC-W64-NEXT:    s_mov_b32 s12, s14
+; LLC-W64-NEXT:    s_mov_b32 s13, s15
+; LLC-W64-NEXT:    s_mov_b32 s14, s16
+; LLC-W64-NEXT:    v_mov_b32_e32 v31, v0
+; LLC-W64-NEXT:    v_mov_b32_e32 v0, s34
+; LLC-W64-NEXT:    v_mov_b32_e32 v1, s35
+; LLC-W64-NEXT:    s_waitcnt lgkmcnt(0)
+; LLC-W64-NEXT:    s_swappc_b64 s[30:31], s[18:19]
+; LLC-W64-NEXT:    ; divergent unreachable
+; LLC-W64-NEXT:  .LBB0_3: ; %Flow
+; LLC-W64-NEXT:    s_or_b64 exec, exec, s[36:37]
+; LLC-W64-NEXT:  .LBB0_4:
+; LLC-W64-NEXT:    global_load_dword v0, v40, s[34:35] glc
+; LLC-W64-NEXT:    s_waitcnt vmcnt(0)
+; LLC-W64-NEXT:    global_store_dword v40, v0, s[34:35]
+; LLC-W64-NEXT:    s_waitcnt vmcnt(0)
+; LLC-W64-NEXT:    s_endpgm
+;
+; LLC-W32-LABEL: global_loadstore_uniform:
+; LLC-W32:       ; %bb.0: ; %entry
+; LLC-W32-NEXT:    s_load_b64 s[34:35], s[4:5], 0x0
+; LLC-W32-NEXT:    s_mov_b64 s[10:11], s[6:7]
+; LLC-W32-NEXT:    s_mov_b32 s9, 0
+; LLC-W32-NEXT:    s_mov_b32 s32, 0
+; LLC-W32-NEXT:    s_waitcnt lgkmcnt(0)
+; LLC-W32-NEXT:    s_lshr_b64 s[6:7], s[34:35], 3
+; LLC-W32-NEXT:    v_add_co_u32 v1, s6, 0x7fff8000, s6
+; LLC-W32-NEXT:    v_add_co_ci_u32_e64 v2, null, 0, s7, s6
+; LLC-W32-NEXT:    flat_load_i8 v1, v[1:2]
+; LLC-W32-NEXT:    v_and_b32_e64 v2, s34, 7
+; LLC-W32-NEXT:    v_add_nc_u16 v2, v2, 3
+; LLC-W32-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; LLC-W32-NEXT:    v_and_b32_e32 v3, 0xff, v1
+; LLC-W32-NEXT:    v_cmp_ge_i16_e32 vcc_lo, v2, v1
+; LLC-W32-NEXT:    v_cmp_ne_u16_e64 s6, 0, v3
+; LLC-W32-NEXT:    s_and_b32 s6, s6, vcc_lo
+; LLC-W32-NEXT:    v_cndmask_b32_e64 v1, 0, 1, s6
+; LLC-W32-NEXT:    v_cmp_ne_u32_e64 s8, 0, v1
+; LLC-W32-NEXT:    s_cmp_eq_u64 s[8:9], 0
+; LLC-W32-NEXT:    s_cbranch_scc1 .LBB0_4
+; LLC-W32-NEXT:  ; %bb.1: ; %asan.report
+; LLC-W32-NEXT:    s_and_saveexec_b32 ...
[truncated]

@llvmbot
Collaborator

llvmbot commented Nov 14, 2023

@llvm/pr-subscribers-compiler-rt-sanitizer

Author: Valery Pykhtin (vpykhtin)


Collaborator

@nhaehnle nhaehnle left a comment

Seems reasonable to me.

@vpykhtin
Contributor Author

Gentle ping.

@nhaehnle
Copy link
Collaborator

Just to be clear, consider it ack'd by me. (I guess you're trying to get somebody else's attention though.)

Collaborator

@vitalybuka vitalybuka left a comment

It would be nice if you pre-commit the new tests with the current state of the generated IR, so that in this patch we can clearly see the difference before/after.

@vitalybuka
Collaborator

I am happy to accept, still pre-committing tests would be nice

@vpykhtin
Contributor Author

I am happy to accept, still pre-committing tests would be nice

Sure, I'll do that, thank you for the review!

@@ -0,0 +1,231 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 3
; RUN: opt -passes=asan -S < %s | FileCheck %s --check-prefix=OPT
; RUN: opt < %s -passes='asan,default<O3>' -o - | llc -O3 -mtriple=amdgcn-hsa-amdhsa -mcpu=gfx90a -o - | FileCheck %s --check-prefix=LLC
Contributor Author

@vitalybuka do you think this is an acceptable way to test ASAN functionality at the codegen level? I mean that it's a bit artificial; maybe it should be a clang test instead?

cc @arsenm

Collaborator

Why not llvm-project/llvm/test/Instrumentation/AddressSanitizer/AMDGPU/ ?

Contributor Author

Done, #73857.

if (ClAlwaysSlowPath || (TypeStoreSize < 8 * Granularity)) {
bool GenSlowPath = (ClAlwaysSlowPath || (TypeStoreSize < 8 * Granularity));

if (TargetTriple.isAMDGPU()) {
Contributor

This should be restricted to amdgcn; the intrinsics you are using won't work for r600.

vitalybuka pushed a commit that referenced this pull request Dec 1, 2023
* Added 128 bit wide operation tests (fast path check).
* Added test for recovery mode.

Upcoming patch #72247.
; OPT-NEXT: br i1 [[TMP10]], label [[TMP13:%.*]], label [[TMP14:%.*]]
; OPT: 13:
; OPT-NEXT: call void @__asan_report_load4(i64 [[TMP0]]) #[[ATTR5:[0-9]+]]
; OPT-NEXT: call void @llvm.amdgcn.unreachable()
Contributor

Do you really need the unreachable call? Isn't the call noreturn?

Contributor Author

__asan_report functions aren't marked as noreturn.

llvm.amdgcn.unreachable() has no effect at the moment except for asm commentary. I thought about deleting the code after it somewhere between SILowerControlFlow and regalloc.

@vpykhtin
Contributor Author

vpykhtin commented Dec 1, 2023

I am happy to accept, still pre-committing tests would be nice

Rebased on top of the updated tests.

vpykhtin added a commit that referenced this pull request Dec 4, 2023
This fixes incorrect source location in ASAN diagnostics for #72247.
@vpykhtin
Contributor Author

Hi @vitalybuka, would you mind submitting this?

Collaborator

@vitalybuka vitalybuka left a comment

LGTM, but I guess other reviewers won't see it unless you click the "re-request review" button.
I'll click it, and if there is no feedback, I will submit next week.

Collaborator

@nhaehnle nhaehnle left a comment

My ack still stands.

@vitalybuka vitalybuka merged commit cb7fe9a into llvm:main Jan 4, 2024
3 checks passed