[Asan] Add TTI hook to provide memory reference information of target intrinsic. #133361
base: main
Conversation
@llvm/pr-subscribers-compiler-rt-sanitizer @llvm/pr-subscribers-backend-risc-v

Author: Hank Chang (HankChang736)

Changes

Previously, AddressSanitizer treated target memory intrinsics as black boxes, which prevented ASan from accurately instrumenting them. This patch introduces a TargetTransformInfo (TTI) hook, getTgtMemIntrinsicOperand, that allows targets to describe the behavior of their memory intrinsics to AddressSanitizer.

Note: Fixing #127753.
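For orientation, here is a minimal sketch of how an instrumentation pass consults the new hook when it meets a target intrinsic it does not otherwise recognize. It uses only the signatures added in the diff below and is not a drop-in excerpt of the patch:

  if (auto *II = dyn_cast<IntrinsicInst>(I)) {
    SmallVector<InterestingMemoryOperand, 1> Interesting;
    // Ask the target whether it recognizes this intrinsic and can describe
    // its memory operands (pointer operand index, read/write, accessed type,
    // alignment, and, for RVV intrinsics, mask and EVL).
    if (TTI->getTgtMemIntrinsicOperand(II, Interesting)) {
      // Hand each described operand to the usual ASan instrumentation path.
    }
  }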
Patch is 174.37 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/133361.diff

7 Files Affected:
diff --git a/llvm/include/llvm/Analysis/TargetTransformInfo.h b/llvm/include/llvm/Analysis/TargetTransformInfo.h
index 99e21aca97631..a216d36a5d089 100644
--- a/llvm/include/llvm/Analysis/TargetTransformInfo.h
+++ b/llvm/include/llvm/Analysis/TargetTransformInfo.h
@@ -30,6 +30,7 @@
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/BranchProbability.h"
#include "llvm/Support/InstructionCost.h"
+#include "llvm/Transforms/Instrumentation/AddressSanitizerCommon.h"
#include <functional>
#include <optional>
#include <utility>
@@ -1664,6 +1665,11 @@ class TargetTransformInfo {
/// if false is returned.
bool getTgtMemIntrinsic(IntrinsicInst *Inst, MemIntrinsicInfo &Info) const;
+ /// \returns True if the intrinsic is a supported memory intrinsic,
+ /// and stores the \p InterestingMemoryOperand into array \p Interesting
+ /// for ASan instrumentation.
+ bool getTgtMemIntrinsicOperand(IntrinsicInst *Inst, SmallVectorImpl<InterestingMemoryOperand> &Interesting) const;
+
/// \returns The maximum element size, in bytes, for an element
/// unordered-atomic memory intrinsic.
unsigned getAtomicMemIntrinsicMaxElementSize() const;
@@ -2285,6 +2291,8 @@ class TargetTransformInfo::Concept {
getCostOfKeepingLiveOverCall(ArrayRef<Type *> Tys) = 0;
virtual bool getTgtMemIntrinsic(IntrinsicInst *Inst,
MemIntrinsicInfo &Info) = 0;
+ virtual bool getTgtMemIntrinsicOperand(IntrinsicInst *Inst,
+ SmallVectorImpl<InterestingMemoryOperand> &Interesting) = 0;
virtual unsigned getAtomicMemIntrinsicMaxElementSize() const = 0;
virtual Value *getOrCreateResultFromMemIntrinsic(IntrinsicInst *Inst,
Type *ExpectedType) = 0;
@@ -3056,6 +3064,10 @@ class TargetTransformInfo::Model final : public TargetTransformInfo::Concept {
MemIntrinsicInfo &Info) override {
return Impl.getTgtMemIntrinsic(Inst, Info);
}
+ bool getTgtMemIntrinsicOperand(IntrinsicInst *Inst,
+ SmallVectorImpl<InterestingMemoryOperand> &Interesting) override {
+ return Impl.getTgtMemIntrinsicOperand(Inst, Interesting);
+ }
unsigned getAtomicMemIntrinsicMaxElementSize() const override {
return Impl.getAtomicMemIntrinsicMaxElementSize();
}
diff --git a/llvm/include/llvm/Analysis/TargetTransformInfoImpl.h b/llvm/include/llvm/Analysis/TargetTransformInfoImpl.h
index 745758426c714..cf2cc6fddb367 100644
--- a/llvm/include/llvm/Analysis/TargetTransformInfoImpl.h
+++ b/llvm/include/llvm/Analysis/TargetTransformInfoImpl.h
@@ -905,7 +905,9 @@ class TargetTransformInfoImplBase {
bool getTgtMemIntrinsic(IntrinsicInst *Inst, MemIntrinsicInfo &Info) const {
return false;
}
-
+ bool getTgtMemIntrinsicOperand(IntrinsicInst *Inst, SmallVectorImpl<InterestingMemoryOperand> &Interesting) const {
+ return false;
+ }
unsigned getAtomicMemIntrinsicMaxElementSize() const {
// Note for overrides: You must ensure for all element unordered-atomic
// memory intrinsics that all power-of-2 element sizes up to, and
diff --git a/llvm/lib/Analysis/TargetTransformInfo.cpp b/llvm/lib/Analysis/TargetTransformInfo.cpp
index 4df551aca30a7..08b4e5f77c5e5 100644
--- a/llvm/lib/Analysis/TargetTransformInfo.cpp
+++ b/llvm/lib/Analysis/TargetTransformInfo.cpp
@@ -1273,6 +1273,11 @@ bool TargetTransformInfo::getTgtMemIntrinsic(IntrinsicInst *Inst,
return TTIImpl->getTgtMemIntrinsic(Inst, Info);
}
+bool TargetTransformInfo::getTgtMemIntrinsicOperand(IntrinsicInst *Inst,
+ SmallVectorImpl<InterestingMemoryOperand> &Interesting) const {
+ return TTIImpl->getTgtMemIntrinsicOperand(Inst, Interesting);
+}
+
unsigned TargetTransformInfo::getAtomicMemIntrinsicMaxElementSize() const {
return TTIImpl->getAtomicMemIntrinsicMaxElementSize();
}
diff --git a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
index f49ad2f5bd20e..136a1cf310544 100644
--- a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
+++ b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
@@ -16,6 +16,7 @@
#include "llvm/CodeGen/ValueTypes.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PatternMatch.h"
+#include "llvm/IR/IntrinsicsRISCV.h"
#include <cmath>
#include <optional>
using namespace llvm;
@@ -37,6 +38,83 @@ static cl::opt<unsigned> SLPMaxVF(
"exclusively by SLP vectorizer."),
cl::Hidden);
+bool RISCVTTIImpl::getTgtMemIntrinsicOperand(IntrinsicInst *Inst,
+ SmallVectorImpl<InterestingMemoryOperand> &Interesting) const {
+ const DataLayout &DL = getDataLayout();
+ Intrinsic::ID IID = Inst->getIntrinsicID();
+ LLVMContext &C = Inst->getContext();
+ bool HasMask = false;
+
+ switch (IID) {
+ case Intrinsic::riscv_vle_mask:
+ case Intrinsic::riscv_vse_mask:
+ HasMask = true;
+ [[fallthrough]];
+ case Intrinsic::riscv_vle:
+ case Intrinsic::riscv_vse: {
+ // Intrinsic interface:
+ // riscv_vle(merge, ptr, vl)
+ // riscv_vle_mask(merge, ptr, mask, vl, policy)
+ // riscv_vse(val, ptr, vl)
+ // riscv_vse_mask(val, ptr, mask, vl, policy)
+ bool IsWrite = Inst->getType()->isVoidTy();
+ Type *Ty = IsWrite ? Inst->getArgOperand(0)->getType() : Inst->getType();
+ const auto *RVVIInfo = RISCVVIntrinsicsTable::getRISCVVIntrinsicInfo(IID);
+ unsigned VLIndex = RVVIInfo->VLOperand;
+ unsigned PtrOperandNo = VLIndex - 1 - HasMask;
+ MaybeAlign Alignment =
+ Inst->getArgOperand(PtrOperandNo)->getPointerAlignment(DL);
+ Type *MaskType = Ty->getWithNewType(Type::getInt1Ty(C));
+ Value *Mask = ConstantInt::getTrue(MaskType);
+ if (HasMask)
+ Mask = Inst->getArgOperand(VLIndex - 1);
+ Value *EVL = Inst->getArgOperand(VLIndex);
+ Interesting.emplace_back(Inst, PtrOperandNo, IsWrite, Ty, Alignment, Mask,
+ EVL);
+ return true;
+ }
+ case Intrinsic::riscv_vlse_mask:
+ case Intrinsic::riscv_vsse_mask:
+ HasMask = true;
+ [[fallthrough]];
+ case Intrinsic::riscv_vlse:
+ case Intrinsic::riscv_vsse: {
+ // Intrinsic interface:
+ // riscv_vlse(merge, ptr, stride, vl)
+ // riscv_vlse_mask(merge, ptr, stride, mask, vl, policy)
+ // riscv_vsse(val, ptr, stride, vl)
+ // riscv_vsse_mask(val, ptr, stride, mask, vl, policy)
+ bool IsWrite = Inst->getType()->isVoidTy();
+ Type *Ty = IsWrite ? Inst->getArgOperand(0)->getType() : Inst->getType();
+ const auto *RVVIInfo = RISCVVIntrinsicsTable::getRISCVVIntrinsicInfo(IID);
+ unsigned VLIndex = RVVIInfo->VLOperand;
+ unsigned PtrOperandNo = VLIndex - 2 - HasMask;
+ MaybeAlign Alignment =
+ Inst->getArgOperand(PtrOperandNo)->getPointerAlignment(DL);
+
+ Value *Stride = Inst->getArgOperand(PtrOperandNo + 1);
+ // Use the pointer alignment as the element alignment if the stride is a
+ // multiple of the pointer alignment. Otherwise, the element alignment
+ // should be the greatest common divisor of pointer alignment and stride.
+ // For simplicity, just consider unalignment for elements.
+ unsigned PointerAlign = Alignment.valueOrOne().value();
+ if (!isa<ConstantInt>(Stride) ||
+ cast<ConstantInt>(Stride)->getZExtValue() % PointerAlign != 0)
+ Alignment = Align(1);
+
+ Type *MaskType = Ty->getWithNewType(Type::getInt1Ty(C));
+ Value *Mask = ConstantInt::getTrue(MaskType);
+ if (HasMask)
+ Mask = Inst->getArgOperand(VLIndex - 1);
+ Value *EVL = Inst->getArgOperand(VLIndex);
+ Interesting.emplace_back(Inst, PtrOperandNo, IsWrite, Ty, Alignment, Mask,
+ EVL, Stride);
+ return true;
+ }
+ }
+ return false;
+}
+
InstructionCost
RISCVTTIImpl::getRISCVInstructionCost(ArrayRef<unsigned> OpCodes, MVT VT,
TTI::TargetCostKind CostKind) {
diff --git a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.h b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.h
index 8ffe1b08d1e26..8c4d8636829ae 100644
--- a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.h
+++ b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.h
@@ -151,6 +151,10 @@ class RISCVTTIImpl : public BasicTTIImplBase<RISCVTTIImpl> {
void getPeelingPreferences(Loop *L, ScalarEvolution &SE,
TTI::PeelingPreferences &PP);
+
+ bool getTgtMemIntrinsicOperand(IntrinsicInst *Inst,
+ SmallVectorImpl<InterestingMemoryOperand> &Interesting) const;
+
unsigned getMinVectorRegisterBitWidth() const {
return ST->useRVVForFixedLengthVectors() ? 16 : 0;
}
diff --git a/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp b/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
index 62bfb7cec4ff0..17c2f1436b7b9 100644
--- a/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
@@ -29,6 +29,7 @@
#include "llvm/Analysis/MemoryBuiltins.h"
#include "llvm/Analysis/StackSafetyAnalysis.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
+#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/BinaryFormat/MachO.h"
#include "llvm/Demangle/Demangle.h"
@@ -799,7 +800,8 @@ struct AddressSanitizer {
bool ignoreAccess(Instruction *Inst, Value *Ptr);
void getInterestingMemoryOperands(
- Instruction *I, SmallVectorImpl<InterestingMemoryOperand> &Interesting);
+ Instruction *I, SmallVectorImpl<InterestingMemoryOperand> &Interesting,
+ const TargetTransformInfo *TTI);
void instrumentMop(ObjectSizeOffsetVisitor &ObjSizeVis,
InterestingMemoryOperand &O, bool UseCalls,
@@ -839,7 +841,7 @@ struct AddressSanitizer {
void instrumentMemIntrinsic(MemIntrinsic *MI, RuntimeCallInserter &RTCI);
Value *memToShadow(Value *Shadow, IRBuilder<> &IRB);
bool suppressInstrumentationSiteForDebug(int &Instrumented);
- bool instrumentFunction(Function &F, const TargetLibraryInfo *TLI);
+ bool instrumentFunction(Function &F, const TargetLibraryInfo *TLI, const TargetTransformInfo *TTI);
bool maybeInsertAsanInitAtFunctionEntry(Function &F);
bool maybeInsertDynamicShadowAtFunctionEntry(Function &F);
void markEscapedLocalAllocas(Function &F);
@@ -1327,7 +1329,8 @@ PreservedAnalyses AddressSanitizerPass::run(Module &M,
Options.MaxInlinePoisoningSize, Options.CompileKernel, Options.Recover,
Options.UseAfterScope, Options.UseAfterReturn);
const TargetLibraryInfo &TLI = FAM.getResult<TargetLibraryAnalysis>(F);
- Modified |= FunctionSanitizer.instrumentFunction(F, &TLI);
+ const TargetTransformInfo &TTI = FAM.getResult<TargetIRAnalysis>(F);
+ Modified |= FunctionSanitizer.instrumentFunction(F, &TLI, &TTI);
}
Modified |= ModuleSanitizer.instrumentModule();
if (!Modified)
@@ -1465,7 +1468,8 @@ bool AddressSanitizer::ignoreAccess(Instruction *Inst, Value *Ptr) {
}
void AddressSanitizer::getInterestingMemoryOperands(
- Instruction *I, SmallVectorImpl<InterestingMemoryOperand> &Interesting) {
+ Instruction *I, SmallVectorImpl<InterestingMemoryOperand> &Interesting,
+ const TargetTransformInfo *TTI) {
// Do not instrument the load fetching the dynamic shadow address.
if (LocalDynamicShadow == I)
return;
@@ -1583,6 +1587,9 @@ void AddressSanitizer::getInterestingMemoryOperands(
break;
}
default:
+ if (auto *II = dyn_cast<IntrinsicInst>(I))
+ if (TTI->getTgtMemIntrinsicOperand(II, Interesting))
+ return;
for (unsigned ArgNo = 0; ArgNo < CI->arg_size(); ArgNo++) {
if (!ClInstrumentByval || !CI->isByValArgument(ArgNo) ||
ignoreAccess(I, CI->getArgOperand(ArgNo)))
@@ -2998,7 +3005,8 @@ bool AddressSanitizer::suppressInstrumentationSiteForDebug(int &Instrumented) {
}
bool AddressSanitizer::instrumentFunction(Function &F,
- const TargetLibraryInfo *TLI) {
+ const TargetLibraryInfo *TLI,
+ const TargetTransformInfo *TTI) {
bool FunctionModified = false;
// Do not apply any instrumentation for naked functions.
@@ -3051,7 +3059,7 @@ bool AddressSanitizer::instrumentFunction(Function &F,
if (Inst.hasMetadata(LLVMContext::MD_nosanitize))
continue;
SmallVector<InterestingMemoryOperand, 1> InterestingOperands;
- getInterestingMemoryOperands(&Inst, InterestingOperands);
+ getInterestingMemoryOperands(&Inst, InterestingOperands, TTI);
if (!InterestingOperands.empty()) {
for (auto &Operand : InterestingOperands) {
diff --git a/llvm/test/Instrumentation/AddressSanitizer/RISCV/asan-rvv-intrinsics.ll b/llvm/test/Instrumentation/AddressSanitizer/RISCV/asan-rvv-intrinsics.ll
new file mode 100644
index 0000000000000..7571679659533
--- /dev/null
+++ b/llvm/test/Instrumentation/AddressSanitizer/RISCV/asan-rvv-intrinsics.ll
@@ -0,0 +1,2304 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
+; RUN: opt < %s -mtriple=riscv64 -mattr=+v -passes=asan \
+; RUN: -asan-instrumentation-with-call-threshold=0 -S | FileCheck %s
+
+declare <vscale x 1 x i32> @llvm.riscv.vle.nxv1i32(
+ <vscale x 1 x i32>,
+ <vscale x 1 x i32>*,
+ i64)
+define <vscale x 1 x i32> @intrinsic_vle_v_nxv1i32_nxv1i32(<vscale x 1 x i32>* align 4 %0, i64 %1) sanitize_address {
+; CHECK-LABEL: @intrinsic_vle_v_nxv1i32_nxv1i32(
+; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP2:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[TMP1:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = getelementptr <vscale x 1 x i32>, ptr [[TMP0:%.*]], i64 0, i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_load4(i64 [[TMP10]])
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[A:%.*]] = call <vscale x 1 x i32> @llvm.riscv.vle.nxv1i32.i64(<vscale x 1 x i32> undef, ptr [[TMP0]], i64 [[TMP1]])
+; CHECK-NEXT: ret <vscale x 1 x i32> [[A]]
+;
+entry:
+ %a = call <vscale x 1 x i32> @llvm.riscv.vle.nxv1i32(
+ <vscale x 1 x i32> undef,
+ <vscale x 1 x i32>* %0,
+ i64 %1)
+ ret <vscale x 1 x i32> %a
+}
+
+declare <vscale x 1 x i32> @llvm.riscv.vle.mask.nxv1i32(
+ <vscale x 1 x i32>,
+ <vscale x 1 x i32>*,
+ <vscale x 1 x i1>,
+ i64,
+ i64)
+define <vscale x 1 x i32> @intrinsic_vle_mask_v_nxv1i32_nxv1i32(<vscale x 1 x i32> %0, <vscale x 1 x i32>* align 4 %1, <vscale x 1 x i1> %2, i64 %3) sanitize_address {
+; CHECK-LABEL: @intrinsic_vle_mask_v_nxv1i32_nxv1i32(
+; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP4:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
+; CHECK-NEXT: [[TMP5:%.*]] = icmp ne i64 [[TMP3:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP14:%.*]]
+; CHECK: 6:
+; CHECK-NEXT: [[TMP7:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP8:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP3]], i64 [[TMP7]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP6]] ], [ [[IV_NEXT:%.*]], [[TMP13:%.*]] ]
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x i1> [[TMP2:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP9]], label [[TMP10:%.*]], label [[TMP13]]
+; CHECK: 10:
+; CHECK-NEXT: [[TMP11:%.*]] = getelementptr <vscale x 1 x i32>, ptr [[TMP1:%.*]], i64 0, i64 [[IV]]
+; CHECK-NEXT: [[TMP12:%.*]] = ptrtoint ptr [[TMP11]] to i64
+; CHECK-NEXT: call void @__asan_load4(i64 [[TMP12]])
+; CHECK-NEXT: br label [[TMP13]]
+; CHECK: 13:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP8]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP14]]
+; CHECK: 14:
+; CHECK-NEXT: [[A:%.*]] = call <vscale x 1 x i32> @llvm.riscv.vle.mask.nxv1i32.i64(<vscale x 1 x i32> [[TMP0:%.*]], ptr [[TMP1]], <vscale x 1 x i1> [[TMP2]], i64 [[TMP3]], i64 1)
+; CHECK-NEXT: ret <vscale x 1 x i32> [[A]]
+;
+entry:
+ %a = call <vscale x 1 x i32> @llvm.riscv.vle.mask.nxv1i32(
+ <vscale x 1 x i32> %0,
+ <vscale x 1 x i32>* %1,
+ <vscale x 1 x i1> %2,
+ i64 %3, i64 1)
+ ret <vscale x 1 x i32> %a
+}
+
+declare void @llvm.riscv.vse.nxv1i32(
+ <vscale x 1 x i32>,
+ <vscale x 1 x i32>*,
+ i64)
+define void @intrinsic_vse_v_nxv1i32_nxv1i32(<vscale x 1 x i32> %0, <vscale x 1 x i32>* align 4 %1, i64 %2) sanitize_address {
+; CHECK-LABEL: @intrinsic_vse_v_nxv1i32_nxv1i32(
+; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP3:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
+; CHECK-NEXT: [[TMP4:%.*]] = icmp ne i64 [[TMP2:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP4]], label [[TMP5:%.*]], label [[TMP13:%.*]]
+; CHECK: 5:
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP7:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP2]], i64 [[TMP6]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP5]] ], [ [[IV_NEXT:%.*]], [[TMP12:%.*]] ]
+; CHECK-NEXT: [[TMP8:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP8]], label [[TMP9:%.*]], label [[TMP12]]
+; CHECK: 9:
+; CHECK-NEXT: [[TMP10:%.*]] = getelementptr <vscale x 1 x i32>, ptr [[TMP1:%.*]], i64 0, i64 [[IV]]
+; CHECK-NEXT: [[TMP11:%.*]] = ptrtoint ptr [[TMP10]] to i64
+; CHECK-NEXT: call void @__asan_store4(i64 [[TMP11]])
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP7]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP13]]
+; CHECK: 13:
+; CHECK-NEXT: call void @llvm.riscv.vse.nxv1i32.i64(<vscale x 1 x i32> [[TMP0:%.*]], ptr [[TMP1]], i64 [[TMP2]])
+; CHECK-NEXT: ret void
+;
+entry:
+ call void @llvm.riscv.vse.nxv1i32(
+ <vscale x 1 x i32> %0,
+ <vscale x 1 x i32>* %1,
+ i64 %2)
+ ret void
+}
+
+declare void @llvm.riscv.vse.mask.nxv1i32(
+ <vscale x 1 x i32>,
+ <vscale x 1 x i32>*,
+ <vscale x 1 x i1>,
+ i64)
+define void @intrinsic_vse_mask_v_nxv1i32_nxv1i32(<vscale x 1 x i32> %0, <vscale x 1 x i32>* align 4 %1, <vscale x 1 x i1> %2, i64 %3) sanitize_address {
+; CHECK-LABEL: @intrinsic_vse_mask_v_nxv1i32_nxv1i32(
+; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP4:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
+; CHECK-NEXT: [[TMP5:%.*]] = icmp ne i64 [[TMP3:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP14:%.*]]
+; CHECK: 6:
+; CHECK-NEXT: [[TMP7:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP8:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP3]], i64 [[TMP7]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP6]] ], [ [[IV_NEXT:%.*]], [[TMP13:%.*]] ]
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x i1> [[TMP2:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP9]], label [[TMP10:%.*]], label [[TMP13]]
+; CHECK: 10:
+; CHECK-NEXT: [[TMP11:%.*]] = getelementptr <vscale x 1 x i32>, ptr [[TMP1:%.*]], i64 0, i64 [[IV]]
+; CHECK-NEXT: ...
[truncated]
✅ With the latest revision this PR passed the C/C++ code formatter.
You can test this locally with the following command:

git diff -U0 --pickaxe-regex -S '([^a-zA-Z0-9#_-]undef[^a-zA-Z0-9_-]|UndefValue::get)' 81c6ce3b33ff462a23744c82f493e1e1ea3844de a236ac1901389e1a2b7396e6aa93be23604def69 llvm/test/Instrumentation/AddressSanitizer/RISCV/asan-rvv-intrinsics.ll llvm/include/llvm/Analysis/TargetTransformInfo.h llvm/include/llvm/Analysis/TargetTransformInfoImpl.h llvm/lib/Analysis/TargetTransformInfo.cpp llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp llvm/lib/Target/RISCV/RISCVTargetTransformInfo.h llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp

The following files introduce new uses of undef:

Undef is now deprecated and should only be used in the rare cases where no replacement is possible. For example, a load of uninitialized memory yields undef. In tests, avoid using undef where possible.

For example, this is considered a bad practice:

define void @fn() {
  ...
  br i1 undef, ...
}

Please use the following instead:

define void @fn(i1 %cond) {
  ...
  br i1 %cond, ...
}

Please refer to the Undefined Behavior Manual for more information.
In the previous patch (#127753), the suggested approach was to enhance getTgtMemIntrinsic(). However, I realized that getTgtMemIntrinsic() is already used for specific purposes (e.g., EarlyCSE), so in this patch I'm still adding a new TTI hook instead. The main difference is that I'm no longer renaming getTgtMemIntrinsic().
Having two nearly identical functions for the target to implement is unacceptable. Find a way to combine the interface.
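One hypothetical way to read this suggestion (the signature below is illustrative only; it is not part of this patch or of the existing TTI interface) would be to fold the ASan query into the existing hook rather than adding a parallel one, for example:

  // Hypothetical sketch: a single hook serving both the existing users
  // (e.g. EarlyCSE) and AddressSanitizer. Callers that do not need ASan
  // information would pass nullptr for Interesting.
  virtual bool getTgtMemIntrinsic(IntrinsicInst *Inst, MemIntrinsicInfo &Info,
                                  SmallVectorImpl<InterestingMemoryOperand>
                                      *Interesting) = 0;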