[RISCV] Don't use pointer operand in MemoryLocation for RISC-V strided and indexed load/store intrinsics. #79890
Conversation
It seems that even though we set the size to unknown, there is still an assumption in alias analysis somewhere that we will only access bytes *after* the pointer. Since a strided/indexed load/store can have negative indices, this is not accurate. This was found in our downstream when the scheduler reordered a strided load with a negative stride above a scalar store that aliased with it.
@llvm/pr-subscribers-backend-risc-v Author: Craig Topper (topperc). Full diff: https://github.com/llvm/llvm-project/pull/79890.diff (1 file affected):
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 05264f7fc42044b..40252a85879f605 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -1464,9 +1464,15 @@ bool RISCVTargetLowering::getTgtMemIntrinsic(IntrinsicInfo &Info,
auto &DL = I.getModule()->getDataLayout();
auto SetRVVLoadStoreInfo = [&](unsigned PtrOp, bool IsStore,
- bool IsUnitStrided) {
+ bool IsUnitStrided, bool UsePtrVal = false) {
Info.opc = IsStore ? ISD::INTRINSIC_VOID : ISD::INTRINSIC_W_CHAIN;
- Info.ptrVal = I.getArgOperand(PtrOp);
+ // We can't use ptrVal if the intrinsic can access memory before the
+ // pointer. This means we can't use it for strided or indexed intrinsics.
+ if (UsePtrVal)
+ Info.ptrVal = I.getArgOperand(PtrOp);
+ else
+ Info.fallbackAddressSpace =
+ I.getArgOperand(PtrOp)->getType()->getPointerAddressSpace();
Type *MemTy;
if (IsStore) {
// Store value is the first operand.
@@ -1526,7 +1532,7 @@ bool RISCVTargetLowering::getTgtMemIntrinsic(IntrinsicInfo &Info,
case Intrinsic::riscv_seg7_load:
case Intrinsic::riscv_seg8_load:
return SetRVVLoadStoreInfo(/*PtrOp*/ 0, /*IsStore*/ false,
- /*IsUnitStrided*/ false);
+ /*IsUnitStrided*/ false, /*UsePtrVal*/ true);
case Intrinsic::riscv_seg2_store:
case Intrinsic::riscv_seg3_store:
case Intrinsic::riscv_seg4_store:
@@ -1537,19 +1543,21 @@ bool RISCVTargetLowering::getTgtMemIntrinsic(IntrinsicInfo &Info,
// Operands are (vec, ..., vec, ptr, vl)
return SetRVVLoadStoreInfo(/*PtrOp*/ I.arg_size() - 2,
/*IsStore*/ true,
- /*IsUnitStrided*/ false);
+ /*IsUnitStrided*/ false, /*UsePtrVal*/ true);
case Intrinsic::riscv_vle:
case Intrinsic::riscv_vle_mask:
case Intrinsic::riscv_vleff:
case Intrinsic::riscv_vleff_mask:
return SetRVVLoadStoreInfo(/*PtrOp*/ 1,
/*IsStore*/ false,
- /*IsUnitStrided*/ true);
+ /*IsUnitStrided*/ true,
+ /*UsePtrVal*/ true);
case Intrinsic::riscv_vse:
case Intrinsic::riscv_vse_mask:
return SetRVVLoadStoreInfo(/*PtrOp*/ 1,
/*IsStore*/ true,
- /*IsUnitStrided*/ true);
+ /*IsUnitStrided*/ true,
+ /*UsePtrVal*/ true);
case Intrinsic::riscv_vlse:
case Intrinsic::riscv_vlse_mask:
case Intrinsic::riscv_vloxei:
@@ -1584,7 +1592,7 @@ bool RISCVTargetLowering::getTgtMemIntrinsic(IntrinsicInfo &Info,
case Intrinsic::riscv_vlseg8ff:
return SetRVVLoadStoreInfo(/*PtrOp*/ I.arg_size() - 2,
/*IsStore*/ false,
- /*IsUnitStrided*/ false);
+ /*IsUnitStrided*/ false, /*UsePtrVal*/ true);
case Intrinsic::riscv_vlseg2_mask:
case Intrinsic::riscv_vlseg3_mask:
case Intrinsic::riscv_vlseg4_mask:
@@ -1601,7 +1609,7 @@ bool RISCVTargetLowering::getTgtMemIntrinsic(IntrinsicInfo &Info,
case Intrinsic::riscv_vlseg8ff_mask:
return SetRVVLoadStoreInfo(/*PtrOp*/ I.arg_size() - 4,
/*IsStore*/ false,
- /*IsUnitStrided*/ false);
+ /*IsUnitStrided*/ false, /*UsePtrVal*/ true);
case Intrinsic::riscv_vlsseg2:
case Intrinsic::riscv_vlsseg3:
case Intrinsic::riscv_vlsseg4:
LGTM
Do you have a test case for this you can share? I'm fine with this as a workaround, but AA should have correct handling for an unknown size (i.e. potentially accessing bytes both before and after the pointer, and imprecise). Since you have a case which shows otherwise, I'd like to investigate.
I don't have a test case currently, but I can point you towards where I think the issue is, though I haven't confirmed it. MemOperandsHaveAlias in MachineInstrInfo creates a
The large value is interpreted as being more than