[CIR][X86] Implement lowering for readwritebarrier builtins
#171094
Conversation
- Support CIR codegen for the following X86 builtins: `_ReadWriteBarrier`, `_ReadBarrier`, `_WriteBarrier`, `__faststorefence`.
- CIR dialect operations may have no results, so `emitTargetBuiltinExpr` can succeed yet return no value. Use `std::optional<mlir::Value>` as the return type to resolve this ambiguity.
@llvm/pr-subscribers-clang Author: Haocong Lu (Luhaocong) Changes
Patch is 35.91 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/171094.diff 7 Files Affected:
diff --git a/clang/include/clang/CIR/Dialect/IR/CIROps.td b/clang/include/clang/CIR/Dialect/IR/CIROps.td
index fcc7585cf81a5..e39bc31c3eafa 100644
--- a/clang/include/clang/CIR/Dialect/IR/CIROps.td
+++ b/clang/include/clang/CIR/Dialect/IR/CIROps.td
@@ -5493,7 +5493,7 @@ def CIR_AtomicClearOp : CIR_Op<"atomic.clear"> {
}];
}
-def CIR_AtomicFence : CIR_Op<"atomic.fence"> {
+def CIR_AtomicFenceOp : CIR_Op<"atomic.fence"> {
let summary = "Atomic thread fence";
let description = [{
C/C++ Atomic thread fence synchronization primitive. Implements the builtin
diff --git a/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp b/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp
index 16c006df6853e..87408fdae80d8 100644
--- a/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp
@@ -80,7 +80,7 @@ static mlir::Value makeAtomicFenceValue(CIRGenFunction &cgf,
auto ordering = static_cast<cir::MemOrder>(constOrderingAttr.getUInt());
- cir::AtomicFence::create(
+ cir::AtomicFenceOp::create(
builder, cgf.getLoc(expr->getSourceRange()), ordering,
cir::SyncScopeKindAttr::get(&cgf.getMLIRContext(), syncScope));
@@ -1352,7 +1352,14 @@ RValue CIRGenFunction::emitBuiltinExpr(const GlobalDecl &gd, unsigned builtinID,
}
// Now see if we can emit a target-specific builtin.
- if (mlir::Value v = emitTargetBuiltinExpr(builtinID, e, returnValue)) {
+ if (std::optional<mlir::Value> rst =
+ emitTargetBuiltinExpr(builtinID, e, returnValue)) {
+ mlir::Value v = rst.value();
+ // CIR dialect operations may have no results, in which case no value is
+ // returned even though emission succeeded.
+ if (!v)
+ return RValue::get(nullptr);
+
switch (evalKind) {
case cir::TEK_Scalar:
if (mlir::isa<cir::VoidType>(v.getType()))
@@ -1373,11 +1380,10 @@ RValue CIRGenFunction::emitBuiltinExpr(const GlobalDecl &gd, unsigned builtinID,
return getUndefRValue(e->getType());
}
-static mlir::Value emitTargetArchBuiltinExpr(CIRGenFunction *cgf,
- unsigned builtinID,
- const CallExpr *e,
- ReturnValueSlot &returnValue,
- llvm::Triple::ArchType arch) {
+static std::optional<mlir::Value>
+emitTargetArchBuiltinExpr(CIRGenFunction *cgf, unsigned builtinID,
+ const CallExpr *e, ReturnValueSlot &returnValue,
+ llvm::Triple::ArchType arch) {
// When compiling in HipStdPar mode we have to be conservative in rejecting
// target specific features in the FE, and defer the possible error to the
// AcceleratorCodeSelection pass, wherein iff an unsupported target builtin is
@@ -1386,7 +1392,7 @@ static mlir::Value emitTargetArchBuiltinExpr(CIRGenFunction *cgf,
// EmitStdParUnsupportedBuiltin.
if (cgf->getLangOpts().HIPStdPar && cgf->getLangOpts().CUDAIsDevice &&
arch != cgf->getTarget().getTriple().getArch())
- return {};
+ return std::nullopt;
switch (arch) {
case llvm::Triple::arm:
@@ -1395,7 +1401,7 @@ static mlir::Value emitTargetArchBuiltinExpr(CIRGenFunction *cgf,
case llvm::Triple::thumbeb:
// These are actually NYI, but that will be reported by emitBuiltinExpr.
// At this point, we don't even know that the builtin is target-specific.
- return nullptr;
+ return std::nullopt;
case llvm::Triple::aarch64:
case llvm::Triple::aarch64_32:
case llvm::Triple::aarch64_be:
@@ -1404,7 +1410,7 @@ static mlir::Value emitTargetArchBuiltinExpr(CIRGenFunction *cgf,
case llvm::Triple::bpfel:
// These are actually NYI, but that will be reported by emitBuiltinExpr.
// At this point, we don't even know that the builtin is target-specific.
- return nullptr;
+ return std::nullopt;
case llvm::Triple::x86:
case llvm::Triple::x86_64:
@@ -1426,13 +1432,13 @@ static mlir::Value emitTargetArchBuiltinExpr(CIRGenFunction *cgf,
case llvm::Triple::riscv64:
// These are actually NYI, but that will be reported by emitBuiltinExpr.
// At this point, we don't even know that the builtin is target-specific.
- return {};
+ return std::nullopt;
default:
- return {};
+ return std::nullopt;
}
}
-mlir::Value
+std::optional<mlir::Value>
CIRGenFunction::emitTargetBuiltinExpr(unsigned builtinID, const CallExpr *e,
ReturnValueSlot &returnValue) {
if (getContext().BuiltinInfo.isAuxBuiltinID(builtinID)) {
diff --git a/clang/lib/CIR/CodeGen/CIRGenBuiltinAArch64.cpp b/clang/lib/CIR/CodeGen/CIRGenBuiltinAArch64.cpp
index 5a9ae59ca253a..bfd16c78e369f 100644
--- a/clang/lib/CIR/CodeGen/CIRGenBuiltinAArch64.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenBuiltinAArch64.cpp
@@ -30,21 +30,22 @@ using namespace clang;
using namespace clang::CIRGen;
using namespace llvm;
-mlir::Value CIRGenFunction::emitAArch64SVEBuiltinExpr(unsigned builtinID,
- const CallExpr *expr) {
+std::optional<mlir::Value>
+CIRGenFunction::emitAArch64SVEBuiltinExpr(unsigned builtinID,
+ const CallExpr *expr) {
if (builtinID >= SVE::BI__builtin_sve_reinterpret_s8_s8 &&
builtinID <= SVE::BI__builtin_sve_reinterpret_f64_f64_x4) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
assert(!cir::MissingFeatures::aarch64SVEIntrinsics());
switch (builtinID) {
default:
- return {};
+ return std::nullopt;
case SVE::BI__builtin_sve_svreinterpret_b:
case SVE::BI__builtin_sve_svreinterpret_c:
@@ -163,20 +164,21 @@ mlir::Value CIRGenFunction::emitAArch64SVEBuiltinExpr(unsigned builtinID,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
// Unreachable: All cases in the switch above return.
}
-mlir::Value CIRGenFunction::emitAArch64SMEBuiltinExpr(unsigned builtinID,
- const CallExpr *expr) {
+std::optional<mlir::Value>
+CIRGenFunction::emitAArch64SMEBuiltinExpr(unsigned builtinID,
+ const CallExpr *expr) {
assert(!cir::MissingFeatures::aarch64SMEIntrinsics());
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
// Some intrinsics are equivalent for codegen.
@@ -574,7 +576,7 @@ static const std::pair<unsigned, unsigned> neonEquivalentIntrinsicMap[] = {
NEON::BI__builtin_neon_vstl1q_lane_s64},
};
-mlir::Value
+std::optional<mlir::Value>
CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
ReturnValueSlot returnValue,
llvm::Triple::ArchType arch) {
@@ -590,7 +592,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
switch (builtinID) {
@@ -610,34 +612,34 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_trap) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_get_sme_state) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_rbit) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_rbit64) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_clz ||
@@ -645,20 +647,20 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_cls) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_cls64) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_rint32zf ||
@@ -666,7 +668,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_rint64zf ||
@@ -674,7 +676,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_rint32xf ||
@@ -682,7 +684,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_rint64xf ||
@@ -690,14 +692,14 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_jcvt) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_ld64b ||
@@ -707,7 +709,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_rndr ||
@@ -715,14 +717,14 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__clear_cache) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if ((builtinID == clang::AArch64::BI__builtin_arm_ldrex ||
@@ -731,14 +733,14 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_ldrex ||
builtinID == clang::AArch64::BI__builtin_arm_ldaex) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if ((builtinID == clang::AArch64::BI__builtin_arm_strex ||
@@ -747,7 +749,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_strex ||
@@ -755,35 +757,35 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__getReg) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__break) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_clrex) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI_ReadWriteBarrier) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
// CRC32
@@ -819,7 +821,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
// Memory Operations (MOPS)
@@ -827,7 +829,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
// Memory Tagging Extensions (MTE) Intrinsics
@@ -857,7 +859,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_arm_rsr ||
@@ -871,7 +873,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI_ReadStatusReg ||
@@ -880,21 +882,21 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI_AddressOfReturnAddress) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__builtin_sponentry) {
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == clang::AArch64::BI__mulh ||
@@ -902,7 +904,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI__writex18byte ||
@@ -912,7 +914,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI__readx18byte ||
@@ -922,7 +924,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI__addx18byte ||
@@ -936,7 +938,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI_CopyDoubleFromInt64 ||
@@ -946,7 +948,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI_CountLeadingOnes ||
@@ -956,7 +958,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
...
[truncated]
@llvm/pr-subscribers-clangir Author: Haocong Lu (Luhaocong) Changes
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI__writex18byte ||
@@ -912,7 +914,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI__readx18byte ||
@@ -922,7 +924,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI__addx18byte ||
@@ -936,7 +938,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI_CopyDoubleFromInt64 ||
@@ -946,7 +948,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
if (builtinID == AArch64::BI_CountLeadingOnes ||
@@ -956,7 +958,7 @@ CIRGenFunction::emitAArch64BuiltinExpr(unsigned builtinID, const CallExpr *expr,
cgm.errorNYI(expr->getSourceRange(),
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
}
...
[truncated]
Lancern
left a comment
LGTM in general, some nit comments inline.
 }

- def CIR_AtomicFence : CIR_Op<"atomic.fence"> {
+ def CIR_AtomicFenceOp : CIR_Op<"atomic.fence"> {
Nice catch, but maybe this could be a separate NFC PR.
I reverted this change and will submit a new PR.
// CIR dialect operations maybe have no results, no value will return
// even if it executes successfully.
- // CIR dialect operations maybe have no results, no value will return
- // even if it executes successfully.
+ // CIR dialect operations may have no results, no values will be returned
+ // even if it executes successfully.
done
andykaylor
left a comment
I think this is OK, but we'd need some cleanup around the optional return type when we've already issued diagnostics.
// Now see if we can emit a target-specific builtin.
- if (mlir::Value v = emitTargetBuiltinExpr(builtinID, e, returnValue)) {
+ if (std::optional<mlir::Value> rst =
+     emitTargetBuiltinExpr(builtinID, e, returnValue)) {
We discussed changing the return type of this function to std::optional<mlir::Value> here: #168051. @bcardosolopes was against using optional because mlir::Value already has similar semantics, but I think maybe we need to revisit that discussion. Unfortunately, Bruno is on vacation for the next week and a half.
I also initially objected when @HendrikHuebner suggested returning std::optional<mlir::Value> because we were returning {} (i.e. a null mlir::Value) in cases where we had already issued a diagnostic. Thinking about it again, we could continue to return {} for cases where we have issued the diagnostic and return std::nullopt in cases where we didn't issue a diagnostic but also didn't handle the builtin. That should work.
Given that this is a temporary mechanism that will go away once everything is implemented, I think we can probably live with the odd double-optional semantics.
Thanks, I put your suggestions in the comment and applied them to the relevant code.
std::string("unimplemented AArch64 builtin call: ") +
getContext().BuiltinInfo.getName(builtinID));
- return {};
+ return std::nullopt;
This would lead to the unimplemented builtin diagnostic being reported twice.
Wasn't the unimplemented builtin diagnostic also reported twice prior to this PR? I think returning `mlir::Value{}` under this change will avoid the issue temporarily.
andykaylor
left a comment
You should hold this until #171248 is committed and then rebase this with that change. Otherwise, this looks good.
- Support CIR codegen for the following X86 builtins: `_ReadWriteBarrier`, `_ReadBarrier`, `_WriteBarrier`, `__faststorefence`
- CIR dialect operations may have no results, so no value will be returned from `emitTargetBuiltinExpr` even if it executes successfully. Use `std::optional<mlir::Value>` as the return type to resolve this problem.