
Conversation

@Lancern (Member) commented Dec 8, 2025

This patch upstreams the scoped variant of atomic load operations from the incubator.

@llvmbot added the clang (Clang issues not falling into any other category) and ClangIR (Anything related to the ClangIR project) labels on Dec 8, 2025
@llvmbot (Member) commented Dec 8, 2025

@llvm/pr-subscribers-clangir

@llvm/pr-subscribers-clang

Author: Sirui Mu (Lancern)

Changes

This patch upstreams the scoped variant of atomic load operations from the incubator.
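
For context, a minimal usage example of the builtins this patch covers (my illustration, not part of the patch): the `__scoped_atomic_*` builtins take an extra memory-scope argument such as `__MEMORY_SCOPE_SINGLE` or `__MEMORY_SCOPE_SYSTEM`.

// Illustrative only: scoped atomic loads add a scope argument that bounds
// which threads the ordering must synchronize with.
int load_single_scope(int *p) {
  int v;
  // Relaxed load that only needs to synchronize within the narrowest scope
  // (meaningful on targets whose memory models distinguish scopes).
  __scoped_atomic_load(p, &v, __ATOMIC_RELAXED, __MEMORY_SCOPE_SINGLE);
  return v;
}

int load_system_scope(int *p) {
  // The _n variant returns the loaded value directly; system scope is the
  // conventional "all threads" scope.
  return __scoped_atomic_load_n(p, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
}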


Full diff: https://github.com/llvm/llvm-project/pull/171134.diff

7 Files Affected:

  • (modified) clang/include/clang/CIR/Dialect/Builder/CIRBaseBuilder.h (+3-2)
  • (modified) clang/include/clang/CIR/Dialect/IR/CIROps.td (+11-5)
  • (modified) clang/include/clang/CIR/MissingFeatures.h (+1)
  • (modified) clang/lib/CIR/CodeGen/CIRGenAtomic.cpp (+56-21)
  • (modified) clang/lib/CIR/CodeGen/CIRGenBuilder.h (+2)
  • (modified) clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp (+5-2)
  • (added) clang/test/CIR/CodeGen/atomic-scoped.c (+40)
diff --git a/clang/include/clang/CIR/Dialect/Builder/CIRBaseBuilder.h b/clang/include/clang/CIR/Dialect/Builder/CIRBaseBuilder.h
index aa47c4bce189b..b286e4b4ba1eb 100644
--- a/clang/include/clang/CIR/Dialect/Builder/CIRBaseBuilder.h
+++ b/clang/include/clang/CIR/Dialect/Builder/CIRBaseBuilder.h
@@ -205,7 +205,8 @@ class CIRBaseBuilderTy : public mlir::OpBuilder {
                          bool isVolatile = false, uint64_t alignment = 0) {
     mlir::IntegerAttr alignmentAttr = getAlignmentAttr(alignment);
     return cir::LoadOp::create(*this, loc, ptr, /*isDeref=*/false, isVolatile,
-                               alignmentAttr, cir::MemOrderAttr{});
+                               alignmentAttr, cir::SyncScopeKindAttr{},
+                               cir::MemOrderAttr{});
   }
 
   mlir::Value createAlignedLoad(mlir::Location loc, mlir::Value ptr,
@@ -366,7 +367,7 @@ class CIRBaseBuilderTy : public mlir::OpBuilder {
     auto addr = createAlloca(loc, getPointerTo(type), type, {}, alignmentAttr);
     return cir::LoadOp::create(*this, loc, addr, /*isDeref=*/false,
                                /*isVolatile=*/false, alignmentAttr,
-                               /*mem_order=*/{});
+                               /*sync_scope=*/{}, /*mem_order=*/{});
   }
 
   cir::PtrStrideOp createPtrStride(mlir::Location loc, mlir::Value base,
diff --git a/clang/include/clang/CIR/Dialect/IR/CIROps.td b/clang/include/clang/CIR/Dialect/IR/CIROps.td
index fcc7585cf81a5..08043f597398b 100644
--- a/clang/include/clang/CIR/Dialect/IR/CIROps.td
+++ b/clang/include/clang/CIR/Dialect/IR/CIROps.td
@@ -446,6 +446,15 @@ def CIR_MemOrder : CIR_I32EnumAttr<
     I32EnumAttrCase<"SequentiallyConsistent", 5, "seq_cst">
 ]>;
 
+//===----------------------------------------------------------------------===//
+// C/C++ sync scope definitions
+//===----------------------------------------------------------------------===//
+
+def CIR_SyncScopeKind : CIR_I32EnumAttr<"SyncScopeKind", "sync scope kind", [
+  I32EnumAttrCase<"SingleThread", 0, "single_thread">,
+  I32EnumAttrCase<"System", 1, "system">
+]>;
+
 //===----------------------------------------------------------------------===//
 // AllocaOp
 //===----------------------------------------------------------------------===//
@@ -586,6 +595,7 @@ def CIR_LoadOp : CIR_Op<"load", [
                        UnitAttr:$isDeref,
                        UnitAttr:$is_volatile,
                        OptionalAttr<I64Attr>:$alignment,
+                       OptionalAttr<CIR_SyncScopeKind>:$sync_scope,
                        OptionalAttr<CIR_MemOrder>:$mem_order);
   let results = (outs CIR_AnyType:$result);
 
@@ -593,6 +603,7 @@ def CIR_LoadOp : CIR_Op<"load", [
     (`deref` $isDeref^)?
     (`volatile` $is_volatile^)?
     (`align` `(` $alignment^ `)`)?
+    (`syncscope` `(` $sync_scope^ `)`)?
     (`atomic` `(` $mem_order^ `)`)?
     $addr `:` qualified(type($addr)) `,` type($result) attr-dict
   }];
@@ -5265,11 +5276,6 @@ def CIR_AtomicFetchKind : CIR_I32EnumAttr<
     I32EnumAttrCase<"Min", 7, "min">
 ]>;
 
-def CIR_SyncScopeKind : CIR_I32EnumAttr<"SyncScopeKind", "sync scope kind", [
-  I32EnumAttrCase<"SingleThread", 0, "single_thread">,
-  I32EnumAttrCase<"System", 1, "system">
-]>;
-
 def CIR_AtomicFetchOp : CIR_Op<"atomic.fetch", [
   AllTypesMatch<["result", "val"]>,
   TypesMatchWith<"type of 'val' must match the pointee type of 'ptr'",
diff --git a/clang/include/clang/CIR/MissingFeatures.h b/clang/include/clang/CIR/MissingFeatures.h
index 826a4b13f5c0c..3d230edbc6156 100644
--- a/clang/include/clang/CIR/MissingFeatures.h
+++ b/clang/include/clang/CIR/MissingFeatures.h
@@ -180,6 +180,7 @@ struct MissingFeatures {
   static bool atomicInfoGetAtomicAddress() { return false; }
   static bool atomicScope() { return false; }
   static bool atomicSyncScopeID() { return false; }
+  static bool atomicMapTargetSyncScope() { return false; }
   static bool atomicTypes() { return false; }
   static bool atomicUseLibCall() { return false; }
   static bool atomicMicrosoftVolatile() { return false; }
diff --git a/clang/lib/CIR/CodeGen/CIRGenAtomic.cpp b/clang/lib/CIR/CodeGen/CIRGenAtomic.cpp
index 4c94db5ddd457..78792b53a0b2f 100644
--- a/clang/lib/CIR/CodeGen/CIRGenAtomic.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenAtomic.cpp
@@ -399,20 +399,14 @@ static void emitAtomicCmpXchgFailureSet(CIRGenFunction &cgf, AtomicExpr *e,
 static void emitAtomicOp(CIRGenFunction &cgf, AtomicExpr *expr, Address dest,
                          Address ptr, Address val1, Address val2,
                          Expr *isWeakExpr, Expr *failureOrderExpr, int64_t size,
-                         cir::MemOrder order) {
-  std::unique_ptr<AtomicScopeModel> scopeModel = expr->getScopeModel();
-  if (scopeModel) {
-    assert(!cir::MissingFeatures::atomicScope());
-    cgf.cgm.errorNYI(expr->getSourceRange(), "emitAtomicOp: atomic scope");
-    return;
-  }
-
+                         cir::MemOrder order, cir::SyncScopeKind scope) {
   assert(!cir::MissingFeatures::atomicSyncScopeID());
   llvm::StringRef opName;
 
   CIRGenBuilderTy &builder = cgf.getBuilder();
   mlir::Location loc = cgf.getLoc(expr->getSourceRange());
   auto orderAttr = cir::MemOrderAttr::get(builder.getContext(), order);
+  auto scopeAttr = cir::SyncScopeKindAttr::get(builder.getContext(), scope);
   cir::AtomicFetchKindAttr fetchAttr;
   bool fetchFirst = true;
 
@@ -446,13 +440,14 @@ static void emitAtomicOp(CIRGenFunction &cgf, AtomicExpr *expr, Address dest,
 
   case AtomicExpr::AO__c11_atomic_load:
   case AtomicExpr::AO__atomic_load_n:
-  case AtomicExpr::AO__atomic_load: {
+  case AtomicExpr::AO__atomic_load:
+  case AtomicExpr::AO__scoped_atomic_load_n:
+  case AtomicExpr::AO__scoped_atomic_load: {
     cir::LoadOp load =
         builder.createLoad(loc, ptr, /*isVolatile=*/expr->isVolatile());
 
-    assert(!cir::MissingFeatures::atomicSyncScopeID());
-
     load->setAttr("mem_order", orderAttr);
+    load->setAttr("sync_scope", scopeAttr);
 
     builder.createStore(loc, load->getResult(0), dest);
     return;
@@ -586,8 +581,6 @@ static void emitAtomicOp(CIRGenFunction &cgf, AtomicExpr *expr, Address dest,
 
   case AtomicExpr::AO__opencl_atomic_load:
   case AtomicExpr::AO__hip_atomic_load:
-  case AtomicExpr::AO__scoped_atomic_load_n:
-  case AtomicExpr::AO__scoped_atomic_load:
 
   case AtomicExpr::AO__opencl_atomic_store:
   case AtomicExpr::AO__hip_atomic_store:
@@ -686,9 +679,10 @@ static bool isMemOrderValid(uint64_t order, bool isStore, bool isLoad) {
 }
 
 static void emitAtomicExprWithDynamicMemOrder(
-    CIRGenFunction &cgf, mlir::Value order, AtomicExpr *e, Address dest,
-    Address ptr, Address val1, Address val2, Expr *isWeakExpr,
-    Expr *orderFailExpr, uint64_t size, bool isStore, bool isLoad) {
+    CIRGenFunction &cgf, mlir::Value order, cir::SyncScopeKind syncScope,
+    AtomicExpr *e, Address dest, Address ptr, Address val1, Address val2,
+    Expr *isWeakExpr, Expr *orderFailExpr, uint64_t size, bool isStore,
+    bool isLoad) {
   // The memory order is not known at compile-time.  The atomic operations
   // can't handle runtime memory orders; the memory order must be hard coded.
   // Generate a "switch" statement that converts a runtime value into a
@@ -706,7 +700,7 @@ static void emitAtomicExprWithDynamicMemOrder(
           else
             emitMemOrderCaseLabel(builder, loc, order.getType(), caseOrders);
           emitAtomicOp(cgf, e, dest, ptr, val1, val2, isWeakExpr, orderFailExpr,
-                       size, actualOrder);
+                       size, actualOrder, syncScope);
           builder.createBreak(loc);
           builder.setInsertionPointToEnd(switchBlock);
         };
@@ -749,6 +743,21 @@ static void emitAtomicExprWithDynamicMemOrder(
       });
 }
 
+// Map clang sync scope to CIR sync scope.
+static cir::SyncScopeKind convertSyncScopeToCIR(clang::SyncScope scope) {
+  switch (scope) {
+  default: {
+    assert(!cir::MissingFeatures::atomicSyncScopeID());
+    return cir::SyncScopeKind::System;
+  }
+
+  case clang::SyncScope::SingleScope:
+    return cir::SyncScopeKind::SingleThread;
+  case clang::SyncScope::SystemScope:
+    return cir::SyncScopeKind::System;
+  }
+}
+
 RValue CIRGenFunction::emitAtomicExpr(AtomicExpr *e) {
   QualType atomicTy = e->getPtr()->getType()->getPointeeType();
   QualType memTy = atomicTy;
@@ -778,6 +787,23 @@ RValue CIRGenFunction::emitAtomicExpr(AtomicExpr *e) {
   if (!e->getOrder()->EvaluateAsInt(orderConst, getContext()))
     order = emitScalarExpr(e->getOrder());
 
+  cir::SyncScopeKind scopeConst;
+  mlir::Value scope;
+  if (std::unique_ptr<AtomicScopeModel> scopeModel = e->getScopeModel()) {
+    Expr::EvalResult scopeEvalResult;
+    Expr *scopeExpr = e->getScope();
+    if (scopeExpr->EvaluateAsInt(scopeEvalResult, getContext())) {
+      // TODO(cir): map clang sync scope to target-specific sync scope.
+      assert(!cir::MissingFeatures::atomicSyncScopeID());
+      scopeConst = convertSyncScopeToCIR(
+          scopeModel->map(scopeEvalResult.Val.getInt().getZExtValue()));
+    } else {
+      scope = emitScalarExpr(scopeExpr);
+    }
+  } else {
+    scopeConst = cir::SyncScopeKind::System;
+  }
+
   bool shouldCastToIntPtrTy = true;
 
   switch (e->getOp()) {
@@ -789,12 +815,14 @@ RValue CIRGenFunction::emitAtomicExpr(AtomicExpr *e) {
     llvm_unreachable("already handled above with emitAtomicInit");
 
   case AtomicExpr::AO__atomic_load_n:
+  case AtomicExpr::AO__scoped_atomic_load_n:
   case AtomicExpr::AO__c11_atomic_load:
   case AtomicExpr::AO__atomic_test_and_set:
   case AtomicExpr::AO__atomic_clear:
     break;
 
   case AtomicExpr::AO__atomic_load:
+  case AtomicExpr::AO__scoped_atomic_load:
     dest = emitPointerWithAlignment(e->getVal1());
     break;
 
@@ -927,6 +955,13 @@ RValue CIRGenFunction::emitAtomicExpr(AtomicExpr *e) {
                 e->getOp() == AtomicExpr::AO__scoped_atomic_load ||
                 e->getOp() == AtomicExpr::AO__scoped_atomic_load_n;
 
+  // TODO(cir): support dynamic sync scope
+  if (scope) {
+    assert(!cir::MissingFeatures::atomicSyncScopeID());
+    cgm.errorNYI(e->getSourceRange(), "emitAtomicExpr: dynamic sync scope");
+    return RValue::get(nullptr);
+  }
+
   if (!order) {
     // We have evaluated the memory order as an integer constant in orderConst.
     // We should not ever get to a case where the ordering isn't a valid CABI
@@ -934,11 +969,11 @@ RValue CIRGenFunction::emitAtomicExpr(AtomicExpr *e) {
     uint64_t ord = orderConst.Val.getInt().getZExtValue();
     if (isMemOrderValid(ord, isStore, isLoad))
       emitAtomicOp(*this, e, dest, ptr, val1, val2, isWeakExpr, orderFailExpr,
-                   size, static_cast<cir::MemOrder>(ord));
+                   size, static_cast<cir::MemOrder>(ord), scopeConst);
   } else {
-    emitAtomicExprWithDynamicMemOrder(*this, order, e, dest, ptr, val1, val2,
-                                      isWeakExpr, orderFailExpr, size, isStore,
-                                      isLoad);
+    emitAtomicExprWithDynamicMemOrder(*this, order, scopeConst, e, dest, ptr,
+                                      val1, val2, isWeakExpr, orderFailExpr,
+                                      size, isStore, isLoad);
   }
 
   if (resultTy->isVoidType())
diff --git a/clang/lib/CIR/CodeGen/CIRGenBuilder.h b/clang/lib/CIR/CodeGen/CIRGenBuilder.h
index 85b38120169fd..edcb7a494fed6 100644
--- a/clang/lib/CIR/CodeGen/CIRGenBuilder.h
+++ b/clang/lib/CIR/CodeGen/CIRGenBuilder.h
@@ -462,6 +462,7 @@ class CIRGenBuilderTy : public cir::CIRBaseBuilderTy {
     mlir::IntegerAttr align = getAlignmentAttr(addr.getAlignment());
     return cir::LoadOp::create(*this, loc, addr.getPointer(), /*isDeref=*/false,
                                isVolatile, /*alignment=*/align,
+                               /*sync_scope=*/cir::SyncScopeKindAttr{},
                                /*mem_order=*/cir::MemOrderAttr{});
   }
 
@@ -473,6 +474,7 @@ class CIRGenBuilderTy : public cir::CIRBaseBuilderTy {
     mlir::IntegerAttr alignAttr = getAlignmentAttr(alignment);
     return cir::LoadOp::create(*this, loc, ptr, /*isDeref=*/false,
                                /*isVolatile=*/false, alignAttr,
+                               /*sync_scope=*/cir::SyncScopeKindAttr{},
                                /*mem_order=*/cir::MemOrderAttr{});
   }
 
diff --git a/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp b/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp
index 97bd3cf850daa..34179164b5dac 100644
--- a/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp
+++ b/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp
@@ -1643,12 +1643,15 @@ mlir::LogicalResult CIRToLLVMLoadOpLowering::matchAndRewrite(
 
   assert(!cir::MissingFeatures::lowerModeOptLevel());
 
-  // TODO: nontemporal, syncscope.
+  // TODO: nontemporal.
   assert(!cir::MissingFeatures::opLoadStoreNontemporal());
+  std::optional<llvm::StringRef> syncScope =
+      getLLVMSyncScope(op.getSyncScope());
   mlir::LLVM::LoadOp newLoad = mlir::LLVM::LoadOp::create(
       rewriter, op->getLoc(), llvmTy, adaptor.getAddr(), alignment,
       op.getIsVolatile(), /*isNonTemporal=*/false,
-      /*isInvariant=*/false, /*isInvariantGroup=*/false, ordering);
+      /*isInvariant=*/false, /*isInvariantGroup=*/false, ordering,
+      syncScope.value_or(llvm::StringRef()));
 
   // Convert adapted result to its original type if needed.
   mlir::Value result =
diff --git a/clang/test/CIR/CodeGen/atomic-scoped.c b/clang/test/CIR/CodeGen/atomic-scoped.c
new file mode 100644
index 0000000000000..04989589bee26
--- /dev/null
+++ b/clang/test/CIR/CodeGen/atomic-scoped.c
@@ -0,0 +1,40 @@
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -Wno-unused-value -fclangir -emit-cir %s -o %t.cir
+// RUN: FileCheck --input-file=%t.cir %s -check-prefix=CIR
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -Wno-unused-value -fclangir -emit-llvm %s -o %t-cir.ll
+// RUN: FileCheck --input-file=%t-cir.ll %s -check-prefix=LLVM
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -Wno-unused-value -emit-llvm %s -o %t.ll
+// RUN: FileCheck --input-file=%t.ll %s -check-prefix=OGCG
+
+void scoped_atomic_load(int *ptr) {
+  // CIR-LABEL: @scoped_atomic_load
+  // LLVM-LABEL: @scoped_atomic_load
+  // OGCG-LABEL: @scoped_atomic_load
+
+  int x;
+  __scoped_atomic_load(ptr, &x, __ATOMIC_RELAXED, __MEMORY_SCOPE_SINGLE);
+  // CIR: %{{.+}} = cir.load align(4) syncscope(single_thread) atomic(relaxed) %{{.+}} : !cir.ptr<!s32i>, !s32i
+  // LLVM: %{{.+}} = load atomic i32, ptr %{{.+}} syncscope("singlethread") monotonic, align 4
+  // OGCG: %{{.+}} = load atomic i32, ptr %{{.+}} monotonic, align 4
+
+  __scoped_atomic_load(ptr, &x, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+  // CIR: %{{.+}} = cir.load align(4) syncscope(system) atomic(relaxed) %{{.+}} : !cir.ptr<!s32i>, !s32i
+  // LLVM: %{{.+}} = load atomic i32, ptr %{{.+}} monotonic, align 4
+  // OGCG: %{{.+}} = load atomic i32, ptr %{{.+}} monotonic, align 4
+}
+
+void scoped_atomic_load_n(int *ptr) {
+  // CIR-LABEL: @scoped_atomic_load_n
+  // LLVM-LABEL: @scoped_atomic_load_n
+  // OGCG-LABEL: @scoped_atomic_load_n
+
+  int x;
+  x = __scoped_atomic_load_n(ptr, __ATOMIC_RELAXED, __MEMORY_SCOPE_SINGLE);
+  // CIR: %{{.+}} = cir.load align(4) syncscope(single_thread) atomic(relaxed) %{{.+}} : !cir.ptr<!s32i>, !s32i
+  // LLVM: %{{.+}} = load atomic i32, ptr %{{.+}} syncscope("singlethread") monotonic, align 4
+  // OGCG: %{{.+}} = load atomic i32, ptr %{{.+}} monotonic, align 4
+
+  x = __scoped_atomic_load_n(ptr, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+  // CIR: %{{.+}} = cir.load align(4) syncscope(system) atomic(relaxed) %{{.+}} : !cir.ptr<!s32i>, !s32i
+  // LLVM: %{{.+}} = load atomic i32, ptr %{{.+}} monotonic, align 4
+  // OGCG: %{{.+}} = load atomic i32, ptr %{{.+}} monotonic, align 4
+}

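The lowering change above calls a getLLVMSyncScope helper that already exists in LowerToLLVM.cpp and is not part of this diff. Inferring from the CHECK lines in the new test, the mapping it must perform looks roughly like this (a sketch, not the actual helper):

// Sketch (inferred, not verbatim): map the optional CIR sync scope
// attribute to the LLVM IR sync-scope string expected by LLVM::LoadOp.
static std::optional<llvm::StringRef>
getLLVMSyncScope(std::optional<cir::SyncScopeKind> syncScope) {
  if (!syncScope)
    return std::nullopt; // no attribute: leave the scope unspecified
  switch (*syncScope) {
  case cir::SyncScopeKind::SingleThread:
    return "singlethread"; // prints as syncscope("singlethread")
  case cir::SyncScopeKind::System:
    return ""; // the empty string is LLVM's default (system) scope
  }
  llvm_unreachable("unknown CIR sync scope kind");
}
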
@github-actions bot commented Dec 8, 2025

🐧 Linux x64 Test Results

  • 112658 tests passed
  • 4095 tests skipped

✅ The build succeeded and all tests passed.

@Lancern force-pushed the cir/scoped-atomic-load branch from cf48ffb to 09e659f on December 8, 2025 14:56
In clang/lib/CIR/CodeGen/CIRGenAtomic.cpp:

static cir::SyncScopeKind convertSyncScopeToCIR(clang::SyncScope scope) {
  switch (scope) {
  default: {
    assert(!cir::MissingFeatures::atomicSyncScopeID());
Contributor commented:

This seems like it should have an errorNYI diagnostic. There are a lot of cases covered by this default.
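
A sketch of what that suggestion might look like (the cgf and expr parameters are an assumption; the patch's version takes only the scope):

// Sketch of the suggestion: diagnose the many scopes that fall into the
// default case instead of silently treating them as system scope.
static cir::SyncScopeKind convertSyncScopeToCIR(CIRGenFunction &cgf,
                                                AtomicExpr *expr,
                                                clang::SyncScope scope) {
  switch (scope) {
  case clang::SyncScope::SingleScope:
    return cir::SyncScopeKind::SingleThread;
  case clang::SyncScope::SystemScope:
    return cir::SyncScopeKind::System;
  default:
    cgf.cgm.errorNYI(expr->getSourceRange(),
                     "emitAtomicExpr: unsupported sync scope");
    return cir::SyncScopeKind::System; // fallback keeps the IR well-formed
  }
}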

In clang/lib/CIR/CodeGen/CIRGenAtomic.cpp:

                         Expr *isWeakExpr, Expr *failureOrderExpr, int64_t size,
                         cir::MemOrder order) {
  std::unique_ptr<AtomicScopeModel> scopeModel = expr->getScopeModel();
  if (scopeModel) {
Contributor commented:

This divergence from the structure of classic codegen worries me. It looks like the handling needs to be done here to cover the non-constant scope value case properly.

Author (@Lancern) commented:

I'm a bit worried that the changes to align the code structure may not fit in this PR. Should this be a separate PR?

Contributor commented:

The divergence I was talking about is introduced by this PR. That is, handling the scope model in emitAtomicExpr as opposed to handling it here.
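
The non-constant case referred to here works like the dynamic memory-order handling already in this patch: lower the runtime scope value to a switch whose arms each use a hard-coded scope. A self-contained illustration of that dispatch (all names invented for the example, not clang source):

#include <cstdint>

// When the scope is not a compile-time constant, each atomic op variant
// still needs a hard-coded scope, so the runtime value is dispatched
// through a switch with one hard-coded scope per case.
enum class Scope : uint32_t { SingleThread = 0, System = 1 };

template <typename EmitFn>
void emitWithDynamicScope(uint32_t runtimeScope, EmitFn emit) {
  switch (runtimeScope) {
  case static_cast<uint32_t>(Scope::SingleThread):
    emit(Scope::SingleThread); // scope hard-coded in this arm
    break;
  default:
    emit(Scope::System); // unknown values conservatively get system scope
    break;
  }
}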

Author (@Lancern) commented:

> The divergence I was talking about is introduced by this PR. That is, handling the scope model in emitAtomicExpr as opposed to handling it here.

I'll address this in the next revision of this PR.

Author (@Lancern) commented:

Updated. The CIRGen code structure is now aligned with classic CodeGen.

In clang/test/CIR/CodeGen/atomic-scoped.c:

  int x;
  __scoped_atomic_load(ptr, &x, __ATOMIC_RELAXED, __MEMORY_SCOPE_SINGLE);
  // CIR: %{{.+}} = cir.load align(4) syncscope(single_thread) atomic(relaxed) %{{.+}} : !cir.ptr<!s32i>, !s32i
  // LLVM: %{{.+}} = load atomic i32, ptr %{{.+}} syncscope("singlethread") monotonic, align 4
Contributor commented:

Why do we have this difference from OGCG?

Author (@Lancern) commented:

This is due to a difference between the incubator and classic codegen. I think we could address this in a later PR.

Contributor commented:

I'd prefer not to introduce the problem in the first place if possible. It looks like a difference between getLLVMSyncScopeID (which has target-specific behavior) and convertSyncScopeToCIR (which is returning cir::SyncScopeKind::SingleThread for clang::SyncScope::SingleScope).
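
Concretely, classic CodeGen routes the scope through a target hook whose default implementation collapses every language-level scope to the system scope, which is why OGCG shows no syncscope on x86; targets such as AMDGPU override the hook. A sketch of that default behavior (signature approximated from memory, not verbatim clang source):

// Approximate sketch of clang's default target hook: unless a target
// overrides it, every language-level scope maps to the system scope,
// i.e. the empty sync-scope string, so x86 IR shows no syncscope at all.
llvm::SyncScope::ID
TargetCodeGenInfo::getLLVMSyncScopeID(const LangOptions &LangOpts,
                                      SyncScope Scope,
                                      llvm::AtomicOrdering Ordering,
                                      llvm::LLVMContext &Ctx) const {
  return Ctx.getOrInsertSyncScopeID(""); // "" == system scope
}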

Author (@Lancern) commented:

What stops me from including this directly in this PR is that the mapping could be placed either in CIRGen or in LLVM lowering, and I'm not sure it's a good idea to put it in CIRGen.

Contributor commented:

That's a fair point.

@Lancern force-pushed the cir/scoped-atomic-load branch from 09e659f to f0665b2 on December 9, 2025 13:40
@andykaylor (Contributor) left a comment:

lgtm
