
[libc++] Implement C++20 atomic_ref #76647

Merged (99 commits) on May 21, 2024

Conversation

dalg24
Contributor

@dalg24 dalg24 commented Dec 31, 2023

Implement atomic_ref class template by reusing atomic_base_impl.
Based on the work from https://reviews.llvm.org/D72240

@dalg24 dalg24 requested a review from a team as a code owner December 31, 2023 03:31

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be
notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write
permissions for the repository; in that case, you can instead tag reviewers by
name in a comment, using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review
by pinging the PR with a comment saying “Ping”. The common courtesy ping rate
is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@llvmbot llvmbot added the libc++ libc++ C++ Standard Library. Not GNU libstdc++. Not libc++abi. label Dec 31, 2023
@llvmbot
Collaborator

llvmbot commented Dec 31, 2023

@llvm/pr-subscribers-libcxx

Author: Damien L-G (dalg24)

Changes

Implement atomic_ref class template by reusing atomic_base_impl.
Based on the work from https://reviews.llvm.org/D72240


Patch is 99.23 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/76647.diff

40 Files Affected:

  • (modified) libcxx/include/CMakeLists.txt (+1)
  • (added) libcxx/include/__atomic/atomic_ref.h (+286)
  • (modified) libcxx/include/__atomic/check_memory_order.h (+4)
  • (modified) libcxx/include/__atomic/cxx_atomic_impl.h (+73-350)
  • (modified) libcxx/include/__config (+3-1)
  • (modified) libcxx/include/atomic (+1)
  • (added) libcxx/test/libcxx/atomics/atomics.ref/assert.compare_exchange_strong.pass.cpp (+63)
  • (added) libcxx/test/libcxx/atomics/atomics.ref/assert.compare_exchange_weak.pass.cpp (+63)
  • (added) libcxx/test/libcxx/atomics/atomics.ref/assert.ctor.pass.cpp (+39)
  • (added) libcxx/test/libcxx/atomics/atomics.ref/assert.load.pass.cpp (+60)
  • (added) libcxx/test/libcxx/atomics/atomics.ref/assert.store.pass.cpp (+68)
  • (added) libcxx/test/libcxx/atomics/atomics.ref/assert.wait.pass.cpp (+60)
  • (added) libcxx/test/std/atomics/atomics.ref/assign.pass.cpp (+50)
  • (added) libcxx/test/std/atomics/atomics.ref/bitwise_and_assign.pass.cpp (+49)
  • (added) libcxx/test/std/atomics/atomics.ref/bitwise_or_assign.pass.cpp (+49)
  • (added) libcxx/test/std/atomics/atomics.ref/bitwise_xor_assign.pass.cpp (+49)
  • (added) libcxx/test/std/atomics/atomics.ref/compare_exchange_strong.pass.cpp (+83)
  • (added) libcxx/test/std/atomics/atomics.ref/compare_exchange_weak.pass.cpp (+84)
  • (added) libcxx/test/std/atomics/atomics.ref/convert.pass.cpp (+47)
  • (added) libcxx/test/std/atomics/atomics.ref/ctor.explicit.verify.cpp (+34)
  • (added) libcxx/test/std/atomics/atomics.ref/ctor.pass.cpp (+46)
  • (added) libcxx/test/std/atomics/atomics.ref/deduction.pass.cpp (+39)
  • (added) libcxx/test/std/atomics/atomics.ref/exchange.pass.cpp (+48)
  • (added) libcxx/test/std/atomics/atomics.ref/fetch_add.pass.cpp (+75)
  • (added) libcxx/test/std/atomics/atomics.ref/fetch_and.pass.cpp (+56)
  • (added) libcxx/test/std/atomics/atomics.ref/fetch_or.pass.cpp (+54)
  • (added) libcxx/test/std/atomics/atomics.ref/fetch_sub.pass.cpp (+75)
  • (added) libcxx/test/std/atomics/atomics.ref/fetch_xor.pass.cpp (+54)
  • (added) libcxx/test/std/atomics/atomics.ref/increment_decrement.pass.cpp (+85)
  • (added) libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp (+47)
  • (added) libcxx/test/std/atomics/atomics.ref/load.pass.cpp (+48)
  • (added) libcxx/test/std/atomics/atomics.ref/member_types.pass.cpp (+134)
  • (added) libcxx/test/std/atomics/atomics.ref/notify_all.pass.cpp (+86)
  • (added) libcxx/test/std/atomics/atomics.ref/notify_one.pass.cpp (+54)
  • (added) libcxx/test/std/atomics/atomics.ref/operator_minus_equals.pass.cpp (+66)
  • (added) libcxx/test/std/atomics/atomics.ref/operator_plus_equals.pass.cpp (+66)
  • (added) libcxx/test/std/atomics/atomics.ref/required_alignment.pass.cpp (+34)
  • (added) libcxx/test/std/atomics/atomics.ref/store.pass.cpp (+50)
  • (added) libcxx/test/std/atomics/atomics.ref/type.verify.cpp (+26)
  • (added) libcxx/test/std/atomics/atomics.ref/wait.pass.cpp (+65)
diff --git a/libcxx/include/CMakeLists.txt b/libcxx/include/CMakeLists.txt
index 0fe3ab44d2466e..5d7e4ca98b1f75 100644
--- a/libcxx/include/CMakeLists.txt
+++ b/libcxx/include/CMakeLists.txt
@@ -236,6 +236,7 @@ set(files
   __atomic/atomic_flag.h
   __atomic/atomic_init.h
   __atomic/atomic_lock_free.h
+  __atomic/atomic_ref.h
   __atomic/atomic_sync.h
   __atomic/check_memory_order.h
   __atomic/contention_t.h
diff --git a/libcxx/include/__atomic/atomic_ref.h b/libcxx/include/__atomic/atomic_ref.h
new file mode 100644
index 00000000000000..6f2467cad5a24e
--- /dev/null
+++ b/libcxx/include/__atomic/atomic_ref.h
@@ -0,0 +1,286 @@
+// -*- C++ -*-
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//                        Kokkos v. 4.0
+//       Copyright (2022) National Technology & Engineering
+//               Solutions of Sandia, LLC (NTESS).
+//
+// Under the terms of Contract DE-NA0003525 with NTESS,
+// the U.S. Government retains certain rights in this software.
+//
+//===---------------------------------------------------------------------===//
+
+#ifndef _LIBCPP___ATOMIC_ATOMIC_REF_H
+#define _LIBCPP___ATOMIC_ATOMIC_REF_H
+
+#include <__assert>
+#include <__atomic/check_memory_order.h>
+#include <__atomic/cxx_atomic_impl.h>
+#include <__atomic/is_always_lock_free.h>
+#include <__config>
+#include <__memory/addressof.h>
+#include <__type_traits/is_floating_point.h>
+#include <__type_traits/is_function.h>
+#include <__type_traits/is_nothrow_constructible.h>
+#include <__type_traits/is_same.h>
+#include <cinttypes>
+#include <concepts>
+#include <cstddef>
+#include <limits>
+
+#if !defined(_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER)
+#  pragma GCC system_header
+#endif
+
+_LIBCPP_PUSH_MACROS
+#include <__undef_macros>
+
+_LIBCPP_BEGIN_NAMESPACE_STD
+
+#if _LIBCPP_STD_VER >= 20
+
+template <class _Tp, bool = is_integral_v<_Tp> && !is_same_v<_Tp, bool>, bool = is_floating_point_v<_Tp>>
+struct __atomic_ref_base {
+  mutable __cxx_atomic_impl<_Tp&> __a_;
+
+  using value_type = _Tp;
+
+  static constexpr size_t required_alignment = alignof(_Tp);
+
+  static constexpr bool is_always_lock_free = __libcpp_is_always_lock_free<_Tp>::__value;
+
+  _LIBCPP_HIDE_FROM_ABI
+  bool is_lock_free() const noexcept { return __cxx_atomic_is_lock_free(sizeof(_Tp)); }
+
+  _LIBCPP_HIDE_FROM_ABI
+  void store(_Tp __desired, memory_order __order = memory_order::seq_cst) const noexcept
+      _LIBCPP_CHECK_STORE_MEMORY_ORDER(__order) {
+    _LIBCPP_ASSERT_UNCATEGORIZED(
+        __order == memory_order::relaxed || __order == memory_order::release || __order == memory_order::seq_cst,
+        "memory order argument to atomic store operation is invalid");
+    __cxx_atomic_store(&__a_, __desired, __order);
+  }
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator=(_Tp __desired) const noexcept {
+    store(__desired);
+    return __desired;
+  }
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp load(memory_order __order = memory_order::seq_cst) const noexcept _LIBCPP_CHECK_LOAD_MEMORY_ORDER(__order) {
+    _LIBCPP_ASSERT_UNCATEGORIZED(
+        __order == memory_order::relaxed || __order == memory_order::consume || __order == memory_order::acquire ||
+            __order == memory_order::seq_cst,
+        "memory order argument to atomic load operation is invalid");
+    return __cxx_atomic_load(&__a_, __order);
+  }
+
+  _LIBCPP_HIDE_FROM_ABI
+  operator _Tp() const noexcept { return load(); }
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp exchange(_Tp __desired, memory_order __order = memory_order::seq_cst) const noexcept {
+    return __cxx_atomic_exchange(&__a_, __desired, __order);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  bool
+  compare_exchange_weak(_Tp& __expected, _Tp __desired, memory_order __success, memory_order __failure) const noexcept
+      _LIBCPP_CHECK_EXCHANGE_MEMORY_ORDER(__success, __failure) {
+    _LIBCPP_ASSERT_UNCATEGORIZED(
+        __failure == memory_order::relaxed || __failure == memory_order::consume ||
+            __failure == memory_order::acquire || __failure == memory_order::seq_cst,
+        "failure memory order argument to weak atomic compare-and-exchange operation is invalid");
+    return __cxx_atomic_compare_exchange_weak(&__a_, &__expected, __desired, __success, __failure);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  bool
+  compare_exchange_strong(_Tp& __expected, _Tp __desired, memory_order __success, memory_order __failure) const noexcept
+      _LIBCPP_CHECK_EXCHANGE_MEMORY_ORDER(__success, __failure) {
+    _LIBCPP_ASSERT_UNCATEGORIZED(
+        __failure == memory_order::relaxed || __failure == memory_order::consume ||
+            __failure == memory_order::acquire || __failure == memory_order::seq_cst,
+        "failure memory order argument to strong atomic compare-and-exchange operation is invalid");
+    return __cxx_atomic_compare_exchange_strong(&__a_, &__expected, __desired, __success, __failure);
+  }
+
+  _LIBCPP_HIDE_FROM_ABI
+  bool
+  compare_exchange_weak(_Tp& __expected, _Tp __desired, memory_order __order = memory_order::seq_cst) const noexcept {
+    return __cxx_atomic_compare_exchange_weak(&__a_, &__expected, __desired, __order, __order);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  bool
+  compare_exchange_strong(_Tp& __expected, _Tp __desired, memory_order __order = memory_order::seq_cst) const noexcept {
+    return __cxx_atomic_compare_exchange_strong(&__a_, &__expected, __desired, __order, __order);
+  }
+
+  _LIBCPP_HIDE_FROM_ABI
+  void wait(_Tp __old, memory_order __order = memory_order::seq_cst) const noexcept
+      _LIBCPP_CHECK_WAIT_MEMORY_ORDER(__order) {
+    _LIBCPP_ASSERT_UNCATEGORIZED(
+        __order == memory_order::relaxed || __order == memory_order::consume || __order == memory_order::acquire ||
+            __order == memory_order::seq_cst,
+        "memory order argument to atomic wait operation is invalid");
+    __cxx_atomic_wait(addressof(__a_), __old, __order);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  void notify_one() const noexcept { __cxx_atomic_notify_one(addressof(__a_)); }
+  _LIBCPP_HIDE_FROM_ABI
+  void notify_all() const noexcept { __cxx_atomic_notify_all(addressof(__a_)); }
+
+  _LIBCPP_HIDE_FROM_ABI
+  __atomic_ref_base(_Tp& __obj) : __a_(__obj) {}
+};
+
+template <class _Tp>
+struct __atomic_ref_base<_Tp, /*_IsIntegral=*/true, /*_IsFloatingPoint=*/false>
+    : public __atomic_ref_base<_Tp, false, false> {
+  using __base = __atomic_ref_base<_Tp, false, false>;
+
+  using difference_type = __base::value_type;
+
+  _LIBCPP_HIDE_FROM_ABI
+  __atomic_ref_base(_Tp& __obj) : __base(__obj) {}
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator=(_Tp __desired) const noexcept { return __base::operator=(__desired); }
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp fetch_add(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+    return __cxx_atomic_fetch_add(&this->__a_, __arg, __order);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp fetch_sub(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+    return __cxx_atomic_fetch_sub(&this->__a_, __arg, __order);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp fetch_and(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+    return __cxx_atomic_fetch_and(&this->__a_, __arg, __order);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp fetch_or(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+    return __cxx_atomic_fetch_or(&this->__a_, __arg, __order);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp fetch_xor(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+    return __cxx_atomic_fetch_xor(&this->__a_, __arg, __order);
+  }
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator++(int) const noexcept { return fetch_add(_Tp(1)); }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator--(int) const noexcept { return fetch_sub(_Tp(1)); }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator++() const noexcept { return fetch_add(_Tp(1)) + _Tp(1); }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator--() const noexcept { return fetch_sub(_Tp(1)) - _Tp(1); }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator+=(_Tp __arg) const noexcept { return fetch_add(__arg) + __arg; }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator-=(_Tp __arg) const noexcept { return fetch_sub(__arg) - __arg; }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator&=(_Tp __arg) const noexcept { return fetch_and(__arg) & __arg; }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator|=(_Tp __arg) const noexcept { return fetch_or(__arg) | __arg; }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator^=(_Tp __arg) const noexcept { return fetch_xor(__arg) ^ __arg; }
+};
+
+template <class _Tp>
+struct __atomic_ref_base<_Tp, /*_IsIntegral=*/false, /*_IsFloatingPoint=*/true>
+    : public __atomic_ref_base<_Tp, false, false> {
+  using __base = __atomic_ref_base<_Tp, false, false>;
+
+  using difference_type = __base::value_type;
+
+  _LIBCPP_HIDE_FROM_ABI
+  __atomic_ref_base(_Tp& __obj) : __base(__obj) {}
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator=(_Tp __desired) const noexcept { return __base::operator=(__desired); }
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp fetch_add(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+    return __cxx_atomic_fetch_add(&this->__a_, __arg, __order);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp fetch_sub(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+    return __cxx_atomic_fetch_sub(&this->__a_, __arg, __order);
+  }
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator+=(_Tp __arg) const noexcept { return fetch_add(__arg) + __arg; }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator-=(_Tp __arg) const noexcept { return fetch_sub(__arg) - __arg; }
+};
+
+template <class _Tp>
+struct atomic_ref : public __atomic_ref_base<_Tp> {
+  static_assert(is_trivially_copyable<_Tp>::value, "std::atomic_ref<T> requires that 'T' be a trivially copyable type");
+
+  using __base = __atomic_ref_base<_Tp>;
+
+  _LIBCPP_HIDE_FROM_ABI
+  explicit atomic_ref(_Tp& __obj) : __base(__obj) {
+    _LIBCPP_ASSERT_UNCATEGORIZED((uintptr_t)addressof(__obj) % __base::required_alignment == 0,
+                                 "atomic_ref ctor: referenced object must be aligned to required_alignment");
+  }
+
+  _LIBCPP_HIDE_FROM_ABI
+  atomic_ref(const atomic_ref&) noexcept = default;
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp operator=(_Tp __desired) const noexcept { return __base::operator=(__desired); }
+
+  atomic_ref& operator=(const atomic_ref&) = delete;
+};
+
+template <class _Tp>
+struct atomic_ref<_Tp*> : public __atomic_ref_base<_Tp*> {
+  using __base = __atomic_ref_base<_Tp*>;
+
+  using difference_type = ptrdiff_t;
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp* fetch_add(ptrdiff_t __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+    return __cxx_atomic_fetch_add(&this->__a_, __arg, __order);
+  }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp* fetch_sub(ptrdiff_t __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+    return __cxx_atomic_fetch_sub(&this->__a_, __arg, __order);
+  }
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp* operator++(int) const noexcept { return fetch_add(1); }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp* operator--(int) const noexcept { return fetch_sub(1); }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp* operator++() const noexcept { return fetch_add(1) + 1; }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp* operator--() const noexcept { return fetch_sub(1) - 1; }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp* operator+=(ptrdiff_t __arg) const noexcept { return fetch_add(__arg) + __arg; }
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp* operator-=(ptrdiff_t __arg) const noexcept { return fetch_sub(__arg) - __arg; }
+
+  _LIBCPP_HIDE_FROM_ABI
+  explicit atomic_ref(_Tp*& __ptr) : __base(__ptr) {}
+
+  _LIBCPP_HIDE_FROM_ABI
+  _Tp* operator=(_Tp* __desired) const noexcept { return __base::operator=(__desired); }
+
+  atomic_ref& operator=(const atomic_ref&) = delete;
+};
+
+#endif // _LIBCPP_STD_VER >= 20
+
+_LIBCPP_END_NAMESPACE_STD
+
+_LIBCPP_POP_MACROS
+
+#endif // _LIBCPP___ATOMIC_ATOMIC_REF_H
diff --git a/libcxx/include/__atomic/check_memory_order.h b/libcxx/include/__atomic/check_memory_order.h
index 3012aec0521b38..536f764a619026 100644
--- a/libcxx/include/__atomic/check_memory_order.h
+++ b/libcxx/include/__atomic/check_memory_order.h
@@ -27,4 +27,8 @@
   _LIBCPP_DIAGNOSE_WARNING(__f == memory_order_release || __f == memory_order_acq_rel,                                 \
                            "memory order argument to atomic operation is invalid")
 
+#define _LIBCPP_CHECK_WAIT_MEMORY_ORDER(__m)                                                                           \
+  _LIBCPP_DIAGNOSE_WARNING(__m == memory_order_release || __m == memory_order_acq_rel,                                 \
+                           "memory order argument to atomic operation is invalid")
+
 #endif // _LIBCPP___ATOMIC_CHECK_MEMORY_ORDER_H
diff --git a/libcxx/include/__atomic/cxx_atomic_impl.h b/libcxx/include/__atomic/cxx_atomic_impl.h
index 1a0b808a0cb1c4..56cd703c258944 100644
--- a/libcxx/include/__atomic/cxx_atomic_impl.h
+++ b/libcxx/include/__atomic/cxx_atomic_impl.h
@@ -15,8 +15,12 @@
 #include <__memory/addressof.h>
 #include <__type_traits/conditional.h>
 #include <__type_traits/is_assignable.h>
+#include <__type_traits/is_pointer.h>
+#include <__type_traits/is_reference.h>
 #include <__type_traits/is_trivially_copyable.h>
+#include <__type_traits/is_volatile.h>
 #include <__type_traits/remove_const.h>
+#include <__type_traits/remove_reference.h>
 #include <cstddef>
 #include <cstring>
 
@@ -58,9 +62,27 @@ struct __cxx_atomic_base_impl {
   }
 #  endif // _LIBCPP_CXX03_LANG
   _LIBCPP_CONSTEXPR explicit __cxx_atomic_base_impl(_Tp value) _NOEXCEPT : __a_value(value) {}
+  using __contained_t = _Tp;
   _Tp __a_value;
 };
 
+template <typename _Tp, template <typename> class _TemplateTp>
+struct __is_instantiation_of : false_type {};
+
+template <typename _Tp, template <typename> class _TemplateTp>
+struct __is_instantiation_of<_TemplateTp<_Tp>, _TemplateTp> : true_type {};
+
+template <typename _Tp,
+          typename = typename enable_if<__is_instantiation_of<typename remove_volatile<typename _Tp::__base>::type,
+                                                              __cxx_atomic_base_impl>::value,
+                                        bool>::type>
+struct __cxx_atomic_base_impl_traits {
+  static constexpr bool __is_value_volatile = is_volatile<_Tp>::value;
+  static constexpr bool __is_value_ref      = is_reference<typename _Tp::__contained_t>::value;
+  using __underlying_t = typename remove_volatile<typename remove_reference<typename _Tp::__contained_t>::type>::type;
+  static constexpr bool __is_value_pointer = is_pointer<__underlying_t>::value;
+};
+
 _LIBCPP_HIDE_FROM_ABI inline _LIBCPP_CONSTEXPR int __to_gcc_order(memory_order __order) {
   // Avoid switch statement to make this a constexpr.
   return __order == memory_order_relaxed
@@ -87,13 +109,15 @@ _LIBCPP_HIDE_FROM_ABI inline _LIBCPP_CONSTEXPR int __to_gcc_failure_order(memory
                                 : (__order == memory_order_acq_rel ? __ATOMIC_ACQUIRE : __ATOMIC_CONSUME))));
 }
 
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_init(volatile __cxx_atomic_base_impl<_Tp>* __a, _Tp __val) {
+template <typename _Tp, typename enable_if<__cxx_atomic_base_impl_traits<_Tp>::__is_value_volatile, bool>::type = 0>
+_LIBCPP_HIDE_FROM_ABI void
+__cxx_atomic_init(_Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __val) {
   __cxx_atomic_assign_volatile(__a->__a_value, __val);
 }
 
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_init(__cxx_atomic_base_impl<_Tp>* __a, _Tp __val) {
+template <typename _Tp, typename enable_if<!__cxx_atomic_base_impl_traits<_Tp>::__is_value_volatile, bool>::type = 0>
+_LIBCPP_HIDE_FROM_ABI void
+__cxx_atomic_init(_Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __val) {
   __a->__a_value = __val;
 }
 
@@ -107,77 +131,46 @@ _LIBCPP_HIDE_FROM_ABI inline void __cxx_atomic_signal_fence(memory_order __order
 
 template <typename _Tp>
 _LIBCPP_HIDE_FROM_ABI void
-__cxx_atomic_store(volatile __cxx_atomic_base_impl<_Tp>* __a, _Tp __val, memory_order __order) {
+__cxx_atomic_store(_Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __val, memory_order __order) {
   __atomic_store(std::addressof(__a->__a_value), std::addressof(__val), __to_gcc_order(__order));
 }
 
 template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_store(__cxx_atomic_base_impl<_Tp>* __a, _Tp __val, memory_order __order) {
-  __atomic_store(std::addressof(__a->__a_value), std::addressof(__val), __to_gcc_order(__order));
+_LIBCPP_HIDE_FROM_ABI typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t
+__cxx_atomic_load(const _Tp* __a, memory_order __order) {
+  using _Ret = typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t;
+  alignas(alignof(_Ret)) unsigned char __mem[sizeof(_Ret)];
+  __atomic_load(
+      std::addressof(__a->__a_value), std::addressof(*reinterpret_cast<_Ret*>(__mem)), __to_gcc_order(__order));
+  return *reinterpret_cast<_Ret*>(__mem);
 }
 
 template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI _Tp __cxx_atomic_load(const volatile __cxx_atomic_base_impl<_Tp>* __a, memory_order __order) {
-  _Tp __ret;
-  __atomic_load(std::addressof(__a->__a_value), std::addressof(__ret), __to_gcc_order(__order));
-  return __ret;
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void
-__cxx_atomic_load_inplace(const volatile __cxx_atomic_base_impl<_Tp>* __a, _Tp* __dst, memory_order __order) {
+_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_load_inplace(
+    const _Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t* __dst, memory_order __order) {
   __atomic_load(std::addressof(__a->__a_value), __dst, __to_gcc_order(__order));
 }
 
 template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void
-__cxx_atomic_load_inplace(const __cxx_atomic_base_impl<_Tp>* __a, _Tp* __dst, memory_order __order) {
-  __atomic_load(std::addressof(__a->__a_value), __dst, __to_gcc_order(__order));
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI _Tp __cxx_atomic_load(const __cxx_atomic_base_impl<_Tp>* __a, memory_order __order) {
-  _Tp __ret;
-  __atomic_load(std::addressof(__a->__a_value), std::addressof(__ret), __to_gcc_order(__order));
-  return __ret;
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI _Tp
-__cxx_atomic_exchange(volatile __cxx_atomic_base_impl<_Tp>* __a, _Tp __value, memory_order __order) {
-  _Tp __ret;
+_LIBCPP_HIDE_FROM_ABI typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __cxx_atomic_exchange(
+    _Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __value, memory_order __order) {
+  using _Ret = typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t;
+  alignas(alignof(_Ret)) unsigned char __mem[sizeof(_Ret)];
   __atomic_exchange(
-      std::addressof(__a->__a_value), std::addressof(__value), std::addressof(__ret), __to_gcc_order(__order));
-  return __ret;
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI _Tp __cxx_atomic_exchange(__cxx_atomic_base_impl<_Tp>* __a, _Tp __value, memory_order __order) {
-  _Tp __ret;
-  __atomic_exchange(
-      std::addressof(__a->__a_value), std::addressof(__value), std::addressof(__ret), __to_gcc_order(__order));
-  return __ret;
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI bool __cxx_atomic_compare_exchange_strong(
-    volatile __cxx_atomic_base_impl<_Tp>* __a,
-    _Tp* __expected,
-    _Tp __value,
-    memory_order __success,
-    memory_order __failure) {
-  return __atomic_compare_exchange(
       std::addressof(__a->__a_value),
-      __expected,
       std::addressof(__value),
-      false,
-      __to_gcc_order(__success),
-      __to_gcc_failure_order(__failure));
+      std::addressof(*reinterpret_cast<_Ret*>(__mem)),
+      __to_gcc_order(__order));
+  return *reinterpret_cast<_Ret*>(__mem);
 }
 
 template <typename _Tp>
 _LIBCPP_HIDE_FROM_ABI bool __cxx_atomic_compare_exchange_strong(
-    __cxx_atomic_base_impl<_Tp>* __a, _Tp* __expected, _Tp __value, memory_order __success, memory_order __failure) {
+    _Tp* __a,
+    typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t* __expected,
+    typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __value,
+    memory_order __success,
+    memory_order __failure) {
   return __atomic_compare_...
[truncated]


github-actions bot commented Dec 31, 2023

✅ With the latest revision this PR passed the C/C++ code formatter.

@jyknight
Member

It's important to note that this PR deletes the codepath in cxx_atomic_impl.h which uses the __c11_* builtins and C _Atomic(T) to implement std::atomic.

That's a wonderful simplification, and I would love to see it done. BUT, it changes the ABI for std::atomic: it makes it incompatible both with previous libc++ and with Clang's C11 _Atomic(T) type.

Unfortunately, the current situation is a total mess.

Mess 1: _Atomic ABI

Clang's ABI for _Atomic(T) is complex: it attempts to round up both the size and alignment of T, but only up to a per-platform limit, to ensure that a lock-free atomic can be used where possible. That is: given struct X { char n[3]; };, on some platforms sizeof(std::atomic<X>) and alignof(std::atomic<X>) will be 4 and 4, but on others they'll be 3 and 1. See the MaxAtomicPromoteWidth values set by clang/lib/Basic/Targets/*.

Which means C11 atomics in Clang are ABI-incompatible with GCC: GCC never adds padding, but instead simply increases the alignment of objects of size 1, 2, 4, 8, or 16 to match their size. Quite an unfortunate situation, since GCC and Clang are usually ABI-compatible for C code.

So: the ABI for C11 _Atomic differs between Clang and GCC. The ABI for libc++'s std::atomic also changes depending on which compiler you build with (since it changes which underlying implementation strategy is used). It matches C11 _Atomic ABI when built on Clang, but when built on GCC, uses an ABI incompatible with both.

GCC's libstdc++ does have a consistent ABI between Clang and GCC, because it always uses GCC's rule, even when built with Clang. But that means it doesn't match C11 _Atomic on Clang, only on GCC.

Making everything worse is that, as of C++23's addition of <stdatomic.h>, the expectation of ABI compatibility between C++ std::atomic<T> and C11 _Atomic(T) is heightened.

This is something that I think we ought to fix -- but it would require an ABI break. A long time ago, I attempted to bring this up as something the psABI documents should define, but didn't manage to push hard enough to get it actually done. My expectation is that if the documents did specify it, they would specify what GCC does. Clang's option is, in principle, nice from a QoI standpoint (it allows more types to be lock-free), but in practice is simply unnecessary complexity.

Mess 2: libc++ atomic configuration options

OK, so, that's a huge minefield. But for atomic_ref, we can actually ignore all of the above complexity, since we cannot change the layout of the underlying type. Thus: we can simply implement atomic_ref directly on the __atomic_* builtins, without going through libc++'s __cxx_atomic_* abstraction layer -- leaving fixing that mess for another day. The abstraction layer really doesn't get us anything!

Well, except that libc++ has a third implementation strategy that std::atomic can use. Namely, 2145054 added a custom locked impl under _LIBCPP_ATOMIC_ONLY_USE_BUILTINS.

IMO, that is effectively superfluous and incompatible with the goal of C and C++ equivalency, and I don't really understand the rationale for adding it in the first place. If someone wants to use locked-atomics in a freestanding implementation, they should simply provide the requisite libatomic libcalls for their environment, end of story.

So, I think we should remove this third option. Or, at least, ignore it when implementing atomic_ref, and do just use the __atomic_* builtins directly.

@dalg24
Contributor Author

dalg24 commented Jan 1, 2024

  • Fixed the formatting (was using the wrong clang-format version...)
  • Fixed some shadow warnings in the tests
  • Followed James' suggestion in [libc++] Implement C++20 atomic_ref #76647 (comment): dropped all the changes to atomic_base_impl and implemented atomic_ref directly in terms of the GCC __atomic_* builtins
  • Haven't implemented wait, notify_one, or notify_all yet

@dalg24 dalg24 force-pushed the atomic_ref branch 2 times, most recently from 7dd7930 to 970e5cc on January 1, 2024 at 17:09
Contributor

@huixie90 huixie90 left a comment


I have not yet reviewed the code. Just to bring up some issues we encountered previously:

There are some weird downstream setups. IIRC, there is Fuchsia, where it is Linux but without GCC installed (thus it cannot link -latomic).

There is also Windows clang-cl, which lacks some builtins.

On Mac, do we have the GCC builtins at all? At least on my old laptop, all the code paths go through the C11 branch.

I'd also like to mention that I am working on several atomic-related things, including making types with padding bytes between members and/or after the tail work with atomic. (By the way, does your atomic_ref compare_exchange_strong work for those types?) I don't think we are allowed to touch the padding bits, since atomic_ref does not own the object.

Also, the current c11_compare_exchange is broken for types whose size is not a power of two; I am working on fixing that as well.

And I don't think breaking the existing std::atomic ABI is feasible: even though we could ABI-tag std::atomic, there is no way we could tag user-defined classes that contain std::atomic and appear at an ABI boundary.

@jyknight
Member

jyknight commented Jan 2, 2024

There are some weird downstream setups. IIRC, there is Fuchsia, where it is Linux but without GCC installed (thus it cannot link -latomic).

That doesn't affect whether you can use the __atomic_* builtin functions.

There is also Windows clang-cl, which lacks some builtins.

Fortunately, it doesn't lack these.

On Mac, do we have the GCC builtins at all? At least on my old laptop, all the code paths go through the C11 branch.

Yep, they exist on all Clang.

I'd also like to mention that I am working on several atomic-related things, including making types with padding bytes between members and/or after the tail work with atomic. (By the way, does your atomic_ref compare_exchange_strong work for those types?) I don't think we are allowed to touch the padding bits, since atomic_ref does not own the object.

This PR indeed does not implement that part of the spec yet; that will need to wait until __builtin_clear_padding is implemented. I think it's fine to implement this type without that requirement initially. (Note that when the time comes, it will need to use a different implementation than std::atomic, and it will be extremely inefficient, since it cannot assume the padding is cleared to start with!)

And I don’t think breaking the existing std::atomic ABI is feasible. Even though we could ABI-tag std::atomic, there is no way we could tag user-defined classes that contain std::atomic and appear at an ABI boundary.

Recall that long-term it's not only std::atomic whose ABI we need to change, but also C11 _Atomic. However, this PR no longer touches it, so that's moot for the purposes of this PR.

@huixie90
Contributor

huixie90 commented Jan 3, 2024


Just to note some related in-progress work I have:

I think your patch is orthogonal to all these bugs, so we can try to land this one first.

@dalg24 dalg24 force-pushed the atomic_ref branch 2 times, most recently from e651c05 to 54d1920 Compare January 8, 2024 02:06
@ldionne
Member

ldionne commented Jan 9, 2024

Just cross-referencing for posterity, there was once an implementation here https://reviews.llvm.org/D72240 that eventually got dropped. There's nothing actionable to do here, I'm just dropping the link for cross-referencing purposes.

@ldionne
Member

ldionne commented Feb 5, 2024

IMO, that is effectively superfluous and incompatible with the goal of C and C++ equivalency, and I don't really understand the rationale for adding it in the first place. If someone wants to use locked-atomics in a freestanding implementation, they should simply provide the requisite libatomic libcalls for their environment, end of story.

So, I think we should remove this third option. Or, at least, ignore it when implementing atomic_ref, and just use the __atomic_* builtins directly.

I would support that too. https://reviews.llvm.org/D56913 happened a long time ago, and if it were to be done again, I might push back against that approach as being too ad hoc. I'm not even certain we use this anywhere, and I'd argue that Freestanding does not equate to there being no builtins. So +1 from me for removing it, at least at first glance.

Member

@ldionne ldionne left a comment


Thanks a lot for the patch! The review doesn't go over the implementation in detail since I think we need to consider the overall state of atomic_wait primitives and GCC vs Clang atomics before we commit to an implementation strategy here.

Review threads (resolved) on:
libcxx/include/CMakeLists.txt
libcxx/include/__atomic/atomic_ref.h (outdated)
libcxx/test/std/atomics/atomics.ref/convert.pass.cpp (outdated)
libcxx/test/std/atomics/atomics.ref/convert.pass.cpp (outdated)
libcxx/test/std/atomics/atomics.ref/type.verify.cpp (outdated)
libcxx/include/__atomic/atomic_ref.h (outdated)
libcxx/include/__atomic/atomic_ref.h
Member

@mordante mordante left a comment


Some proposed CI fixes.

Member

@ldionne ldionne left a comment


This basically LGTM once CI is all passing. Please don't hesitate to ping me on Discord if you need help with specific CI failures and how they should map to UNSUPPORTED and/or XFAIL. This is tricky to get right, and atomic is full of them.

Thanks for the patch, there was a lot of work needed to get it into its current state but I think this is really good!

dalg24 added 21 commits May 7, 2024 21:53
…bility-synchronization_library-missing for wait/notify
…couple places to see if it fixes the Armv7-M picolibc builds
@dalg24 dalg24 requested a review from huixie90 May 13, 2024 10:52
@ldionne ldionne merged commit 42ba740 into llvm:main May 21, 2024
53 checks passed
mub-at-arm pushed a commit to mub-at-arm/llvm-project that referenced this pull request May 22, 2024
Implement the std::atomic_ref class template by reusing atomic_base_impl.
Based on the work from https://reviews.llvm.org/D72240
VyacheslavLevytskyy pushed a commit to VyacheslavLevytskyy/llvm-project that referenced this pull request May 22, 2024
Implement the std::atomic_ref class template by reusing atomic_base_impl.
Based on the work from https://reviews.llvm.org/D72240
Labels
libc++ — libc++ C++ Standard Library. Not GNU libstdc++. Not libc++abi.
Projects
None yet
Development

Successfully merging this pull request may close these issues:

C++20 std::atomic_ref support missing
8 participants