
Conversation

ldionne (Member) commented Sep 12, 2025

The atomic_wait benchmarks are great, but in fact they are a bit too great. Indeed, they are so thorough that they overload the system they're running on, meaning they can't be run on our CI infrastructure. The NxN benchmarks even crash my system on macOS.

This patch removes the benchmarks since we're trying to move towards more continuous benchmarking, which requires making a tradeoff between exhaustiveness and ability to run the benchmarks on a regular basis.

Instead, it would be better to rework the benchmarks to make them more lightweight, but I think it is reasonable to do that at a later time in order to unblock continuous performance monitoring ASAP.

ldionne requested a review from a team as a code owner on September 12, 2025 at 13:03
llvmbot added the libc++ label (libc++ C++ Standard Library. Not GNU libstdc++. Not libc++abi.) on Sep 12, 2025
llvmbot (Member) commented Sep 12, 2025

@llvm/pr-subscribers-libcxx

Author: Louis Dionne (ldionne)

Changes

Patch is 22.71 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/158289.diff

5 Files Affected:

  • (removed) libcxx/test/benchmarks/atomic_wait_1_waiter_1_notifier.bench.cpp (-74)
  • (removed) libcxx/test/benchmarks/atomic_wait_N_waiter_N_notifier.bench.cpp (-167)
  • (removed) libcxx/test/benchmarks/atomic_wait_helper.h (-92)
  • (removed) libcxx/test/benchmarks/atomic_wait_multi_waiter_1_notifier.bench.cpp (-167)
  • (removed) libcxx/test/benchmarks/atomic_wait_vs_mutex_lock.bench.cpp (-109)
diff --git a/libcxx/test/benchmarks/atomic_wait_1_waiter_1_notifier.bench.cpp b/libcxx/test/benchmarks/atomic_wait_1_waiter_1_notifier.bench.cpp
deleted file mode 100644
index c3d7e6511925d..0000000000000
--- a/libcxx/test/benchmarks/atomic_wait_1_waiter_1_notifier.bench.cpp
+++ /dev/null
@@ -1,74 +0,0 @@
-//===----------------------------------------------------------------------===//
-//
-// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-// See https://llvm.org/LICENSE.txt for license information.
-// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-//
-//===----------------------------------------------------------------------===//
-
-// UNSUPPORTED: c++03, c++11, c++14, c++17
-
-#include "atomic_wait_helper.h"
-
-#include <atomic>
-#include <array>
-#include <chrono>
-#include <cstdint>
-#include <numeric>
-#include <stop_token>
-#include <thread>
-
-#include "benchmark/benchmark.h"
-#include "make_test_thread.h"
-
-using namespace std::chrono_literals;
-
-template <class NotifyPolicy, class NumPrioTasks>
-void BM_1_atomic_1_waiter_1_notifier(benchmark::State& state) {
-  [[maybe_unused]] std::array<HighPrioTask, NumPrioTasks::value> tasks{};
-  std::atomic<std::uint64_t> a;
-  auto thread_func = [&](std::stop_token st) { NotifyPolicy::notify(a, st); };
-
-  std::uint64_t total_loop_test_param = state.range(0);
-
-  auto thread = support::make_test_jthread(thread_func);
-
-  for (auto _ : state) {
-    for (std::uint64_t i = 0; i < total_loop_test_param; ++i) {
-      auto old = a.load(std::memory_order_relaxed);
-      a.wait(old);
-    }
-  }
-}
-
-BENCHMARK(BM_1_atomic_1_waiter_1_notifier<KeepNotifying, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 16, 1 << 18);
-BENCHMARK(BM_1_atomic_1_waiter_1_notifier<NotifyEveryNus<50>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 12);
-BENCHMARK(BM_1_atomic_1_waiter_1_notifier<NotifyEveryNus<100>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 12);
-
-BENCHMARK(BM_1_atomic_1_waiter_1_notifier<KeepNotifying, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 16, 1 << 18);
-BENCHMARK(BM_1_atomic_1_waiter_1_notifier<NotifyEveryNus<50>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 12);
-BENCHMARK(BM_1_atomic_1_waiter_1_notifier<NotifyEveryNus<100>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 12);
-
-BENCHMARK(BM_1_atomic_1_waiter_1_notifier<KeepNotifying, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 4, 1 << 6);
-BENCHMARK(BM_1_atomic_1_waiter_1_notifier<NotifyEveryNus<50>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 3, 1 << 5);
-BENCHMARK(BM_1_atomic_1_waiter_1_notifier<NotifyEveryNus<100>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 3, 1 << 5);
-
-BENCHMARK_MAIN();
diff --git a/libcxx/test/benchmarks/atomic_wait_N_waiter_N_notifier.bench.cpp b/libcxx/test/benchmarks/atomic_wait_N_waiter_N_notifier.bench.cpp
deleted file mode 100644
index d9b9aa212f602..0000000000000
--- a/libcxx/test/benchmarks/atomic_wait_N_waiter_N_notifier.bench.cpp
+++ /dev/null
@@ -1,167 +0,0 @@
-//===----------------------------------------------------------------------===//
-//
-// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-// See https://llvm.org/LICENSE.txt for license information.
-// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-//
-//===----------------------------------------------------------------------===//
-
-// UNSUPPORTED: c++03, c++11, c++14, c++17
-
-#include "atomic_wait_helper.h"
-
-#include <atomic>
-#include <cstdint>
-#include <numeric>
-#include <stop_token>
-#include <pthread.h>
-#include <sched.h>
-#include <thread>
-#include <chrono>
-#include <array>
-
-#include "benchmark/benchmark.h"
-#include "make_test_thread.h"
-
-using namespace std::chrono_literals;
-
-template <class NotifyPolicy, class NumberOfAtomics, class NumPrioTasks>
-void BM_N_atomics_N_waiter_N_notifier(benchmark::State& state) {
-  [[maybe_unused]] std::array<HighPrioTask, NumPrioTasks::value> tasks{};
-  const std::uint64_t total_loop_test_param = state.range(0);
-  constexpr std::uint64_t num_atomics       = NumberOfAtomics::value;
-  std::vector<std::atomic<std::uint64_t>> atomics(num_atomics);
-
-  auto notify_func = [&](std::stop_token st, size_t idx) {
-    while (!st.stop_requested()) {
-      NotifyPolicy::notify(atomics[idx], st);
-    }
-  };
-
-  std::atomic<std::uint64_t> start_flag = 0;
-  std::atomic<std::uint64_t> done_count = 0;
-
-  auto wait_func = [&, total_loop_test_param](std::stop_token st, size_t idx) {
-    auto old_start = 0;
-    while (!st.stop_requested()) {
-      start_flag.wait(old_start);
-      old_start = start_flag.load();
-      for (std::uint64_t i = 0; i < total_loop_test_param; ++i) {
-        auto old = atomics[idx].load(std::memory_order_relaxed);
-        atomics[idx].wait(old);
-      }
-      done_count.fetch_add(1);
-    }
-  };
-
-  std::vector<std::jthread> notify_threads;
-  notify_threads.reserve(num_atomics);
-
-  std::vector<std::jthread> wait_threads;
-  wait_threads.reserve(num_atomics);
-
-  for (size_t i = 0; i < num_atomics; ++i) {
-    notify_threads.emplace_back(support::make_test_jthread(notify_func, i));
-  }
-
-  for (size_t i = 0; i < num_atomics; ++i) {
-    wait_threads.emplace_back(support::make_test_jthread(wait_func, i));
-  }
-
-  for (auto _ : state) {
-    done_count = 0;
-    start_flag.fetch_add(1);
-    start_flag.notify_all();
-    while (done_count < num_atomics) {
-      std::this_thread::yield();
-    }
-  }
-  for (auto& t : wait_threads) {
-    t.request_stop();
-  }
-  start_flag.fetch_add(1);
-  start_flag.notify_all();
-  for (auto& t : wait_threads) {
-    t.join();
-  }
-}
-
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<KeepNotifying, NumberOfAtomics<2>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 12, 1 << 14);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<KeepNotifying, NumberOfAtomics<3>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 12);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<KeepNotifying, NumberOfAtomics<5>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 12);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<KeepNotifying, NumberOfAtomics<7>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<50>, NumberOfAtomics<2>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 12);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<50>, NumberOfAtomics<3>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<50>, NumberOfAtomics<5>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<50>, NumberOfAtomics<7>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<100>, NumberOfAtomics<2>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<100>, NumberOfAtomics<3>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<100>, NumberOfAtomics<5>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 7, 1 << 9);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<100>, NumberOfAtomics<7>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<KeepNotifying, NumberOfAtomics<2>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 7, 1 << 9);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<KeepNotifying, NumberOfAtomics<3>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 7, 1 << 9);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<KeepNotifying, NumberOfAtomics<5>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<KeepNotifying, NumberOfAtomics<7>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 4, 1 << 6);
-
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<50>, NumberOfAtomics<2>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 7, 1 << 9);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<50>, NumberOfAtomics<3>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 7, 1 << 9);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<50>, NumberOfAtomics<5>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 5, 1 << 7);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<50>, NumberOfAtomics<7>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 3, 1 << 5);
-
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<100>, NumberOfAtomics<2>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<100>, NumberOfAtomics<3>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<100>, NumberOfAtomics<5>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 5, 1 << 7);
-BENCHMARK(BM_N_atomics_N_waiter_N_notifier<NotifyEveryNus<100>, NumberOfAtomics<7>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 3, 1 << 5);
-
-BENCHMARK_MAIN();
diff --git a/libcxx/test/benchmarks/atomic_wait_helper.h b/libcxx/test/benchmarks/atomic_wait_helper.h
deleted file mode 100644
index cfdacf9e01688..0000000000000
--- a/libcxx/test/benchmarks/atomic_wait_helper.h
+++ /dev/null
@@ -1,92 +0,0 @@
-//===----------------------------------------------------------------------===//
-//
-// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-// See https://llvm.org/LICENSE.txt for license information.
-// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef TEST_BENCHMARK_ATOMIC_WAIT_HELPER_H
-#define TEST_BENCHMARK_ATOMIC_WAIT_HELPER_H
-
-#include <atomic>
-#include <chrono>
-#include <exception>
-#include <stop_token>
-#include <thread>
-
-struct HighPrioTask {
-  sched_param param;
-  pthread_attr_t attr_t;
-  pthread_t thread;
-  std::atomic_bool stopped{false};
-
-  HighPrioTask(const HighPrioTask&) = delete;
-
-  HighPrioTask() {
-    pthread_attr_init(&attr_t);
-    pthread_attr_setschedpolicy(&attr_t, SCHED_FIFO);
-    param.sched_priority = sched_get_priority_max(SCHED_FIFO);
-    pthread_attr_setschedparam(&attr_t, &param);
-    pthread_attr_setinheritsched(&attr_t, PTHREAD_EXPLICIT_SCHED);
-
-    auto thread_fun = [](void* arg) -> void* {
-      auto* stop = reinterpret_cast<std::atomic_bool*>(arg);
-      while (!stop->load(std::memory_order_relaxed)) {
-        // spin
-      }
-      return nullptr;
-    };
-
-    if (pthread_create(&thread, &attr_t, thread_fun, &stopped) != 0) {
-      throw std::runtime_error("failed to create thread");
-    }
-  }
-
-  ~HighPrioTask() {
-    stopped = true;
-    pthread_attr_destroy(&attr_t);
-    pthread_join(thread, nullptr);
-  }
-};
-
-template <std::size_t N>
-struct NumHighPrioTasks {
-  static constexpr auto value = N;
-};
-
-template <std::size_t N>
-struct NumWaitingThreads {
-  static constexpr auto value = N;
-};
-
-template <std::size_t N>
-struct NumberOfAtomics {
-  static constexpr auto value = N;
-};
-
-struct KeepNotifying {
-  template <class Atomic>
-  static void notify(Atomic& a, std::stop_token st) {
-    while (!st.stop_requested()) {
-      a.fetch_add(1, std::memory_order_relaxed);
-      a.notify_all();
-    }
-  }
-};
-
-template <std::size_t N>
-struct NotifyEveryNus {
-  template <class Atomic>
-  static void notify(Atomic& a, std::stop_token st) {
-    while (!st.stop_requested()) {
-      auto start = std::chrono::system_clock::now();
-      a.fetch_add(1, std::memory_order_relaxed);
-      a.notify_all();
-      while (std::chrono::system_clock::now() - start < std::chrono::microseconds{N}) {
-      }
-    }
-  }
-};
-
-#endif // TEST_BENCHMARK_ATOMIC_WAIT_HELPER_H
\ No newline at end of file
diff --git a/libcxx/test/benchmarks/atomic_wait_multi_waiter_1_notifier.bench.cpp b/libcxx/test/benchmarks/atomic_wait_multi_waiter_1_notifier.bench.cpp
deleted file mode 100644
index a14a6a2ad9c98..0000000000000
--- a/libcxx/test/benchmarks/atomic_wait_multi_waiter_1_notifier.bench.cpp
+++ /dev/null
@@ -1,167 +0,0 @@
-//===----------------------------------------------------------------------===//
-//
-// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-// See https://llvm.org/LICENSE.txt for license information.
-// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-//
-//===----------------------------------------------------------------------===//
-
-// UNSUPPORTED: c++03, c++11, c++14, c++17
-
-#include "atomic_wait_helper.h"
-
-#include <atomic>
-#include <cstdint>
-#include <numeric>
-#include <stop_token>
-#include <thread>
-#include <chrono>
-#include <array>
-
-#include "benchmark/benchmark.h"
-#include "make_test_thread.h"
-
-using namespace std::chrono_literals;
-
-template <class NotifyPolicy, class NumWaitingThreads, class NumPrioTasks>
-void BM_1_atomic_multi_waiter_1_notifier(benchmark::State& state) {
-  [[maybe_unused]] std::array<HighPrioTask, NumPrioTasks::value> tasks{};
-
-  std::atomic<std::uint64_t> a;
-  auto notify_func = [&](std::stop_token st) { NotifyPolicy::notify(a, st); };
-
-  std::uint64_t total_loop_test_param = state.range(0);
-  constexpr auto num_waiting_threads  = NumWaitingThreads::value;
-  std::vector<std::jthread> wait_threads;
-  wait_threads.reserve(num_waiting_threads);
-
-  auto notify_thread = support::make_test_jthread(notify_func);
-
-  std::atomic<std::uint64_t> start_flag = 0;
-  std::atomic<std::uint64_t> done_count = 0;
-  auto wait_func                        = [&a, &start_flag, &done_count, total_loop_test_param](std::stop_token st) {
-    auto old_start = 0;
-    while (!st.stop_requested()) {
-      start_flag.wait(old_start);
-      old_start = start_flag.load();
-      for (std::uint64_t i = 0; i < total_loop_test_param; ++i) {
-        auto old = a.load(std::memory_order_relaxed);
-        a.wait(old);
-      }
-      done_count.fetch_add(1);
-    }
-  };
-
-  for (size_t i = 0; i < num_waiting_threads; ++i) {
-    wait_threads.emplace_back(support::make_test_jthread(wait_func));
-  }
-
-  for (auto _ : state) {
-    done_count = 0;
-    start_flag.fetch_add(1);
-    start_flag.notify_all();
-    while (done_count < num_waiting_threads) {
-      std::this_thread::yield();
-    }
-  }
-  for (auto& t : wait_threads) {
-    t.request_stop();
-  }
-  start_flag.fetch_add(1);
-  start_flag.notify_all();
-  for (auto& t : wait_threads) {
-    t.join();
-  }
-}
-
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<KeepNotifying, NumWaitingThreads<3>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 14, 1 << 16);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<KeepNotifying, NumWaitingThreads<7>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 12, 1 << 14);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<KeepNotifying, NumWaitingThreads<15>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 12);
-
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<50>, NumWaitingThreads<3>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 12);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<50>, NumWaitingThreads<7>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<50>, NumWaitingThreads<15>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<100>, NumWaitingThreads<3>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<100>, NumWaitingThreads<7>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<100>, NumWaitingThreads<15>, NumHighPrioTasks<0>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 4, 1 << 6);
-
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<KeepNotifying, NumWaitingThreads<3>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<KeepNotifying, NumWaitingThreads<7>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<KeepNotifying, NumWaitingThreads<15>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 4, 1 << 6);
-
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<50>, NumWaitingThreads<3>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<50>, NumWaitingThreads<7>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<50>, NumWaitingThreads<15>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 4, 1 << 6);
-
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<100>, NumWaitingThreads<3>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 8, 1 << 10);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<100>, NumWaitingThreads<7>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 6, 1 << 8);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<100>, NumWaitingThreads<15>, NumHighPrioTasks<4>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 4, 1 << 6);
-
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<KeepNotifying, NumWaitingThreads<3>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 4, 1 << 6);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<KeepNotifying, NumWaitingThreads<7>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 3, 1 << 5);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<KeepNotifying, NumWaitingThreads<15>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 2, 1 << 4);
-
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<50>, NumWaitingThreads<3>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 3, 1 << 5);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<50>, NumWaitingThreads<7>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 2, 1 << 4);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<50>, NumWaitingThreads<15>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 1, 1 << 3);
-
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<100>, NumWaitingThreads<3>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 3, 1 << 5);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<100>, NumWaitingThreads<7>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 2, 1 << 4);
-BENCHMARK(BM_1_atomic_multi_waiter_1_notifier<NotifyEveryNus<100>, NumWaitingThreads<15>, NumHighPrioTasks<7>>)
-    ->RangeMultiplier(2)
-    ->Range(1 << 1, 1 << 3);
-
-BENCHMARK_MAIN();
diff --git a/libcxx/test/benchmarks/atomic_wait_vs_mutex_lock.bench.cpp b/libcxx/test/benchmarks/atomic_wait_vs_mutex_lock.bench.cpp
deleted file mode 100644
index a554c721df017..0000000000000
--- a/libcxx/test/benchmarks/atomic_wait_vs_mutex_lock.bench.cpp
+++ /dev/null
@@ -1,109 +0,0 @@
-//===----------------------------------------------------------------------===//
-//
-// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-// See https://llvm.org/LICENSE.txt for license information.
-// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-//
-//===----------------------------------------------------------------------===//
-
-// UNSUPPORTED...
[truncated]

ldionne (Member, Author) commented Sep 12, 2025

CC @huixie90

To be clear, removing these benchmarks is just my straw man proposal to unblock more continuous performance benchmarking. If you think you'd be able to tweak the benchmarks to make them lighter while retaining their coverage, we should do that instead.
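
For illustration, here is a minimal sketch of the kind of lighter-weight benchmark I have in mind: single-threaded, no high-priority spinner tasks, only exercising the uncontended fast path of atomic::wait. It obviously doesn't retain the contention coverage of the current benchmarks, so treat it as a starting point rather than a replacement:

#include <atomic>
#include <cstdint>

#include "benchmark/benchmark.h"

// Lightweight sketch: the value is bumped before each wait, so wait() returns
// immediately and we only measure the uncontended fast path. No extra threads
// are spawned, so this stays cheap enough to run on CI.
static void BM_atomic_wait_fast_path(benchmark::State& state) {
  std::atomic<std::uint64_t> a{0};
  for (auto _ : state) {
    std::uint64_t old = a.load(std::memory_order_relaxed);
    a.fetch_add(1, std::memory_order_relaxed);
    a.wait(old); // the value already differs from old, so this never blocks
  }
}
BENCHMARK(BM_atomic_wait_fast_path);

BENCHMARK_MAIN();

Alternatively, the existing parameterized benchmarks could be re-registered with much smaller Range() arguments and fewer NumHighPrioTasks / NumWaitingThreads instantiations, which would keep their structure while cutting the runtime drastically.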

huixie90 (Member) commented

LGTM. These benchmarks were added because I was actively developing atomic wait; we don't have to run them in general. I wonder if there is a way to keep them without running them in the usual CI, in case we need to work on this again.

ldionne (Member, Author) commented Sep 15, 2025

The problem I see is that if we don't run them on a regular basis, the code will likely rot. It feels wrong to remove them because they can be really useful, but at the same time it feels wrong to keep something that we can't run on a regular basis.

Maybe an option would be to only run these benchmarks in dry-run mode; that way we ensure they don't rot too much (we at least make sure they compile).
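
To sketch what that could look like: these files already carry lit directives in their headers (e.g. the // UNSUPPORTED: c++03, ... lines), so the dry-run gating could be expressed the same way. The feature name below is hypothetical, purely to show the shape of the idea:

// Hypothetical header for an atomic_wait_*.bench.cpp file: the benchmark stays
// in the tree and keeps compiling, but is only scheduled when the (made-up)
// "benchmark-dry-run" lit feature is defined by the test harness.
//
// UNSUPPORTED: c++03, c++11, c++14, c++17
// REQUIRES: benchmark-dry-run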

ldionne added a commit to ldionne/llvm-project that referenced this pull request Sep 15, 2025
The atomic_wait benchmarks are great, but they tend to overload the
system they're running on. For that reason, we can't run them on our
CI infrastructure on a regular basis.

Instead of removing them, make them unsupported outside of dry-running,
which allows keeping the benchmarks around and ensuring they don't rot,
but doesn't run them along with the other benchmarks. If we need to
investigate atomic_wait performance, it's trivial to mark the benchmark
as supported and run it for local investigations.

This is an alternative to llvm#158289.
ldionne (Member, Author) commented Sep 15, 2025

Actually, I think that's a superior approach and it's easy to implement. Closing in favour of #158631.

ldionne closed this Sep 15, 2025
ldionne deleted the review/remove-atomic-wait-benchmarks branch on September 15, 2025 at 12:57
ldionne added a commit that referenced this pull request Sep 19, 2025

llvm-sync bot pushed a commit to arm/arm-toolchain that referenced this pull request Sep 19, 2025
… mode (#158631)

SeongjaeP pushed a commit to SeongjaeP/llvm-project that referenced this pull request Sep 23, 2025
…#158631)

YixingZhang007 pushed a commit to YixingZhang007/llvm-project that referenced this pull request Sep 27, 2025
…#158631)