
Conversation

@boomanaiden154
Contributor

To start using the more recently built containers.
@llvmbot added the libc++ (C++ Standard Library. Not GNU libstdc++. Not libc++abi.) and github:workflow labels on Nov 14, 2025
@llvmbot
Member

llvmbot commented Nov 14, 2025

@llvm/pr-subscribers-libcxx

Author: Aiden Grossman (boomanaiden154)

Changes

To start using the more recently built containers.


Full diff: https://github.com/llvm/llvm-project/pull/168122.diff

1 file affected:

  • (modified) .github/workflows/libcxx-build-and-test.yaml (+8-8)
diff --git a/.github/workflows/libcxx-build-and-test.yaml b/.github/workflows/libcxx-build-and-test.yaml
index 7dad30f994fd1..6b80d4291c0ee 100644
--- a/.github/workflows/libcxx-build-and-test.yaml
+++ b/.github/workflows/libcxx-build-and-test.yaml
@@ -36,7 +36,7 @@ concurrency:
 jobs:
   stage1:
     if: github.repository_owner == 'llvm'
-    runs-on: llvm-premerge-libcxx-runners
+    runs-on: llvm-premerge-libcxx-next-runners
     continue-on-error: false
     strategy:
       fail-fast: false
@@ -73,7 +73,7 @@ jobs:
             **/crash_diagnostics/*
   stage2:
     if: github.repository_owner == 'llvm'
-    runs-on: llvm-premerge-libcxx-runners
+    runs-on: llvm-premerge-libcxx-next-runners
     needs: [ stage1 ]
     continue-on-error: false
     strategy:
@@ -148,19 +148,19 @@ jobs:
           'generic-static',
           'bootstrapping-build'
         ]
-        machine: [ 'llvm-premerge-libcxx-runners' ]
+        machine: [ 'llvm-premerge-libcxx-next-runners' ]
         include:
         - config: 'generic-cxx26'
-          machine: llvm-premerge-libcxx-runners
+          machine: llvm-premerge-libcxx-next-runners
         - config: 'generic-asan'
-          machine: llvm-premerge-libcxx-runners
+          machine: llvm-premerge-libcxx-next-runners
         - config: 'generic-tsan'
-          machine: llvm-premerge-libcxx-runners
+          machine: llvm-premerge-libcxx-next-runners
         - config: 'generic-ubsan'
-          machine: llvm-premerge-libcxx-runners
+          machine: llvm-premerge-libcxx-next-runners
         # Use a larger machine for MSAN to avoid timeout and memory allocation issues.
         - config: 'generic-msan'
-          machine: llvm-premerge-libcxx-runners
+          machine: llvm-premerge-libcxx-next-runners
     runs-on: ${{ matrix.machine }}
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0

@llvmbot
Member

llvmbot commented Nov 14, 2025

@llvm/pr-subscribers-github-workflow

@ldionne
Member

ldionne commented Nov 17, 2025

I might have broken something by not using an absolute path for Ninja. I'll have a look.

@ldionne
Member

ldionne commented Nov 18, 2025

I think this is caused by some weird clang-tidy version mismatch happening because the new image is using a slightly more up-to-date incarnation of clang-22. Still looking into it, but I am able to reproduce it in a Docker image.

@ldionne
Member

ldionne commented Nov 18, 2025

Yeah, it's definitely the clang-tidy version or the clang version, or something like that. If you do sudo apt-get upgrade llvm-22 in the old Docker image, the same modules test starts failing.
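
For anyone picking this up, a quick way to confirm what actually differs between the two images is to compare the installed toolchain versions (standard apt/LLVM tooling, nothing specific to this PR):

# run in both the old and the new image, then compare the output
clang-22 --version
clang-tidy-22 --version
apt-cache policy clang-22 clang-tidy-22 llvm-22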

It's a bummer that I can't run clang-tidy under lldb in my Docker container.

@boomanaiden154
Contributor Author

It's a bummer that I can't run clang-tidy under lldb in my Docker container.

Assuming you have debug info, why not? You might need to add ptrace capabilities (or just launch the container with --privileged as the big hammer), but it should work.
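
A minimal sketch of what that can look like, assuming lldb-22 is available in the image (the Docker flags below are the usual ones needed for ptrace-based debuggers inside a container):

# SYS_PTRACE plus a relaxed seccomp profile is usually enough; --privileged is the big hammer
docker run -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
    ghcr.io/llvm/libcxx-linux-builder:b80656b35af57619923b5bc5e453f42a19ed6c83
# inside the container, wrap the failing invocation:
# lldb-22 -- clang-tidy-22 ctime.cppm --checks='-*,libcpp-header-exportable-declarations' ...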

@ldionne
Member

ldionne commented Nov 20, 2025

Well, I think the LLVM releases we install from the Debian packages don't have debug info? I did try reproducing on an EC2 instance and that worked, but I started having issues with building the clang-tidy plugins. I also asked @philnik777, who uses Linux primarily and is always on a recent version of Clang/clang-tidy/etc., and he could see the failure as well. I believe that means something changed in the AST and the matcher doesn't match anymore. However, curiously, I did diff the ASTs from both Clang versions with -Xclang -ast-dump and both were the same (modulo pointer addresses).
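
For reference, a sketch of that AST comparison using the reproducer's build tree from the script below (the include path and file names match that setup; the sed filter only strips the pointer addresses that differ on every run):

# in each image, dump the AST for the reproducer TU
clang++-22 --target=x86_64-unknown-linux-gnu -nostdinc++ \
    -I "${BUILD_DIR}/libcxx/test-suite-install/include/c++/v1" \
    -std=c++26 -Xclang -ast-dump -fsyntax-only ctime.cpp > ast-old.txt  # ast-new.txt in the other image
# strip pointer addresses, then diff the two dumps
diff <(sed -E 's/0x[0-9a-f]+/0xN/g' ast-old.txt) <(sed -E 's/0x[0-9a-f]+/0xN/g' ast-new.txt)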

I'm going to be off for a week starting tomorrow, so I won't have time to finish this investigation right now. But in case someone wants to jump in, here's a dump of my reproducer (run as ./repro.sh <name>, for example ./repro.sh old and ./repro.sh new from two Docker images to show the two behaviours):

#!/usr/bin/env bash

# sudo apt-get update
# sudo apt-get install -y ninja-build cmake python3-psutil
# wget https://apt.llvm.org/llvm.sh -O /tmp/llvm.sh
# chmod +x /tmp/llvm.sh
# sudo /tmp/llvm.sh 22 all
# git clone https://github.com/llvm/llvm-project.git --depth=1
# cd llvm-project

cat <<EOF > ctime.cpp
#include <ctime>
EOF

cat <<EOF > ctime.cppm
module;
#include <ctime>

// Use __libcpp_module_<HEADER> to ensure that modules
// are not named as keywords or reserved names.
export module std:__libcpp_module_ctime;
#include "ctime.inc"
EOF

cat <<EOF > ctime.inc
export namespace std {
  using std::clock_t;
  using std::size_t;
  using std::time_t;

  using std::timespec;
  using std::tm;

  using std::asctime;
  using std::clock;
  using std::ctime;
  using std::difftime;
  using std::gmtime;
  using std::localtime;
  using std::mktime;
  using std::strftime;
  using std::time;
  using std::timespec_get;
} // namespace std

export {
  using ::clock_t;
  using ::size_t;
  using ::time_t;

  using ::timespec;
  using ::tm;

  using ::asctime;
  using ::clock;
  using ::ctime;
  using ::difftime;
  using ::gmtime;
  using ::localtime;
  using ::mktime;
  using ::strftime;
  using ::time;
  using ::timespec_get;
} // export
EOF

export CC=clang-22
export CXX=clang++-22
BUILD_DIR=$PWD/build/${1}

# This can be commented out after the first setup for faster iteration
rm -rf "${BUILD_DIR}"
cmake -S runtimes -B "${BUILD_DIR}" \
    -GNinja -DCMAKE_MAKE_PROGRAM="$(which ninja)" \
    -DCMAKE_BUILD_TYPE=Debug \
    -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi;libunwind" \
    -DLIBCXX_CXX_ABI=libcxxabi -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -C "libcxx/cmake/caches/Generic-cxx26.cmake"

ninja -C ${BUILD_DIR} cxx-test-depends

# Run the test but also generate the necessary PCM files for the reproduction commands below.
# This only needs to be done once, after that the clang-tidy commands can be run for faster iteration.
./libcxx/utils/libcxx-lit ${BUILD_DIR} -sv libcxx/test/libcxx/module_std_compat.gen.py
STD_COMPAT_PCM=${BUILD_DIR}/libcxx/test/libcxx/module_std_compat.gen.py/Output/module_std_compat.sh.cpp.dir/std.compat.pcm
STD_PCM=${BUILD_DIR}/libcxx/test/libcxx/module_std_compat.gen.py/Output/module_std_compat.sh.cpp.dir/std.pcm

clang-tidy-22 ctime.cppm --checks='-*,libcpp-header-exportable-declarations' \
                         -config='{CheckOptions: [{key: libcpp-header-exportable-declarations.Filename, value: ctime}, {key: libcpp-header-exportable-declarations.FileType, value: CompatModulePartition}]}' \
                         --load=${BUILD_DIR}/libcxx/test/tools/clang_tidy_checks/libcxx-tidy.plugin \
                         -- --target=x86_64-unknown-linux-gnu -nostdinc++ -I ${BUILD_DIR}/libcxx/test-suite-install/include/c++/v1 \
                         -std=c++26 -D_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER \
                         -fmodule-file=std.compat=${STD_COMPAT_PCM} ${STD_COMPAT_PCM} \
                         -fmodule-file=std=${STD_PCM} ${STD_PCM} | sort > ctime.module

clang-tidy-22 ctime.cpp --checks='-*,libcpp-header-exportable-declarations' \
                        -config='{CheckOptions: [{key: libcpp-header-exportable-declarations.Filename, value: ctime}, {key: libcpp-header-exportable-declarations.FileType, value: CHeader}, {key: libcpp-header-exportable-declarations.ExtraHeader, value: "v1/__cstddef/size_t.h$" }]}' \
                        --load=${BUILD_DIR}/libcxx/test/tools/clang_tidy_checks/libcxx-tidy.plugin \
                        -- --target=x86_64-unknown-linux-gnu -nostdinc++ -I ${BUILD_DIR}/libcxx/test-suite-install/include/c++/v1 \
                        -std=c++26 -D_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER \
                        -fmodule-file=std.compat=${STD_COMPAT_PCM} ${STD_COMPAT_PCM} \
                        -fmodule-file=std=${STD_PCM} ${STD_PCM} | sort > ctime.include

echo "======= ctime.module ======="
cat ctime.module

echo "======= ctime.include ======="
cat ctime.include

echo "======= diff ======="
diff -u ctime.module ctime.include

if [[ $? -eq 0 ]]; then
    echo "SUCCESS"
else
    echo "FAILURE"
fi

Running that from pretty much any Linux machine or from our Docker images should do the trick. To see the difference in behaviour:

docker run -it ghcr.io/llvm/libcxx-linux-builder:d6b22a347f813cf4a9832627323a43074f57bbcf # old image, should pass the test
docker run -it ghcr.io/llvm/libcxx-linux-builder:b80656b35af57619923b5bc5e453f42a19ed6c83 # new image, should fail the test
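
For completeness, the rough end-to-end sequence inside either container, based on the comments at the top of the script:

# the commented-out setup at the top of repro.sh has to run once first
# (apt packages, llvm.sh 22, shallow clone of llvm-project), then from the llvm-project checkout:
./repro.sh old    # in the old image, expect SUCCESS
./repro.sh new    # in the new image, expect FAILURE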

