[libcxx][Github] Bump Runners to Next Group #168122
Conversation
To start using the more recently built containers.
|
@llvm/pr-subscribers-libcxx
Author: Aiden Grossman (boomanaiden154)
Changes: To start using the more recently built containers.
Full diff: https://github.com/llvm/llvm-project/pull/168122.diff
1 file affected:
diff --git a/.github/workflows/libcxx-build-and-test.yaml b/.github/workflows/libcxx-build-and-test.yaml
index 7dad30f994fd1..6b80d4291c0ee 100644
--- a/.github/workflows/libcxx-build-and-test.yaml
+++ b/.github/workflows/libcxx-build-and-test.yaml
@@ -36,7 +36,7 @@ concurrency:
jobs:
stage1:
if: github.repository_owner == 'llvm'
- runs-on: llvm-premerge-libcxx-runners
+ runs-on: llvm-premerge-libcxx-next-runners
continue-on-error: false
strategy:
fail-fast: false
@@ -73,7 +73,7 @@ jobs:
**/crash_diagnostics/*
stage2:
if: github.repository_owner == 'llvm'
- runs-on: llvm-premerge-libcxx-runners
+ runs-on: llvm-premerge-libcxx-next-runners
needs: [ stage1 ]
continue-on-error: false
strategy:
@@ -148,19 +148,19 @@ jobs:
'generic-static',
'bootstrapping-build'
]
- machine: [ 'llvm-premerge-libcxx-runners' ]
+ machine: [ 'llvm-premerge-libcxx-next-runners' ]
include:
- config: 'generic-cxx26'
- machine: llvm-premerge-libcxx-runners
+ machine: llvm-premerge-libcxx-next-runners
- config: 'generic-asan'
- machine: llvm-premerge-libcxx-runners
+ machine: llvm-premerge-libcxx-next-runners
- config: 'generic-tsan'
- machine: llvm-premerge-libcxx-runners
+ machine: llvm-premerge-libcxx-next-runners
- config: 'generic-ubsan'
- machine: llvm-premerge-libcxx-runners
+ machine: llvm-premerge-libcxx-next-runners
# Use a larger machine for MSAN to avoid timeout and memory allocation issues.
- config: 'generic-msan'
- machine: llvm-premerge-libcxx-runners
+ machine: llvm-premerge-libcxx-next-runners
runs-on: ${{ matrix.machine }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
|
|
@llvm/pr-subscribers-github-workflow
Author: Aiden Grossman (boomanaiden154)
Changes: To start using the more recently built containers.
Full diff: https://github.com/llvm/llvm-project/pull/168122.diff
1 file affected (same diff as above).
|
|
I might have broken something by not using an absolute path for Ninja. I'll have a look.
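In case it helps, a minimal sketch of what pinning CMake to an absolute Ninja path could look like; the paths and build directory below are assumptions for illustration, not the actual CI configuration:
# Resolve ninja once up front and hand CMake the absolute path, so the
# generated build doesn't depend on whichever `ninja` is first in PATH later.
NINJA_BIN="$(command -v ninja)"
cmake -S runtimes -B "$PWD/build/generic-cxx26" -GNinja \
  -DCMAKE_MAKE_PROGRAM="${NINJA_BIN}" \
  -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi;libunwind"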
|
I think this is caused by some weird clang-tidy version mismatch happening because the new image is using a slightly more up-to-date incarnation of […].
|
Yeah, it's definitely the clang-tidy version or the clang version, or something like that. If you do […]. It's a bummer that I can't run […].
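A quick way to check for that kind of toolchain skew, as a hedged sketch (the versioned clang-tidy-22/clang-22 names match the apt.llvm.org packages used in the reproducer below; they are not something this PR pins):
# Print the exact toolchain the image provides; if the libcxx-tidy plugin was
# built against a different Clang, clang-tidy may fail to load it or the AST
# matchers may behave differently.
clang-tidy-22 --version
clang-22 --version
dpkg -l | grep -E '^ii +(clang|llvm|libclang)' || true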
Assuming you have debug info, why not? You might need to add ptrace capabilities (or just do […]).
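For the ptrace route, a rough sketch of what attaching a debugger inside a container could look like (the image name is a placeholder; the Docker flags are standard options, nothing this PR sets up):
# SYS_PTRACE plus a relaxed seccomp profile is the usual combination gdb/lldb
# need inside a container; --privileged is the blunter alternative.
docker run -it --rm \
  --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
  <image> bash
# Then, inside the container, something along the lines of:
# gdb --args clang-tidy-22 ctime.cppm --checks='-*,libcpp-header-exportable-declarations' ...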
|
Well, I think the LLVM releases we install from the Debian packages don't have debug info? I did try reproducing on an EC2 instance and that worked, but I started having issues with building the clang-tidy plugins. I also asked @philnik777, who uses Linux primarily and is always on a recent version of Clang/clang-tidy/etc., and he could see the failure as well. I believe that means something changed with the AST and the matcher doesn't match anymore. However, curiously, I did diff the ASTs in both Clang versions with […]. I'm going to be off for a week starting tomorrow, so I won't have time to finish this investigation right now. But in case someone wants to jump in, here's a dump of my reproducer (run as […]):
#!/usr/bin/env bash
# sudo apt-get update
# sudo apt-get install -y ninja-build cmake python3-psutil
# wget https://apt.llvm.org/llvm.sh -O /tmp/llvm.sh
# chmod +x /tmp/llvm.sh
# sudo /tmp/llvm.sh 22 all
# git clone https://github.com/llvm/llvm-project.git --depth=1
# cd llvm-project
cat <<EOF > ctime.cpp
#include <ctime>
EOF
cat <<EOF > ctime.cppm
module;
#include <ctime>
// Use __libcpp_module_<HEADER> to ensure that modules
// are not named as keywords or reserved names.
export module std:__libcpp_module_ctime;
#include "ctime.inc"
EOF
cat <<EOF > ctime.inc
export namespace std {
using std::clock_t;
using std::size_t;
using std::time_t;
using std::timespec;
using std::tm;
using std::asctime;
using std::clock;
using std::ctime;
using std::difftime;
using std::gmtime;
using std::localtime;
using std::mktime;
using std::strftime;
using std::time;
using std::timespec_get;
} // namespace std
export {
using ::clock_t;
using ::size_t;
using ::time_t;
using ::timespec;
using ::tm;
using ::asctime;
using ::clock;
using ::ctime;
using ::difftime;
using ::gmtime;
using ::localtime;
using ::mktime;
using ::strftime;
using ::time;
using ::timespec_get;
} // export
EOF
export CC=clang-22
export CXX=clang++-22
BUILD_DIR=$PWD/build/${1}
# This can be commented out after the first setup for faster iteration
rm -rf "${BUILD_DIR}"
cmake -S runtimes -B "${BUILD_DIR}" \
-GNinja -DCMAKE_MAKE_PROGRAM="$(which ninja)" \
-DCMAKE_BUILD_TYPE=Debug \
-DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi;libunwind" \
-DLIBCXX_CXX_ABI=libcxxabi -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -C "libcxx/cmake/caches/Generic-cxx26.cmake"
ninja -C ${BUILD_DIR} cxx-test-depends
# Run the test but also generate the necessary PCM files for the reproduction commands below.
# This only needs to be done once, after that the clang-tidy commands can be run for faster iteration.
./libcxx/utils/libcxx-lit ${BUILD_DIR} -sv libcxx/test/libcxx/module_std_compat.gen.py
STD_COMPAT_PCM=${BUILD_DIR}/libcxx/test/libcxx/module_std_compat.gen.py/Output/module_std_compat.sh.cpp.dir/std.compat.pcm
STD_PCM=${BUILD_DIR}/libcxx/test/libcxx/module_std_compat.gen.py/Output/module_std_compat.sh.cpp.dir/std.pcm
clang-tidy-22 ctime.cppm --checks='-*,libcpp-header-exportable-declarations' \
-config='{CheckOptions: [{key: libcpp-header-exportable-declarations.Filename, value: ctime}, {key: libcpp-header-exportable-declarations.FileType, value: CompatModulePartition}]}' \
--load=${BUILD_DIR}/libcxx/test/tools/clang_tidy_checks/libcxx-tidy.plugin \
-- --target=x86_64-unknown-linux-gnu -nostdinc++ -I ${BUILD_DIR}/libcxx/test-suite-install/include/c++/v1 \
-std=c++26 -D_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER \
-fmodule-file=std.compat=${STD_COMPAT_PCM} ${STD_COMPAT_PCM} \
-fmodule-file=std=${STD_PCM} ${STD_PCM} | sort > ctime.module
clang-tidy-22 ctime.cpp --checks='-*,libcpp-header-exportable-declarations' \
-config='{CheckOptions: [{key: libcpp-header-exportable-declarations.Filename, value: ctime}, {key: libcpp-header-exportable-declarations.FileType, value: CHeader}, {key: libcpp-header-exportable-declarations.ExtraHeader, value: "v1/__cstddef/size_t.h$" }]}' \
--load=${BUILD_DIR}/libcxx/test/tools/clang_tidy_checks/libcxx-tidy.plugin \
-- --target=x86_64-unknown-linux-gnu -nostdinc++ -I ${BUILD_DIR}/libcxx/test-suite-install/include/c++/v1 \
-std=c++26 -D_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER \
-fmodule-file=std.compat=${STD_COMPAT_PCM} ${STD_COMPAT_PCM} \
-fmodule-file=std=${STD_PCM} ${STD_PCM} | sort > ctime.include
echo "======= ctime.module ======="
cat ctime.module
echo "======= ctime.include ======="
cat ctime.include
echo "======= diff ======="
diff -u ctime.module ctime.include
if [[ $? -eq 0 ]]; then
echo "SUCCESS"
else
echo "FAILURE"
fi
Running that from pretty much any Linux machine or from our Docker image should do the trick. It's possible to see the difference in behavior: […]
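For completeness, a hedged sketch of how to drive the script above; the filename repro.sh is hypothetical since the snippet wasn't given one, and the argument only selects the build-directory name:
# Save the snippet above as repro.sh, then:
chmod +x repro.sh
./repro.sh next-image
# The script prints SUCCESS when the declarations exported by the std.compat
# module match those exported by the <ctime> header, and FAILURE plus a
# unified diff when they diverge.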