
Default compiler_check setting does not seem optimal for Github Actions #94

cmelchior opened this issue Aug 17, 2022 · 14 comments

@cmelchior

This is the default config for ccache:

Run ccache --show-config
(default) base_dir = 
(/home/runner/.ccache/ccache.conf) cache_dir = /home/runner/work/realm-kotlin/realm-kotlin/.ccache
(default) cache_dir_levels = 2
(default) compiler = 
(default) compiler_check = mtime
(/home/runner/.ccache/ccache.conf) compression = true
(default) compression_level = 6
(default) cpp_extension = 
(default) debug = false
(default) depend_mode = false
(default) direct_mode = true
(default) disable = false
(default) extra_files_to_hash = 
(default) hard_link = false
(default) hash_dir = true
(default) ignore_headers_in_manifest = 
(default) keep_comments_cpp = false
(default) limit_multiple = 0.8
(default) log_file = 
(default) max_files = 0
(/home/runner/.ccache/ccache.conf) max_size = 500.0M
(default) path = 
(default) pch_external_checksum = false
(default) prefix_command = 
(default) prefix_command_cpp = 
(default) read_only = false
(default) read_only_direct = false
(default) recache = false
(default) run_second_cpp = true
(default) sloppiness = 
(default) stats = true
(default) temporary_dir = 
(default) umask = 

But using compiler_check = mtime is suboptimal for CI as the timestamp changes on every checkout. Shouldn't the default setting be compiler_check = content?
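For reference, a minimal sketch of overriding the setting (newer ccache versions also accept `ccache --set-config compiler_check=content`; the path below is a demo stand-in for the runner's real ccache.conf):

```shell
# Sketch: switch ccache from mtime-based to content-based compiler matching.
# /tmp/ccache-demo.conf stands in for the runner's actual ccache.conf path.
CONF=/tmp/ccache-demo.conf
printf 'compiler_check = content\n' > "$CONF"
cat "$CONF"
```

On GitHub Actions, the same line can be appended from a workflow step before the build runs.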

@jonashaag
Contributor

If you’re using Git then I think you are getting “constant” mtimes because Git keeps track of them.

@cmelchior
Author

We are using Git and actions/checkout@v3, but I was seeing a cache hit rate of 0%, it went up to 100% (as expected) after adding this change.

@jonashaag
Contributor

Interesting. Would you mind creating a minimal reproducing example that we could use as a test for that fix?

@cmelchior
Author

My experiments are Open Source. The branch is here: https://github.com/realm/realm-kotlin/pull/881/files#diff-15c806aa509538190832852f439e9921a23bec2da81f95ed0e4bf13c14e5b160R52. You should be able to clone the repo and just run it, but the full build is currently ~35 minutes.

If that isn't working for you I can try to cut it down to something more manageable?

@jonashaag
Contributor

That would be great! Would be perfect if you could boil it down to using a single file in this repo (or if that doesn’t work, a demo repo)

@hendrikmuhs
Owner

hendrikmuhs commented Aug 18, 2022 via email

@hendrikmuhs
Owner

@cmelchior

I think this is a misunderstanding of ccache. The compiler_check setting controls how the compiler binary is matched. I had a quick look at your PR; you seem to use the out-of-the-box Ubuntu compilers. Using content means that ccache computes a checksum of the compiler binary on every invocation, which slows down caching.

I don't know why you didn't see cache hits; the reason must be something else. It would be surprising, as I haven't received similar feedback. Note that caches of PR branches are isolated from, e.g., the main branch, so I am not sure how well caching works in a PR branch; those caches might get evicted faster than for main.

If you want to follow up on this, I need something smaller and isolated, e.g. you could create a small project.

@cmelchior
Author

Thank you for the clarification. I'll see if I can reproduce it on a much simpler project. For what it is worth, I got the hint from the React Native docs here: https://reactnative.dev/docs/build-speed#using-this-approach-on-a-ci

@jonashaag
Contributor

Seems like a mistake on that page. Care to open a bug report there?

@cmelchior
Author

I tried to reproduce it in a simple project here: cmelchior/ccache-action-bug#1 but I cannot reproduce the behavior there and the action seems to work as you describe.

So right now my guess is that it is a problem with how the Android NDK build interacts with ccache, but I need to investigate further.

You can probably close this issue, I will open a new issue if I discover the root cause.

Since React Native also uses the Android NDK, maybe they are seeing the same thing I am, so I will wait a little before opening an issue with them, as their fix actually seems to work even if it is suboptimal.

@whoozle

whoozle commented Dec 29, 2022

Just hit the same issue on a private project (pure GitHub Actions, no Docker, matrix.os ubuntu-latest).

Ccache computes the compiler hash from the compiler's mtime and size; with compiler_check=content, it reads the compiler binary instead. This slows the cache down somewhat.

I think you could provide an option to switch compiler_check to content.

@mathiaswagner

I hit a similar issue: I was seeing 0% direct hits, and after changing to content it went to 100% direct hits. The overhead seems negligible, as compile times for individual files are long enough that hashing the compiler binary is not an issue. I'm not sure why mtime does not work for the default Ubuntu 22.04 gcc 12, but setting it to content fixes it. (Weirdly, I had better hit rates with clang, though not direct hits, just preprocessed ones.)
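As a toy illustration of the difference (not ccache itself, just the underlying file checks it relies on): a fresh copy of a binary gets a new mtime while its content hash is unchanged, which mirrors why a freshly installed or copied toolchain defeats compiler_check=mtime but not compiler_check=content. Paths below are demo stand-ins:

```shell
# Copy a binary twice, one second apart: mtimes differ, content hash does not.
cp /bin/ls /tmp/cc-a
sleep 1
cp /tmp/cc-a /tmp/cc-b               # fresh copy -> new mtime
stat -c '%Y' /tmp/cc-a /tmp/cc-b     # second copy reports a later mtime
sha256sum /tmp/cc-a /tmp/cc-b        # identical hashes
```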

@hendrikmuhs
Owner

@mathiaswagner do you have a link or is this a private repo?

default Ubuntu 22.04 gcc 12

The default gcc in 22.04 is gcc 11. How do you install gcc 12? Do you "apt install" it as part of your build?

It is strange that the mtime is different for every run, to further debug this, can you e.g. run stat /usr/bin/gcc-12 and share the results?

I just installed it; the mtime is set to the package creation date, not the installation date:

stat /usr/bin/gcc-12 
  File: /usr/bin/gcc-12 -> x86_64-linux-gnu-gcc-12
  Size: 23              Blocks: 0          IO Block: 4096   symbolic link
Device: fd00h/64768d    Inode: 49042184    Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2023-07-19 08:13:12.000000000 +0200
Modify: 2022-05-13 13:11:55.000000000 +0200
Change: 2023-07-19 08:13:19.045350880 +0200
 Birth: 2023-07-19 08:13:19.013350315 +0200

copybara-service bot pushed a commit to google/XNNPACK that referenced this issue Aug 13, 2024
Currently, `ccache` is not getting any hits, likely because the `mtime` of the compiler binaries changes every time `sttld/setup-ndk` is called (see discussion [here](hendrikmuhs/ccache-action#94)).
With this change, the NDK itself is cached, which hopefully maintains the `mtime` across builds.

PiperOrigin-RevId: 662464572
copybara-service bot pushed a commit to google/XNNPACK that referenced this issue Aug 13, 2024
Currently, `ccache` is not getting any hits because the `mtime` of the compiler binaries changes every time `sttld/setup-ndk` is called (it copies the binaries to the `tool-cache`, which sets their `mtime`s to the current time, see discussion [here](hendrikmuhs/ccache-action#94)).

PiperOrigin-RevId: 662464572
copybara-service bot pushed a commit to google/XNNPACK that referenced this issue Aug 13, 2024
…d-*` workflow builds.

Currently, `ccache` is not getting any hits because the `mtime` of the compiler binaries changes every time `sttld/setup-ndk` is called (it copies the binaries to the `tool-cache`, which sets their `mtime`s to the current time, see discussion [here](hendrikmuhs/ccache-action#94)).

PiperOrigin-RevId: 662517571
@gonnet

gonnet commented Aug 13, 2024

Hi all,

I'm seeing the same issue in our builds here.

I'm using nttld/setup-ndk@v1 to set up the Android NDK, and hendrikmuhs/ccache-action@v1.2 to set up ccache, and explicitly setting CMAKE_C_COMPILER_LAUNCHER and CMAKE_CXX_COMPILER_LAUNCHER in my CMakeLists.txt file.

I have other non-NDK builds in the same workflow that use hendrikmuhs/ccache-action@v1.2 successfully.

What I have noticed is that in the logs, cmake tells me:

 -- Android: Targeting API '21' with architecture 'arm64', ABI 'arm64-v8a', and processor 'aarch64'
 -- Android: Selected unified Clang toolchain
 -- The C compiler identification is Clang 12.0.8
 -- The CXX compiler identification is Clang 12.0.8
 -- The ASM compiler identification is Clang with GNU-like command-line
 -- Found assembler: /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang
 -- Detecting C compiler ABI info
 -- Detecting C compiler ABI info - done
 -- Check for working C compiler: /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang - skipped
 -- Detecting C compile features
 -- Detecting C compile features - done
 -- Detecting CXX compiler ABI info
 -- Detecting CXX compiler ABI info - done
 -- Check for working CXX compiler: /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++ - skipped
 -- Detecting CXX compile features
 -- Detecting CXX compile features - done
 -- Using ccache: /usr/bin/ccache
 -- Building for XNNPACK_TARGET_PROCESSOR: arm64

And if I add the following to my workflow:

run: |
  ls -l /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang*
  file /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang*

I can see that the compiler's mtime is now:

lrwxrwxrwx 1 runner runner        8 Aug 13 12:47 /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -> clang-12
lrwxrwxrwx 1 runner runner        5 Aug 13 12:47 /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++ -> clang
-rwxr-xr-x 1 runner runner 91449576 Aug 13 12:47 /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-12
-rwxr-xr-x 1 runner runner 5835892 Aug 13 12:47 /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-check
-rwxr-xr-x 1 runner runner  1727192 Aug 13 12:47 /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-format
-rwxr-xr-x 1 runner runner  3020407 Aug 13 12:47 /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-tidy
-rwxr-xr-x 1 runner runner 3536264 Aug 13 12:47 /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-tidy.real
-rwxr-xr-x 1 runner runner 32893216 Aug 13 12:47 /opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clangd
/opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang:           symbolic link to clang-12
/opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++:         symbolic link to clang
/opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-12:        ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.24, not stripped
/opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-check:     ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.24, not stripped
/opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-format:    ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.24, not stripped
/opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-tidy:      ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=2v0XuL-cOoJkiOQuuy9e/qWjvnObM8MaYFs_NkWAO/46crAzoN3EUa6d1o5UzK/2i6U6LB3LIitzkAuirUC, not stripped
/opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-tidy.real: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.24, not stripped
/opt/hostedtoolcache/ndk/r23b/x64/toolchains/llvm/prebuilt/linux-x86_64/bin/clangd:          ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.24, not stripped

This seems to be a result of nttld/setup-ndk@v1 copying the downloaded and extracted NDK to the tool-cache here, and the tool-cache copying the files with io.cp, which does not preserve timestamps (but apparently preserves symlinks).
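The cp behavior is easy to verify in isolation: a plain copy (like the tool-cache's io.cp) resets the mtime, while cp -p preserves it. The paths below are just a demo, not the real NDK layout:

```shell
# Plain cp resets mtime; cp -p preserves it.
cp /bin/ls /tmp/ndk-src
sleep 1
cp /tmp/ndk-src /tmp/ndk-plain         # mtime = time of copy
cp -p /tmp/ndk-src /tmp/ndk-preserved  # mtime carried over from source
stat -c '%Y %n' /tmp/ndk-src /tmp/ndk-plain /tmp/ndk-preserved
```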

My current hack to get around this is to touch the compiler binaries with a fixed timestamp, hoping that a change in compiler version will always be accompanied by a change in compiler path (different NDK version), which would trigger a cache miss:

      - name: Force compiler binary mtime
        run: |
          find ${{ steps.setup-ndk.outputs.ndk-path }} -wholename '*/bin/clang*' -executable -type f,l -exec touch -h -t 202408130000 {} +
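A quick check of why the fixed-timestamp touch works: after normalizing with touch -t, independently created copies of the same binary report identical mtimes, so an mtime-based check matches across runs. Demo paths only, not the real NDK binaries:

```shell
# After touch -t with a fixed timestamp, independently created copies
# report the same mtime, so an mtime-based check matches across runs.
cp /bin/ls /tmp/clang-demo-1
sleep 1
cp /bin/ls /tmp/clang-demo-2           # different mtimes so far
touch -t 202408130000 /tmp/clang-demo-1 /tmp/clang-demo-2
stat -c '%Y %n' /tmp/clang-demo-1 /tmp/clang-demo-2
```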

If anybody has a better solution, I'm all ears/eyes!

Cheers, Pedro
