Pretty big cache #252
Comments
Yes, you're right: at the moment caches are copied over the existing cache, so it keeps growing. Can you open an issue on the buildkit repo about that, please? In the meantime you can do this: [...]
- name: Cache Docker layers
  uses: actions/cache@v2
  with:
    path: /tmp/.buildx-cache
    key: ${{ runner.os }}-buildx-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-buildx-
[...]
- name: Build
  uses: docker/build-push-action@v2
  with:
    push: false
    tags: ${{ steps.prepare.outputs.image }}
    platforms: ${{ env.DOCKER_PLATFORMS }}
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache-new
    context: .
[...]
- name: Move cache
  run:
    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache

cc @tonistiigi |
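Put together, a minimal sketch of the whole job might look like the following; the checkout and setup-buildx steps, the image tag, and the corrected run: | syntax fill in the [...] gaps and are assumptions on my part, not part of the original comment:

name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Build
        uses: docker/build-push-action@v2
        with:
          context: .
          push: false
          tags: user/app:latest
          # Read from the restored cache, write the fresh cache to a new directory
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new
      - name: Move cache
        # Replace the old cache with the new one so it doesn't grow on every run
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache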
@crazy-max that means that the |
|
Ok, thanks for the help. I will create an issue in the buildkit repo about implementing an option to clean the cache / remove old versions of the cache. |
Hi @crazy-max. However, I'm not sure whether a GitHub service container is able to create a volume for a service container, and whether your GitHub Action supports that (because I only see caching |
This is done using Docker's own released actions in combination with GitHub's caching actions. A local package registry is used, and has been added to Dockerfile(s) as ARGs. These ARGs do not interfere with local Docker builds in my testing. Individual caches are used to speed up caching (see: docker/build-push-action#252 (comment))
* Explicit platform * Temp cache fix docker/build-push-action#252
docker/buildx#535 should fix this and make using the GitHub cache a breeze: [...]
- name: Build
  uses: docker/build-push-action@v2
  with:
    tags: user/app:latest
    cache-from: type=gha
    cache-to: type=gha
|
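For context, a minimal sketch of a full job using the gha cache backend; the checkout step, context, push: false, the image tag, and mode=max are assumptions beyond the quoted snippet:

name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # buildx is required for the cache-from/cache-to inputs below
      - uses: docker/setup-buildx-action@v1
      - name: Build
        uses: docker/build-push-action@v2
        with:
          context: .
          push: false
          tags: user/app:latest
          # Export the build cache to the GitHub Actions cache backend;
          # mode=max also caches intermediate layers
          cache-from: type=gha
          cache-to: type=gha,mode=max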
I used it for caching |
@malobre how to fix this?
moby/buildkit:buildx-stable-1 => buildkitd github.com/moby/buildkit v0.8.3 81c2cbd8a418918d62b71e347a00034189eea455
|
The server side is not implemented yet: moby/buildkit#1974. |
FWIW, there's a syntax error in the "Move cache" code block above. There is a missing "|" that took me a while to find and correct. This should be the correct invocation:
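(Reconstructed from the snippet above; the only change is the added "|" after run: so the two commands run as separate shell lines.)

- name: Move cache
  run: |
    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache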
|
Given that moby/buildkit#1974 was just merged, what does the timeline look like to use cache |
You can test the gha cache exporter using this workflow while waiting for buildx 0.6 and BuildKit 0.9 to be GA. Feel free to give us your feedback, thanks! |
I just tried this out on one of my workflows. See https://github.com/jauderho/dockerfiles/actions/workflows/cloudflared.yml Looks like the "Setup Buildx" step now takes longer but I'm assuming that's due to the rest not yet being merged in. More importantly, the "Build and push" step looks to be much faster. Nice job! |
Yes, that's it: buildx is built on the fly at the moment, which is why the setup step takes more time. |
Awesome. Looking forward to everything being merged in. |
Buildx 0.6.0-rc1 has been released. I've updated the workflow to use it so now it should be faster than building from source. |
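For reference, a minimal sketch of pinning the release in the setup step, assuming the version input of docker/setup-buildx-action:

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
  with:
    # Pin the buildx release instead of building it from source
    version: v0.6.0-rc1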
Hmm, not sure if I am doing something wrong here, but after updating to buildx 0.6.0-rc1 it does not seem to trigger the caching. Here is my action: https://github.com/jauderho/dockerfiles/blob/main/.github/workflows/cloudflared.yml With buildx 0.6.0-rc1:
Compare this to 2 days ago (which has the expected behavior)
|
Use image=moby/buildkit:master |
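A minimal sketch of where that option goes, assuming the driver-opts input of docker/setup-buildx-action:

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
  with:
    # Run the builder container with the master BuildKit image
    driver-opts: image=moby/buildkit:master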
Per your suggestion, updated to use image=moby/buildkit:master but it does not appear to make a difference.
|
As you're in a monorepo building multiple Docker images with different contexts, you should use a specific scope for each image:
cache-from: type=gha,scope=cloudflared
cache-to: type=gha,scope=cloudflared |
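To illustrate, a minimal sketch with two images in the same repo; the second image name (dnscrypt) and both context paths are hypothetical, not taken from the linked workflow:

- name: Build cloudflared
  uses: docker/build-push-action@v2
  with:
    context: ./cloudflared
    tags: user/cloudflared:latest
    cache-from: type=gha,scope=cloudflared
    cache-to: type=gha,scope=cloudflared
# A second image gets its own scope so the two caches don't evict each other
- name: Build dnscrypt
  uses: docker/build-push-action@v2
  with:
    context: ./dnscrypt
    tags: user/dnscrypt:latest
    cache-from: type=gha,scope=dnscrypt
    cache-to: type=gha,scope=dnscrypt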
Appears there is an issue where the build cache grows unbounded until it hits GitHub's limit (docker/build-push-action#252, moby/buildkit#1896). Clear the cache after builds to prevent this.
Otherwise the cache is effectively discarded: since the same images aren't used for both builds, the cache assumes they're no longer in use and garbage-collects them. See docker/build-push-action#252 (comment)
This is a workaround for a known problem docker/build-push-action#252
Based mostly on information from: 1. https://docs.docker.com/build/building/cache/backends/gha/#using-dockerbuild-push-action 2. docker/build-push-action#252 (comment)
#756 mentions a comment where the "old" way is used. I switched to cache-from/cache-to type=gha, BUT if the whole job fails (or the build is not entirely successful), the cache does not seem to be used. In that regard it still seems suboptimal, or am I potentially doing something wrong? |
If the cache hits, it won't write a new entry, and the move step then overwrites the existing cache with an empty dir. Adding an if statement that checks whether the new directory has contents may help with this. |
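A sketch of that guard, building on the local-cache "Move cache" step from earlier in the thread:

- name: Move cache
  run: |
    # Only replace the old cache if buildx actually produced a new one
    if [ -d /tmp/.buildx-cache-new ] && [ -n "$(ls -A /tmp/.buildx-cache-new)" ]; then
      rm -rf /tmp/.buildx-cache
      mv /tmp/.buildx-cache-new /tmp/.buildx-cache
    fi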
- Changed cache type from registry to local for better performance
- Added cache move step as temporary fix for docker/build-push-action#252
- Inherited secrets in docker-build-all.yml workflow
Description
The cache can grow very quickly with large images, since old entries are not deleted.
Configuration
Logs
logs_37.zip
My solution
Add a clean-cache configuration option that runs the following command before exporting the layers:
docker system prune -f --filter "until=5h"
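Until such an option exists, the proposed cleanup can be run manually as a workflow step; the step name and its placement are my own illustration, and whether it reaches the builder's cache depends on the buildx driver in use:

- name: Clean old Docker build cache
  # Drop anything the Docker daemon considers older than 5 hours
  run: docker system prune -f --filter "until=5h"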