Using cache with build arg in FROM statement #637
Comments
While debugging why caching isn't working for one of my images, I just found out that build args are apparently included in the cache key, so if any build arg differs from the cached build, the cache won't be used.
With debug logging enabled I get:
Where TAG is my build-arg.
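To illustrate the behavior being described (kaniko's real cache-key logic is Go code in its repository; this is only a simplified model, and the function and argument names are made up for illustration): if the build args are folded into the layer's cache key, any change to a build arg like TAG produces a different key and therefore a cache miss, even when the command itself is unchanged.

```python
import hashlib

def cache_key(command: str, build_args: dict) -> str:
    """Simplified model of a layer cache key that folds in build args.

    NOT kaniko's actual implementation -- it just shows why a different
    --build-arg value produces a cache miss for an identical command.
    """
    h = hashlib.sha256()
    h.update(command.encode())
    # Sort so the key is stable regardless of argument ordering.
    for name in sorted(build_args):
        h.update(f"{name}={build_args[name]}".encode())
    return h.hexdigest()

# Same command, different TAG build-arg -> different keys -> cache miss.
key_v1 = cache_key("RUN make build", {"TAG": "v1"})
key_v2 = cache_key("RUN make build", {"TAG": "v2"})
print(key_v1 != key_v2)  # True
```

Under this model, the only way to get a cache hit with a changed build arg would be to exclude unused args from the key, which is what the discussion below turns on.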
@discordianfish, thanks for your reply! While the build is running, kaniko prints info messages about using the cache or adding new layers, and it always prints these messages at the end.
It seems these steps change the hash of the final image, so derived images are built again.
@bottleneko Yes, I think there are actually two issues:
So while my fix works and causes kaniko to reuse cached layers when I build the same image multiple times with different build args, I can't use the resulting images in FROM without busting the cache for them.
Okay, I can confirm: kaniko builds are simply not reproducible, even without build args or anything. Building the same image twice (when using the cache, at least, though that should make it even easier) results in two different SHAs. Probably due to the side effects you mentioned that didn't get filtered out properly.

First run (not cached):
Second run (cached)
I am seeing this as well. Even when the cache hits on every layer, I end up with a new image and SHA.
I'm having the same issue with Google Cloud Build.

First run:
Second run:
Never mind... it works.
@oscarAmarello what did you change to make it work?
I had an error in my cloudbuild config file.
Same here
And the Dockerfile is
Yes, I get a lot of these errors for the cache. I don't have any ARG. I use kaniko-project/executor:v0.12.0.
I have the same issue when using Docker multi-stage builds, e.g.:

ARG/ENV and command on a single stage, cache is working:

```dockerfile
# single stage
FROM alpine:3.10
RUN apk add --no-cache wget
ARG QBEC_VER=0.7.5
RUN wget -O- https://github.com/splunk/qbec/releases/download/v${QBEC_VER}/qbec-linux-amd64.tar.gz \
  | tar -C /usr/local/bin -xzf -
```

ARG/ENV and command on the second stage, cache is not working:

```dockerfile
# first stage
FROM alpine:3.10 as builder
RUN apk add --no-cache wget

# second stage
FROM builder
ARG QBEC_VER=0.7.5
RUN wget -O- https://github.com/splunk/qbec/releases/download/v${QBEC_VER}/qbec-linux-amd64.tar.gz \
  | tar -C /usr/local/bin -xzf -
```

ENV on the first stage, command on the second stage, cache is not working:

```dockerfile
# first stage
FROM alpine:3.10 as builder
RUN apk add --no-cache wget
ENV QBEC_VER=0.7.5

# second stage
FROM builder
RUN wget -O- https://github.com/splunk/qbec/releases/download/v${QBEC_VER}/qbec-linux-amd64.tar.gz \
  | tar -C /usr/local/bin -xzf -
```
I took a quick look at the kaniko code; it seems to use the context, command args, files, and the Dockerfile command to generate the HashValue for each image layer.
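The symptom reported above (derived images rebuilding even when base layers are cached) is consistent with layer keys being chained: if each layer's key incorporates its parent's key plus its own inputs, then changing any one input invalidates that layer and every layer after it. Here is a toy Python sketch of that chaining; the function names and inputs are illustrative assumptions, not kaniko's actual hashing code.

```python
import hashlib

def layer_key(parent_key: str, dockerfile_command: str) -> str:
    """Toy model: each layer's key depends on its parent's key plus its
    own Dockerfile command, so one changed input cascades downstream.
    (Illustrative only -- not kaniko's actual implementation.)"""
    h = hashlib.sha256()
    h.update(parent_key.encode())
    h.update(dockerfile_command.encode())
    return h.hexdigest()

def build_keys(commands, base="alpine:3.10"):
    """Compute the chained key for each layer in order."""
    keys, parent = [], base
    for cmd in commands:
        parent = layer_key(parent, cmd)
        keys.append(parent)
    return keys

a = build_keys(["RUN apk add wget", "ARG QBEC_VER=0.7.5", "RUN wget ..."])
b = build_keys(["RUN apk add wget", "ARG QBEC_VER=0.7.6", "RUN wget ..."])
# The first layer is shared; the ARG change invalidates it and all later layers.
print(a[0] == b[0], a[1] == b[1], a[2] == b[2])  # True False False
```

This also matches the earlier observation that images used in FROM bust the cache of their derived builds: a changed base digest acts as a changed parent key for every stage built on top of it.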
I'd like to get some clarity on this issue, as a number of things have changed in kaniko and it sounds like there are a couple of things at play here:

1. Some bugs in caching have been fixed, so getting consistent cache keys is no longer a problem as of v0.15.0.
2. Digests of layers and images built by kaniko are not reproducible.
3. Kaniko includes build args in the cache key (as mentioned earlier in this thread). It seems to me that this is the desired behavior, but I may be missing some use cases. Can someone share an example of when you would change a build arg but still want the build to be cached? Or perhaps I'm misunderstanding this issue: it could be that layers which are unaffected by the change in build args should remain cached and currently are not.
Hi @cvgw,
Thanks @kvaps for confirming. I am going to close this issue now.
Actual behavior
Kaniko rebuilds all layers when a build arg is used in the FROM statement, even though the cache exists both locally and in the remote registry.
Expected behavior
Kaniko rebuilds only the changed layers.
To Reproduce
Steps to reproduce the behavior:
Additional Information
test1/Dockerfile
test2/Dockerfile