minikube image load loads the same image but with different tags again #11322
Comments
There is no such feature in the current implementation; it is purely based on name:tag. This means that you need to use the alternatives (docker/podman/ctr) if you require it now. |
Both load and build are based on tarballs, not layers or files, so the whole image (or the whole build context) is archived and sent each time; and there are similar equivalents for the other runtimes. |
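A rough sketch of what "tarballs, not layers" means in practice. The minikube commands in the comments are illustrative assumptions about the flow, not minikube's exact internals; only the local tar steps below actually run:

```shell
# Image load: the full image is exported as one tarball and re-sent every
# time, even if only the tag changed (illustrative, not minikube's exact code):
#   docker save python:3.8-buster | minikube ssh -- docker load

# Build context: the whole directory is archived and shipped as one stream:
ctx=$(mktemp -d)
echo 'FROM scratch' > "$ctx/Dockerfile"
tar -C "$ctx" -cf context.tar .              # archive the entire context dir
#   minikube ssh -- docker build - < context.tar
entries=$(tar -tf context.tar | grep -c Dockerfile)
echo "context.tar carries $entries Dockerfile (plus everything else, every time)"
rm -rf "$ctx" context.tar
```

The point is that nothing in this flow can notice that the contents are unchanged: the archive is rebuilt and transferred whole on every load or build.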
For load: using the cluster registry addon should also be able to fix this.

$ docker tag python:3.8-buster $(minikube ip):5000/python:3.8-buster
$ time docker push $(minikube ip):5000/python:3.8-buster
The push refers to repository [192.168.49.2:5000/python]
a43310659d53: Pushed
8be90fda4620: Pushed
ddc3469d87c0: Pushed
8d18b38717e2: Pushed
651326e9f1ca: Pushed
5d5962699bd5: Pushed
a42439ce9650: Pushed
26270c5e25fa: Pushed
e2c6ff462357: Pushed
3.8-buster: digest: sha256:5ca75ad9cdf54ceebfd30f2e7e6b396c6779a7efac5d7aaa40cfd73190c2e8fc size: 2217
real 0m31,681s
user 0m0,192s
sys 0m0,090s
$ docker tag python:3.8-buster $(minikube ip):5000/python:3.8-buster-mytag
$ time docker push $(minikube ip):5000/python:3.8-buster-mytag
The push refers to repository [192.168.49.2:5000/python]
a43310659d53: Layer already exists
8be90fda4620: Layer already exists
ddc3469d87c0: Layer already exists
8d18b38717e2: Layer already exists
651326e9f1ca: Layer already exists
5d5962699bd5: Layer already exists
a42439ce9650: Layer already exists
26270c5e25fa: Layer already exists
e2c6ff462357: Layer already exists
3.8-buster-mytag: digest: sha256:5ca75ad9cdf54ceebfd30f2e7e6b396c6779a7efac5d7aaa40cfd73190c2e8fc size: 2217
real 0m0,199s
user 0m0,148s
sys 0m0,064s

Since that uses the layers, not tarballs. For build, one would need to keep a local build context on the node and use rsync to update it before building. That way only the delta would be transferred, rather than having to archive the whole build context (dir) and send it again.
As in it would use (partial) files, not tarballs. |
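The rsync idea above could look roughly like this. The `node` directory here stands in for a build context kept on the minikube node (which would really be reached with rsync over ssh), so this is a local sketch of the delta transfer, not a working minikube integration:

```shell
src=$(mktemp -d); node=$(mktemp -d)           # "node" stands in for the cluster node
printf 'FROM scratch\n' > "$src/Dockerfile"
echo 'unchanged payload' > "$src/data.txt"
rsync -a "$src/" "$node/"                     # first sync: everything is copied
printf 'COPY . /app\n' >> "$src/Dockerfile"   # a one-line change on the host
second=$(rsync -ai "$src/" "$node/" | wc -l)  # itemize what the second sync moves
echo "entries re-transferred on second sync: $second"
#   then build on the node from the synced directory:  docker build "$node"
rm -rf "$src" "$node"
```

Only the changed file shows up in the second sync, which is exactly the "(partial) files, not tarballs" behavior described above.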
@spowelljr something for your benchmarks |
I should also mention that the problem with using the registry is that the container start then needs to pull the image... i.e. it has been uploaded to /var/lib/registry but needs to be copied over to /var/lib/docker (etc.) before it becomes available.

@kochetov-dmitrij: for your artificial use case, would it help if there was a
When minikube is pulling the image directly from an external registry, it will also use layers - like it normally does. The load command is intended to load something from the cache or from the host locally, not really from external. So if that is the case, it might be better to let the kubelet handle it - or maybe to use
|
This is (sort of) related to #11276. Currently there is an optimization to not load images that already exist, but it only looks at name:tag and not at the contents. |
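A content-aware check could compare image IDs instead of name:tag. The comparison below runs on stub digest values (shortened from the push output earlier in the thread); on a live setup the two IDs would come from the commented commands, which are an assumption about how this could be wired up, not minikube's current behavior:

```shell
# On a live docker-runtime setup (assumed wiring, not current minikube code):
#   host_id=$(docker image inspect --format '{{.Id}}' python:3.8-buster)
#   node_id=$(minikube ssh -- docker image inspect --format '{{.Id}}' python:3.8-buster)
host_id="sha256:5ca75ad9cdf5"   # stub values so the logic below is runnable
node_id="sha256:5ca75ad9cdf5"

if [ "$host_id" = "$node_id" ]; then
  decision="skip"   # same contents, only the tag differs: nothing to transfer
else
  decision="load"   # contents differ (also covers the same-tag "latest" case)
fi
echo "decision: $decision"
```

Because the ID is a digest of the image contents, the same check catches both directions: a new tag on identical contents (skip) and the same tag on changed contents (load).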
Thanks for the suggestions! My use case is building images on my host and running them on minikube in my dev pipeline. Sometimes there are no changes in the image but a new tag gets assigned. I thought Looks like using
I can look into implementing |
It would be nice if we could recognize this. It would also help with "latest", i.e. the opposite problem: you have the same tag, but the contents changed. The hope is to be able to use the image "id" for this, even if it has other problems... |
The usual workaround/shortcut is to build the images in minikube, instead of building on the host
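That workaround is typically done by pointing the host docker CLI at the daemon inside the minikube node via `minikube docker-env`, so the image is built directly where it runs and no load/transfer step is needed. A sketch, guarded so it is a no-op without minikube; `myapp:dev` is a made-up tag:

```shell
if command -v minikube >/dev/null 2>&1; then
  # Export DOCKER_HOST etc. so "docker build" talks to the node's daemon:
  eval "$(minikube docker-env)"
  docker build -t myapp:dev .        # image lands directly inside the cluster
  status="built inside the cluster"
else
  status="minikube not available; commands shown for illustration only"
fi
echo "$status"
```

Pods can then reference `myapp:dev` with an imagePullPolicy that does not force a pull, since the image already exists in the node's daemon.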
This would need some kind of "cloud storage" I suppose, or at least a backup and restore step. |
I meant that I would build the images and store their cache on my host. Then, regardless of whether minikube is up or starting from scratch, I can always quickly build my images on the host and upload them to minikube by running my pipeline. The "restore step" is simply running the pipeline again. |
You said that it disturbed you that your build cache stored in the cluster got deleted with the cluster. That doesn't mean all images need to be built outside the cluster, even if that is one solution. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
Problem
minikube image load loads the same image but with a different tag again, instead of comparing layer hashes and skipping the unnecessary second upload.

Steps to reproduce the issue:

The total size of all images doesn't change after loading different tags of the same image.