allow to create manifests of local images #3350

Open
caarlos0 opened this issue Oct 27, 2021 · 9 comments

@caarlos0
Contributor

Description

docker manifest create does not allow creating a manifest from images that have not been pushed to a registry.

Steps to reproduce the issue:

  1. docker build some images locally
  2. docker manifest create using the images you just created (a sketch of these steps follows below)
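
A minimal reproduction sketch (image and registry names are hypothetical):

docker build -t registry.example.com/myimg:amd64 .
docker build -t registry.example.com/myimg:arm64 .
docker manifest create registry.example.com/myimg:latest \
    registry.example.com/myimg:amd64 registry.example.com/myimg:arm64
# fails with an error like: no such manifest: registry.example.com/myimg:amd64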

Describe the results you received:

no such manifest: registry/mylocalimg:tag

Describe the results you expected:

It should create the manifest locally, and when I push it (docker manifest push), it should push everything.

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client:
 Cloud integration: 1.0.17
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:55:20 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:31 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Output of docker info:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.6.3)
  compose: Docker Compose (Docker Inc., v2.0.0)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 8
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.47-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 1.939GiB
 Name: docker-desktop
 ID: R5K7:TTMY:2XBT:BBBZ:4F6B:YEEA:OJCY:YTEV:4FEM:22F3:LKOY:2JJR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

n/a

@thaJeztah
Member

I'm not sure this would be possible, as manifests are references to images in the registry, and they reference those images by digest. That digest is calculated when the image is pushed. So it would be "somewhat" possible to do this, but it would require the manifest push command to "automate" these steps (a manual equivalent is sketched after the list):

  • look up local images referenced (foo:bar, foo:baz)
  • push foo:bar and get the digest
  • push foo:baz and get the digest
  • update the digest in the manifest
  • push the manifest
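
Roughly, the manual equivalent today looks like this (myorg/foo:bar and myorg/foo:baz are hypothetical, already-built local images):

docker push myorg/foo:bar        # resolves and prints the registry digest for this tag
docker push myorg/foo:baz
docker manifest create myorg/foo:latest myorg/foo:bar myorg/foo:baz
docker manifest push myorg/foo:latest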

Using buildx to build the multi-arch image, and/or the GitHub Action for it, is probably a better solution. Alternatively, docker buildx imagetools create (with the --append option) could work as well to append new architectures to a manifest: https://docs.docker.com/engine/reference/commandline/buildx_imagetools_create/
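
For reference, a minimal sketch of the imagetools approach (hypothetical tags; all referenced images must already exist in the registry):

docker buildx imagetools create --tag myorg/foo:latest myorg/foo:amd64 myorg/foo:arm64   # create a manifest list from two pushed images
docker buildx imagetools create --append --tag myorg/foo:latest myorg/foo:ppc64le        # later, append another architecture to it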

@Felixoid

Felixoid commented Mar 30, 2022

Alternatively, docker buildx imagetools create (with the --append option) could possibly work as well to append new architectures to a manifest; https://docs.docker.com/engine/reference/commandline/buildx_imagetools_create/

Dear @thaJeztah, how can one do that? We must use a different --build-arg for each --platform, so we can't build all platforms in one run. From the documentation and from playing around, it also looks like this requires the images to be pushed to a registry first.

Here's an example.

We have two different commands to build the images:

docker buildx build --platform=linux/amd64 --build-arg=REPOSITORY='https://s3.amazonaws.com/package_release' --tag=clickhouse/clickhouse-server:head-amd64 --build-arg=VERSION='22.4.1.678' --progress=plain docker/server
docker buildx build --platform=linux/arm64 --build-arg=REPOSITORY='https://s3.amazonaws.com/package_aarch64' --tag=clickhouse/clickhouse-server:head-arm64 --build-arg=VERSION='22.4.1.678' --progress=plain docker/server

So we have two different images. Is there any way to merge them together before pushing remotely?

Update:
Ok, according to docker/buildx#805 (comment), it's possible to push images by digest, save the sha256 ID with --iidfile head-arm64, and then use the files in docker buildx imagetools create -t clickhouse/clickhouse-server:head -f head-arm64 -f head-amd64. Am I right?

Update 2:
No, I was wrong. The way to go is to use --metadata-file and read containerimage.digest from it. The images should be pushed with --output=type=image,push-by-digest=true --tag=clickhouse/clickhouse-server, and then a merged manifest is created with docker buildx imagetools create --tag clickhouse/clickhouse-server:head sha256:7ba33340ad98c15a90b30961552e7daab8a3936f4747347a82a9014736cc4abd sha256:ddbe9e633c9bc71de8f2d3814c6875a287b9c97d739fa9ed9893c8911b3e8852
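
Put together, a sketch of that workflow (metadata file names are hypothetical; jq is used to read the digest, and push=true is assumed to be needed alongside push-by-digest=true):

docker buildx build --platform=linux/amd64 --build-arg=REPOSITORY='https://s3.amazonaws.com/package_release' --build-arg=VERSION='22.4.1.678' \
  --output=type=image,push-by-digest=true,name=clickhouse/clickhouse-server,push=true --metadata-file=meta-amd64.json docker/server
docker buildx build --platform=linux/arm64 --build-arg=REPOSITORY='https://s3.amazonaws.com/package_aarch64' --build-arg=VERSION='22.4.1.678' \
  --output=type=image,push-by-digest=true,name=clickhouse/clickhouse-server,push=true --metadata-file=meta-arm64.json docker/server
AMD64=$(jq -r '."containerimage.digest"' meta-amd64.json)
ARM64=$(jq -r '."containerimage.digest"' meta-arm64.json)
docker buildx imagetools create --tag clickhouse/clickhouse-server:head \
  "clickhouse/clickhouse-server@${AMD64}" "clickhouse/clickhouse-server@${ARM64}"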

@Felixoid

Felixoid commented Mar 31, 2022

@thaJeztah, another question regarding your point:

This digest is calculated when the image is pushed.

I doubt that statement is still correct. When I use docker buildx build --metadata-file ... without pushing the image, I already have digests there:

> grep containerimage.digest tmp/docker_images_check/*
tmp/docker_images_check/head-alpine-amd64:  "containerimage.digest": "sha256:3483a523269858a88d10bbd43f46a0a2544cccff8c072a5a8c2926e234a10a3a",
tmp/docker_images_check/head-alpine-arm64:  "containerimage.digest": "sha256:bf813839db941477e7c8ebde94361e3612885187040ec6bbf4e367996f605694",
tmp/docker_images_check/head-amd64:  "containerimage.digest": "sha256:7ea5a2790493219904ccc78ed6012de052571c4e040d7e35919d78be408c2792",
tmp/docker_images_check/head-arm64:  "containerimage.digest": "sha256:8c72952b2c22795dce3014dd752a96f2e587696b3254ef0bf65f4fcb84ae9577",

I can use them to refer to the remote digests, and it works as long as the untagged images have been pushed to the repository:

docker buildx imagetools create --tag clickhouse/clickhouse-server:head sha256:7ea5a2790493219904ccc78ed6012de052571c4e040d7e35919d78be408c2792 sha256:8c72952b2c22795dce3014dd752a96f2e587696b3254ef0bf65f4fcb84ae9577

If I can use them this way, why wouldn't docker buildx imagetools create be able to push everything at once? I guess it's just a matter of writing some code.

@thaJeztah
Member

thaJeztah commented Mar 31, 2022

I'd have to double-check with the @docker/build team. Some things may depend on what kind of builder you're using (container builder or the Docker Engine's embedded BuildKit). There are multiple digests: the digest of the local image, which is calculated over the image layers in the local image store before compression (these are stable and reproducible), and the digests of the image layers (blobs) after compression, which are not guaranteed to be stable, as compression may produce different results in many circumstances (this is the reason the containerd store defaults to storing both the compressed and uncompressed artifacts of images after pulling).
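
A quick way to see the two kinds of digests locally, as a sketch (hypothetical image name):

docker image inspect --format '{{.Id}}' myorg/foo:bar           # local image ID: digest of the image config, stable in the local store
docker image inspect --format '{{.RepoDigests}}' myorg/foo:bar  # registry manifest digest(s); only populated after a push or pull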

As to:

We have two different commands to build the images:

docker buildx build --platform=linux/amd64 --build-arg=REPOSITORY='https://s3.amazonaws.com/package_release' --tag=clickhouse/clickhouse-server:head-amd64 --build-arg=VERSION='22.4.1.678' --progress=plain docker/server
docker buildx build --platform=linux/arm64 --build-arg=REPOSITORY='https://s3.amazonaws.com/package_aarch64' --tag=clickhouse/clickhouse-server:head-arm64 --build-arg=VERSION='22.4.1.678' --progress=plain docker/server

I'm wondering if, for that specific example, the Dockerfile itself would be able to pick the right option based on one of the automatic platform ARGs, and conditionally pick the right value for REPOSITORY.
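
For instance, if the Dockerfile declared ARG TARGETARCH (one of the automatic platform build arguments) and derived REPOSITORY from it, a single invocation could build and push both variants at once; a sketch, assuming the per-arch REPOSITORY selection happens inside the Dockerfile:

docker buildx build --platform=linux/amd64,linux/arm64 --build-arg=VERSION='22.4.1.678' \
  --tag=clickhouse/clickhouse-server:head --push --progress=plain docker/server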

But perhaps buildx bake would allow them to be set for each variant (@crazy-max ?), so that all can be built and pushed as a multi-arch image (each variant with the correct options).

@Felixoid

Felixoid commented Mar 31, 2022

You are right; after docker buildx prune the newly built images are different:

diff -ru1 tmp/docker_images_check/head-alpine-amd64 tmp/docker_images_check_cached/head-alpine-amd64
--- tmp/docker_images_check/head-alpine-amd64	2022-03-31 14:50:29.816595096 +0200
+++ tmp/docker_images_check_cached/head-alpine-amd64	2022-03-31 10:46:31.912117357 +0200
@@ -22,12 +22,12 @@
   },
-  "containerimage.config.digest": "sha256:2abc7dfcc4c4b268e24655d9bf1027564fb782a8cce3d70b5105e4e1d14ce534",
+  "containerimage.config.digest": "sha256:5452ea43ee5aa45a6e1e26182b04a0df9bdb8da02e1a3321139200c7cf06a80f",
   "containerimage.descriptor": {
     "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
-    "digest": "sha256:32470e84612dfaebc4306e332a30158e855b6c6b079a4f6c0aea006db72db8c6",
+    "digest": "sha256:3483a523269858a88d10bbd43f46a0a2544cccff8c072a5a8c2926e234a10a3a",
     "size": 1781,
     "annotations": {
-      "org.opencontainers.image.created": "2022-03-31T12:50:26Z"
+      "org.opencontainers.image.created": "2022-03-31T08:46:28Z"
     }
   },
-  "containerimage.digest": "sha256:32470e84612dfaebc4306e332a30158e855b6c6b079a4f6c0aea006db72db8c6",
+  "containerimage.digest": "sha256:3483a523269858a88d10bbd43f46a0a2544cccff8c072a5a8c2926e234a10a3a",
   "image.name": "docker.io/clickhouse/clickhouse-server:head-alpine-amd64"
diff -ru1 tmp/docker_images_check/head-alpine-arm64 tmp/docker_images_check_cached/head-alpine-arm64
--- tmp/docker_images_check/head-alpine-arm64	2022-03-31 14:51:15.940363221 +0200
+++ tmp/docker_images_check_cached/head-alpine-arm64	2022-03-31 10:47:16.782102949 +0200
@@ -22,12 +22,12 @@
   },
-  "containerimage.config.digest": "sha256:ebf0b472412a991c8441b1da479a898b90e045ffbcefa2b5b08ee6cc57f8b60b",
+  "containerimage.config.digest": "sha256:ebee291992d7fd02dccba1520eaf80442f50855ea82035b8c7b7d8e4c10eb273",
   "containerimage.descriptor": {
     "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
-    "digest": "sha256:53dfe410eaa5a3a2414f099be9d360add6e769653344a0261fe84dff07c91361",
+    "digest": "sha256:bf813839db941477e7c8ebde94361e3612885187040ec6bbf4e367996f605694",
     "size": 1781,
     "annotations": {
-      "org.opencontainers.image.created": "2022-03-31T12:51:12Z"
+      "org.opencontainers.image.created": "2022-03-31T08:47:13Z"
     }
   },
-  "containerimage.digest": "sha256:53dfe410eaa5a3a2414f099be9d360add6e769653344a0261fe84dff07c91361",
+  "containerimage.digest": "sha256:bf813839db941477e7c8ebde94361e3612885187040ec6bbf4e367996f605694",
   "image.name": "docker.io/clickhouse/clickhouse-server:head-alpine-arm64"
diff -ru1 tmp/docker_images_check/head-amd64 tmp/docker_images_check_cached/head-amd64
--- tmp/docker_images_check/head-amd64	2022-03-31 14:47:24.888169443 +0200
+++ tmp/docker_images_check_cached/head-amd64	2022-03-31 10:44:51.898729939 +0200
@@ -17,12 +17,12 @@
   },
-  "containerimage.config.digest": "sha256:a0306caba2fedb10ebd5dc10b0f94a36ac71bb007abcb09eb60c5207b1d9dfe9",
+  "containerimage.config.digest": "sha256:d84561301369020bb5e98927796ea9fecbf7a31952db8dc334f96d8addbc0d31",
   "containerimage.descriptor": {
     "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
-    "digest": "sha256:34908b2909e4e3bb8bf5c6d27e35a6eae272892534af6c2e550a5b58ab41cfa1",
+    "digest": "sha256:7ea5a2790493219904ccc78ed6012de052571c4e040d7e35919d78be408c2792",
     "size": 2201,
     "annotations": {
-      "org.opencontainers.image.created": "2022-03-31T12:47:20Z"
+      "org.opencontainers.image.created": "2022-03-31T08:44:48Z"
     }
   },
-  "containerimage.digest": "sha256:34908b2909e4e3bb8bf5c6d27e35a6eae272892534af6c2e550a5b58ab41cfa1",
+  "containerimage.digest": "sha256:7ea5a2790493219904ccc78ed6012de052571c4e040d7e35919d78be408c2792",
   "image.name": "docker.io/clickhouse/clickhouse-server:head-amd64"
diff -ru1 tmp/docker_images_check/head-arm64 tmp/docker_images_check_cached/head-arm64
--- tmp/docker_images_check/head-arm64	2022-03-31 14:49:35.582748688 +0200
+++ tmp/docker_images_check_cached/head-arm64	2022-03-31 10:45:44.452107433 +0200
@@ -17,12 +17,12 @@
   },
-  "containerimage.config.digest": "sha256:dfc487c2ddef3ff512df0bae810a49ae52e2e29dae529ac3fd0602f525b81f35",
+  "containerimage.config.digest": "sha256:9df4519cb13cc2897568cb8607c4279aef6e31f173c8f42a0842ffd325b571bb",
   "containerimage.descriptor": {
     "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
-    "digest": "sha256:1df207d7ea2ffa7f293e8156d29114c8221706e8a87b27466c81b165f7b26eeb",
+    "digest": "sha256:8c72952b2c22795dce3014dd752a96f2e587696b3254ef0bf65f4fcb84ae9577",
     "size": 2201,
     "annotations": {
-      "org.opencontainers.image.created": "2022-03-31T12:49:31Z"
+      "org.opencontainers.image.created": "2022-03-31T08:45:41Z"
     }
   },
-  "containerimage.digest": "sha256:1df207d7ea2ffa7f293e8156d29114c8221706e8a87b27466c81b165f7b26eeb",
+  "containerimage.digest": "sha256:8c72952b2c22795dce3014dd752a96f2e587696b3254ef0bf65f4fcb84ae9577",
   "image.name": "docker.io/clickhouse/clickhouse-server:head-arm64"

On the one hand... But on the other hand, I still don't see an issue. In any case, the operation seems atomic: I either push everything at once or nothing.

@tisonkun

This feature would be a prerequisite for manually pushing multi-arch images without first pushing per-arch tags like -amd64/-arm64.

Buildx supports doing this internally, but docker manifest requires the images to be pushed first. A common case is that users build multi-arch images on different workers (GitHub Actions runners). While images can be shared via save/load, that process doesn't produce a manifest, so you cannot create a multi-arch image (manifest list) locally and push it.

That is, you can do docker buildx build --platform ... --push ., but you cannot do:

docker load -i amd64-image.tar
docker load -i arm64-image.tar
# do something to merge the images and push

@Felixoid

I didn't experiment with this locally, but would pushing without tagging and later using the preserved sha256 work?

@tisonkun

@Felixoid That should satisfy my use case at some point. How can I find the sha256 programmatically (so that I can integrate it into the CD process)?

@Felixoid

In our case, the push is done in https://github.com/ClickHouse/ClickHouse/blob/1e4fe038f562029fc24a0e7a33e5d428ea0474f9/tests/ci/docker_server.py#L243-L245, and the metadata file is preserved in https://github.com/ClickHouse/ClickHouse/blob/1e4fe038f562029fc24a0e7a33e5d428ea0474f9/tests/ci/docker_server.py#L264

Later, the metadata is read and the container's digest is used to combine the manifest in https://github.com/ClickHouse/ClickHouse/blob/1e4fe038f562029fc24a0e7a33e5d428ea0474f9/tests/ci/docker_server.py#L294
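
In shell terms, a minimal sketch of the digest extraction (metadata file and image names are hypothetical; jq is used to read the key):

DIGEST=$(jq -r '."containerimage.digest"' meta-amd64.json)          # e.g. sha256:7ea5a279...
docker buildx imagetools create --tag myorg/foo:latest "myorg/foo@${DIGEST}"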
