
Supporting building multi-platform images (podman buildx) #1590

Closed
junaruga opened this issue May 13, 2019 · 61 comments
Labels: buildkit, from Podman (this issue was either first reported on the Podman issue list or when running 'podman build'), kind/feature (categorizes issue or PR as related to a new feature)

Comments

@junaruga

junaruga commented May 13, 2019

Description

Supporting building multi-platform images (podman buildx)

Detail

This ticket is a feature request, originally from containers/podman#3063.

docker buildx [1][2] enables building and running multi-platform container images.
I would like podman to have a similar feature.

$ docker buildx build --platform linux/arm64 ...

RHEL 8 started supporting multiple architectures, including 64-bit ARM.
Quay 3 started supporting multiple architectures, including 64-bit ARM. [3]
So it might be good timing for podman to support this feature.

docker buildx uses QEMU internally to do this.
Another way to achieve it is qemu-user-static [4], which also uses QEMU.

According to the docker buildx article [2], both may share similar logic:

This fast and lightweight container OS comes packaged with the QEMU emulator, and comes pre-configured with binfmt_misc to run binaries of any supported architecture.

But docker buildx looks much easier to use than qemu-user-static.
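As an illustration of what such a --platform flag has to handle, here is a minimal Go sketch of splitting an os/arch[/variant] string into its parts. parsePlatform is a hypothetical helper for this comment, not buildx's or buildah's actual parser, which is richer:

```go
package main

import (
	"fmt"
	"strings"
)

// Platform mirrors the triple that --platform values such as
// "linux/arm64/v8" encode: operating system, architecture, and
// an optional variant.
type Platform struct {
	OS, Arch, Variant string
}

// parsePlatform splits a --platform value into its components.
// Hypothetical helper, for illustration only.
func parsePlatform(s string) (Platform, error) {
	parts := strings.Split(s, "/")
	switch len(parts) {
	case 2:
		return Platform{OS: parts[0], Arch: parts[1]}, nil
	case 3:
		return Platform{OS: parts[0], Arch: parts[1], Variant: parts[2]}, nil
	}
	return Platform{}, fmt.Errorf("invalid platform %q, expected os/arch[/variant]", s)
}

func main() {
	p, err := parsePlatform("linux/arm64/v8")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", p) // {OS:linux Arch:arm64 Variant:v8}
}
```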

@junaruga
Author

junaruga commented May 19, 2019

I would like to give more context about this ticket.

Why is this feature important for me now?

First, the Docker announcement below is dated April 30, 2019, just 20 days ago.
It's happening now.

[2] docker buildx article: https://engineering.docker.com/2019/04/multi-arch-images/

docker buildx is not yet stable and is currently a limited release for the edge version.
But once it becomes stable, people will notice how easy it is to build multi-arch containers on x86_64. Currently, most container users might not even know it is possible to build multi-arch containers on x86_64.

It's like people who did not know about smartphones in an earlier era:
once they learn about them, they want to use them.

And people might say, "We want to build and run multi-arch containers like docker buildx in the open source license world, or in the Fedora, CentOS, or Debian ecosystems."

multiarch project

Let me also share the multiarch project that I am currently working on.

[4] qemu-user-static: https://github.com/multiarch/qemu-user-static

The multiarch project (https://github.com/multiarch) gives people the multi-arch experience below.

$ uname -m
x86_64

$ docker run --rm --privileged multiarch/qemu-user-static:register --reset

$ docker run --rm -t multiarch/fedora:30-aarch64 uname -m
aarch64
$ docker run --rm -t multiarch/fedora:30-s390x uname -m
s390x

$ docker run --rm -t multiarch/fedora:29-aarch64 uname -m
aarch64
$ docker run --rm -t multiarch/fedora:29-ppc64le uname -m
ppc64le
$ docker run --rm -t multiarch/fedora:29-s390x uname -m
s390x

It enables people to add multi-arch test cases to their existing CI, and it helps promote multi-arch container use cases.
People realize, "Oh, it is possible to build, run, and test multi-arch cases on CI."
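For context, the multiarch/qemu-user-static:register step above works by writing handler strings in the kernel's :name:type:offset:magic:mask:interpreter:flags format to /proc/sys/fs/binfmt_misc/register. Here is a minimal Go sketch of composing such a string; binfmtRegistration is a hypothetical helper, and the magic/mask bytes are the aarch64 ELF header pattern as shipped in QEMU's qemu-binfmt-conf.sh, quoted from memory, so double-check against your QEMU version:

```go
package main

import "fmt"

// binfmtRegistration builds the string that the register step writes
// to /proc/sys/fs/binfmt_misc/register. "M" means match by magic
// bytes; the trailing "F" flag (fix-binary) makes the kernel hold the
// interpreter open so it works inside containers.
func binfmtRegistration(name, magic, mask, interpreter string) string {
	return fmt.Sprintf(":%s:M::%s:%s:%s:F", name, magic, mask, interpreter)
}

func main() {
	// Leading bytes of an aarch64 ELF header (\x7fELF, 64-bit,
	// little-endian, e_machine = EM_AARCH64 = 0xb7).
	magic := `\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00`
	mask := `\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff`
	fmt.Println(binfmtRegistration("qemu-aarch64", magic, mask, "/usr/bin/qemu-aarch64-static"))
}
```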

Here are examples of adding multi-arch cases to Travis CI:

https://github.com/BenLangmead/bowtie2/blob/master/.travis.yml#L22-L41
https://github.com/jts/nanopolish/blob/master/.travis.yml#L46-L75

In my experience of submitting pull requests to some projects, adding multi-arch cases to their existing x86_64-based CI is easier for them than newly adding a CI with native multi-arch support, such as Shippable CI.

Shippable CI supports native 64/32-bit ARM environments like this, but I have never succeeded in getting such a pull request merged into another organization's repository:
https://github.com/lh3/minimap2/pull/398/files

There are some challenges with the multiarch project's current approach.
For example, people have to use containers compatible with qemu-user-static rather than their existing multi-arch containers.

docker buildx, and a podman buildx, would solve this.

@rhatdan
Member

rhatdan commented May 20, 2019

If the community wants to work on this, then we would consider it. Currently we don't have the resources or the priority to work on it.

@junaruga
Author

junaruga commented May 20, 2019

@rhatdan by "the community" in "the community wants to work on this", do you mean the multiarch project?

@rhatdan
Member

rhatdan commented May 20, 2019

Yes, we need contributors opening PRs to make this happen; it is not as high as other features on the core maintainers' priority list.

@junaruga
Author

junaruga commented Jun 3, 2019

@rhatdan sure, thanks for keeping the possibility open.
I hope this ticket stays open.
Though I cannot promise to work on buildah, the chance is not zero percent. The multiarch project needs some work before it is ready for the buildah work.

@grooverdan

grooverdan commented Jun 10, 2019

I'm working on at least part of this with #1368 (comment). This covers the build side of buildah build-using-dockerfile --platform linux/aarch64/variant ... Yes, I'd planned some interaction with qemu-user-static in the build.

I see docker buildx nicely integrates multiple platform arguments to produce a manifest. I'll see what can be done for buildah. I haven't looked at podman yet.

@junaruga
Author

junaruga commented Jun 11, 2019

Wow, great work, @grooverdan

@rhatdan
Member

rhatdan commented Jun 11, 2019

@grooverdan Thanks for looking into this.

@rhatdan
Member

rhatdan commented Jun 20, 2019

@grooverdan Any progress?

@grooverdan

grooverdan commented Jun 21, 2019

Yes:
tree: https://github.com/grooverdan/buildah/tree/args-target-platform
sample output of simple test case: https://dpaste.de/obgx

Current resolving issues:

  • multistage images currently don't get the right target architecture

I'm making the last FROM image in a Dockerfile be fetched using the --platform arguments.

For nicely handling multi-stages on cross builds I think the following is needed:
extending FROM to take a --platform {os/arch/variant|"build"|"target"} argument

It's needed because in a multi-stage build it is not clear whether a FROM, excluding the last one, is intended to be of the build or target arch (and if we're doing that we may as well allow any).

Is this an acceptable extension?

Currently I only see the platform Variant being used in the Push/Pull/Manifest protocols and not in the image metadata. Am I missing anything in this respect?

@rhatdan
Member

rhatdan commented Jun 21, 2019

I am fine with extending the FROM command.
@nalind WDYT?

@nalind
Collaborator

nalind commented Jun 24, 2019

As a rule, I lean away from making up our own extensions to Dockerfile instructions and syntax.
What's the use case for having FROM do something other than assuming that the image being referred to is for the target platform?

@grooverdan

grooverdan commented Jun 25, 2019

When you have multiple FROM statements in a multi-stage image, it's fairly obvious that the last one is for the target. Earlier stages, however, are more ambiguous: they could be of the build (native) architecture to cross-compile artefacts or to build architecture-agnostic artefacts (which is faster, as it's not emulated), or they could be of the target architecture to provide components.

For now I'll follow docker's implementation and leave this as a separate feature.

@nalind, it seems docker buildx is already using the platform extension:
https://github.com/docker/buildx#building-multi-platform-images

@grooverdan

grooverdan commented Jun 25, 2019

Interestingly (perversely?), docker --platform takes the first FROM as the target architecture and the others as the native architecture:

Dockerfile.platformargs:

FROM alpine
ARG BUILDPLATFORM
ARG BUILDOS
ARG BUILDARCH
ARG BUILDVARIANT
ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
FROM ubuntu
COPY --from=alpine /bin/ls /tmp/ls
$ DOCKER_BUILDKIT=1  docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        e8ff056
 Built:             Thu Apr 11 04:44:27 2019
 OS/Arch:           linux/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          18.09.5
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       e8ff056
  Built:            Thu Apr 11 04:13:40 2019
  OS/Arch:          linux/amd64
  Experimental:     true
$  DOCKER_BUILDKIT=1  docker build --tag testdocker --platform linux/ppc64le -f Dockerfile.platformargs .
$  docker create -ti --name container_test_docker testdocker file /bin/ls /tmp/ls
aec54aad0aa9ddba2a029da5107a199ef45e49c981965e4854c5286d9f8ee6fe
$ docker cp container_test_docker:/bin/ls /tmp/ubuntu_ls
$ docker cp container_test_docker:/tmp/ls /tmp/alpine_ls
$ file  /tmp/alpine_ls /tmp/ubuntu_ls
/tmp/alpine_ls: ELF 64-bit LSB pie executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-powerpc64le.so.1, stripped
/tmp/ubuntu_ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=9567f9a28e66f4d7ec4baf31cfbf68d0410f0ae6, stripped

moby/buildkit#1057

@junaruga
Author

junaruga commented Jul 30, 2019

Hi @grooverdan

tree: https://github.com/grooverdan/buildah/tree/args-target-platform

Is the branch above still your latest work? I would like to try running your program.

@grooverdan

grooverdan commented Jul 31, 2019

Yep, it's the latest. I'm about to get back to this really soon. The plan was to normalize its behaviour to be consistent with the buildkit issue comment above, and see what was working and what needs to be fixed. I also need to see how far the on-disk libraries have come with respect to storing architecture information. Please have a try and report anything you like/dislike. If you want changes in my tree, I'm happy to take PRs and I'll merge them somehow.

@junaruga
Author

junaruga commented Aug 1, 2019

@grooverdan sure, alright. Let me try running your program.

The plan was to normalize its behaviour to be consistent with the buildkit issue comment above, and see what was working and what needs to be fixed.

Sure. I noticed that the docker build --platform option is only available when setting DOCKER_BUILDKIT=1.

My environment:

$ DOCKER_BUILDKIT=1 docker version
Client: Docker Engine - Community
 Version:           19.03.0
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        aeac9490dc
 Built:             Wed Jul 17 18:16:02 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       aeac9490dc
  Built:            Wed Jul 17 18:14:40 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
$ DOCKER_BUILDKIT=1 docker build --help | grep platform
      --platform string         Set platform if server is multi-platform capable

$ docker build --help | grep platform
(no output)

Build a ppc64le image with your Dockerfile.

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-ppc64le --platform linux/ppc64le .

Run it with QEMU and the binfmt_misc handlers (/proc/sys/fs/binfmt_misc/qemu-$arch) installed by dnf install qemu-user-static.

$ docker run --rm -t test/alpine-ppc64le uname -m
ppc64le

@junaruga
Author

junaruga commented Aug 1, 2019

I just compared the output of docker build --help and DOCKER_BUILDKIT=1 docker build --help. Interesting.

$ diff docker_build_help.txt docker_build_help_builder_buildkit_1.txt 
11d10
<       --compress                Compress the build context using gzip
28a28,31
>   -o, --output stringArray      Output destination (format: type=local,dest=path)
>       --platform string         Set platform if server is multi-platform capable
>       --progress string         Set type of progress output (auto, plain, tty). Use
>                                 plain to show container output (default "auto")
32a36,37
>       --secret stringArray      Secret file to expose to the build (only if BuildKit
>                                 enabled): id=mysecret,src=/local/secret
34a40,42
>       --ssh stringArray         SSH agent socket or keys to expose to the build (only
>                                 if BuildKit enabled) (format:
>                                 default|<id>[=<socket>|<key>[,<key>]])

@junaruga
Author

junaruga commented Aug 1, 2019

The --platform option works very well.
For the last case, --platform linux/i386, the result of uname -m is x86_64, but that is an issue with the original i386/alpine image.

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-ppc64le --platform linux/ppc64le .
$ docker run --rm -t test/alpine-ppc64le uname -m
ppc64le

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-aarch64 --platform linux/aarch64 .
$ docker run --rm -t test/alpine-aarch64 uname -m
aarch64

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-s390x --platform linux/s390x .
$ docker run --rm -t test/alpine-s390x uname -m
s390x

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-arm --platform linux/arm .
$ docker run --rm -t test/alpine-arm uname -m
armv7l
$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-i386 --platform linux/i386 .
$ docker run --rm -t test/alpine-i386 uname -m
x86_64

$ docker pull i386/alpine
$ docker run --rm -t i386/alpine uname -m
x86_64

@grooverdan

grooverdan commented Aug 1, 2019

I think I've completed:

Exposure of args in build process.

Less sure:

Ensuring args are declared before use (or not) in the same way as docker buildkit/buildx.

Known gaps:

Passing the --platform argument into fetching docker images (e.g. `docker pull --platform X ubuntu:latest`).

Image on filesystem detection of platform (not sure it's even there).

Extending Dockerfile "FROM xxx" to FROM --platform ${(BUILD|TARGET)PLATFORM} xxx
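One detail behind the --platform handling above: the same architecture goes by several names (linux/aarch64 in the tests earlier in this thread, linux/arm64 in the buildx docs), so a parser needs an alias table roughly like the one containerd's platforms package provides. A minimal illustrative sketch; normalizeArch is a hypothetical helper, not buildah's actual code:

```go
package main

import "fmt"

// normalizeArch maps common architecture spellings to the OCI/GOARCH
// canonical names, in the spirit of containerd's platforms package.
// Unknown values pass through unchanged.
func normalizeArch(arch string) string {
	switch arch {
	case "aarch64", "arm64":
		return "arm64"
	case "x86_64", "x86-64", "amd64":
		return "amd64"
	case "i386", "i486", "i586", "i686", "386":
		return "386"
	}
	return arch
}

func main() {
	fmt.Println(normalizeArch("aarch64")) // arm64
	fmt.Println(normalizeArch("x86_64"))  // amd64
}
```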

@junaruga
Author

junaruga commented Aug 1, 2019

@grooverdan Congratulations! I have a few questions.

I think the difference is that you had the ubuntu image locally in docker images for x86 while you didn't have alpine. If image exists locally it is always used instead of accessing the registry. This isn't really obvious and hopefully can be fixed ... Currently, it should always work correctly if you do docker build --pull .

According to the "moby/buildkit" ticket above, you implemented docker's current behavior in podman to keep consistency with docker, right?

Image on filesystem detection of platform (not sure it's even there).

What does that mean?
It seems "docker buildx" internally uses "moby/buildkit"'s binfmt_misc data, generated by the files below from Debian's binutils-$arch-linux-gnu deb packages, to detect the specific arch's container image specified by the --platform option.

https://github.com/moby/buildkit/blob/master/util/binfmt_misc/Dockerfile
https://github.com/moby/buildkit/blob/master/util/binfmt_misc/ppc64le_binary.go

You mean Podman does not have this kind of logic internally, right?
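For comparison, a much simpler probe than buildkit's embedded per-arch binaries is to read the kernel's own status files under /proc/sys/fs/binfmt_misc/, whose first line is "enabled" or "disabled". A minimal Go sketch; handlerEnabled is a hypothetical helper that takes the file contents so it stays testable without a live binfmt_misc mount:

```go
package main

import (
	"fmt"
	"strings"
)

// handlerEnabled reports whether the contents of a
// /proc/sys/fs/binfmt_misc/qemu-<arch> status file describe an
// enabled handler. The kernel writes "enabled" or "disabled" on the
// first line, followed by the interpreter path and flags.
func handlerEnabled(contents string) bool {
	first := strings.SplitN(contents, "\n", 2)[0]
	return strings.TrimSpace(first) == "enabled"
}

func main() {
	sample := "enabled\ninterpreter /usr/bin/qemu-aarch64-static\nflags: F\n"
	fmt.Println(handlerEnabled(sample)) // true
}
```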

@junaruga
Author

junaruga commented Aug 1, 2019

I tested buildah in my chroot mock environment (Fedora rawhide), getting the source from your branch and referring to https://src.fedoraproject.org/rpms/buildah/blob/master/f/buildah.spec , but I could not run it. Maybe it's not an issue specific to the new feature but rather my environment's issue. ;<

$ git remote add grooverdan git@github.com:grooverdan/buildah.git

$ git remote -v
grooverdan	git@github.com:grooverdan/buildah.git (fetch)
grooverdan	git@github.com:grooverdan/buildah.git (push)
origin	git@github.com:containers/buildah.git (fetch)
origin	git@github.com:containers/buildah.git (push)

$ git checkout args-target-platform

The steps below are just from the buildah.spec file; maybe unnecessary steps are included.
For anyone interested in reproducing the process, I ran it in my mock environment with the ~/.config/mock.cfg below. Then I ran fedpkg srpm, mock *.rpm, and mock shell.

~/.config/mock.cfg

# Enable network in the mock environment.
config_opts['use_host_resolv'] = True
config_opts['rpmbuild_networking'] = True
# Mount the source directory on mock environment.
config_opts['plugin_conf']['bind_mount_enable'] = True
config_opts['plugin_conf']['bind_mount_opts']['dirs'].append(('/home/jaruga/git/containers/buildah', '/mnt/buildah' ))
$ uname -m
x86_64

$ go version
go version go1.13beta1 linux/amd64
$ cd /mnt/buildah
$ sed -i 's/GOMD2MAN =/GOMD2MAN ?=/' docs/Makefile
$ sed -i '/docs install/d' Makefile
$ mkdir _build
$ pushd _build
$ ln -s /mnt/buildah src/github.com/containers/buildah
$ popd
$ mv vendor src
$ export GOPATH=/mnt/buildah/_build:/mnt/buildah
$ export 'BUILDTAGS=seccomp selinux'
$ GO111MODULE=off go build -buildmode pie -compiler gc '-tags=rpm_crashtraceback seccomp selinux' -ldflags ' -extldflags '\''-Wl,-z,relro -Wl,--as-needed  -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\''' -a -v -x -o buildah github.com/containers/buildah/cmd/buildah
$ ./buildah --version
buildah version 1.9.1-dev (image-spec 1.0.0, runtime-spec 1.0.0)
$ ./buildah --help
...
  build-using-dockerfile Build an image using instructions in a Dockerfile
...
$ ./buildah build-using-dockerfile --help
....
      --platform string               Sets target platform and TARGET{PLATFORM,OS,ARCH,VARIANT} build args for cross builds
...

I used below file.

$ cat Dockerfile.fedora
FROM docker.io/fedora AS fedora
ARG BUILDPLATFORM
ARG BUILDOS
ARG BUILDARCH
ARG BUILDVARIANT
ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
# RUN /usr/bin/uname -m
FROM docker.io/ubuntu
# RUN /usr/bin/uname -m
COPY --from=fedora /bin/ls /tmp/ls

Then

$ ./buildah bud -f Dockerfile.fedora .

As I got the error below, I put the file at that path.

STEP 1: FROM docker.io/fedora AS fedora
error creating build container: error obtaining default signature policy: open /etc/containers/policy.json: no such file or directory

As I got the error below, I commented out the RUN lines.

STEP 10: RUN /usr/bin/uname -m
error running container: error creating container for [/bin/sh -c /usr/bin/uname -m]: : exec: "runc": executable file not found in $PATH

Running again, I got the error below.

STEP 11: COPY --from=fedora /bin/ls /tmp/ls
STEP 12: COMMIT
goroutine 1 [running]:
runtime/debug.Stack(0x29, 0x55d9e14fb244, 0xf)
	/usr/lib/golang/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
	/usr/lib/golang/src/runtime/debug/stack.go:16 +0x24
github.com/containers/buildah.getCopyOptions(0x55d9e19fa300, 0xc000726120, 0x55d9e19cd5e0, 0xc0000c2010, 0x55d9e19efe40, 0xc000b28160, 0x0, 0x55d9e19eff00, 0xc000ae4300, 0xc000b28000, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/common.go:28 +0x98
github.com/containers/buildah.(*Builder).Commit(0xc0000eb080, 0x55d9e19e6b40, 0xc0000be060, 0x0, 0x0, 0x55d9e151a453, 0x2a, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/commit.go:216 +0x37e
github.com/containers/buildah/imagebuildah.(*StageExecutor).commit(0xc0005e8160, 0x55d9e19e6b40, 0xc0000be060, 0xc0000e9600, 0xc00074c640, 0x34, 0x0, 0x0, 0x0, 0x10, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1577 +0x169f
github.com/containers/buildah/imagebuildah.(*StageExecutor).Execute(0xc0005e8160, 0x55d9e19e6b40, 0xc0000be060, 0x1, 0x55d9e1512718, 0x1, 0xc0000e9600, 0xc000377dc0, 0xc000804b10, 0x10, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1096 +0x1a04
github.com/containers/buildah/imagebuildah.(*Executor).Build(0xc0006146c0, 0x55d9e19e6b40, 0xc0000be060, 0xc0000c7ef0, 0x2, 0x2, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1727 +0x7ea
github.com/containers/buildah/imagebuildah.BuildDockerfiles(0x55d9e19e6b40, 0xc0000be060, 0x55d9e19fa300, 0xc000726120, 0xc0005abef0, 0xc, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1894 +0x5ec
main.budCmd(0xc0002d7b80, 0xc0005b9020, 0x1, 0x3, 0xc0005abb58, 0xc000001b00, 0xc0000c4d80, 0xc000001c80, 0xc0000c4de0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:323 +0x1537
main.init.1.func1(0xc0002d7b80, 0xc0005b9020, 0x1, 0x3, 0x0, 0x0)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:51 +0xd8
github.com/spf13/cobra.(*Command).execute(0xc0002d7b80, 0xc0005b8fc0, 0x3, 0x3, 0xc0002d7b80, 0xc0005b8fc0)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:762 +0x462
github.com/spf13/cobra.(*Command).ExecuteC(0x55d9e1f5caa0, 0xc0000aac00, 0x7ffe6fe78807, 0x1d)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:800
main.main()
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/main.go:125 +0x92
Getting image source signatures
Copying blob 543791078bdb skipped: already exists
Copying blob c56e09e1bd18 skipped: already exists
Copying blob a31dbd3063d7 skipped: already exists
Copying blob b079b3fa8d1b skipped: already exists
Copying blob 75ec67da8039 done
Copying config ba4e8140d3 done
Writing manifest to image destination
Storing signatures
ba4e8140d35300be4470e1698b56cf69dfc7f10294188a73c20c82e9ff8380b7

@rhatdan
Member

rhatdan commented Aug 1, 2019

Buildah requires you to have runc (the runc package in Fedora) and the /etc/containers/policy.json file installed; the latter comes from containers-common in the Fedora world.

@junaruga
Author

junaruga commented Aug 2, 2019

Thanks for the info! I missed the lines below in buildah.spec.

Requires: runc >= 1.0.0-17
Requires: containers-common

I installed the required RPM packages.

$ mock -r fedora-rawhide-x86_64 -i runc
$ mock -r fedora-rawhide-x86_64 -i containers-common

In the mock environment again:

$ rpm -q runc containers-common
runc-1.0.0-96.dev.git9ae7901.fc31.x86_64
containers-common-0.1.38-3.dev.git5f45112.fc31.x86_64

$ ./buildah rmi -a

$ ./buildah images -a
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE

$ cat Dockerfile.test
FROM fedora
RUN uname -m
$ ./buildah bud -f Dockerfile.test .
STEP 1: FROM fedora
goroutine 1 [running]:
runtime/debug.Stack(0x29, 0x5574a60db244, 0xf)
	/usr/lib/golang/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
	/usr/lib/golang/src/runtime/debug/stack.go:16 +0x24
github.com/containers/buildah.getCopyOptions(0x5574a65da300, 0xc000720900, 0x5574a65ad5e0, 0xc0000c2010, 0x5574a65d0260, 0xc0002b1770, 0xc0000e69a0, 0x5574a65cff00, 0xc000382180, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/common.go:28 +0x98
github.com/containers/buildah.pullImage(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0x5574a65d0260, 0xc0002b1770, 0x0, 0x0, 0x5574a65ad5e0, 0xc0000c2010, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/pull.go:268 +0x3a5
github.com/containers/buildah.pullAndFindImage(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0x5574a65d0260, 0xc0002b1770, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:34 +0x138
github.com/containers/buildah.resolveImage(0x5574a65c6b40, 0xc0000be060, 0xc0000e69a0, 0x5574a65da300, 0xc000720900, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:175 +0x114b
github.com/containers/buildah.newBuilder(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:250 +0x18d3
github.com/containers/buildah.NewBuilder(...)
	/mnt/buildah/_build/src/github.com/containers/buildah/buildah.go:435
github.com/containers/buildah/imagebuildah.(*StageExecutor).prepare(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:832 +0x3a0
github.com/containers/buildah/imagebuildah.(*StageExecutor).Execute(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:981 +0x16a
github.com/containers/buildah/imagebuildah.(*Executor).Build(0xc00071c900, 0x5574a65c6b40, 0xc0000be060, 0xc0007287b0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1727 +0x7ea
github.com/containers/buildah/imagebuildah.BuildDockerfiles(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc0006e8240, 0xc, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1894 +0x5ec
main.budCmd(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0xc000639d58, 0xc000001e00, 0xc0000c4d80, 0xc00009c780, 0xc0000c4de0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:323 +0x1537
main.init.1.func1(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0x0, 0x0)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:51 +0xd8
github.com/spf13/cobra.(*Command).execute(0xc0002d7b80, 0xc00064b470, 0x3, 0x3, 0xc0002d7b80, 0xc00064b470)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:762 +0x462
github.com/spf13/cobra.(*Command).ExecuteC(0x5574a6b3caa0, 0xc0000aac00, 0x7ffce0ac9809, 0x1d)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:800
main.main()
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/main.go:125 +0x92
goroutine 1 [running]:
runtime/debug.Stack(0x29, 0x5574a60db244, 0xf)
	/usr/lib/golang/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
	/usr/lib/golang/src/runtime/debug/stack.go:16 +0x24
github.com/containers/buildah.getCopyOptions(0x5574a65da300, 0xc000720900, 0x5574a65ad5e0, 0xc0000c2010, 0x5574a65d0260, 0xc000412080, 0xc0000e69a0, 0x5574a65cff00, 0xc00079e1e0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/common.go:28 +0x98
github.com/containers/buildah.pullImage(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0x5574a65d0260, 0xc000412080, 0x0, 0x0, 0x5574a65ad5e0, 0xc0000c2010, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/pull.go:268 +0x3a5
github.com/containers/buildah.pullAndFindImage(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0x5574a65d0260, 0xc000412080, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:34 +0x138
github.com/containers/buildah.resolveImage(0x5574a65c6b40, 0xc0000be060, 0xc0000e69a0, 0x5574a65da300, 0xc000720900, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:175 +0x114b
github.com/containers/buildah.newBuilder(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:250 +0x18d3
github.com/containers/buildah.NewBuilder(...)
	/mnt/buildah/_build/src/github.com/containers/buildah/buildah.go:435
github.com/containers/buildah/imagebuildah.(*StageExecutor).prepare(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:832 +0x3a0
github.com/containers/buildah/imagebuildah.(*StageExecutor).Execute(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:981 +0x16a
github.com/containers/buildah/imagebuildah.(*Executor).Build(0xc00071c900, 0x5574a65c6b40, 0xc0000be060, 0xc0007287b0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1727 +0x7ea
github.com/containers/buildah/imagebuildah.BuildDockerfiles(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc0006e8240, 0xc, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1894 +0x5ec
main.budCmd(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0xc000639d58, 0xc000001e00, 0xc0000c4d80, 0xc00009c780, 0xc0000c4de0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:323 +0x1537
main.init.1.func1(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0x0, 0x0)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:51 +0xd8
github.com/spf13/cobra.(*Command).execute(0xc0002d7b80, 0xc00064b470, 0x3, 0x3, 0xc0002d7b80, 0xc00064b470)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:762 +0x462
github.com/spf13/cobra.(*Command).ExecuteC(0x5574a6b3caa0, 0xc0000aac00, 0x7ffce0ac9809, 0x1d)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:800
main.main()
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/main.go:125 +0x92
Getting image source signatures
Copying blob fd2e8b5b2254 done
Copying config ef49352c9c done
Writing manifest to image destination
Storing signatures
STEP 2: RUN uname -m
x86_64
STEP 3: COMMIT
goroutine 1 [running]:
runtime/debug.Stack(0x29, 0x5574a60db244, 0xf)
	/usr/lib/golang/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
	/usr/lib/golang/src/runtime/debug/stack.go:16 +0x24
github.com/containers/buildah.getCopyOptions(0x5574a65da300, 0xc000720900, 0x5574a65ad5e0, 0xc0000c2010, 0x5574a65cfe40, 0xc0000e6dc0, 0x0, 0x5574a65cff00, 0xc000810720, 0xc0000e6c60, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/common.go:28 +0x98
github.com/containers/buildah.(*Builder).Commit(0xc000860000, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x0, 0x5574a60fa453, 0x2a, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/commit.go:216 +0x37e
github.com/containers/buildah/imagebuildah.(*StageExecutor).commit(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0xc0000e9340, 0xc0000982a0, 0x60, 0x0, 0x0, 0x0, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1577 +0x169f
github.com/containers/buildah/imagebuildah.(*StageExecutor).Execute(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1096 +0x1a04
github.com/containers/buildah/imagebuildah.(*Executor).Build(0xc00071c900, 0x5574a65c6b40, 0xc0000be060, 0xc0007287b0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1727 +0x7ea
github.com/containers/buildah/imagebuildah.BuildDockerfiles(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc0006e8240, 0xc, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1894 +0x5ec
main.budCmd(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0xc000639d58, 0xc000001e00, 0xc0000c4d80, 0xc00009c780, 0xc0000c4de0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:323 +0x1537
main.init.1.func1(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0x0, 0x0)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:51 +0xd8
github.com/spf13/cobra.(*Command).execute(0xc0002d7b80, 0xc00064b470, 0x3, 0x3, 0xc0002d7b80, 0xc00064b470)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:762 +0x462
github.com/spf13/cobra.(*Command).ExecuteC(0x5574a6b3caa0, 0xc0000aac00, 0x7ffce0ac9809, 0x1d)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:800
main.main()
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/main.go:125 +0x92
Getting image source signatures
Copying blob 4843c53b9094 skipped: already exists
Copying blob 5390b6512b39 done
Copying config 59b62232fa done
Writing manifest to image destination
Storing signatures
59b62232fa9e1dbe0d4468bbcebfb4b4c8bb1c65be5114413847ab64e5a57e10
$ ./buildah images
REPOSITORY                 TAG      IMAGE ID       CREATED         SIZE
<none>                     <none>   59b62232fa9e   3 minutes ago   253 MB
docker.io/library/fedora   latest   ef49352c9c21   10 hours ago    253 MB

Any idea to fix this issue? Thanks.

@rhatdan
Member

rhatdan commented Aug 2, 2019

Looking at the code, it looks like this is blowing up.

func getCopyOptions(store storage.Store, reportWriter io.Writer, sourceSystemContext *types.SystemContext, destinationSystemContext *types.SystemContext, manifestType string) *cp.Options {
	sourceCtx := getSystemContext(store, nil, "")
	if sourceSystemContext != nil {
		*sourceCtx = *sourceSystemContext
	}

	destinationCtx := getSystemContext(store, nil, "")
	if destinationSystemContext != nil {  // IN MASTER THIS IS LINE 28
		*destinationCtx = *destinationSystemContext  // I'M THINKING IT IS THIS LINE.
	}
	return &cp.Options{
		ReportWriter:          reportWriter,
		SourceCtx:             sourceCtx,
		DestinationCtx:        destinationCtx,
		ForceManifestMIMEType: manifestType,
	}
}

@kevinelliott

kevinelliott commented Sep 13, 2019

Very excited for this. It's the only thing holding me back from jumping to podman/buildah from docker buildx. Thanks for the hard work here @grooverdan, filing the issue and testing @junaruga, and supporting the effort @rhatdan.

@junaruga
Author

junaruga commented Sep 14, 2019

@kevinelliott Here is an alternative way to build multi-platform images with podman until podman implements this feature natively.

  1. Prepare a Dockerfile like the one below.

    ARG BASE_IMAGE=fedora
    FROM ${BASE_IMAGE}
    
    RUN something
    RUN something
    ...
    
    
  2. Install the qemu-user-static RPM, or run a qemu-user-static container image.

    $ yum install qemu-user-static
    
  3. Build and run for the target arch by specifying an arch-specific base image directly with the above Dockerfile.

    $ podman build --rm -t my-fedora --build-arg BASE_IMAGE=arm64v8/fedora . 
    
    $ podman run --rm -t my-fedora uname -m
    aarch64
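
As a sanity check before step 3, you can verify that the binfmt_misc handler registered by qemu-user-static is actually present. This is a minimal sketch: the /proc path assumes Linux, and `qemu-aarch64` is the usual handler name for arm64.

```shell
#!/bin/sh
# Sketch: check whether the binfmt_misc handler for aarch64 is registered.
# The cross-arch `podman build` in step 3 silently depends on it.
if [ -e /proc/sys/fs/binfmt_misc/qemu-aarch64 ]; then
    echo "aarch64 emulation is registered"
else
    echo "no aarch64 handler: install qemu-user-static first"
fi
```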
    

@rhatdan
Member

rhatdan commented Feb 2, 2021

You should be able to do most of this in emulation mode now.

buildah bud --manifest foobar --arch ARCH1 .
buildah bud --manifest foobar --arch ARCH2 .
buildah bud --manifest foobar --arch ARCH3 .

Should work now with the code in buildah-1.19.3
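
The per-arch invocations can be scripted. Here is a minimal sketch that only prints the commands so they can be reviewed before running; the registry URL is a placeholder, not from this thread.

```shell
#!/bin/sh
# Sketch: emit one `buildah bud` per architecture into a shared manifest
# list, then a push of the whole list. Pipe the output to `sh` to run it.
build_multiarch() {
    manifest=$1; shift
    for arch in "$@"; do
        printf 'buildah bud --manifest %s --arch %s .\n' "$manifest" "$arch"
    done
    # registry.example.com is a placeholder
    printf 'buildah manifest push --all %s docker://registry.example.com/%s:latest\n' \
        "$manifest" "$manifest"
}

build_multiarch foobar amd64 arm64 s390x
```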

@avikivity

avikivity commented Feb 2, 2021

Many thanks. I only have 1.18 in Fedora 33, will try it as soon as I get upgraded.

@avikivity

avikivity commented Feb 2, 2021

I see that --arch is supported in 1.18. Can I assemble the image using podman manifest instead?

@rhatdan
Member

rhatdan commented Feb 2, 2021

You can install the buildah from updates-testing:

dnf -y update --enablerepo=updates-testing buildah

@avikivity

avikivity commented Feb 2, 2021

Uh, I already installed the one from f34. But it's not working:

$ buildah bud --arch aarch64 --no-cache
STEP 1: FROM docker.io/fedora:33
STEP 2: RUN touch /testfile
STEP 3: COMMIT
--> 61d523dbf7b
61d523dbf7bcd8cf660ab1201ac8949c7531ee5c762f6174388220eece852e01
$ podman run -it --rm 61d523dbf7bcd8cf660ab1201ac8949c7531ee5c762f6174388220eece852e01 rpm -q coreutils
coreutils-8.32-12.fc33.x86_64

I'd expect the aarch64 rpm to be installed, not x86_64.

@avikivity

avikivity commented Feb 2, 2021

hmm, maybe aarch64 is spelled arm64 today.

@avikivity

avikivity commented Feb 2, 2021

$ buildah bud --arch arm64 --no-cache
STEP 1: FROM docker.io/fedora:33
Getting image source signatures
Copying blob be581a95d832 done  
Copying config 4d2e61f132 done  
Writing manifest to image destination
Storing signatures
STEP 2: RUN touch /testfile
exec container process `/bin/sh`: Exec format error
error building at STEP "RUN touch /testfile": error while running runtime: exit status 1
ERRO exit status 1             

I guess I need some qemu-user magic.

@avikivity

avikivity commented Feb 2, 2021

I installed qemu-user-binfmt, but it still fails:

$ buildah bud --arch arm64 --no-cache
STEP 1: FROM docker.io/fedora:33
STEP 2: RUN touch /testfile
exec container process (missing dynamic library?) `/bin/sh`: No such file or directory
error building at STEP "RUN touch /testfile": error while running runtime: exit status 1
ERRO exit status 1           

@rhatdan
Member

rhatdan commented Feb 2, 2021

Works for me. You need qemu-user-static

$ cat /tmp/Containerfile 
FROM docker.io/fedora:33
RUN arch
$ rpm -q qemu-user-static
qemu-user-static-5.1.0-9.fc33.x86_64
$ buildah bud /tmp
STEP 1: FROM docker.io/fedora:33
Getting image source signatures
Copying blob f147208a1e03 done  
Copying config a78267678b done  
Writing manifest to image destination
Storing signatures
STEP 2: RUN arch
x86_64
STEP 3: COMMIT
Getting image source signatures
Copying blob 5d6d8687c4a0 skipped: already exists  
Copying blob 5d185949c787 done  
Copying config 33b435c940 done  
Writing manifest to image destination
Storing signatures
--> 33b435c940e
33b435c940ea616a7ab363bdb4e6cea9ee56fc9f1274155a039aa948d6f18098
$ buildah bud --arch arm64 /tmp
STEP 1: FROM docker.io/fedora:33
STEP 2: RUN arch
aarch64
STEP 3: COMMIT
Getting image source signatures
Copying blob 5f7efc74a7d4 skipped: already exists  
Copying blob fd40a8df3ffd done  
Copying config 514fb81b72 done  
Writing manifest to image destination
Storing signatures
--> 514fb81b720
514fb81b72054e28894124661fd7545ff1e3a0adc230087b00d65843fbb7cd29

@avikivity
Copy link

avikivity commented Feb 2, 2021

Thanks! works for me too.

@rhatdan
Member

rhatdan commented Feb 3, 2021

I believe we have the components to do this now, in current buildah. Reopen if I am mistaken.

@rhatdan rhatdan closed this as completed Feb 3, 2021
@yangm97

yangm97 commented Feb 4, 2021

@rhatdan do you know if multi-arch manifests are covered? Might be relevant for this issue. Docker build (or buildx, whatever) has a --platform flag which allows building multiple images and pushing them to the same "tag" within a single command. For example:

docker buildx build --platform=linux/amd64,linux/arm/v7 --push -t example.com/foo/bar:latest .

Should we expect a similar command from buildah? Or are users supposed to do something else? Maybe use skopeo to create the multi-arch manifests after pushing images to different tags?

@fboudra

fboudra commented Feb 4, 2021

@yangm97 you can use buildah to create multiarch manifests. See buildah manifest create and buildah manifest push.

@junaruga
Author

junaruga commented Feb 4, 2021

Thanks for the work and congrats!! I appreciate it. ❤️

@rhatdan
Member

rhatdan commented Feb 5, 2021

I am working on fixing bugs in this area. But you should be able to do

buildah bud --manifest mymanifest --arch (or --platform) /contextdir.

In buildah 1.19.4 and podman 3.0.
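
Looped over several platforms, the podman 3.0 form of this might look like the following sketch. The registry URL is a placeholder, and the commands are printed rather than executed so they can be inspected first.

```shell
#!/bin/sh
# Sketch: emit one `podman build` per platform into a shared manifest list,
# then a push of the list. Pipe the output to `sh` to run for real.
gen_podman_multiarch() {
    manifest=$1; shift
    for platform in "$@"; do
        printf 'podman build --manifest %s --platform %s .\n' "$manifest" "$platform"
    done
    # registry.example.com is a placeholder
    printf 'podman manifest push --all %s docker://registry.example.com/%s:latest\n' \
        "$manifest" "$manifest"
}

gen_podman_multiarch mymanifest linux/amd64 linux/arm/v7
```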

divyansh42 added a commit to divyansh42/buildah-build that referenced this issue Feb 12, 2021
Since buildah now supports multi arch image built
support (ref: containers/buildah#1590)
This feature will allow building images based on multiple
architectures

Signed-off-by: divyansh42 <diagrawa@redhat.com>
divyansh42 added a commit to divyansh42/buildah-build that referenced this issue Feb 17, 2021
Since buildah now supports multi arch image built
support (ref: containers/buildah#1590)
This feature will allow building images based on multiple
architectures

Signed-off-by: divyansh42 <diagrawa@redhat.com>
tetchel pushed a commit to redhat-actions/buildah-build that referenced this issue Feb 17, 2021
Since buildah now supports multi arch image
support (ref: containers/buildah#1590)
This feature will allow building images for multiple
architectures.

Signed-off-by: divyansh42 <diagrawa@redhat.com>
@divyansh42

divyansh42 commented Oct 28, 2021

I am trying to build a multi-arch image without using buildah bud, but it fails when I try to push the manifest.
All the examples provided in this issue and elsewhere are solely for building with a Containerfile. It would be a great help if I could get an example of building without a Containerfile.
Thanks in advance.

@rhatdan
Member

rhatdan commented Oct 28, 2021

Have you worked with the buildah manifest command?
buildah manifest create mylist
buildah manifest add mylist image1
buildah manifest add mylist image2
buildah manifest push --all mylist

Should be all you need.
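
Filled in with placeholder names (the example.com tag and the localhost per-arch images are assumptions, not from this thread), the sequence reads:

```shell
#!/bin/sh
# Sketch: the manifest-list sequence with placeholder image names.
# Printed rather than executed so it can be reviewed; pipe to `sh` to run.
print_manifest_cmds() {
    cat <<'EOF'
buildah manifest create example.com/foo/bar:latest
buildah manifest add example.com/foo/bar:latest localhost/bar:amd64
buildah manifest add example.com/foo/bar:latest localhost/bar:arm64
buildah manifest push --all example.com/foo/bar:latest docker://example.com/foo/bar:latest
EOF
}
print_manifest_cmds
```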

@divyansh42

divyansh42 commented Oct 28, 2021

Yes, I tried it that way.
I also tried buildah commit --manifest, but both ways resulted in the same error when I tried to push the manifest.

However, when I did this with a Containerfile everything worked perfectly.

edit: I don't think there is any problem with Quay, as I successfully pushed a manifest consisting of images built with a Containerfile.

@rhatdan
Member

rhatdan commented Oct 28, 2021

Please open a new issue with the exact steps you are doing rather than discussing it here in a closed issue.
