
Supporting building multi-platform images (podman buildx) #1590

Open
junaruga opened this issue May 13, 2019 · 38 comments
@junaruga junaruga commented May 13, 2019

Description

Supporting building multi-platform images (podman buildx)

Detail

This ticket is a feature request, originally from containers/libpod#3063.

docker buildx [1][2] enables building and running multi-platform container images.
I would like podman to have a similar feature.

$ docker buildx build --platform linux/arm64 ...

RHEL 8 started supporting multiple architectures, including 64-bit ARM.
Quay 3 did the same. [3]
So it might be good timing for podman to support this feature.

docker buildx uses QEMU internally to do this.
Another way to achieve it is qemu-user-static [4], which also uses QEMU.

According to the docker buildx article [2], both seem to share similar logic:

This fast and lightweight container OS comes packaged with the QEMU emulator, and comes pre-configured with binfmt_misc to run binaries of any supported architecture.

But docker buildx looks much easier to use than qemu-user-static.

@junaruga junaruga commented May 19, 2019

I would like to give more context about this ticket.

Why is this feature important for me now?

First, the Docker announcement below is dated April 30, 2019, just 20 days ago.
It's happening now.

[2] docker buildx article: https://engineering.docker.com/2019/04/multi-arch-images/

docker buildx is not stable yet; for now it is a limited release in the edge version.
But once it becomes stable, people will notice how easy it is to build multi-arch containers on x86_64. Currently, most container users might not even know it is possible.

It's like people who did not know about smartphones in an earlier era:
once they learn about them, they want to use them.

And people might say, "We want to build and run multi-arch containers like docker buildx in the open-source world, or in the Fedora, CentOS, or Debian ecosystem."

multiarch project

Let me also share the multiarch project, which I am working on now.

[4] qemu-user-static: https://github.com/multiarch/qemu-user-static

The multiarch project (https://github.com/multiarch) gives people the multi-arch experience below.

$ uname -m
x86_64

$ docker run --rm --privileged multiarch/qemu-user-static:register --reset

$ docker run --rm -t multiarch/fedora:30-aarch64 uname -m
aarch64
$ docker run --rm -t multiarch/fedora:30-s390x uname -m
s390x

$ docker run --rm -t multiarch/fedora:29-aarch64 uname -m
aarch64
$ docker run --rm -t multiarch/fedora:29-ppc64le uname -m
ppc64le
$ docker run --rm -t multiarch/fedora:29-s390x uname -m
s390x

It enables people to add multi-arch test cases to their current CI, and it helps promote multi-arch container use cases.
People realize, "Oh, it is possible to build, run, and test multi-arch cases on CI."

Here are examples of adding multi-arch cases to Travis CI:

https://github.com/BenLangmead/bowtie2/blob/master/.travis.yml#L22-L41
https://github.com/jts/nanopolish/blob/master/.travis.yml#L46-L75

In my experience with pull requests on some projects, adding multi-arch cases to an existing x86_64-based CI is easier for maintainers than adding a new CI with native multi-arch support, such as Shippable CI.
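The Travis CI cases linked above boil down to a short config. Here is a minimal sketch, assuming the multiarch/qemu-user-static registration command shown earlier; the image tags and the uname check are just placeholders for a real test suite:

```yaml
# Sketch of a multi-arch job matrix on a stock x86_64 Travis worker.
language: minimal
services:
  - docker
env:
  - ARCH_IMAGE=multiarch/fedora:30-aarch64
  - ARCH_IMAGE=multiarch/fedora:30-s390x
before_install:
  # Register QEMU binfmt_misc handlers so foreign-arch binaries can run.
  - docker run --rm --privileged multiarch/qemu-user-static:register --reset
script:
  - docker run --rm -t "$ARCH_IMAGE" uname -m
```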

Shippable CI supports native 64/32-bit ARM environments, as in the pull request below.
But I have never succeeded in getting such a pull request merged into another organization's repository.
https://github.com/lh3/minimap2/pull/398/files

There are some challenges with the multiarch project's current technology.
For example, people have to use containers compatible with qemu-user-static rather than their existing multi-arch containers.

docker buildx, and a podman equivalent, would solve this.

@rhatdan rhatdan commented May 20, 2019

If the community wants to work on this, then we would consider it. Currently we don't have the resources or the priority to work on it.

@junaruga junaruga commented May 20, 2019

@rhatdan by "the community" in "the community wants to work on this", do you mean the multiarch project?

@rhatdan rhatdan commented May 20, 2019

Yes, we need contributors opening PRs to make this happen; it is not as high as other features on the core maintainers' priority list.

@junaruga junaruga commented Jun 3, 2019

@rhatdan Sure, thanks for keeping the possibility open.
I hope this ticket stays open.
Though I cannot promise to work on buildah myself, the chance is not zero percent. The multiarch project needs some work before it is ready for buildah work.

@grooverdan grooverdan commented Jun 10, 2019

I'm working on at least part of this with #1368 (comment). This covers the build side of buildah build-using-dockerfile --platform linux/aarch64/variant ... Yes, I'd planned some interaction with qemu-user-static in the build.

I see that docker buildx nicely integrates multiple platform arguments to produce a manifest. I'll see what can be done for buildah. I haven't looked at podman yet.

@junaruga junaruga commented Jun 11, 2019

Wow, great work, @grooverdan

@rhatdan rhatdan commented Jun 11, 2019

@grooverdan Thanks for looking into this.

@rhatdan rhatdan commented Jun 20, 2019

@grooverdan Any progress?

@grooverdan grooverdan commented Jun 21, 2019

Yes:
tree: https://github.com/grooverdan/buildah/tree/args-target-platform
sample output of simple test case: https://dpaste.de/obgx

Issues currently being resolved:

  • multistage images currently don't get the right target architecture

I'm making the last FROM image in a Dockerfile be fetched using the --platform arguments.

For nicely handling multi-stage cross builds, I think the following is needed:
extending FROM to take a --platform {os/arch/variant|"build"|"target"} argument

It's needed because in a multi-stage build it is not clear whether a FROM, excluding the last one, is intended to be of the build or the target arch (and if we're doing that, we may as well allow any).

Is this an acceptable extension?

Currently I only see the platform Variant being used in the Push/Pull/Manifest protocols and not in the image metadata. Am I missing anything in this respect?

@rhatdan rhatdan commented Jun 21, 2019

I am fine with extending the FROM command.
@nalind WDYT?

@nalind nalind commented Jun 24, 2019

As a rule, I lean away from making up our own extensions to Dockerfile instructions and syntax.
What's the use case for having FROM do something other than assuming that the image being referred to is for the target platform?

@grooverdan grooverdan commented Jun 25, 2019

When you have multiple FROM statements in a multi-stage image, it's fairly obvious that the last one is for the target. Earlier stages, however, are more ambiguous: they could be of the build (native) architecture, to cross-compile artefacts or produce architecture-agnostic artefacts (which is faster, since it is not emulated), or they could be of the target architecture to provide components.

For now I'll follow docker's implementation and leave this as a separate feature.

@nalind, it seems docker buildx is already using the platform extension:
https://github.com/docker/buildx#building-multi-platform-images
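For reference, the extension documented there looks like this in use. A sketch; the golang/alpine images and the build command are placeholders, not from this thread:

```dockerfile
# An early stage pinned to the build host's platform runs natively
# (no emulation) and cross-compiles for the target; the final stage
# defaults to the platform given with --platform.
FROM --platform=$BUILDPLATFORM golang:1.12 AS builder
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .
FROM alpine
COPY --from=builder /out/app /usr/bin/app
```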

@grooverdan grooverdan commented Jun 25, 2019

Interestingly (perversely?), docker --platform takes the first FROM as the target architecture and the others as the native architecture:

Dockerfile.platformargs:

FROM alpine
ARG BUILDPLATFORM
ARG BUILDOS
ARG BUILDARCH
ARG BUILDVARIANT
ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
FROM ubuntu
COPY --from=alpine /bin/ls /tmp/ls
$ DOCKER_BUILDKIT=1  docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        e8ff056
 Built:             Thu Apr 11 04:44:27 2019
 OS/Arch:           linux/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          18.09.5
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       e8ff056
  Built:            Thu Apr 11 04:13:40 2019
  OS/Arch:          linux/amd64
  Experimental:     true
$  DOCKER_BUILDKIT=1  docker build --tag testdocker --platform linux/ppc64le -f Dockerfile.platformargs .
$  docker create -ti --name container_test_docker testdocker file /bin/ls /tmp/ls
aec54aad0aa9ddba2a029da5107a199ef45e49c981965e4854c5286d9f8ee6fe
$ docker cp container_test_docker:/bin/ls /tmp/ubuntu_ls
$ docker cp container_test_docker:/tmp/ls /tmp/alpine_ls
$ file  /tmp/alpine_ls /tmp/ubuntu_ls
/tmp/alpine_ls: ELF 64-bit LSB pie executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-powerpc64le.so.1, stripped
/tmp/ubuntu_ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=9567f9a28e66f4d7ec4baf31cfbf68d0410f0ae6, stripped

moby/buildkit#1057

@junaruga junaruga commented Jul 30, 2019

Hi @grooverdan

tree: https://github.com/grooverdan/buildah/tree/args-target-platform

Is the above branch still your latest code? I would like to try running your program.

@grooverdan grooverdan commented Jul 31, 2019

Yep, it's the latest. I'm about to get back to this really soon. The plan was to normalize its behaviour to be consistent with the buildkit issue comment above, and to see what is working and what needs to be fixed. I also need to see how far the on-disk libraries have come with respect to storing architecture information. Please have a try and report anything you like/dislike. If you want changes in my tree, I'm happy to take PRs and I'll merge them somehow.

@junaruga junaruga commented Aug 1, 2019

@grooverdan Sure, alright. Let me try running your program.

Plan was to normalize it its behaviour to be consistent with the buildkit issue comment above, see what was working and what needs to be fixed.

Sure. I noticed the docker build --platform option is only available when DOCKER_BUILDKIT=1 is set.

My environment:

$ DOCKER_BUILDKIT=1 docker version
Client: Docker Engine - Community
 Version:           19.03.0
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        aeac9490dc
 Built:             Wed Jul 17 18:16:02 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       aeac9490dc
  Built:            Wed Jul 17 18:14:40 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
$ DOCKER_BUILDKIT=1 docker build --help | grep platform
      --platform string         Set platform if server is multi-platform capable

$ docker build --help | grep platform
(no output)

Building a ppc64le image with your Dockerfile:

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-ppc64le --platform linux/ppc64le .

Running it with QEMU and the binfmt_misc handlers (/proc/sys/fs/binfmt_misc/qemu-$arch) installed by dnf install qemu-user-static:

$ docker run --rm -t test/alpine-ppc64le uname -m
ppc64le
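A quick way to see which handlers that package registered. A small sketch; the list_handlers helper is made up for illustration:

```shell
# list_handlers DIR: print the name of each qemu-* binfmt_misc entry
# under DIR (normally /proc/sys/fs/binfmt_misc), or "none" if there are none.
list_handlers() {
  found=0
  for f in "$1"/qemu-*; do
    [ -e "$f" ] && { echo "${f##*/}"; found=1; }
  done
  [ "$found" -eq 1 ] || echo none
}

list_handlers /proc/sys/fs/binfmt_misc
```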
@junaruga junaruga commented Aug 1, 2019

I just compared the results of docker build --help and DOCKER_BUILDKIT=1 docker build --help. Interesting.

$ diff docker_build_help.txt docker_build_help_builder_buildkit_1.txt 
11d10
<       --compress                Compress the build context using gzip
28a28,31
>   -o, --output stringArray      Output destination (format: type=local,dest=path)
>       --platform string         Set platform if server is multi-platform capable
>       --progress string         Set type of progress output (auto, plain, tty). Use
>                                 plain to show container output (default "auto")
32a36,37
>       --secret stringArray      Secret file to expose to the build (only if BuildKit
>                                 enabled): id=mysecret,src=/local/secret
34a40,42
>       --ssh stringArray         SSH agent socket or keys to expose to the build (only
>                                 if BuildKit enabled) (format:
>                                 default|<id>[=<socket>|<key>[,<key>]])
@junaruga junaruga commented Aug 1, 2019

The --platform option works very well.
In the last case, --platform linux/i386, the result of uname -m is x86_64, but that is an issue with the original i386/alpine image.

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-ppc64le --platform linux/ppc64le .
$ docker run --rm -t test/alpine-ppc64le uname -m
ppc64le

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-aarch64 --platform linux/aarch64 .
$ docker run --rm -t test/alpine-aarch64 uname -m
aarch64

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-s390x --platform linux/s390x .
$ docker run --rm -t test/alpine-s390x uname -m
s390x

$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-arm --platform linux/arm .
$ docker run --rm -t test/alpine-arm uname -m
armv7l
$ DOCKER_BUILDKIT=1 docker build --tag test/alpine-i386 --platform linux/i386 .
$ docker run --rm -t test/alpine-i386 uname -m
x86_64

$ docker pull i386/alpine
$ docker run --rm -t i386/alpine uname -m
x86_64
@grooverdan grooverdan commented Aug 1, 2019

I think I've completed:

  • Exposure of the args in the build process.

Less sure:

  • Ensuring args are declared before use (or not) in the same way as docker buildkit/buildx.

Known gaps:

  • Passing the --platform argument into fetching docker images (e.g. docker pull --platform X ubuntu:latest).
  • Image-on-filesystem detection of the platform (not sure it's even there).
  • Extending Dockerfile "FROM xxx" to FROM --platform ${(BUILD|TARGET)PLATFORM} xxx
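The --platform value maps onto the TARGET* build args roughly like this. A sketch; the split rule shown is my assumption, not code from the branch:

```shell
# parse_platform OS/ARCH[/VARIANT]: derive the TARGETPLATFORM, TARGETOS,
# TARGETARCH and TARGETVARIANT build args from a --platform string.
parse_platform() {
  TARGETPLATFORM=$1
  IFS=/ read -r TARGETOS TARGETARCH TARGETVARIANT <<EOF
$1
EOF
}

parse_platform linux/arm/v7
echo "$TARGETOS $TARGETARCH $TARGETVARIANT"   # → linux arm v7
```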

@junaruga junaruga commented Aug 1, 2019

@grooverdan Congratulations! I have a few questions.

I think the difference is that you had the ubuntu image locally in docker images for x86 while you didn't have alpine. If image exists locally it is always used instead of accessing the registry. This isn't really obvious and hopefully can be fixed ... Currently, it should always work correctly if you do docker build --pull .

According to the "moby/buildkit" ticket above, you implemented the current docker behavior described there in podman to keep consistency with docker, right?

Image on filesystem detection of platform (not sure its even there).

What does that mean?
docker buildx seems to internally use moby/buildkit's binfmt_misc data, generated from Debian's binutils-$arch-linux-gnu deb packages by the files below, to detect the specific arch's container image specified by the --platform option.

https://github.com/moby/buildkit/blob/master/util/binfmt_misc/Dockerfile
https://github.com/moby/buildkit/blob/master/util/binfmt_misc/ppc64le_binary.go

Did you mean that podman does not have this kind of logic internally?

@junaruga junaruga commented Aug 1, 2019

I tested buildah in my chroot mock environment (Fedora Rawhide), getting the source from your branch and referring to https://src.fedoraproject.org/rpms/buildah/blob/master/f/buildah.spec, but I could not run it. Maybe it's not an issue specific to the new feature but an issue with my environment. ;<

$ git remote add grooverdan git@github.com:grooverdan/buildah.git

$ git remote -v
grooverdan	git@github.com:grooverdan/buildah.git (fetch)
grooverdan	git@github.com:grooverdan/buildah.git (push)
origin	git@github.com:containers/buildah.git (fetch)
origin	git@github.com:containers/buildah.git (push)

$ git checkout args-target-platform

The steps below are just from the buildah.spec file; maybe unnecessary steps are included.
For anyone interested in reproducing the process: I ran it in my mock environment with the ~/.config/mock.cfg below, then did fedpkg srpm, mock *.rpm, and mock shell.

~/.config/mock.cfg

# Enable network in the mock environment.
config_opts['use_host_resolv'] = True
config_opts['rpmbuild_networking'] = True
# Mount the source directory on mock environment.
config_opts['plugin_conf']['bind_mount_enable'] = True
config_opts['plugin_conf']['bind_mount_opts']['dirs'].append(('/home/jaruga/git/containers/buildah', '/mnt/buildah' ))
$ uname -m
x86_64

$ go version
go version go1.13beta1 linux/amd64
$ cd /mnt/buildah
$ sed -i 's/GOMD2MAN =/GOMD2MAN ?=/' docs/Makefile
$ sed -i '/docs install/d' Makefile
$ mkdir _build
$ pushd _build
$ ln -s /mnt/buildah src/github.com/containers/buildah
$ popd
$ mv vendor src
$ export GOPATH=/mnt/buildah/_build:/mnt/buildah
$ export 'BUILDTAGS=seccomp selinux'
$ GO111MODULE=off go build -buildmode pie -compiler gc '-tags=rpm_crashtraceback seccomp selinux' -ldflags ' -extldflags '\''-Wl,-z,relro -Wl,--as-needed  -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\''' -a -v -x -o buildah github.com/containers/buildah/cmd/buildah
$ ./buildah --version
buildah version 1.9.1-dev (image-spec 1.0.0, runtime-spec 1.0.0)
$ ./buildah --help
...
  build-using-dockerfile Build an image using instructions in a Dockerfile
...
$ ./buildah build-using-dockerfile --help
....
      --platform string               Sets target platform and TARGET{PLATFORM,OS,ARCH,VARIANT} build args for cross builds
...

I used the file below.

$ cat Dockerfile.fedora
FROM docker.io/fedora AS fedora
ARG BUILDPLATFORM
ARG BUILDOS
ARG BUILDARCH
ARG BUILDVARIANT
ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
# RUN /usr/bin/uname -m
FROM docker.io/ubuntu
# RUN /usr/bin/uname -m
COPY --from=fedora /bin/ls /tmp/ls

Then

$ ./buildah bud -f Dockerfile.fedora .

As I got the error below, I put the file at that path.

STEP 1: FROM docker.io/fedora AS fedora
error creating build container: error obtaining default signature policy: open /etc/containers/policy.json: no such file or directory

As I got the error below, I commented out the RUN lines.

STEP 10: RUN /usr/bin/uname -m
error running container: error creating container for [/bin/sh -c /usr/bin/uname -m]: : exec: "runc": executable file not found in $PATH

Then, running again, I got the error below.

STEP 11: COPY --from=fedora /bin/ls /tmp/ls
STEP 12: COMMIT
goroutine 1 [running]:
runtime/debug.Stack(0x29, 0x55d9e14fb244, 0xf)
	/usr/lib/golang/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
	/usr/lib/golang/src/runtime/debug/stack.go:16 +0x24
github.com/containers/buildah.getCopyOptions(0x55d9e19fa300, 0xc000726120, 0x55d9e19cd5e0, 0xc0000c2010, 0x55d9e19efe40, 0xc000b28160, 0x0, 0x55d9e19eff00, 0xc000ae4300, 0xc000b28000, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/common.go:28 +0x98
github.com/containers/buildah.(*Builder).Commit(0xc0000eb080, 0x55d9e19e6b40, 0xc0000be060, 0x0, 0x0, 0x55d9e151a453, 0x2a, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/commit.go:216 +0x37e
github.com/containers/buildah/imagebuildah.(*StageExecutor).commit(0xc0005e8160, 0x55d9e19e6b40, 0xc0000be060, 0xc0000e9600, 0xc00074c640, 0x34, 0x0, 0x0, 0x0, 0x10, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1577 +0x169f
github.com/containers/buildah/imagebuildah.(*StageExecutor).Execute(0xc0005e8160, 0x55d9e19e6b40, 0xc0000be060, 0x1, 0x55d9e1512718, 0x1, 0xc0000e9600, 0xc000377dc0, 0xc000804b10, 0x10, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1096 +0x1a04
github.com/containers/buildah/imagebuildah.(*Executor).Build(0xc0006146c0, 0x55d9e19e6b40, 0xc0000be060, 0xc0000c7ef0, 0x2, 0x2, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1727 +0x7ea
github.com/containers/buildah/imagebuildah.BuildDockerfiles(0x55d9e19e6b40, 0xc0000be060, 0x55d9e19fa300, 0xc000726120, 0xc0005abef0, 0xc, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1894 +0x5ec
main.budCmd(0xc0002d7b80, 0xc0005b9020, 0x1, 0x3, 0xc0005abb58, 0xc000001b00, 0xc0000c4d80, 0xc000001c80, 0xc0000c4de0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:323 +0x1537
main.init.1.func1(0xc0002d7b80, 0xc0005b9020, 0x1, 0x3, 0x0, 0x0)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:51 +0xd8
github.com/spf13/cobra.(*Command).execute(0xc0002d7b80, 0xc0005b8fc0, 0x3, 0x3, 0xc0002d7b80, 0xc0005b8fc0)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:762 +0x462
github.com/spf13/cobra.(*Command).ExecuteC(0x55d9e1f5caa0, 0xc0000aac00, 0x7ffe6fe78807, 0x1d)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:800
main.main()
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/main.go:125 +0x92
Getting image source signatures
Copying blob 543791078bdb skipped: already exists
Copying blob c56e09e1bd18 skipped: already exists
Copying blob a31dbd3063d7 skipped: already exists
Copying blob b079b3fa8d1b skipped: already exists
Copying blob 75ec67da8039 done
Copying config ba4e8140d3 done
Writing manifest to image destination
Storing signatures
ba4e8140d35300be4470e1698b56cf69dfc7f10294188a73c20c82e9ff8380b7
@rhatdan rhatdan commented Aug 1, 2019

Buildah requires you to have runc (the runc package in Fedora) and the /etc/containers/policy.json file installed (containers-common in the Fedora world).

@junaruga junaruga commented Aug 2, 2019

Thanks for the info! I missed the lines below in buildah.spec.

Requires: runc >= 1.0.0-17
Requires: containers-common

I installed the required RPM packages.

$ mock -r fedora-rawhide-x86_64 -i runc
$ mock -r fedora-rawhide-x86_64 -i containers-common

In the mock environment again:

$ rpm -q runc containers-common
runc-1.0.0-96.dev.git9ae7901.fc31.x86_64
containers-common-0.1.38-3.dev.git5f45112.fc31.x86_64

$ ./buildah rmi -a

$ ./buildah images -a
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE

$ cat Dockerfile.test
FROM fedora
RUN uname -m
$ ./buildah bud -f Dockerfile.test .
STEP 1: FROM fedora
goroutine 1 [running]:
runtime/debug.Stack(0x29, 0x5574a60db244, 0xf)
	/usr/lib/golang/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
	/usr/lib/golang/src/runtime/debug/stack.go:16 +0x24
github.com/containers/buildah.getCopyOptions(0x5574a65da300, 0xc000720900, 0x5574a65ad5e0, 0xc0000c2010, 0x5574a65d0260, 0xc0002b1770, 0xc0000e69a0, 0x5574a65cff00, 0xc000382180, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/common.go:28 +0x98
github.com/containers/buildah.pullImage(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0x5574a65d0260, 0xc0002b1770, 0x0, 0x0, 0x5574a65ad5e0, 0xc0000c2010, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/pull.go:268 +0x3a5
github.com/containers/buildah.pullAndFindImage(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0x5574a65d0260, 0xc0002b1770, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:34 +0x138
github.com/containers/buildah.resolveImage(0x5574a65c6b40, 0xc0000be060, 0xc0000e69a0, 0x5574a65da300, 0xc000720900, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:175 +0x114b
github.com/containers/buildah.newBuilder(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:250 +0x18d3
github.com/containers/buildah.NewBuilder(...)
	/mnt/buildah/_build/src/github.com/containers/buildah/buildah.go:435
github.com/containers/buildah/imagebuildah.(*StageExecutor).prepare(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:832 +0x3a0
github.com/containers/buildah/imagebuildah.(*StageExecutor).Execute(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:981 +0x16a
github.com/containers/buildah/imagebuildah.(*Executor).Build(0xc00071c900, 0x5574a65c6b40, 0xc0000be060, 0xc0007287b0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1727 +0x7ea
github.com/containers/buildah/imagebuildah.BuildDockerfiles(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc0006e8240, 0xc, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1894 +0x5ec
main.budCmd(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0xc000639d58, 0xc000001e00, 0xc0000c4d80, 0xc00009c780, 0xc0000c4de0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:323 +0x1537
main.init.1.func1(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0x0, 0x0)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:51 +0xd8
github.com/spf13/cobra.(*Command).execute(0xc0002d7b80, 0xc00064b470, 0x3, 0x3, 0xc0002d7b80, 0xc00064b470)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:762 +0x462
github.com/spf13/cobra.(*Command).ExecuteC(0x5574a6b3caa0, 0xc0000aac00, 0x7ffce0ac9809, 0x1d)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:800
main.main()
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/main.go:125 +0x92
goroutine 1 [running]:
runtime/debug.Stack(0x29, 0x5574a60db244, 0xf)
	/usr/lib/golang/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
	/usr/lib/golang/src/runtime/debug/stack.go:16 +0x24
github.com/containers/buildah.getCopyOptions(0x5574a65da300, 0xc000720900, 0x5574a65ad5e0, 0xc0000c2010, 0x5574a65d0260, 0xc000412080, 0xc0000e69a0, 0x5574a65cff00, 0xc00079e1e0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/common.go:28 +0x98
github.com/containers/buildah.pullImage(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0x5574a65d0260, 0xc000412080, 0x0, 0x0, 0x5574a65ad5e0, 0xc0000c2010, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/pull.go:268 +0x3a5
github.com/containers/buildah.pullAndFindImage(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0x5574a65d0260, 0xc000412080, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:34 +0x138
github.com/containers/buildah.resolveImage(0x5574a65c6b40, 0xc0000be060, 0xc0000e69a0, 0x5574a65da300, 0xc000720900, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:175 +0x114b
github.com/containers/buildah.newBuilder(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc00064b530, 0xc0006e8750, 0x6, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/new.go:250 +0x18d3
github.com/containers/buildah.NewBuilder(...)
	/mnt/buildah/_build/src/github.com/containers/buildah/buildah.go:435
github.com/containers/buildah/imagebuildah.(*StageExecutor).prepare(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:832 +0x3a0
github.com/containers/buildah/imagebuildah.(*StageExecutor).Execute(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:981 +0x16a
github.com/containers/buildah/imagebuildah.(*Executor).Build(0xc00071c900, 0x5574a65c6b40, 0xc0000be060, 0xc0007287b0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1727 +0x7ea
github.com/containers/buildah/imagebuildah.BuildDockerfiles(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc0006e8240, 0xc, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1894 +0x5ec
main.budCmd(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0xc000639d58, 0xc000001e00, 0xc0000c4d80, 0xc00009c780, 0xc0000c4de0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:323 +0x1537
main.init.1.func1(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0x0, 0x0)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:51 +0xd8
github.com/spf13/cobra.(*Command).execute(0xc0002d7b80, 0xc00064b470, 0x3, 0x3, 0xc0002d7b80, 0xc00064b470)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:762 +0x462
github.com/spf13/cobra.(*Command).ExecuteC(0x5574a6b3caa0, 0xc0000aac00, 0x7ffce0ac9809, 0x1d)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:800
main.main()
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/main.go:125 +0x92
Getting image source signatures
Copying blob fd2e8b5b2254 done
Copying config ef49352c9c done
Writing manifest to image destination
Storing signatures
STEP 2: RUN uname -m
x86_64
STEP 3: COMMIT
goroutine 1 [running]:
runtime/debug.Stack(0x29, 0x5574a60db244, 0xf)
	/usr/lib/golang/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
	/usr/lib/golang/src/runtime/debug/stack.go:16 +0x24
github.com/containers/buildah.getCopyOptions(0x5574a65da300, 0xc000720900, 0x5574a65ad5e0, 0xc0000c2010, 0x5574a65cfe40, 0xc0000e6dc0, 0x0, 0x5574a65cff00, 0xc000810720, 0xc0000e6c60, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/common.go:28 +0x98
github.com/containers/buildah.(*Builder).Commit(0xc000860000, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x0, 0x5574a60fa453, 0x2a, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/commit.go:216 +0x37e
github.com/containers/buildah/imagebuildah.(*StageExecutor).commit(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0xc0000e9340, 0xc0000982a0, 0x60, 0x0, 0x0, 0x0, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1577 +0x169f
github.com/containers/buildah/imagebuildah.(*StageExecutor).Execute(0xc0000dca50, 0x5574a65c6b40, 0xc0000be060, 0x0, 0x5574a60f2717, 0x1, 0xc0000e9340, 0xc000385260, 0xc0006e8750, 0x6, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1096 +0x1a04
github.com/containers/buildah/imagebuildah.(*Executor).Build(0xc00071c900, 0x5574a65c6b40, 0xc0000be060, 0xc0007287b0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1727 +0x7ea
github.com/containers/buildah/imagebuildah.BuildDockerfiles(0x5574a65c6b40, 0xc0000be060, 0x5574a65da300, 0xc000720900, 0xc0006e8240, 0xc, 0x0, 0x0, 0x0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/imagebuildah/build.go:1894 +0x5ec
main.budCmd(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0xc000639d58, 0xc000001e00, 0xc0000c4d80, 0xc00009c780, 0xc0000c4de0, 0x0, ...)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:323 +0x1537
main.init.1.func1(0xc0002d7b80, 0xc00064b4d0, 0x1, 0x3, 0x0, 0x0)
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/bud.go:51 +0xd8
github.com/spf13/cobra.(*Command).execute(0xc0002d7b80, 0xc00064b470, 0x3, 0x3, 0xc0002d7b80, 0xc00064b470)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:762 +0x462
github.com/spf13/cobra.(*Command).ExecuteC(0x5574a6b3caa0, 0xc0000aac00, 0x7ffce0ac9809, 0x1d)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/mnt/buildah/src/github.com/spf13/cobra/command.go:800
main.main()
	/mnt/buildah/_build/src/github.com/containers/buildah/cmd/buildah/main.go:125 +0x92
Getting image source signatures
Copying blob 4843c53b9094 skipped: already exists
Copying blob 5390b6512b39 done
Copying config 59b62232fa done
Writing manifest to image destination
Storing signatures
59b62232fa9e1dbe0d4468bbcebfb4b4c8bb1c65be5114413847ab64e5a57e10
$ ./buildah images
REPOSITORY                 TAG      IMAGE ID       CREATED         SIZE
<none>                     <none>   59b62232fa9e   3 minutes ago   253 MB
docker.io/library/fedora   latest   ef49352c9c21   10 hours ago    253 MB

Any idea to fix this issue? Thanks.

@rhatdan rhatdan commented Aug 2, 2019

Looking at the code, it looks like this is blowing up.

func getCopyOptions(store storage.Store, reportWriter io.Writer, sourceSystemContext *types.SystemContext, destinationSystemContext *types.SystemContext, manifestType string) *cp.Options {
	sourceCtx := getSystemContext(store, nil, "")
	if sourceSystemContext != nil {
		*sourceCtx = *sourceSystemContext
	}

	destinationCtx := getSystemContext(store, nil, "")
	if destinationSystemContext != nil {  // IN MASTER THIS IS LINE 28
		*destinationCtx = *destinationSystemContext  // I'M THINKINIG IT IS THIS LINE.
	}
	return &cp.Options{
		ReportWriter:          reportWriter,
		SourceCtx:             sourceCtx,
		DestinationCtx:        destinationCtx,
		ForceManifestMIMEType: manifestType,
	}
}
@kevinelliott kevinelliott commented Sep 13, 2019

Very excited for this. It's the only thing holding me back from jumping to podman/buildah from docker buildx. Thanks for the hard work here @grooverdan, filing the issue and testing @junaruga, and supporting the effort @rhatdan.

@junaruga junaruga commented Sep 14, 2019

@kevinelliott Below is an alternative way to build multi-platform images with podman until podman implements this feature.

  1. Prepare below Dockerfile.

    ARG BASE_IMAGE=fedora
    FROM ${BASE_IMAGE}
    
    RUN something
    RUN something
    ...
    
    
  2. Install qemu-user-static RPM or run qemu-user-static container image.

    $ yum install qemu-user-static
    
  3. Run an arch container by specifying arch container image directly using above Dockerfile.

    $ podman build --rm -t my-fedora --build-arg BASE_IMAGE=arm64v8/fedora . 
    
    $ podman run --rm -t my-fedora uname -m
    aarch64
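One caveat with step 2: it only works if the qemu binfmt handlers are actually registered with the kernel. A minimal sanity check, assuming the standard binfmt_misc mount point:

```shell
# Sketch: verify that qemu-user-static registered its handlers with
# binfmt_misc, which is what lets podman run an aarch64 image on an
# x86_64 host.
if ls /proc/sys/fs/binfmt_misc/qemu-* >/dev/null 2>&1; then
  status="qemu binfmt handlers registered"
else
  status="no qemu binfmt handlers found; install or run qemu-user-static first"
fi
echo "$status"
```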
    
@yangm97 yangm97 commented Sep 15, 2019

@junaruga good workaround, but you don't get to build for multiple architectures at once, nor will you get a multi-arch manifest.

@junaruga junaruga commented Sep 15, 2019

@yangm97 You could create a bash (sh) script like the one below to build for multiple architectures at once. You can also use the parallel command or similar tools to run the builds in parallel [1]

BASE_IMAGES="fedora arm64v8/fedora arm32v7/fedora ppc64le/fedora s390x/fedora"
for image in ${BASE_IMAGES}; do
    podman build --rm -t my-fedora-$image --build-arg BASE_IMAGE=$image .
done

nor will you get a multi-arch manifest.

I do not understand the "multi-arch manifest".

[1] https://www.gnu.org/software/parallel/
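The loop above can also be run concurrently. A sketch using xargs -P (GNU parallel from [1] works the same way); echo is used here so the podman commands are printed rather than executed, and tr maps a name like "arm64v8/fedora" to a tag-safe suffix:

```shell
# Sketch: print (rather than run) the per-arch podman build commands,
# up to 4 at a time. Drop the "echo" to actually execute them.
BASE_IMAGES="fedora arm64v8/fedora arm32v7/fedora ppc64le/fedora s390x/fedora"
cmds=$(printf '%s\n' $BASE_IMAGES |
  xargs -P 4 -I{} sh -c 'echo podman build --rm -t "my-fedora-$(printf %s "{}" | tr / -)" --build-arg BASE_IMAGE="{}" .')
printf '%s\n' "$cmds"
```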

@yangm97 yangm97 commented Sep 15, 2019

@junaruga A manifest list is what allows container runtimes to pull the correct image from, say, fedora:latest, when :latest is a multi-arch manifest list that points to multiple per-architecture images.

I.e.

~ ❯❯❯ docker manifest inspect fedora:latest
{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 529,
         "digest": "sha256:f81f09918379d5442d20dff82a298f29698197035e737f76e511d5af422cabd7",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 529,
         "digest": "sha256:c829b1810d2dbb456e74a695fd3847530c8319e5a95dca623e9f1b1b89020d8b",
         "platform": {
            "architecture": "arm64",
            "os": "linux",
            "variant": "v8"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 529,
         "digest": "sha256:68b26da78d8790df143479ec2e3174c57cedb1c2e84ce1b2675d942d6848f2da",
         "platform": {
            "architecture": "ppc64le",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 529,
         "digest": "sha256:15352d97781ffdf357bf3459c037be3efac4133dc9070c2dce7eca7c05c3e736",
         "platform": {
            "architecture": "s390x",
            "os": "linux"
         }
      }
   ]
}

https://medium.com/@mauridb/docker-multi-architecture-images-365a44c26be6

@junaruga junaruga commented Sep 15, 2019

@yangm97 thanks for the info! Now I understand it!

If you want to use the manifest, the script can be written like this.
print-arch-image-digest.sh prints the "digest" value for a given base image name (e.g. "fedora") and a specified "platform/architecture" value. The jq command is useful for parsing the JSON.

PLATFORM_ARCHS="amd64 arm64 ppc64le s390x"
BASE_IMAGE=fedora
for arch in $PLATFORM_ARCHS; do
    digest=$(./print-arch-image-digest.sh $BASE_IMAGE $arch)
    podman build --rm -t my-fedora-$arch --build-arg BASE_IMAGE=$BASE_IMAGE@$digest .
done

As a co-incidence, I created the similar script with jq and skopeo today.
https://github.com/junaruga/ci-multi-arch-test/blob/master/script/print-manifest-arch-digest.sh
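For illustration, a sketch of what such a digest-extraction helper might look like using plain awk instead of jq. The function name arch_digest is made up here, and the parsing relies on the pretty-printed layout shown above, with "digest" appearing before "platform" in each manifest entry:

```shell
# Sketch (assumed helper): read a manifest list JSON on stdin, e.g. from
#   skopeo inspect --raw docker://docker.io/library/fedora:latest
# and print the digest for the requested architecture.
arch_digest() {
  awk -v want="\"$1\"" '
    /"digest":/       { gsub(/[",]/, "", $2); digest = $2 }   # remember last digest seen
    /"architecture":/ { sub(/,$/, "", $2); if ($2 == want) print digest }
  '
}
# Usage (network access assumed):
#   skopeo inspect --raw docker://docker.io/library/fedora:latest | arch_digest arm64
```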

@grooverdan grooverdan commented Oct 4, 2019

Just letting you know I'm back on this. Rebased, and started some test cases; many more to write (and more mistakes to find) regarding multistage images. No work yet on mounting qemu within the image during the build process.

url:
https://github.com/grooverdan/buildah/tree/args-target-platform

@afbjorklund afbjorklund commented Oct 9, 2019

Excellent work, looking to see if I can use any of this when building "boot2podman" for ARM...

boot2podman/boot2podman#18

The current build system (for x86_64/amd64) uses containers, so it would be nice to be able to do something similar when cross-compiling (armv6/armv7/arm64) - instead of having to use a full VM.

@grooverdan grooverdan commented Oct 9, 2019

Found the following on how to detect arm variant of the current system in go: https://github.com/containerd/containerd/blob/master/platforms/cpuinfo.go#L99

@afbjorklund, do you know what might be the equivalent in bash (for the test cases)?

@afbjorklund afbjorklund commented Oct 10, 2019

@grooverdan : Here is the code from tinycore, hope it helps:

getBuild() {
	BUILD=`uname -m`
	case ${BUILD} in
		armv6l) echo "armv6" ;;
		armv7l) echo "armv7" ;;
		i686)   echo "x86" ;;
		x86_64) [ -f /lib/ld-linux-x86-64.so.2 ] && echo "x86_64" || echo "x86" ;;
		*)      echo "x86" ;;
	esac
}
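For buildah's purposes the same idea would map to Go/OCI architecture names rather than tinycore's build names. A sketch (goarch is a hypothetical helper; the output values follow go/build/syslist.go, and arm variant detection would additionally need /proc/cpuinfo):

```shell
# Sketch: map `uname -m` machine names to Go/OCI architecture names,
# analogous to containerd's platforms/cpuinfo.go.
goarch() {
  case "$1" in
    x86_64|amd64)  echo amd64 ;;
    i?86)          echo 386 ;;
    armv6l)        echo arm ;;     # variant v6
    armv7l)        echo arm ;;     # variant v7
    aarch64|arm64) echo arm64 ;;
    ppc64le)       echo ppc64le ;;
    s390x)         echo s390x ;;
    *)             echo "$1" ;;    # pass through unknown machines
  esac
}

goarch "$(uname -m)"
```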
@afbjorklund afbjorklund commented Oct 10, 2019

Ended up not needing any of this for the boot2podman arm variants, but it looks good anyway...
Turns out my issues were more related to my old kernel (as usual) than to podman itself.

$ sudo podman run -it boot2podman-docker-tinycore.bintray.io/tinycore-compiletc:9.0-armv6
/ $ ps
PID   USER     COMMAND
    1 tc       {sh} /usr/bin/qemu-arm-static /bin/sh
    9 tc       {/bin/ps} /bin/ps
/ $ 

Same issue as with the "compatible images" from the Docker multiarch/qemu-user-static project:
you need Linux 4.8 if you want to preload the qemu binary with the fix-binary (F) flag rather than copying it into the image...
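For reference, registering qemu-aarch64 with the fix-binary (F) flag might look like the sketch below. On Linux >= 4.8 the F flag makes the kernel hold the interpreter open at registration time, so qemu need not be copied into each image; the magic/mask bytes follow qemu's qemu-binfmt-conf.sh and the interpreter path is an assumption here:

```shell
# Sketch: binfmt_misc registration string for aarch64 ELF binaries,
# ending in ":F" for fix-binary behavior.
reg=':qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F'
printf '%s\n' "$reg"
# As root: printf '%s' "$reg" > /proc/sys/fs/binfmt_misc/register
```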

@grooverdan grooverdan commented Oct 11, 2019

Thanks for the hint that the F flag requires Linux 4.8 (https://www.kernel.org/doc/Documentation/admin-guide/binfmt-misc.rst). I hadn't come across that, and it certainly saves a copy/namespace mount.

Thanks for the arm hints. I'll account for x86_32 (using i?86), which should output 386 going by https://golang.org/src/go/build/syslist.go. I don't follow the x86_64) [ -f /lib/ld-linux-x86-64.so.2 ] test. arm64 is treated as the v8 variant currently (good enough for the test case). I suspect I'm missing some v7 cases, but those will only be hit if people run the buildah tests on such hardware locally. The default case of x86 seems brutal; falling back to $BUILD seems like a better assumption, at least for buildah, though perhaps tinycore thinks differently.

Thanks for the hints and motivation.

update: variant/arch test ended up like this

@afbjorklund afbjorklund commented Oct 11, 2019

Probably it is because only four TCL variants existed:
http://tinycorelinux.net/9.x/

The 10.x release is still in “beta” for arm, and there is no aarch64 yet; aarch64 is mostly relevant for the Raspberry Pi 4.
