Add documentation/example for how to do multi-arch builds on a multi-arch Kubernetes cluster without emulation #516
Comments
A builder needs to be created in each CI job before the build. In fact, in each CI job, before the build, you could just run the following to connect the docker buildx client to BuildKit in the Kubernetes cluster:

```shell
docker buildx create --use --name=buildkit --platform=linux/amd64 --node=buildkit-amd64 --driver=kubernetes --driver-opt="namespace=buildkit,nodeselector=kubernetes.io/arch=amd64"
docker buildx create --append --name=buildkit --platform=linux/arm64 --node=buildkit-arm64 --driver=kubernetes --driver-opt="namespace=buildkit,nodeselector=kubernetes.io/arch=arm64"

# Unlike on x86, where i386 binaries can run on an x86_64 host,
# an arm64 host only supports arm64 without emulation (e.g. qemu).
# So an arm32 node has to be added to the k8s cluster,
# and it should be appended too:
docker buildx create --append --name=buildkit --platform=linux/arm/v7 --node=buildkit-arm --driver=kubernetes --driver-opt="namespace=buildkit,nodeselector=kubernetes.io/arch=arm"
# (Not sure the nodeselector is correct; I don't have an arm32 host.)
```

If you don't want to run these scripts before every build, you can set up the deployment in advance. This is the same for all drivers; it is about how the client and BuildKit work together.
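With a builder like this registered, a CI job can then produce one multi-arch image with a single command; a minimal sketch (the registry, image name, and build context here are hypothetical):

```shell
# Each platform in the list is scheduled onto the buildx node whose
# nodeselector matches, so every architecture builds natively.
docker buildx build \
  --builder buildkit \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/myapp:latest \
  --push \
  .
```

With `--push`, buildx assembles the per-architecture images into a single manifest list under the one tag.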
This is all good info @morlay. That's a useful point about setting up the deployment in advance. I think if I do that, I can give the pods requests and limits to work around #210. Right now my cluster assigns some very low limits to anything that doesn't provide its own, so I don't think I can successfully build anything but the smallest containers.

Any tips on how to get BuildKit on Kubernetes to pull from Docker Hub through my cluster's caching registry, to avoid the pull limits? When using the Docker Daemon I have to set up an

When pulling layers, does BuildKit just pull through Kubernetes's container fetch mechanism? Or does it have its own config? Or does it run the Docker Daemon in its pod, meaning I need to inject this config into the BuildKit pods before they start?
I've just tested dropping that
BuildKit runs with containerd, not Docker. You should update the BuildKit config:

```toml
[registry."docker.io"]
  mirrors = ["http://docker-registry.toil:5000"]
  http = true
  insecure = true
```

See https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md for more.
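For reference, the `buildkitd.toml` format in those docs lists the mirror without a protocol scheme, and puts the plain-HTTP settings in the mirror host's own registry section rather than under `docker.io`. A sketch using the hostnames from this thread (assumed, not verified against this cluster):

```toml
# Pulls for docker.io are redirected to the in-cluster mirror.
[registry."docker.io"]
  mirrors = ["docker-registry.toil:5000"]

# The mirror itself speaks plain HTTP, so mark it as such.
[registry."docker-registry.toil:5000"]
  http = true
  insecure = true
```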
@morlay Thanks for the tip! This doesn't quite work, for a couple of reasons. The docs you linked show that the right format is to leave off the protocol scheme (so more like `docker-registry.toil:5000`).

When I do that, it still doesn't work; I think the problem is that the mirror value is just passed along as a hostname, and the port is never parsed out, so if I'm running an HTTP mirror it needs to be on port 80. I will try moving the mirror to port 80 and see if that works.
Changing the port to 80 and passing just the hostname didn't seem to help.
I plugged that in and it seems to be working now. Thanks!
To get the multi-arch builds working with emulation, I had to add
@adamnovak

```yaml
initContainers:
  - name: qemu
    image: "{{ .Values.imageBinfmt.hub }}/binfmt:{{ .Values.imageBinfmt.tag }}"
    args:
      - --install
      - amd64,arm64
    securityContext:
      privileged: true
```
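Outside of a Helm chart, the same binfmt registration can be done ad hoc on any node with Docker, which is also a handy way to check that the emulators actually got installed; a sketch (requires a privileged container):

```shell
# Register qemu handlers for the listed architectures on this host.
docker run --privileged --rm tonistiigi/binfmt --install amd64,arm64

# The builder should now report the extra platforms in its Platforms list.
docker buildx inspect --bootstrap
```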
@adamnovak The error above should be fixed with the qemu update in moby/buildkit#1953. Can you test with
@morlay That looks a bit like what I put in:
@tonistiigi I'm not letting
@tonistiigi I installed buildkit on my
But I am still having multi-arch builds fail (here, for amd64 on the arm64 cluster). Strangely, it gets through all the RUN and apt commands, but seems to fail in later steps. It always fails with an illegal instruction or a panic, which to me says there is something up with qemu?
Another failure (same build system, same nodes, same manifests, different failure location):
It is a qemu issue: qemu x86_64 does not work well for compiling Go on an aarch64 host. If you use pure Go, you could set GOARCH=$TARGETARCH instead. Example: https://github.com/jaegertracing/jaeger-operator/blob/master/build/Dockerfile#L22 (notice line 1 too).
@morlay Great call, thank you! I adjusted the Dockerfile to include

```dockerfile
ARG TARGETOS
ARG TARGETARCH
ARG TARGETPLATFORM
ARG BUILDPLATFORM
```

and adjusted my build to:

```dockerfile
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -ldflags="-w -s" -o /app/argocd-notifications ./cmd
```

And that seems to have fixed it. It took me a while, however, to realise that when using

My working Dockerfile, in full, for those that stumble across this in future:

```dockerfile
FROM --platform=$BUILDPLATFORM golang:1.15.3 as builder

RUN apt-get update && apt-get install ca-certificates

WORKDIR /src

ARG TARGETOS
ARG TARGETARCH
ARG TARGETPLATFORM
ARG BUILDPLATFORM

COPY go.mod /src/go.mod
COPY go.sum /src/go.sum
RUN go mod download

# Perform the build
COPY . .
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -ldflags="-w -s" -o /app/argocd-notifications ./cmd
RUN ln -s /app/argocd-notifications /app/argocd-notifications-backend

FROM scratch

COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=builder /app/argocd-notifications /app/argocd-notifications
COPY --from=builder /app/argocd-notifications-backend /app/argocd-notifications-backend

# Use a numeric user so that Kubernetes can assert that the user id isn't root (0).
# We are also using the root group (the 0 in 1000:0); it doesn't have any
# privileges, as opposed to the root user.
USER 1000:0
```
More buildx info found here: docker/buildx#516
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
In #370, support was added for `--append` with the Kubernetes driver. This lets you add an amd64 builder on Kubernetes targeted to amd64 nodes, and ARM builders targeted to ARM nodes. This is only sort of hinted at in the documentation. A bit of an example was given in the PR:

However, it would be helpful to have a fully worked example in the documentation, from builder creation through the `docker buildx build` command. There should also be some more prose about how this allows you to build each image on an actual host of the appropriate architecture, if available, and push them all together to the same tag at the end. The Right Way to handle 32-bit ARM would be nice to see here as well; can it just be another platform on the 64-bit ARM hosts?
It would also be good to show how/whether other client machines can connect to the same builder on the Kubernetes cluster, or if (some of?) the setup needs to be repeated for e.g. each CI job that wants to build a multi-arch image.
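Pulling together the pieces from the comments, a fully worked flow of the kind requested here might look roughly like the following (namespace, node names, registry, and tag are illustrative):

```shell
# Builder setup: one buildx node per architecture, each pinned to
# matching Kubernetes nodes via a nodeselector.
docker buildx create --use --name=buildkit --platform=linux/amd64 \
  --node=buildkit-amd64 --driver=kubernetes \
  --driver-opt="namespace=buildkit,nodeselector=kubernetes.io/arch=amd64"
docker buildx create --append --name=buildkit --platform=linux/arm64 \
  --node=buildkit-arm64 --driver=kubernetes \
  --driver-opt="namespace=buildkit,nodeselector=kubernetes.io/arch=arm64"

# Build each platform natively on its own node and push one tag,
# which buildx publishes as a single manifest list.
docker buildx build --platform=linux/amd64,linux/arm64 \
  --tag registry.example.com/myapp:latest --push .
```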