coredns:v1.8.0 and etcd:3.4.13-0 on arm64 have the wrong architecture in their manifests; kubeadm init fails #104085
/sig release

It turns out the wrong architecture in the manifests was not the reason the pods weren't starting. Since the latest images of etcd and coredns are still affected, I'm going to keep the issue open.
CoreDNS as well as etcd are released in their own processes and are not in the scope of SIG Release. For etcd, the image lives here: https://github.com/kubernetes/kubernetes/blob/master/cluster/images/etcd Indeed, the architectures for non
I assume the same applies to CoreDNS, but I'm not sure right now where the image is being built.
Is there a way to tell CRI-O to ignore the mismatch in the images temporarily?
cc @chrisohaver @johnbelamaric @rajansandeep I think the image is built from the Dockerfile at: /sig api-machinery network
Related: note that this test is catching problems in the etcd, coredns, kube-proxy, and conformance images:
it's just reporting warnings, because otherwise this would fail the entire test job, and some CRs tolerate the arch mismatch.
Fixes for coredns and etcd are in flight.

/triage accepted

The issue should be solved for etcd with the next Kubernetes minor release (v1.23), and for coredns with the next coredns release.
What happened:
Cluster initialization on arm64 via `kubeadm init` using CRI-O would fail after a timeout. `watch crictl ps` wouldn't show any containers starting. The CRI-O logs show the following:
The inspection of the images' manifests seems to confirm the initial assumption that some of them were pulled with the wrong architecture:
But all the other images are fine:
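For context on where this value lives: the architecture a runtime compares against the host is recorded in the image's config JSON under the `architecture` key, which is what skopeo and podman report. A minimal sketch of reading that field; the sample config blob below is illustrative only, not taken from the affected images:

```python
import json

def image_architecture(config_json: str) -> str:
    """Return the 'architecture' field from an image config blob.

    This is the value runtimes compare against the host platform;
    a mislabeled arm64 image still carries "amd64" here even though
    its binaries are arm64.
    """
    return json.loads(config_json)["architecture"]

# Illustrative (trimmed) config blob:
sample = '{"architecture": "amd64", "os": "linux"}'
print(image_architecture(sample))  # -> amd64
```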
To rule out that CRI-O is somehow at fault, I ran the images through `skopeo inspect`, which confirmed the previous results even when passing `--override-arch arm64`.
So I checked what architecture the binaries in the container storage location actually are:
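The binaries can be checked independently of what the manifest claims by reading the ELF header, which is essentially what `file` does. A minimal sketch; the storage path in the usage comment is hypothetical:

```python
import struct

# Subset of e_machine values from the ELF specification.
ELF_MACHINES = {0x3E: "x86-64", 0xB7: "aarch64"}

def elf_architecture(path: str) -> str:
    """Report the architecture an ELF binary was actually built for,
    regardless of what the image manifest claims (akin to `file`)."""
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # EI_DATA at offset 5 selects the byte order; e_machine is a
    # 16-bit field at offset 18.
    endian = "<" if header[5] == 1 else ">"
    (machine,) = struct.unpack_from(endian + "H", header, 18)
    return ELF_MACHINES.get(machine, hex(machine))

# Hypothetical usage against a binary unpacked in container storage:
# elf_architecture("/var/lib/containers/storage/.../coredns")
```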
These are `arm64` binaries; the images' manifests are wrong.
I reproduced this issue on another arm64 host via podman. The image digests are the same, and `podman inspect` also shows the architecture to be `amd64`. Unlike CRI-O, however, podman can run the images.
What you expected to happen:
That the images of the correct architecture also list that correct architecture in their manifests.
How to reproduce it (as minimally and precisely as possible):
`kubeadm init` with CRI-O should do the trick.
Anything else we need to know?:
This has already been reported in #99656.
But both etcd and coredns still show `amd64` on their latest tags.
Environment:
- Kubernetes version (use `kubectl version`): Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"archive", BuildDate:"2021-08-02T11:40:12Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/arm64"}
- OS (e.g. from `cat /etc/os-release`): Arch Linux ARM
- Kernel (e.g. `uname -a`): 5.13.3 custom based on Arch Linux ARM config
- kubeadm
- CRI-O