First off, thanks for using Kubernetes on ARM!
This is a known issue; it was discussed in kubernetes/kubernetes#38067, where we dropped armel support (the ABI the RPi 1 needs when cross-compiling).
Basically, armhf (GOARM=7) can't run on the Pi 1, so up to v1.5 we used armel with GOARM=6 to support the RPi 1. However, we went all-armhf in v1.6, which is why it no longer works on the Pi 1.
Also, with the armhf switch, we were able to use https://hub.docker.com/r/armhf/alpine, which is great.
Hope it helps, but sorry for not being able to support the RPi 1 anymore.
If you want to help with documenting it or spreading the word, please do, or come with suggestions.
Is it possible to discuss reintegrating armv6l support? I have found many posts showing interest in using Kubernetes on the Pi Zero and other armv6l Pi devices. The Pi Zero is good for hosting microservices in Kubernetes or Swarm cluster environments; Docker Swarm works well for me. So it would be nice if someone could revive the discussion. The Pi ClusterHAT is probably a nice demo infrastructure.
Looking at the current docker.io build for the Pi Zero, it does seem to be using a recent version of Go.
I agree that there is not enough RAM to use k8s on it in standalone mode, but as a worker joined to a bigger master it should still have enough resources to do some useful things.
This is not true today since official images on different architectures are updated simultaneously. For example https://hub.docker.com/r/arm32v5/debian/, https://hub.docker.com/r/arm32v7/debian/ and https://hub.docker.com/r/amd64/debian/ were all updated 9 days ago.
https://hub.docker.com/r/arm32v6/alpine/ runs well on Pi Zero.
I hope that you will reconsider. Preventing the Pi Zero from running the latest k8s is so disappointing.
Hi @juliancheal,
I am still in the middle of building k8s on ClusterHAT, but I was able to compile and build binaries for Pi Zero.
Basically, I have followed the below with some modifications:
I worked on WSL:
#1 install gcc-arm-linux-gnueabi instead of gcc-arm-linux-gnueabihf
#2 before building for linux/arm, make two modifications to set_platform_envs() in hack/lib/golang.sh
GOARM has to be 5. If you specify 6, you will get a linker error during the build. (Which I couldn't resolve.)
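To illustrate the two modifications mentioned above, here is a sketch of what the `linux/arm` case in `set_platform_envs()` in `hack/lib/golang.sh` would look like after the change. The exact surrounding lines vary by k8s release, so treat this as an assumption-laden fragment, not a verbatim patch:

```shell
# Fragment of kube::golang::set_platform_envs() in hack/lib/golang.sh.
# Stock k8s targets armhf here; for armv6l (soft-float) boards the gist is:
    "linux/arm")
      export CGO_ENABLED=1
      export CC=arm-linux-gnueabi-gcc   # instead of arm-linux-gnueabihf-gcc
      export GOARM=5                    # GOARM=6 produced a linker error here
      ;;
```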
@dbwest Yes, I used make all to build binaries. The exact commands I used were:
I needed binaries for nodes, so only those three binaries were needed.
I didn't use kubeadm. I was following Kelsey Hightower's "Kubernetes the Hard Way". As described there, you just need to put those binaries in the appropriate locations.
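The exact commands weren't quoted above, but based on the make pattern used elsewhere in this thread, the build and install steps for a "Kubernetes the Hard Way" worker probably looked roughly like this. The choice of the three binaries (kubelet, kube-proxy, kubectl) is my guess at which "three binaries" were meant:

```shell
# Hypothetical reconstruction: build the three worker-node binaries
# for linux/arm (with the hack/lib/golang.sh changes applied first).
KUBE_BUILD_PLATFORMS=linux/arm make all WHAT=cmd/kubelet
KUBE_BUILD_PLATFORMS=linux/arm make all WHAT=cmd/kube-proxy
KUBE_BUILD_PLATFORMS=linux/arm make all WHAT=cmd/kubectl

# Copy the results onto each node, into the conventional location:
sudo install -m 0755 _output/local/bin/linux/arm/{kubelet,kube-proxy,kubectl} /usr/local/bin/
```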
@shinichi-hashitani any idea what version of kubernetes you were building?
I haven't had any luck getting this to build for arm v6 (hoping to run on a pi zero w).
EDIT: Never mind... it looks like if I build on a Linux machine it works. I was trying to do it from my Mac.
Not exactly sure what is causing issues on your end, but below are the details on my end:
Again, I followed the below with some modifications:
I have prepared my note and exact steps I followed, but sorry that is only available in Japanese:
Hi, as you can see in the discussion above, core Kubernetes dropped support for armv6l.
if you want to use k8s / kubeadm on armv6l you must recompile everything (including CNI images).
I'm just chiming in to say that I have successfully compiled K8s 1.18.3 from source by compiling it in the golang:1.13-alpine docker image, which is a multi-arch image and includes an armv6 variant. (I have Docker configured to use QEMU for emulation and can run containers for other architectures.)
By merely cloning the git repo, and following the 4-step make process on the readme page (i.e. just do make all WHAT=cmd/component), all k8s components except kubelet were statically compiled and run as standalone executables on my pi zero, with no dependencies. (And if golang-alpine stops working, I can just bootstrap Arch Linux ARM from scratch, which should work fine for compiling.)
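A sketch of what that emulated build looks like, assuming QEMU/binfmt is already registered with Docker (e.g. via the `tonistiigi/binfmt` helper image) and you are in a clone of the k8s repo; the package list is my guess at the minimum the alpine image needs:

```shell
# Run the armv6 variant of golang:1.13-alpine under QEMU emulation
# and build one component inside it (kube-proxy shown as the example).
docker run --rm --platform linux/arm/v6 \
  -v "$PWD":/go/src/k8s.io/kubernetes -w /go/src/k8s.io/kubernetes \
  golang:1.13-alpine \
  sh -c 'apk add --no-cache bash make gcc musl-dev rsync && make all WHAT=cmd/kube-proxy'
```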
The only issue is that compiling kubelet still dynamically links to the system glibc library, and I haven't yet figured out how to fix that. I'm not a go programmer, and none of the compile flags I added for go or for gcc seemed to make a difference. (Kubelet has some C code I guess, because it needs gcc to compile.) I guess worst case I can bootstrap a docker image for every type of OS I run, so the glibc dynamic links will work, but I don't want to do that.
Debian still officially supports armel and has packages with a statically linked kubelet version, so my hacky solution currently is to just use their static binary from inside the armel deb package.
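That hack can be done without installing anything, by unpacking the armel .deb directly; the package filename below is illustrative (check the Debian archive for the current name and version):

```shell
# Extract the Debian armel package contents into a local directory
# without installing it, then pull out the static kubelet.
dpkg-deb -x kubernetes-node_armel.deb extracted/
file extracted/usr/bin/kubelet   # should report "statically linked"
sudo install -m 0755 extracted/usr/bin/kubelet /usr/local/bin/kubelet
```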
Lastly, you have to make your own repository with images that have these binaries (as well as the other versions), and configure kubeadm to pull those. And even more fun, although Docker runs on arm6, it incorrectly pulls arm7 images (a known bug for over 3 years), so you need to either change the arm7 image to just run the armel version, or make both arm6 and arm7 in the same image and just have the entry-point be a shell script that determines at runtime whether to launch the arm6 or arm7 program. Non-master nodes only need to run kubelet and kube-proxy, so those are probably the only images you need to do this for. (Another hack I've read about people using is pulling the correct image and then re-tagging it locally to be whatever image kubeadm wants to pull, so it will just use the local version.)
I'm actually just using ansible to set up k8s "the hard way", but I intend on still making compliant Docker images that can be drop-in replacements so kubeadm will work with them. If and when I can get kubelet to statically compile correctly, I will automate the process into a Dockerfile and stick the images on Docker Hub. Those images will have as many architectures as I can use, so ideally, we'll be able to use kubeadm on a multi-architecture cluster, e.g. amd64, arm64, arm6, and arm7. I estimate that full production Docker and K8s on Pi Zeroes (as worker nodes) still leaves at least 50-100 MB of RAM for running small images. And if I strip down the kernel, I can probably free up another 30 or 40 megs. But that's far in the future. If I can get a single static page being served by an nginx container managed by K8s on my Pi Zero, I'm calling that a win for the time being.
Edit from Aug 7: I have managed to get everything working, and currently have a K8s cluster composed of arm6, arm7, arm8, and amd64. I will be making a write-up of my process sometime soon here, but for now, the important takeaway is to do a kubeadm install on an arm6 device as a worker node, you need binaries for kubeadm and kubelet, and only two containers, the pause container, and the kube-proxy container. You can build the binaries natively with buildx if you have QEMU, and just modify my Dockerfile. (Right now, that Dockerfile doesn't actually work completely -- the kube-controller-manager build keeps freezing up. But you can build kubelet, kubeadm, pause, kube-proxy, and the CNI plugins.)
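For reference, a buildx cross-build of that kind looks roughly like this (builder name and output directory are arbitrary; the Dockerfile is the one referenced above):

```shell
# One-time: create and select a buildx builder (QEMU must be registered).
docker buildx create --use --name armbuilder

# Build for armv6 and export the resulting files to ./out instead of
# pushing an image, so the binaries can be copied to the Pi Zero.
docker buildx build --platform linux/arm/v6 --output type=local,dest=./out .
```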
Alternatively, you can pull the static binaries from the /usr/bin dir in the Arch packages I made for kubeadm and kubelet. I installed Arch Linux ARM on my Pi Zero, and so the CNI plugins were installed in my system by a package, but you can build them with my Dockerfile (or pull them from the Arch Linux ARM package) and then place the CNI binaries in the directory "/opt/cni/bin/" on your system. If you have those CNI binaries in that folder, and have kubelet installed and ready as a service, then you can just run kubeadm on the device and it should work fine. The only requirement is you need the correct kube-proxy and pause containers already available for your container engine.
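Placing the CNI plugins is just a matter of unpacking them into the conventional directory kubelet looks in; the tarball name below matches the containernetworking release naming but the version is illustrative:

```shell
# Install prebuilt arm CNI plugin binaries where kubelet expects them.
sudo mkdir -p /opt/cni/bin
sudo tar -xzf cni-plugins-linux-arm-v0.8.6.tgz -C /opt/cni/bin
```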
On my Pi Zeroes, I have stock Docker installed, and I used the binaries I built from the Docker file, combined with analysis of the official K8s containers to build a compatible arm6 container for kube-proxy and pause. Specifying the Kubernetes version as v1.18.6 on kubeadm, required re-tagging those containers as "k8s.gcr.io/kube-proxy:v1.18.6" and "k8s.gcr.io/pause:3.2" respectively, but if those containers are already present and tagged correctly on your system, then kubeadm will succeed without complaint.
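The retagging step is just this; the `myrepo/...` source tags stand in for whatever names you gave your locally built armv6 images:

```shell
# Make locally built images visible under the names kubeadm expects,
# so it finds them locally and skips the (wrong-architecture) pull.
docker tag myrepo/kube-proxy-arm6:v1.18.6 k8s.gcr.io/kube-proxy:v1.18.6
docker tag myrepo/pause-arm6:3.2          k8s.gcr.io/pause:3.2
```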
The only other issue is a working overlay network. I didn't want to go through more compilation hell, so I used Flannel, whose "arm" variant works on arm6 and arm7. You can install it with their default yaml file. However, you should add an env var called FLANNEL_MTU to all sections and set it to 1430 or lower. The default, 1500, causes some issues with metrics-server. Additionally, I combined all of Flannel's images into one multi-arch image if you want to use that. That will allow you to do what I did and strip the default yaml install file down to just one section.
With this "full" K8s installation using kubeadm and Docker CE, my Pi Zeroes idle at about 55% CPU usage, and have about 160MB memory free. If we assume I want to leave at least 25% for burst capacity, that still leaves about 20%, which equates to 200 millis. (Pi Zero has a single core 1GHz CPU.) To give some extra wiggle room, I rounded down and set my container request and limit to 120m, and RAM to 100MB. So far, everything works just fine. The only issue is heat, since my zeroes are all crammed together in a cute stackable case that doesn't have much air space.
(And of course, the manager node is not a Pi Zero, it's a Pi 4.)
Edit from Dec 1 2020: This will be my last update. In fact, there's not much to add. Kubeadm has a yaml configuration file, as do all of the other k8s components, none of which are all that well documented... but you can muddle through if you try.
One of the kubeadm options is to use a custom registry for your images, so you can make a multi-arch image and push it to a private registry and then use that for your setup rather than the hack of simply retagging an image in docker. This is what I have done in order to get rid of docker and just use straight containerd.
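Concretely, that registry option can be set either as a flag or in the kubeadm config file; the registry address below is a placeholder for your private registry:

```shell
# Point kubeadm at a private registry instead of retagging images.
kubeadm init --image-repository registry.local:5000/k8s

# Or equivalently in the yaml passed via `kubeadm init --config`:
#   apiVersion: kubeadm.k8s.io/v1beta2
#   kind: ClusterConfiguration
#   imageRepository: registry.local:5000/k8s
```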
I still haven't figured out how to get the control plane components compiled for arm6. Both QEMU and the native devices won't allow more than 1 GB of RAM, which is not sufficient for Go to compile most of the control plane. I am aware that Go can in theory compile for other architectures, so I should be able to compile for arm6 on my amd64 machine, using all of its RAM. But for the life of me, I can't get that to work, so I'm left compiling things natively in either QEMU or on the devices themselves. Which means no arm6 control plane components.
But that's the only hiccup. Kubelet and kubeadm compile, and the pause container and kube-proxy containers likewise can be built with buildx. So it's still easy enough to get the worker node components working for arm6. If you are making a cluster with pi zeroes though, definitely read up on the kubelet configuration file in order to tweak it for resource usage. (Or, you know, use k3s or another lightweight distro rather than full stock k8s.)
I have binaries for old Raspberry Pi models published here: https://github.com/aojea/kubernetes-raspi-binaries