Error deploying Cilium: Binary "cilium-envoy" cannot be executed #23640
If you don't need L7 policies and L7 visibility, the easiest is probably to set `l7Proxy=false`.
@pchaigno I apologize for my lack of knowledge, is this done with the following?

```
$ helm upgrade cilium cilium/cilium --version 1.12.6 \
    --namespace kube-system \
    --reuse-values \
    --set egressGateway.enabled=true \
    --set bpf.masquerade=true \
    --set kubeProxyReplacement=strict \
    --set l7Proxy=false
```
Yes. That will disable the L7 proxy and, I believe, will remove the need for the cilium-envoy binary. There's also a possibility to use a different cilium-envoy build for aarch64, but I'm not familiar with how.
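To confirm the proxy is actually off after the upgrade, one option (a hedged sketch; `enable-l7-proxy` is the ConfigMap key the Helm `l7Proxy` value maps to in recent releases, so verify the key name for your version) is to inspect the rendered agent configuration and restart the DaemonSet:

```
# Expect enable-l7-proxy: "false" in the rendered agent configuration
kubectl -n kube-system get configmap cilium-config -o yaml | grep -i l7-proxy

# Restart the agents so the new setting takes effect
kubectl -n kube-system rollout restart daemonset/cilium
kubectl -n kube-system rollout status daemonset/cilium
```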
Adding the Cilium status:

```
# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:          OK
 \__/¯¯\__/    Operator:        OK
 /¯¯\__/¯¯\    Hubble:          OK
 \__/¯¯\__/    ClusterMesh:     disabled
    \__/

Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 8, Ready: 8/8, Available: 8/8
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 8
                  hubble-ui          Running: 1
                  hubble-relay      Running: 1
                  cilium-operator    Running: 1
Cluster Pods:     6/6 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.6@sha256:454134506b0448c756398d3e8df68d474acde2a622ab58d0c7e8b272b5867d0d: 8
                  hubble-ui          quay.io/cilium/hubble-ui:v0.9.2@sha256:d3596efc94a41c6b772b9afe6fe47c17417658956e04c3e2a28d293f2670663e: 1
                  hubble-ui          quay.io/cilium/hubble-ui-backend:v0.9.2@sha256:a3ac4d5b87889c9f7cc6323e86d3126b0d382933bd64f44382a92778b0cde5d7: 1
                  hubble-relay       quay.io/cilium/hubble-relay:v1.12.6@sha256:27a68a16f0ee7ed6ba690e91847de6931a5511f85a7f939320df216486764cb9: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.12.6@sha256:eec4430d222cb2967d42d3b404d2606e66468de47ae85e0a3ca3f58f00a5e017: 1
```
For Raspberry Pi, you might need to install one additional package, as described below. I understand that you are not running Ubuntu, but there's no harm in giving it a crack to see how it goes. https://docs.cilium.io/en/latest/operations/system_requirements/#ubuntu-22-04-on-raspberry-pi
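For reference, on Ubuntu the linked requirement boils down to roughly the following (a sketch of the documented Ubuntu steps; the `linux-modules-extra-raspi` package is Ubuntu-specific and has no direct RaspiOS equivalent):

```
# Ubuntu 22.04 on Raspberry Pi: install the extra kernel modules,
# then reboot so the running kernel picks them up
sudo apt update
sudo apt install linux-modules-extra-raspi
sudo reboot
```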
What I meant is to check whether there is an equivalent package for RaspiOS Lite. The docs above were added recently for this same issue when running the proxy on a Raspberry Pi.
The thing is, I do not know which kernel-related packages need to be installed in order to let me use the l7Proxy. Here is the file list for the Ubuntu package: https://packages.ubuntu.com/jammy/arm64/linux-modules-extra-5.15.0-1005-raspi/filelist
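One way to compare against that file list is to dump what the running RaspiOS kernel actually ships (a hedged sketch; the grep pattern below names a few modules purely as examples, not an authoritative list of what Cilium needs):

```
# List every module shipped with the running kernel
find /lib/modules/"$(uname -r)" -name '*.ko*' | sort > local-modules.txt

# Spot-check a few networking modules that appear in the Ubuntu file list
grep -E 'vxlan|geneve|cls_bpf' local-modules.txt
```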
Any more info on this? I'm trying to run Cilium on the most recent version of DietPi (a Raspberry Pi OS derivative). Disabling the L7 proxy breaks Ingress, among other things.
I've made a bit of progress on this, I think. I rebuilt the RPi kernel with the following options:
Envoy actually starts now, but I've broken networking in the process.
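A quick way to check what the rebuilt kernel now exposes (a hedged sketch; stock Raspberry Pi kernels ship the config as the `configs` module, so the modprobe step may be unnecessary on a custom build):

```
# If BTF is enabled, the kernel exposes it here
ls -l /sys/kernel/btf/vmlinux

# Inspect the running kernel configuration
sudo modprobe configs
zcat /proc/config.gz | grep -E 'CONFIG_DEBUG_INFO_BTF|CONFIG_BPF_JIT'
```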
Even with l7Proxy disabled, when I tried on DietPi the Cilium pod does something that affects or disables the host network stack, to the point where connecting to the device (say, through SSH) is not possible. Since it's a headless setup, I can't really figure out what's happening, and I need to reboot the node to connect to it and disable Cilium before the pod starts. After 3-4 hours I kind of gave up on the project of running Cilium on Raspberry Pi hardware.
I have a fully functional cluster of 3 control planes and 5 nodes, with Cilium, MetalLB, Longhorn and kube-prometheus-stack installed. Cilium works with the L7 proxy disabled; otherwise I would not be able to deploy the Prometheus stack and the rest of the pending pods because of this cilium-envoy error. I stopped using DietPi a while ago and switched back to the default arm64 RaspiOS, because I did not see any benefits running it, especially now with hardware dedicated to Kubernetes. I have a full Ansible deployment for the cluster that I plan to make public, but I'm waiting on the Cilium devs to release a fix.
@fmunteanu Here is a gist I created with steps for compiling the Pi OS kernel so that it works with Envoy and the Cilium L7 proxy.
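For readers without the gist at hand, the usual native rebuild flow for the Pi OS kernel looks roughly like this (a sketch following the official Raspberry Pi build documentation; the specific options to enable are the ones discussed in this thread, and paths such as the kernel image destination vary by OS release):

```
# Fetch the Raspberry Pi kernel sources and start from the stock defconfig
git clone --depth=1 https://github.com/raspberrypi/linux
cd linux
make bcm2711_defconfig            # Pi 4; pick the defconfig for your board

# Toggle the options you need (BTF, BPF JIT, ...), then build and install
make menuconfig
make -j"$(nproc)" Image.gz modules dtbs
sudo make modules_install
sudo cp arch/arm64/boot/Image.gz /boot/firmware/kernel8.img   # destination varies by release
```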
@jDmacD while this is a temporary solution, it is not the proper way to go. The existing arm64 kernel should be directly supported by Cilium. Imagine if every RaspiOS user needed to compile the kernel because Cilium has specific requirements. That's not realistic, and it is the response I got so far from the RaspiOS devs in raspberrypi/linux#5354, which is fair. Every time there is a kernel update, users would need to repeat that kernel compile step. Kernel changes should not be promoted as a solution; Cilium needs to adapt its software to the existing RaspiOS kernel. I also do not understand why none of the Cilium devs have voiced their opinion here.
Any updates on this issue? |
@joestringer The RaspiOS kernel devs mentioned it should be very easy for the Cilium devs to address the issue. Is it possible to get some clarification on your side? Cilium expects a larger virtual address space: the 48-bit size required by Cilium is set to 39 bits in the Raspberry Pi kernel. It should not be difficult at all for the Cilium devs to address this.
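To check which setting a given kernel was built with (a hedged check; assumes the `configs` module is available, as on stock Raspberry Pi kernels):

```
# Stock Raspberry Pi kernels are built with CONFIG_ARM64_VA_BITS=39,
# while the expectation discussed above is 48
sudo modprobe configs
zcat /proc/config.gz | grep -E 'CONFIG_ARM64_VA_BITS|CONFIG_PGTABLE_LEVELS'
```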
I can't speak for the difficulty as I have no background on ARM64 platform differences. If it's an easy fix, then great, we would welcome PR proposals to address the issue. I think it's fair to say that upstream Cilium devs would love for Cilium to be more compatible with specific OSes like this, but we rely on the community to report problems and propose solutions in order to make Cilium work better for everyone. |
@joestringer do you know which Cilium dev could look into this? A simple collaboration with the RaspiOS kernel devs should get you all the answers. Let's get some traction on this please; your help will be much appreciated by the community. All the information required to address the problem is posted in this issue, as well as in raspberrypi/linux#5354.
I think a good first step is to change the issue labels, so Cilium devs can notice it. |
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
… BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
… BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
… BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
… BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
… BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
… BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
… BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
… BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
… BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
…f & BTF - for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - see https://github.com/libbpf/libbpf#bpf-co-re-compile-once--run-everywhere
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - also: `CONFIG_BPF_JIT=y`, `CONFIG_FTRACE_SYSCALLS=y`, `CONFIG_BPF_KPROBE_OVERRIDE=y` - this commit should contain no DEBUG/BTF changes
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - also: `CONFIG_BPF_JIT=y`, `CONFIG_FTRACE_SYSCALLS=y`, `CONFIG_BPF_KPROBE_OVERRIDE=y` - this commit should contain no DEBUG/BTF changes
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - also: `CONFIG_BPF_JIT=y`, `CONFIG_FTRACE_SYSCALLS=y`, `CONFIG_BPF_KPROBE_OVERRIDE=y` - this commit should contain no DEBUG/BTF changes
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - also: `CONFIG_BPF_JIT=y`, `CONFIG_FTRACE_SYSCALLS=y`, `CONFIG_BPF_KPROBE_OVERRIDE=y` - this commit should contain no DEBUG/BTF changes
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - also: `CONFIG_BPF_JIT=y`, `CONFIG_FTRACE_SYSCALLS=y`, `CONFIG_BPF_KPROBE_OVERRIDE=y` - this commit should contain no DEBUG/BTF changes
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - also: `CONFIG_BPF_JIT=y`, `CONFIG_FTRACE_SYSCALLS=y`, `CONFIG_BPF_KPROBE_OVERRIDE=y` - this commit should contain no DEBUG/BTF changes
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - also: `CONFIG_BPF_JIT=y`, `CONFIG_FTRACE_SYSCALLS=y`, `CONFIG_BPF_KPROBE_OVERRIDE=y` - this commit should contain no DEBUG/BTF changes
- for tcmalloc (enjoy, cilium, etc) stuff cilium/cilium#23640 - also: `CONFIG_BPF_JIT=y`, `CONFIG_FTRACE_SYSCALLS=y`, `CONFIG_BPF_KPROBE_OVERRIDE=y` - this commit should contain no DEBUG/BTF changes
Is there an existing issue for this?
What happened?
I'm testing the Cilium installation on a fresh K3S high-availability cluster with two control planes and one node. My goal is to use the cluster with Cilium, MetalLB and Longhorn. This is a fresh install done with Ansible on 64-bit RaspiOS Lite; I will have six nodes in total once all deployment issues are resolved.
K3S apollo.lan control plane configuration:
K3S boreas.lan control plane configuration:
K3S cerus.lan node configuration:

After the initial cluster deployment, I see all pods pending:
```
# kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-597584b69b-kbq9p          0/1     Pending   0          3m27s
kube-system   helm-install-traefik-9pr8d        0/1     Pending   0          3m27s
kube-system   helm-install-traefik-crd-tgv85    0/1     Pending   0          3m27s
kube-system   metrics-server-5f9f776df5-l5xgn   0/1     Pending   0          3m27s
```
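Pending pods at this stage are expected when K3s is brought up with its bundled CNI disabled: the nodes stay NotReady until a CNI is deployed. A quick way to confirm that (a hedged check; node name taken from this report):

```
# Nodes report NotReady while no CNI is installed
kubectl get nodes
kubectl describe node apollo.lan | grep -A5 Conditions
```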
Next, I installed the cilium binary on both control planes and the Cilium chart:

```
# kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS              RESTARTS      AGE
kube-system   cilium-6mdw8                       0/1     CrashLoopBackOff    2 (12s ago)   65s
kube-system   cilium-ngrmw                       0/1     CrashLoopBackOff    2 (9s ago)    65s
kube-system   cilium-operator-5f7d7976fd-l5z44   1/1     Running             0             65s
kube-system   cilium-z9qbg                       0/1     CrashLoopBackOff    2 (5s ago)    65s
kube-system   coredns-597584b69b-kbq9p           0/1     ContainerCreating   0             56m
kube-system   helm-install-traefik-9pr8d         0/1     ContainerCreating   0             56m
kube-system   helm-install-traefik-crd-tgv85     0/1     ContainerCreating   0             56m
kube-system   metrics-server-5f9f776df5-l5xgn    0/1     ContainerCreating   0             56m
```
Pods are in a CrashLoopBackOff:
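For anyone reproducing this, the actual crash reason shows up in the agent logs from the failed run (a hedged sketch; the pod name is taken from the listing above):

```
# Describe one crashing agent pod and pull logs from its previous run
kubectl -n kube-system describe pod cilium-6mdw8
kubectl -n kube-system logs cilium-6mdw8 --previous
```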
I followed your Helm installation instructions; not sure what I am missing.
Cilium Version
```
# cilium version
cilium-cli: v0.12.12 compiled with go1.19.4 on linux/arm
cilium image (default): v1.12.5
cilium image (stable): v1.12.6
cilium image (running): v1.12.6
```
Kernel Version
Kubernetes Version
Sysdump
cilium-sysdump-20230208-160817.zip
Relevant log output
Anything else?
No response
Code of Conduct