Networking issues with Raspbian Lite / RPi4 #703
Describe the bug
I flashed Raspbian Lite for my Raspberry Pi 4 (4GB) and updated / upgraded packages with apt:
I did this on 3 separate RPis and ran the curl / sh script to install k3s as a server on each. I then tried to access Traefik on port 80 via each node's IP from my laptop. It did not respond.
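For reference, the steps above can be sketched roughly as follows. This is a minimal, hedged reproduction, not the exact commands from the report: it assumes the standard k3s install script at get.k3s.io, and `NODE_IP` and the `traefik_url` helper are hypothetical placeholders of mine.

```shell
#!/bin/sh
# Sketch of the reproduction; NODE_IP is a hypothetical example address
# for one of the RPis.
NODE_IP="${NODE_IP:-192.168.0.101}"

traefik_url() {
  # build the URL used to probe Traefik on port 80 of a node
  printf 'http://%s:80/' "$1"
}

# On each RPi (requires network access and root), roughly:
#   curl -sfL https://get.k3s.io | sh -      # install k3s as a server
# Then from the laptop:
#   curl -sI "$(traefik_url "$NODE_IP")"     # expect an HTTP response from Traefik

traefik_url "$NODE_IP"
```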
If I use
I then deployed OpenFaaS with the following:
The pods which rely on networking failed to start, namely those that talk to Prometheus or NATS on start-up.
This worked well for me every time I tried it previously. The only thing which I think is different is having run
@ibuildthecloud DNS isn't working and IP connectivity isn't working either.
CoreDNS appears to be started.
Thanks for filing this issue @alexellis!
I am trying to replicate on a Raspberry Pi 2 (armv7) but am having a hard time doing so. I deployed OpenFaaS using the commands you provided:
```
root@k3s-base:~/faas-netes# kubectl get all -A
NAMESPACE     NAME                                READY   STATUS      RESTARTS   AGE
kube-system   pod/coredns-b7464766c-cckf5         1/1     Running     0          10m
kube-system   pod/helm-install-traefik-w2d7g      0/1     Completed   0          10m
kube-system   pod/svclb-traefik-xm5c4             2/2     Running     0          8m26s
kube-system   pod/traefik-56688c4464-mhv7x        1/1     Running     0          8m25s
openfaas      pod/alertmanager-757cc474bc-6rqfw   1/1     Running     0          7m11s
openfaas      pod/faas-idler-59dfd85f6c-vgnhm     1/1     Running     2          7m12s
openfaas      pod/gateway-597f6578bc-7v9ch        2/2     Running     0          7m12s
openfaas      pod/nats-d4c9d8d95-bwfwf            1/1     Running     0          7m11s
openfaas      pod/prometheus-68d68d7466-xwclk     1/1     Running     0          7m9s
openfaas      pod/queue-worker-df9d5749c-zgfrh    1/1     Running     0          7m9s

NAMESPACE     NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default       service/kubernetes         ClusterIP      10.43.0.1       <none>        443/TCP                      10m
kube-system   service/kube-dns           ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       10m
kube-system   service/traefik            LoadBalancer   10.43.48.154    10.20.3.70    80:31121/TCP,443:31253/TCP   8m26s
openfaas      service/alertmanager       ClusterIP      10.43.96.238    <none>        9093/TCP                     7m12s
openfaas      service/gateway            ClusterIP      10.43.61.241    <none>        8080/TCP                     7m12s
openfaas      service/gateway-external   NodePort       10.43.106.66    <none>        8080:31112/TCP               7m12s
openfaas      service/nats               ClusterIP      10.43.107.135   <none>        4222/TCP                     7m11s
openfaas      service/prometheus         ClusterIP      10.43.121.9     <none>        9090/TCP                     7m10s

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   1         1         1       1            1           <none>          8m26s

NAMESPACE     NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns        1/1     1            1           10m
kube-system   deployment.apps/traefik        1/1     1            1           8m26s
openfaas      deployment.apps/alertmanager   1/1     1            1           7m12s
openfaas      deployment.apps/faas-idler     1/1     1            1           7m12s
openfaas      deployment.apps/gateway        1/1     1            1           7m12s
openfaas      deployment.apps/nats           1/1     1            1           7m11s
openfaas      deployment.apps/prometheus     1/1     1            1           7m11s
openfaas      deployment.apps/queue-worker   1/1     1            1           7m10s

NAMESPACE     NAME                                      DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-b7464766c         1         1         1       10m
kube-system   replicaset.apps/traefik-56688c4464        1         1         1       8m26s
openfaas      replicaset.apps/alertmanager-757cc474bc   1         1         1       7m12s
openfaas      replicaset.apps/faas-idler-59dfd85f6c     1         1         1       7m12s
openfaas      replicaset.apps/gateway-597f6578bc        1         1         1       7m12s
openfaas      replicaset.apps/nats-d4c9d8d95            1         1         1       7m11s
openfaas      replicaset.apps/prometheus-68d68d7466     1         1         1       7m10s
openfaas      replicaset.apps/queue-worker-df9d5749c    1         1         1       7m9s

NAMESPACE     NAME                           COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik 1/1           101s       10m
```
The logs for CoreDNS:
```
root@k3s-base:~/faas-netes# kubectl logs -n kube-system deployment.apps/coredns
.:53
2019-08-06T18:39:47.494Z [INFO] CoreDNS-1.3.0
2019-08-06T18:39:47.495Z [INFO] linux/arm, go1.11.4, c8f0e94
CoreDNS-1.3.0
linux/arm, go1.11.4, c8f0e94
2019-08-06T18:39:47.495Z [INFO] plugin/reload: Running configuration MD5 = ef347efee19aa82f09972f89f92da1cf
```
Following the instructions for testing DNS at https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
```
root@k3s-base:~/faas-netes# kubectl apply -f https://k8s.io/examples/admin/dns/busybox.yaml
pod/busybox created
```
And resolving the OpenFaaS service:
```
root@k3s-base:~/faas-netes# kubectl exec -ti busybox -- nslookup nats.openfaas.svc.cluster.local
Server:    10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nats.openfaas.svc.cluster.local
Address 1: 10.43.107.135 nats.openfaas.svc.cluster.local
```
Thanks for spending some time on it. There are at least two things you are doing differently.
I did mention the upgrade step in the original post. Upgrading the packages on your RPi3 may break it too, or this may only break on an RPi4.
Retrying with a fresh install produced the same (working) result. Some of the pods entered
Sorry, I don't have an RPi4 to test with, but if you could provide more info, like the CoreDNS logs and the service resolution test, that would be awesome.
It may be that the RPi just needs more time to bring everything up, and eventually all of your pods will stay in a running state.
Still crashing after 22m: no DNS and no IP connectivity.
These are the modules loaded:
vs. on the working unit:
Thanks for the info @alexellis!
If installed via the curl script, k3s will prepend its binary path (ie
Got docker and k3s working together. The issue appears to be that k3s uses iptables-legacy while docker uses iptables, which in Buster is really iptables-nft. Per kubernetes/kubernetes#71305, having both tables active is a recipe for disaster.
```
sudo iptables -F
```
and a reboot fixed it, and allows both to run on a server or worker node.
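To confirm which backend a Debian Buster host is actually using before flushing rules, the version string can be inspected: iptables 1.8 prints the active backend, e.g. `iptables v1.8.2 (nf_tables)`. A minimal sketch; the `detect_backend` helper name is mine, and the commented commands assume a Debian-style `update-alternatives` setup:

```shell
#!/bin/sh
# Hypothetical helper to classify an iptables version string:
# iptables >= 1.8 includes "(nf_tables)" or "(legacy)" in --version output.
detect_backend() {
  case "$1" in
    *nf_tables*) echo nft ;;
    *)           echo legacy ;;
  esac
}

# On a real host (requires root), roughly:
#   detect_backend "$(iptables --version)"
#   sudo update-alternatives --set iptables  /usr/sbin/iptables-legacy
#   sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
#   sudo reboot

detect_backend "iptables v1.8.2 (nf_tables)"   # prints: nft
```

Switching both k3s and docker onto the same (legacy) backend avoids the split-table situation described above.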
At the moment I think we should just add a warning to the install script when an older version of iptables is detected, since this will be difficult to address for all projects.
The upstream iptables-nft binary probably needs to fall back to the legacy binary or kernel modules when nft is not available.