flannel.1 link gets 2 ipv4 addresses on secondary nodes #883
When I run
I haven't manually added these addresses, they just appear automatically when flannel starts.
The flannel manifest I have applied is: https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
The entire log leading up to the error is:
Kube flannel should start properly, not only on the master, but on all additional nodes that join the cluster.
Kube flannel starts correctly on the master, but additional nodes that join the cluster appear to allocate two IPv4 addresses to the flannel.1 interface.
Steps to Reproduce (for bugs)
Initialize a master node on HypriotOS with
This is preventing me from running Kubernetes right now. I previously had a working k8s 1.7 cluster with Flannel 0.7. Only since upgrading Kubernetes and Flannel has this issue started occurring.
We see the same issue with Flannel 0.7.1.
```
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:a2:ab:59 brd ff:ff:ff:ff:ff:ff
    inet 10.10.14.187/23 brd 10.10.15.255 scope global eth0
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
    link/ether 02:42:ad:b4:c5:7f brd ff:ff:ff:ff:ff:ff
    inet 172.30.79.1/24 scope global docker0
       valid_lft forever preferred_lft forever
46: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 62:d6:6c:70:2a:94 brd ff:ff:ff:ff:ff:ff
    inet 172.30.73.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet 172.30.79.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
```
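The dump above shows the symptom: flannel.1 carries two /32 addresses (172.30.73.0 and 172.30.79.0) where there should be exactly one. A minimal sketch for surfacing this condition by counting `inet` entries for flannel.1 — the sample output is embedded here for illustration; on a live node you would pipe `ip -4 -o addr show dev flannel.1` instead:

```shell
# Sample lines in the format of `ip -4 -o addr show dev flannel.1`
# (embedded for illustration; values taken from the dump above).
sample='46: flannel.1 inet 172.30.73.0/32 scope global flannel.1
46: flannel.1 inet 172.30.79.0/32 scope global flannel.1'

# Count IPv4 addresses on flannel.1; more than one reproduces this bug.
count=$(printf '%s\n' "$sample" | grep -c ' inet ')
echo "flannel.1 has $count IPv4 address(es)"
```

On an affected node, `ip -4 -o addr show dev flannel.1 | grep -c ' inet '` gives the live count. Some reporters work around similar symptoms by deleting the address that does not match the node's current lease (`ip addr del <addr>/32 dev flannel.1`) and restarting the flannel pod, but which address is stale has to be verified per node first.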
The same issue occurs with Flannel v0.10.0.
OS: Raspbian Stretch Lite
+1, same setup as @wroney688:
In my case all the flannel pods initially come up successfully but after ~3 days the flannel pod gets stuck in
Here are the interfaces on the worker with the failing flannel pod:
As with others, running
Can we re-open this issue? Any logs I can grab next time to help debug?
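One piece of data worth grabbing next time is the node's current lease: flannel writes it to `/run/flannel/subnet.env`, and the address on flannel.1 should fall inside `FLANNEL_SUBNET`. A hedged sketch, with hypothetical file contents embedded (chosen to match the dump earlier in this thread; on a node you would read the real file):

```shell
# Hypothetical contents of /run/flannel/subnet.env, the per-node lease
# file flannel writes at startup (values embedded for illustration).
subnet_env='FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.79.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true'

# Extract the current lease subnet; a flannel.1 address outside this
# subnet (e.g. 172.30.73.0/32 here) would point at a stale lease.
lease=$(printf '%s\n' "$subnet_env" | sed -n 's/^FLANNEL_SUBNET=//p')
echo "current lease subnet: $lease"
```

Attaching `/run/flannel/subnet.env` alongside `ip a` output and the flannel pod logs would let maintainers see whether the extra address belongs to a previous lease.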