flannel.1 link gets 2 ipv4 addresses on secondary nodes #883
Comments
Thoughts: is it possible flannel attempts to allocate a single IP, which succeeds, but flannel believes it failed, so it attempts the allocation again, resulting in the two addresses?
Update: I fixed this by just
@d11wtq Interesting, I've not seen this before but certainly let me know if you see this again.
We see the same issue with flannel 0.7.1. Output of ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:a2:ab:59 brd ff:ff:ff:ff:ff:ff
inet 10.10.14.187/23 brd 10.10.15.255 scope global eth0
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether 02:42:ad:b4:c5:7f brd ff:ff:ff:ff:ff:ff
inet 172.30.79.1/24 scope global docker0
valid_lft forever preferred_lft forever
46: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 62:d6:6c:70:2a:94 brd ff:ff:ff:ff:ff:ff
inet 172.30.73.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet 172.30.79.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
Your Environment
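Regarding the ip a output above: one of the two /32 addresses (172.30.79.0) matches the node's current docker0 subnet, so the other (172.30.73.0) is presumably a stale lease. A possible manual cleanup, assuming that presumption holds (verify against the local subnet file before deleting anything):

# Confirm which subnet flanneld currently holds on this node
cat /run/flannel/subnet.env        # FLANNEL_SUBNET should show the live lease

# Remove only the address that does NOT match FLANNEL_SUBNET
sudo ip addr del 172.30.73.0/32 dev flannel.1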
The same issue occurs with Flannel v0.10.0. OS: Raspbian Stretch Lite
I get the same problem.
Same under K8s 1.12.0, flannel v0.10.0; however, sudo ip link delete flannel.1 did allow it to come up without error. (Hypriot 1.9.0 on ARM, Raspberry Pi 3 B+)
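For completeness, the workaround mentioned here amounts to roughly the following sequence; the pod names below are placeholders, and the grep pattern assumes the stock kube-flannel DaemonSet naming:

# On the affected node: remove the stale interface; flannel recreates it when it restarts
sudo ip link delete flannel.1

# Find and restart the flannel pod running on that node so it re-runs its setup
kubectl -n kube-system get pods -o wide | grep flannel
kubectl -n kube-system delete pod <kube-flannel-pod-on-that-node>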
Check your etcd data and the local env file; flanneld will pick up the previous IP at startup.
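A sketch of that check, assuming a flanneld using the etcd backend with the default /coreos.com/network prefix and the etcd v2 API (kubeadm/kube-flannel setups use the Kubernetes API instead, where the per-node lease is the node's podCIDR):

# Subnet leases as flanneld records them in etcd
etcdctl ls /coreos.com/network/subnets

# The lease flanneld wrote locally on this node
cat /run/flannel/subnet.env

# For kube-subnet-mgr setups, compare with the node's assigned pod CIDR
kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'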
+1, same setup as @wroney688: k8s 1.12.0
In my case all the flannel pods initially come up successfully, but after ~3 days the flannel pod gets stuck in
Here are the interfaces on the worker with the failing flannel pod:
As with others, running … Can we re-open this issue? Any logs I can grab next time to help debug?
Just ran into this; is this a NetworkManager problem?
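If NetworkManager is suspected, one thing worth trying (a sketch, not a confirmed fix; the conf file name is arbitrary) is telling it to leave the flannel and CNI interfaces unmanaged so it cannot re-add addresses behind flannel's back:

# /etc/NetworkManager/conf.d/flannel.conf
[keyfile]
unmanaged-devices=interface-name:flannel.1;interface-name:cni0

# Apply the change
sudo systemctl restart NetworkManager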
Using kubeadm and Kubernetes 1.8.3 to provision a 3-node cluster on HypriotOS (ARM), I can initialize the master fine, but when joining nodes into the cluster, kube-flannel crashes on each node with the error shown in the log below.
When I run ip addr I can see that flannel.1 has been given two different IPs, each a /24 apart from each other. I haven't manually added these addresses; they just appear automatically when flannel starts.
kube-flannel enters CrashLoopBackoff and never recovers from the error. Is this a bug, or something I need to update on my host OS? How do I remove the second address, if that is a suitable temporary workaround?
The flannel manifest I have applied is https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml (with sed s/amd64/arm/g).
The entire log leading up to the error is:
Expected Behavior
Kube flannel should start properly, not only on the master, but on all additional nodes that join the cluster. flannel.1 should automatically be allocated a single IPv4 address, rather than two addresses.
Current Behavior
Kube flannel starts correctly on the master, but additional nodes that join the cluster appear to allocate two IPv4 addresses to flannel.1, which causes Kube flannel to enter a crash loop.
Steps to Reproduce (for bugs)
1. Initialize a master node on HypriotOS with kubeadm init.
2. Join a secondary node on HypriotOS with kubeadm join.
3. Observe that kube-flannel runs fine on the master, but crashes on the worker node. (See the command sketch below.)
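A command-level sketch of the steps above, using the manifest and ARM substitution described in this report; the pod CIDR is the usual flannel default assumed here, and the token/hash values are placeholders from kubeadm init's output:

# On the master (HypriotOS / ARM)
kubeadm init --pod-network-cidr=10.244.0.0/16
curl -sSL https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml \
  | sed 's/amd64/arm/g' | kubectl apply -f -

# On each worker
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>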
Context
This is preventing me from running Kubernetes right now. I previously had a working k8s 1.7 cluster with Flannel 0.7; only since upgrading Kubernetes and Flannel has this issue started occurring.
Your Environment