
Weave fail to assign IPv4 / Remove ephemeral peers from Weave Net via AWS ASG lifecycle hook #2970

Closed
hollowimage opened this issue May 16, 2017 · 16 comments


@hollowimage

kubernetes/kubernetes#45858

Cross-submitting here per request.

In short: through some set of circumstances, the weave-net pods failed to acquire an IPv4 address on the weave interface, and the rest of the cluster was effectively down.

@marccarre
Contributor

Hi @hollowimage, thanks for raising this issue.
Would you have the output of docker logs for Weave Net's container?
(as mentioned by @bboreham, hints present in these logs may point to the root cause)

@hollowimage
Author

Unfortunately I do not; at the time we were in firefighting mode.

I only reported the issue after the cluster came back up without problems at the end of the day Monday, by which point the pods had been destroyed and recreated.

The last time I looked at their logs, there were some tidbits about failing to bind to 127.0.0.1:[port]. That's the only thing that stood out, and I don't know whether it was a consequence of failing to allocate the IPv4 interface or a cause.

@marccarre
Contributor

@hollowimage, the only thing which comes to mind with:

failing to bind to 127.0.0.1:[port]

would be if you have two instances of Weave Net running, e.g.:

  • you first ran weave launch, and then
  • you started Weave Net's DaemonSet in Kubernetes.
  1. Could something like the above have happened? (A quick check is sketched below.)
  2. Also, which version of Weave Net are you using?
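
A quick way to check (a sketch; it assumes Weave Net's default ports 6783/6784 and that the router container's name contains "weave"):

# anything already listening on Weave Net's router/control ports?
ss -lntp | grep -E ':(6783|6784)\b'
# is more than one Weave router container running?
docker ps --filter "name=weave" --format "{{.ID}} {{.Names}} {{.Status}}"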

@hollowimage
Author

I do not do manual weave starts; it's all done through the DaemonSet. I was using Weave 1.9.3 at the time; since then, as part of troubleshooting, I updated the DS definition to pull 1.9.5.

To elaborate again: everything was fine. Then one day, when my cluster scaled back up from 0 kubelets to 2 (we scale it down at night), the above behavior started happening.

@hollowimage
Author

This happened again.

root@ip-172-32-76-16:/home/admin# ifconfig
datapath  Link encap:Ethernet  HWaddr d2:a6:b8:1a:29:f9
          inet6 addr: fe80::d0a6:b8ff:fe1a:29f9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:392 (392.0 B)  TX bytes:648 (648.0 B)

docker0   Link encap:Ethernet  HWaddr 02:42:21:bb:37:80
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 0a:ed:66:75:84:3a
          inet addr:172.32.76.16  Bcast:172.32.76.255  Mask:255.255.255.0
          inet6 addr: fe80::8ed:66ff:fe75:843a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:399254 errors:0 dropped:0 overruns:0 frame:0
          TX packets:87760 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:520545231 (496.4 MiB)  TX bytes:16216956 (15.4 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:30384 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30384 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:2372686 (2.2 MiB)  TX bytes:2372686 (2.2 MiB)

vethwe-bridge Link encap:Ethernet  HWaddr 6a:30:e0:c9:48:48
          inet6 addr: fe80::6830:e0ff:fec9:4848/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:998 (998.0 B)  TX bytes:1296 (1.2 KiB)

vethwe-datapath Link encap:Ethernet  HWaddr f2:49:a7:e5:b9:ed
          inet6 addr: fe80::f049:a7ff:fee5:b9ed/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1296 (1.2 KiB)  TX bytes:998 (998.0 B)

vxlan-6784 Link encap:Ethernet  HWaddr 0a:4c:25:01:f4:b0
          inet6 addr: fe80::84c:25ff:fe01:f4b0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:65485  Metric:1
          RX packets:2017 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2017 errors:0 dropped:8 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2771432 (2.6 MiB)  TX bytes:2771432 (2.6 MiB)

weave     Link encap:Ethernet  HWaddr d2:5d:2e:08:96:68
          inet6 addr: fe80::d05d:2eff:fe08:9668/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:740 (740.0 B)  TX bytes:648 (648.0 B)

Here are some logs from the weave container:

root@ip-172-32-76-16:/home/admin# docker logs 351af5c88ee2
INFO: 2017/06/02 11:03:03.369149 Command line options: map[docker-api: conn-limit:30 datapath:datapath http-addr:127.0.0.1:6784 ipalloc-init:consensus=2 status-addr:0.0.0.0:6782 ipalloc-range:10.32.0.0/12 nickname:ip-172-32-76-16 no-dns:true port:6783]
INFO: 2017/06/02 11:03:03.369209 Communication between peers is unencrypted.
INFO: 2017/06/02 11:03:03.382309 Our name is d2:5d:2e:08:96:68(ip-172-32-76-16)
INFO: 2017/06/02 11:03:03.382373 Launch detected - using supplied peer list: [172.32.75.120 172.32.76.16]
INFO: 2017/06/02 11:03:03.382458 Checking for pre-existing addresses on weave bridge
INFO: 2017/06/02 11:03:03.387418 [allocator d2:5d:2e:08:96:68] No valid persisted data
INFO: 2017/06/02 11:03:03.868800 [allocator d2:5d:2e:08:96:68] Initialising via deferred consensus
INFO: 2017/06/02 11:03:03.868949 Sniffing traffic on datapath (via ODP)
INFO: 2017/06/02 11:03:04.168961 Discovered local MAC d2:a6:b8:1a:29:f9
INFO: 2017/06/02 11:03:04.169147 Discovered local MAC d2:5d:2e:08:96:68
INFO: 2017/06/02 11:03:04.169303 Discovered local MAC 6a:30:e0:c9:48:48
INFO: 2017/06/02 11:03:04.170456 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2017/06/02 11:03:04.171044 Listening for metrics requests on 0.0.0.0:6782
INFO: 2017/06/02 11:03:04.368849 ->[172.32.75.120:6783] attempting connection
INFO: 2017/06/02 11:03:04.368966 ->[172.32.76.16:6783] attempting connection
INFO: 2017/06/02 11:03:04.370346 ->[172.32.76.16:36808] connection accepted
INFO: 2017/06/02 11:03:04.370818 ->[172.32.76.16:36808|d2:5d:2e:08:96:68(ip-172-32-76-16)]: connection shutting down due to error: cannot connect to ourself
INFO: 2017/06/02 11:03:04.371027 ->[172.32.76.16:6783|d2:5d:2e:08:96:68(ip-172-32-76-16)]: connection shutting down due to error: cannot connect to ourself
INFO: 2017/06/02 11:03:04.371265 ->[172.32.75.120:6783|7e:de:cc:e2:35:75(ip-172-32-75-120)]: connection ready; using protocol version 2
INFO: 2017/06/02 11:03:04.371377 overlay_switch ->[7e:de:cc:e2:35:75(ip-172-32-75-120)] using fastdp
INFO: 2017/06/02 11:03:04.371415 ->[172.32.75.120:6783|7e:de:cc:e2:35:75(ip-172-32-75-120)]: connection added (new peer)
INFO: 2017/06/02 11:03:04.569170 ->[172.32.75.120:6783|7e:de:cc:e2:35:75(ip-172-32-75-120)]: connection fully established
INFO: 2017/06/02 11:03:04.570092 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2017/06/02 11:03:04.570864 sleeve ->[172.32.75.120:6783|7e:de:cc:e2:35:75(ip-172-32-75-120)]: Effective MTU verified at 8939
INFO: 2017/06/02 11:04:40.269503 ->[172.32.75.42:44133] connection accepted
INFO: 2017/06/02 11:04:40.565481 ->[172.32.75.42:44133|0e:95:85:4d:0a:33(ip-172-32-75-42)]: connection ready; using protocol version 2
INFO: 2017/06/02 11:04:40.565576 overlay_switch ->[0e:95:85:4d:0a:33(ip-172-32-75-42)] using fastdp
INFO: 2017/06/02 11:04:40.565623 ->[172.32.75.42:44133|0e:95:85:4d:0a:33(ip-172-32-75-42)]: connection added (new peer)
INFO: 2017/06/02 11:04:40.570395 ->[172.32.75.42:44133|0e:95:85:4d:0a:33(ip-172-32-75-42)]: connection fully established
INFO: 2017/06/02 11:04:41.169085 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2017/06/02 11:04:41.365319 sleeve ->[172.32.75.42:6783|0e:95:85:4d:0a:33(ip-172-32-75-42)]: Effective MTU verified at 8939
INFO: 2017/06/02 11:04:43.553601 Discovered remote MAC 72:a4:d6:88:a3:dd at 0e:95:85:4d:0a:33(ip-172-32-75-42)
INFO: 2017/06/02 11:04:43.857532 Discovered remote MAC c2:32:78:61:c7:71 at 0e:95:85:4d:0a:33(ip-172-32-75-42)
INFO: 2017/06/02 11:04:44.513281 Discovered remote MAC 0e:95:85:4d:0a:33 at 0e:95:85:4d:0a:33(ip-172-32-75-42)
INFO: 2017/06/02 11:14:03.384708 Expired MAC d2:a6:b8:1a:29:f9 at d2:5d:2e:08:96:68(ip-172-32-76-16)
INFO: 2017/06/02 11:14:03.468733 Expired MAC d2:5d:2e:08:96:68 at d2:5d:2e:08:96:68(ip-172-32-76-16)
INFO: 2017/06/02 11:14:03.468754 Expired MAC 6a:30:e0:c9:48:48 at d2:5d:2e:08:96:68(ip-172-32-76-16)
INFO: 2017/06/02 11:15:03.469041 Expired MAC 72:a4:d6:88:a3:dd at 0e:95:85:4d:0a:33(ip-172-32-75-42)
INFO: 2017/06/02 11:15:03.469090 Expired MAC c2:32:78:61:c7:71 at 0e:95:85:4d:0a:33(ip-172-32-75-42)
INFO: 2017/06/02 11:15:03.469103 Expired MAC 0e:95:85:4d:0a:33 at 0e:95:85:4d:0a:33(ip-172-32-75-42)
admin@ip-172-32-76-16:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:ed:66:75:84:3a brd ff:ff:ff:ff:ff:ff
    inet 172.32.76.16/24 brd 172.32.76.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::8ed:66ff:fe75:843a/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:21:bb:37:80 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether d2:a6:b8:1a:29:f9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d0a6:b8ff:fe1a:29f9/64 scope link
       valid_lft forever preferred_lft forever
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether d2:5d:2e:08:96:68 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d05d:2eff:fe08:9668/64 scope link
       valid_lft forever preferred_lft forever
7: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 32:b1:6f:2f:a0:a7 brd ff:ff:ff:ff:ff:ff
9: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default qlen 1000
    link/ether f2:49:a7:e5:b9:ed brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f049:a7ff:fee5:b9ed/64 scope link
       valid_lft forever preferred_lft forever
10: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default qlen 1000
    link/ether 6a:30:e0:c9:48:48 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6830:e0ff:fec9:4848/64 scope link
       valid_lft forever preferred_lft forever
11: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65485 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether 0a:4c:25:01:f4:b0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::84c:25ff:fe01:f4b0/64 scope link
       valid_lft forever preferred_lft forever

@hollowimage
Author

root@ip-172-32-76-16:/home/admin# uname -a
Linux ip-172-32-76-16 4.4.65-k8s #1 SMP Tue May 2 15:48:24 UTC 2017 x86_64 GNU/Linux

@hollowimage
Author

/home/weave # ./weave --local status ipam
6a:3d:2a:1b:d2:72(ip-172-32-76-156)        256 IPs (00.0% of total) - unreachable!
76:78:8c:b0:a8:90(ip-172-32-75-97)       16384 IPs (01.6% of total) - unreachable!
1e:f5:fe:f8:b5:99(ip-172-32-75-77)       65536 IPs (06.2% of total) - unreachable!
76:e6:c0:5a:2e:c5(ip-172-32-76-7)       262144 IPs (25.0% of total) - unreachable!
6a:2c:a4:e8:32:93(ip-172-32-75-214)      32768 IPs (03.1% of total) - unreachable!
36:b3:77:b4:7b:13(ip-172-32-75-216)         16 IPs (00.0% of total) - unreachable!
aa:19:6a:90:c0:de(ip-172-32-75-249)         32 IPs (00.0% of total) - unreachable!
a6:bf:53:5c:4e:4c(ip-172-32-75-116)         64 IPs (00.0% of total) - unreachable!
4a:db:3f:ec:f4:1f(ip-172-32-76-140)      98304 IPs (09.4% of total) - unreachable!
3a:b6:99:fb:e9:ec(ip-172-32-76-174)         13 IPs (00.0% of total) - unreachable!
ba:1d:a1:62:f8:d0(ip-172-32-76-102)       2048 IPs (00.2% of total) - unreachable!
fa:ea:f8:fb:b6:aa(ip-172-32-76-240)      16384 IPs (01.6% of total) - unreachable!
92:02:b9:5c:85:25(ip-172-32-75-229)      32768 IPs (03.1% of total) - unreachable!
fe:27:ee:6d:ed:f2(ip-172-32-75-127)      98304 IPs (09.4% of total) - unreachable!
9e:03:b0:91:85:3b(ip-172-32-75-117)     262144 IPs (25.0% of total) - unreachable!
6a:e2:5c:93:07:af(ip-172-32-76-107)         32 IPs (00.0% of total) - unreachable!
22:5f:13:81:2e:0f(ip-172-32-76-153)        512 IPs (00.0% of total) - unreachable!
9a:b9:60:87:ef:9c(ip-172-32-76-114)       8192 IPs (00.8% of total) - unreachable!
b6:ae:13:db:6e:4c(ip-172-32-76-147)      65536 IPs (06.2% of total) - unreachable!
a2:7a:8f:c8:d4:ad(ip-172-32-75-100)         18 IPs (00.0% of total) - unreachable!
da:ac:26:51:cd:3f(ip-172-32-75-121)        128 IPs (00.0% of total) - unreachable!
06:6e:e8:90:ec:4a(ip-172-32-75-90)         256 IPs (00.0% of total) - unreachable!
5e:2a:e6:ab:6a:69(ip-172-32-76-128)       4096 IPs (00.4% of total) - unreachable!
6e:83:d2:48:d1:81(ip-172-32-76-77)       32768 IPs (03.1% of total) - unreachable!
56:68:7e:ae:62:af(ip-172-32-76-46)          16 IPs (00.0% of total) - unreachable!
4e:8b:5b:cf:90:09(ip-172-32-76-32)          64 IPs (00.0% of total) - unreachable!
36:01:a7:6d:a9:db(ip-172-32-76-9)          128 IPs (00.0% of total) - unreachable!
7e:de:cc:e2:35:75(ip-172-32-75-120)          1 IPs (00.0% of total)
1a:c1:5a:3a:2d:d3(ip-172-32-75-47)        1024 IPs (00.1% of total) - unreachable!
96:58:84:0f:00:75(ip-172-32-75-55)        8192 IPs (00.8% of total) - unreachable!
5a:9e:40:65:d4:aa(ip-172-32-76-163)       1024 IPs (00.1% of total) - unreachable!
82:11:fd:be:44:02(ip-172-32-75-113)        512 IPs (00.0% of total) - unreachable!
7e:38:c9:ff:ae:21(ip-172-32-75-215)       2048 IPs (00.2% of total) - unreachable!
fe:97:4e:db:63:81(ip-172-32-75-176)       4096 IPs (00.4% of total) - unreachable!
76:6e:74:67:51:9f(ip-172-32-76-170)      32768 IPs (03.1% of total) - unreachable!

@bboreham
Contributor

bboreham commented Jun 2, 2017

Looks like this is the same as #2797

@marccarre
Contributor

As discussed on Slack, the workaround is to weave rmpeer the workers which were shut down, so that their IPAM range is released.
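
A minimal sketch of that workaround, run from inside the Weave Net container as shown earlier in this thread (the peer name is one of the unreachable entries from the status ipam output above; --local is assumed to work for rmpeer the same way it does for status ipam, and a peer should only be removed once it is permanently gone):

/home/weave # ./weave --local status ipam | grep unreachable    # list peers whose ranges are stranded
/home/weave # ./weave --local rmpeer 6a:3d:2a:1b:d2:72          # reclaim that peer's IPAM range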

@marccarre
Contributor

Also, given the workers are shut down via an AWS Auto Scaling Group, implementing a lifecycle hook might help in this case.
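
A rough sketch of how such a hook could be registered (hook name, ASG name and ARNs are placeholders; the consumer of the notification is not shown):

# Register a termination lifecycle hook on the worker ASG (all names/ARNs below are placeholders)
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name weave-rmpeer-on-terminate \
  --auto-scaling-group-name k8s-workers \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --notification-target-arn arn:aws:sns:us-east-1:111122223333:weave-rmpeer \
  --role-arn arn:aws:iam::111122223333:role/asg-lifecycle-publish \
  --heartbeat-timeout 300 \
  --default-result CONTINUE

# The consumer of that notification (a small script or Lambda) would then:
#   1. look up the Weave peer name for the terminating instance,
#   2. run `weave rmpeer <peer>` on a surviving node, and
#   3. call `aws autoscaling complete-lifecycle-action` so termination can proceed.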

@hollowimage
Author

Removing all terminated peers did the trick: the cluster instantly reallocated the private range to the weave pods and everything came back to life. In our case this was safe to do since those peers had already been permanently destroyed.
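
A batch version of that cleanup could look roughly like this (a sketch; it assumes every peer reported as unreachable really is gone for good, and that rmpeer accepts --local like status ipam does):

/home/weave # ./weave --local status ipam \
    | grep 'unreachable!' \
    | cut -d'(' -f1 \
    | while read peer; do ./weave --local rmpeer "$peer"; done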

@goblain

goblain commented Jun 12, 2017

For reference, a similar situation here today. We have a cluster with some history (many nodes rotated via 3rd-party API termination, not exactly a graceful process). Recently our nodes started getting stuck in a weird state where the CNI was not configuring the network as expected, which resulted in a bunch of errors like:

kuberuntime_sandbox.go:54] CreatePodSandbox for pod "<...>(c34a9d92-4f8c-11e7-82e3-aee0afe94606)" failed: rpc error: code = 4 desc = context deadline exceeded

remote_runtime.go:86] RunPodSandbox from runtime service failed: rpc error: code = 4 desc = context deadline exceeded

kubelet.go:1752] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 4m0.417346093s ago; threshold is 3m0s]

After weave rmpeer'ing the dead peers, existing cluster nodes started spinning up new pods as expected.

@marccarre marccarre changed the title weave failing to assign ipv4 Weave fail to assign IPv4 / Remove ephemeral peers from Weave Net via AWS ASG lifecycle hook Jun 16, 2017
@mfornasa

Same issue here, on a cluster with some autoscaling happening daily (roughly 10 node-creation actions per day).

@fromthebridge

Hello, this issue is still happening with AWS ASGs; any news on a fix? The workaround is still to run weave rmpeer.

@bricef
Contributor

bricef commented Oct 20, 2017

@mfornasa & @emdupp - I'm currently working on an integration test & fix for this issue when running on Kubernetes. This might be relevant to you. You can follow along on #2797.

@hollowimage
Author

This was fixed per #2797.
