
Cilium complains "IP is not L2 reachable" on startup #14340

Closed
joestringer opened this issue Dec 9, 2020 · 5 comments
Labels
kind/bug This is a bug in the Cilium logic.
needs/triage This issue requires triaging to establish severity and next steps.

joestringer commented Dec 9, 2020

Cilium 1.9.1
Linux 5.9
GKE

Cilium logged this warning:

level=error msg="IP is not L2 reachable" error="iface: 'eth0' can't reach ip: '10.168.0.1'" interface=eth0 ipAddr=10.168.0.1 subsys=linux-datapath

As far as I can tell, this has no impact; it's just an extraneous warning.

Configmap:

$ kc get cm cilium-config -oyaml
apiVersion: v1
data:
  auto-direct-node-routes: "false"
  bpf-lb-map-max: "65536"
  bpf-map-dynamic-size-ratio: "0.0025"
  bpf-policy-map-max: "16384"
  cilium-endpoint-gc-interval: 5m0s
  cluster-id: ""
  cluster-name: default
  custom-cni-conf: "false"
  debug: "false"
  disable-cnp-status-updates: "true"
  enable-auto-protect-node-port-range: "true"
  enable-bandwidth-manager: "false"
  enable-bpf-clock-probe: "true"
  enable-bpf-masquerade: "true"
  enable-bpf-tproxy: "true"
  enable-endpoint-health-checking: "true"
  enable-endpoint-routes: "true"
  enable-health-check-nodeport: "true"
  enable-health-checking: "true"
  enable-hubble: "true"
  enable-ipv4: "true"
  enable-ipv6: "false"
  enable-l7-proxy: "true"
  enable-local-node-route: "false"
  enable-local-redirect-policy: "false"
  enable-policy: default
  enable-remote-node-identity: "true"
  enable-session-affinity: "true"
  enable-well-known-identities: "false"
  enable-xt-socket-fallback: "true"
  hubble-socket-path: /var/run/cilium/hubble.sock
  identity-allocation-mode: crd
  install-iptables-rules: "true"
  ipam: kubernetes
  kube-proxy-replacement: probe
  kube-proxy-replacement-healthz-bind-address: ""
  masquerade: "true"
  monitor-aggregation: medium
  monitor-aggregation-flags: all
  monitor-aggregation-interval: 5s
  native-routing-cidr: 10.4.0.0/14
  node-port-bind-protection: "true"
  operator-api-serve-addr: 127.0.0.1:9234
  preallocate-bpf-maps: "false"
  sidecar-istio-proxy-image: cilium/istio_proxy
  tunnel: disabled
  wait-bpf-mount: "false"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: cilium
    meta.helm.sh/release-namespace: cilium
  creationTimestamp: "2020-12-09T22:58:15Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: cilium-config
  namespace: cilium
  resourceVersion: "9573"
  selfLink: /api/v1/namespaces/cilium/configmaps/cilium-config
  uid: 7aca0827-c0a6-4260-b2fa-5f243f49af1e
joestringer added the kind/bug label Dec 9, 2020
@pchaigno

/cc @brb who introduced that code in b78b3b7 (from #14201).

brb added the needs/triage label Dec 10, 2020

brb commented Dec 10, 2020

@joestringer Do you still have it running? I'm interested in the ip route and ip addr outputs.

brb self-assigned this Dec 10, 2020
@pchaigno

On a 2-node GKE cluster with default config and the connectivity checks deployed:

$ kc logs cilium-tr7t7 | grep level=err
level=error msg="Command execution failed" cmd="[iptables -t mangle -n -L CILIUM_PRE_mangle]" error="exit status 1" subsys=iptables
level=error msg="IP is not L2 reachable" error="iface: 'eth0' can't reach ip: '10.164.0.1'" interface=eth0 ipAddr=10.164.0.1 subsys=linux-datapath
$ kc exec cilium-tr7t7 -- ip r
default via 10.164.0.1 dev eth0 proto dhcp metric 1024 
10.68.0.3 dev lxc753d751d6e3a scope link 
10.68.0.5 dev lxc8cda180f6b8d scope link 
10.68.0.30 dev lxcbf332c1642a4 scope link 
10.68.0.31 dev lxc676e64dcfe37 scope link 
10.68.0.40 dev lxc32cee0cbf75f scope link 
10.68.0.41 dev lxce5cdced026ed scope link 
10.68.0.52 dev lxc5f42ba28a8b1 scope link 
10.68.0.53 dev lxc79d955efc343 scope link 
10.68.0.93 dev lxcaf9d229c8390 scope link 
10.68.0.131 dev lxc7c09d00b4ca5 scope link 
10.68.0.142 dev lxc91861e399a43 scope link 
10.68.0.156 dev lxc752412536191 scope link 
10.68.0.167 dev lxc446a2e6bf51a scope link 
10.68.0.174 dev lxc397b2def4a32 scope link 
10.68.0.222 dev lxc0360ab664dcf scope link 
10.68.0.228 dev lxc359fd97d427e scope link 
10.68.0.246 dev lxc_health scope link 
10.164.0.1 dev eth0 proto dhcp scope link metric 1024 
169.254.123.0/24 dev docker0 proto kernel scope link src 169.254.123.1 linkdown 
$ kc exec cilium-tr7t7 -- ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:a4:00:4f brd ff:ff:ff:ff:ff:ff
    inet 10.164.0.79/32 scope global dynamic eth0
       valid_lft 85957sec preferred_lft 85957sec
    inet6 fe80::4001:aff:fea4:4f/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:75:0a:d3:aa brd ff:ff:ff:ff:ff:ff
    inet 169.254.123.1/24 brd 169.254.123.255 scope global docker0
       valid_lft forever preferred_lft forever
5: veth87f33821@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default 
    link/ether d6:c1:41:61:ce:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::d4c1:41ff:fe61:cee8/64 scope link 
       valid_lft forever preferred_lft forever
14: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 42:a6:96:ab:c7:ca brd ff:ff:ff:ff:ff:ff
    inet6 fe80::40a6:96ff:feab:c7ca/64 scope link 
       valid_lft forever preferred_lft forever
15: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 06:cd:a8:f6:ee:9a brd ff:ff:ff:ff:ff:ff
    inet 10.68.0.95/32 scope link cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::4cd:a8ff:fef6:ee9a/64 scope link 
       valid_lft forever preferred_lft forever
17: lxc_health@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 92:d8:64:74:fa:4f brd ff:ff:ff:ff:ff:ff link-netns cilium-health
    inet6 fe80::90d8:64ff:fe74:fa4f/64 scope link 
       valid_lft forever preferred_lft forever
19: lxc91861e399a43@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether fe:d8:15:cf:31:7c brd ff:ff:ff:ff:ff:ff link-netnsid 8
    inet6 fe80::fcd8:15ff:fecf:317c/64 scope link 
       valid_lft forever preferred_lft forever
21: lxc752412536191@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether a6:e4:41:c8:9a:a1 brd ff:ff:ff:ff:ff:ff link-netnsid 10
    inet6 fe80::a4e4:41ff:fec8:9aa1/64 scope link 
       valid_lft forever preferred_lft forever
23: lxc359fd97d427e@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether aa:ad:1c:31:d4:c6 brd ff:ff:ff:ff:ff:ff link-netnsid 11
    inet6 fe80::a8ad:1cff:fe31:d4c6/64 scope link 
       valid_lft forever preferred_lft forever
25: lxc8cda180f6b8d@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 96:b3:50:d9:15:12 brd ff:ff:ff:ff:ff:ff link-netnsid 12
    inet6 fe80::94b3:50ff:fed9:1512/64 scope link 
       valid_lft forever preferred_lft forever
27: lxc753d751d6e3a@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:dc:12:65:f2:01 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::8dc:12ff:fe65:f201/64 scope link 
       valid_lft forever preferred_lft forever
29: lxc32cee0cbf75f@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 3a:0e:4d:df:d7:98 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::380e:4dff:fedf:d798/64 scope link 
       valid_lft forever preferred_lft forever
31: lxc7c09d00b4ca5@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 52:f6:14:7e:81:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 13
    inet6 fe80::50f6:14ff:fe7e:81e9/64 scope link 
       valid_lft forever preferred_lft forever
33: lxc5f42ba28a8b1@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether b2:b7:bc:9f:c2:8e brd ff:ff:ff:ff:ff:ff link-netnsid 14
    inet6 fe80::b0b7:bcff:fe9f:c28e/64 scope link 
       valid_lft forever preferred_lft forever
35: lxcaf9d229c8390@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 56:59:35:af:24:5a brd ff:ff:ff:ff:ff:ff link-netnsid 16
    inet6 fe80::5459:35ff:feaf:245a/64 scope link 
       valid_lft forever preferred_lft forever
37: lxcbf332c1642a4@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 3a:9a:7a:8c:32:14 brd ff:ff:ff:ff:ff:ff link-netnsid 15
    inet6 fe80::389a:7aff:fe8c:3214/64 scope link 
       valid_lft forever preferred_lft forever
39: lxc397b2def4a32@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 42:c4:9f:df:02:00 brd ff:ff:ff:ff:ff:ff link-netnsid 17
    inet6 fe80::40c4:9fff:fedf:200/64 scope link 
       valid_lft forever preferred_lft forever
41: lxce5cdced026ed@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether ee:fa:ab:32:53:06 brd ff:ff:ff:ff:ff:ff link-netnsid 18
    inet6 fe80::ecfa:abff:fe32:5306/64 scope link 
       valid_lft forever preferred_lft forever
45: lxc676e64dcfe37@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether f6:17:d7:e1:af:d7 brd ff:ff:ff:ff:ff:ff link-netnsid 20
    inet6 fe80::f417:d7ff:fee1:afd7/64 scope link 
       valid_lft forever preferred_lft forever
47: lxc446a2e6bf51a@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:4a:46:f9:3d:22 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::84a:46ff:fef9:3d22/64 scope link 
       valid_lft forever preferred_lft forever
49: lxc0360ab664dcf@if48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 6e:1e:46:1e:d6:98 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::6c1e:46ff:fe1e:d698/64 scope link 
       valid_lft forever preferred_lft forever
51: lxc79d955efc343@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether 86:b1:cb:0c:5f:0c brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::84b1:cbff:fe0c:5f0c/64 scope link 
       valid_lft forever preferred_lft forever


brb commented Dec 10, 2020

@pchaigno Many thanks!

The problem is that eth0 has the IP address 10.164.0.79/32 while the next hop is 10.164.0.1. Because of the /32 mask, the check at https://github.com/cilium/cilium/blob/v1.9.1/pkg/datapath/linux/node.go#L595 concludes that the two are on different L2 segments, even though the kernel has a scope-link route for the gateway (10.164.0.1 dev eth0 proto dhcp scope link). I think it's safe to remove the check, as the next hop should be L2 reachable in any case.
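For illustration only, here is a minimal, self-contained Go sketch of the subnet-containment style of check described above (not the actual code behind node.go#L595; the helper name l2Reachable is hypothetical). With a /32 interface address, the interface's network contains only the interface IP itself, so the gateway can never match:

package main

import (
	"fmt"
	"net"
)

// l2Reachable treats the next hop as L2 reachable only if it falls inside
// one of the networks configured on the interface. This is a hypothetical
// helper illustrating the style of check, not Cilium's implementation.
func l2Reachable(nextHop net.IP, addrs []*net.IPNet) bool {
	for _, ipNet := range addrs {
		if ipNet.Contains(nextHop) {
			return true
		}
	}
	return false
}

func main() {
	// eth0 on the GKE node, taken from the `ip addr` output above.
	_, eth0Net, _ := net.ParseCIDR("10.164.0.79/32")
	gw := net.ParseIP("10.164.0.1") // next hop from the `ip route` output

	// Prints "false": a /32 network contains only the interface address,
	// so this style of check rejects the gateway and the
	// "IP is not L2 reachable" error gets logged.
	fmt.Println(l2Reachable(gw, []*net.IPNet{eth0Net}))
}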


brb commented Jan 13, 2021

Closing, as it was fixed in v1.8, v1.9 and master.

brb closed this as completed Jan 13, 2021