
/proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory in Kubernetes 1.11.0 #569

Closed
marslo opened this issue Jul 3, 2018 · 13 comments



marslo commented Jul 3, 2018

This error is from a discussion in CoreOS.

Here are the details:
I've disabled IPv6 on my PC with:

$ tail -3 /etc/default/grub
# disable ipv6
GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1"
GRUB_CMDLINE_LINUX="ipv6.disable=1"

$ sudo update-grub
$ sudo reboot

IP addresses on my PC:

$ ip -6 addr show

$ ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 130.147.182.57/23 brd 130.147.183.255 scope global dynamic noprefixroute enp0s31f6
       valid_lft 690681sec preferred_lft 690681sec
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.10.235/23 brd 192.168.11.255 scope global dynamic noprefixroute wlp2s0
       valid_lft 85898sec preferred_lft 85898sec
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever

And Kubernetes was initialized with:

$ sudo kubeadm init --ignore-preflight-errors=all --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=130.147.182.57 --kubernetes-version=v1.11.0

After the Kubernetes master was initialized, the pods failed to start due to open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory:

$ kubectl --namespace=kube-system get pods
NAME                                   READY     STATUS              RESTARTS   AGE
coredns-78fcdf6894-555tm               0/1       ContainerCreating   0          1h
coredns-78fcdf6894-c7lms               0/1       ContainerCreating   0          1h
etcd-imarslo18                         1/1       Running             1          1h
kube-apiserver-imarslo18               1/1       Running             1          1h
kube-controller-manager-imarslo18      1/1       Running             1          1h
kube-flannel-ds-f8j2z                  1/1       Running             1          56m
kube-proxy-sddp2                       1/1       Running             1          1h
kube-scheduler-imarslo18               1/1       Running             1          1h
kubernetes-dashboard-6948bdb78-hh8wm   0/1       ContainerCreating   0          19m

And, actually, I don't even have an eth0 network card in my PC.

Pod details:

$ kubectl --namespace=kube-system describe pods coredns-78fcdf6894-555tm
Name:           coredns-78fcdf6894-555tm
Namespace:      kube-system
Node:           imarslo18/192.168.10.235
Start Time:     Tue, 03 Jul 2018 18:49:20 +0800
Labels:         k8s-app=kube-dns
                pod-template-hash=3497892450
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Container ID:
    Image:         k8s.gcr.io/coredns:1.1.3
    Image ID:
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-k4xfp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-k4xfp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-k4xfp
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                  From                Message
  ----     ------                  ----                 ----                -------
  Warning  FailedScheduling        57m (x93 over 1h)    default-scheduler   0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedCreatePodSandBox  56m                  kubelet, imarslo18  Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "819e2f4395bfa32332180480b9d72a76c287154c19fd3a748f7cfb2acecc2af7" network for pod "coredns-78fcdf6894-555tm": NetworkPlugin cni failed to set up pod "coredns-78fcdf6894-555tm_kube-system" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "819e2f4395bfa32332180480b9d72a76c287154c19fd3a748f7cfb2acecc2af7" network for pod "coredns-78fcdf6894-555tm": NetworkPlugin cni failed to teardown pod "coredns-78fcdf6894-555tm_kube-system" network: failed to get IP addresses for "eth0": <nil>]

My PC details:

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:14:41Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

$ cat /etc/sysctl.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv6.conf.all.forwarding=0
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1

$ sudo sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl: cannot stat /proc/sys/net/ipv6/conf/all/forwarding: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv6/conf/all/disable_ipv6: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv6/conf/default/disable_ipv6: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv6/conf/lo/disable_ipv6: No such file or directory 

$ cat /etc/sysctl.d/10-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
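
Note: booting with ipv6.disable=1 keeps the kernel from registering the IPv6 stack at all, so the entire /proc/sys/net/ipv6 tree is absent; that is why sysctl -p fails above and why the CNI plugin cannot open /proc/sys/net/ipv6/conf/eth0/accept_dad. A minimal sketch to confirm this, plus an alternative that disables IPv6 via sysctl while keeping the /proc tree in place (the file name 99-disable-ipv6.conf is illustrative, not from this report):

$ # confirm the kernel was booted with ipv6.disable=1
$ grep -o 'ipv6.disable=1' /proc/cmdline
$ test -d /proc/sys/net/ipv6 || echo "IPv6 /proc tree absent"

$ # alternative: drop ipv6.disable=1 from GRUB_CMDLINE_LINUX*, reboot, then:
$ printf 'net.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1\n' | sudo tee /etc/sysctl.d/99-disable-ipv6.conf
$ sudo sysctl --system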

marslo commented Jul 5, 2018


squeed commented Jul 5, 2018

As I already explained, this has been fixed for 6 months. Please upgrade your CNI plugins.

squeed closed this as completed Jul 5, 2018

marslo commented Jul 9, 2018

Thanks @squeed. As a newcomer to Kubernetes, could you please point me to any documentation on how to upgrade the CNI plugins? I don't have any clue about it.

Thanks a lot.


jellonek commented Jul 9, 2018

This is a question for your deployment software provider, which in your case is probably https://github.com/kubernetes/kubeadm.
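
For anyone who lands here with the same question, a minimal sketch of upgrading the CNI plugin binaries by hand, assuming a kubeadm-provisioned node that loads plugins from /opt/cni/bin (the version and the release asset name are assumptions; check https://github.com/containernetworking/plugins/releases for the current ones):

$ CNI_VERSION="v0.7.5"  # assumed; pick a release that contains the fix
$ curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz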


marslo commented Jul 10, 2018

Thanks a lot @jellonek.

GangChenTFS commented

@marslo, I have the same issue. Have you fixed it? Could you share your solution? Thanks.


marslo commented Sep 19, 2018

@henshitou Actually, I haven't fixed this issue yet; I'm using 1.10.5 instead of 1.11.0.


GangChenTFS commented Sep 19, 2018 via email


marslo commented Sep 19, 2018

@henshitou, thanks a lot. I will let you know the result.


zbialik commented Apr 11, 2019

@henshitou Aren't you enabling IPv6 by using your workaround? I used it, but I ended up not getting my cluster to integrate with the F5 load balancer, and I think it may be due to this workaround. Let me know, thanks.


ws316156697 commented Aug 16, 2023

I encountered a strange IPv6 issue. I turned IPv6 off, and after restarting the machine:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

but net.ipv6.conf.eth0.disable_ipv6 = 0.
After sudo systemctl restart systemd-sysctl.service or sysctl --system it becomes 1, but after a reboot it is 0 again.

root@k8s-master01:/etc/systemd# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
root@k8s-master01:/etc/systemd# uname -a
Linux k8s-master01 5.4.0-150-generic #167-Ubuntu SMP Mon May 15 17:35:05 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
root@k8s-master01:/etc/systemd# cat /etc/sysctl.conf | grep ipv6 | grep -v ^#
root@k8s-master01:/etc/systemd# cat /etc/sysctl.d/* | grep ipv6 | grep -v ^#
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
root@k8s-master01:/etc/systemd# sysctl -a | grep net.ipv6.conf.*.disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 1
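
One diagnostic the greps above skip (a sketch, not a confirmed cause): systemd-sysctl also reads /usr/lib/sysctl.d and /run/sysctl.d, and a NIC that appears only after systemd-sysctl has run picks up just the default.* values at creation time, so it is worth searching every sysctl directory:

$ # search all directories systemd-sysctl actually reads, not just /etc
$ grep -rn disable_ipv6 /etc/sysctl.conf /etc/sysctl.d /usr/lib/sysctl.d /run/sysctl.d 2>/dev/null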


squeed commented Aug 16, 2023

@ws316156697 It is the CNI plugin changing the sysctl inside the container's network namespace, not the host-level eth0. Hope that answers your question.
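
To see the distinction, a sketch (the <container-pid> is a placeholder for any running pod container's process; requires nsenter):

$ # the pod's eth0 lives in its own network namespace with its own sysctls
$ sudo nsenter -t <container-pid> -n sysctl net.ipv6.conf.eth0.disable_ipv6
$ # the host's eth0 sysctl is a separate knob that the plugin does not touch
$ sysctl net.ipv6.conf.eth0.disable_ipv6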


ws316156697 commented Aug 16, 2023

> @ws316156697 It is the CNI plugin changing the sysctl inside the container's network namespace, not the host-level eth0. Hope that answers your question.

Yes, but I don't know where to find the answer to my question. I wanted to try my luck here.
