Kubernetes IPv6 problem on Docker 17.x #531
Interesting. I was able to manually add that using cnitool; see https://github.com/containernetworking/cni/tree/master/cnitool for the basics.
Ah, I was not able to solve the problem, but perhaps got some more clues:
This was with the CNI plugins built from the master branch.
I think kubeadm is giving you older CNI plugins. However, I'm concerned that a newer CNI plugin isn't able to "adopt" an existing bridge. Hmm.
Thank you for your help, cnitool was awesome while debugging this issue. I think I have found the issue now and the problem is not within CNI but in docker. The problem is that the container created by docker has the following sysctl settings (even though the host has disable_ipv6=0 for all interfaces):
This means that when CNI creates the interface in the container's network namespace it will get disable_ipv6=1. The IPv6 address assignment then fails (as it should), which produces the error above. I must confess I did not run docker with ipv6=true at first, but I have tried that now and unfortunately it made no difference. Looks like there are some IPv6 regressions in Docker 17.x; maybe the disable_ipv6 change was introduced at the same time? I need to do some more digging. A question: what do you think about having CNI set disable_ipv6=0 on the interfaces it creates?
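The sysctl in question lives at /proc/sys/net/ipv6/conf/&lt;ifname&gt;/disable_ipv6 inside the container's network namespace. A minimal sketch of the check (the helper name is mine, not CNI's; the proc root is parameterized so it can be exercised against a scratch directory instead of a live namespace):

```python
import os
import tempfile

def ipv6_disabled(ifname, proc_root="/proc/sys/net/ipv6/conf"):
    """Return True when the per-interface disable_ipv6 sysctl reads 1.
    Hypothetical helper, not actual CNI code."""
    with open(os.path.join(proc_root, ifname, "disable_ipv6")) as f:
        return f.read().strip() == "1"

# Demo against a scratch directory standing in for the container's /proc:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "eth0"))
with open(os.path.join(root, "eth0", "disable_ipv6"), "w") as f:
    f.write("1\n")          # what a Docker 17.x container exhibits
print(ipv6_disabled("eth0", proc_root=root))  # True
```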
I am also getting this issue after upgrading to Kubernetes 1.9.x. I have IPv6 disabled at boot; these are the kinds of messages I am getting:
Maybe the CNI should check if the directory exists before trying to write to it?
I do not think this is the same problem I encountered, but thanks for chiming in nevertheless. In my case I want to use IPv6 but cannot, due to recent changes in docker. I have updated the issue title now that I know more about the cause of the problem. Regarding your issue, I think it should be fixed in the master branch, so it might be worth recompiling the CNI plugins from master.
Regarding the original issue, I wrote a small patch setting disable_ipv6=0 on the interfaces created by CNI. However, I still had issues with the kube-dns pod, which kept restarting. It turned out to be the same root cause, but this time for the loopback interface.
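The gist of the workaround can be sketched as follows. This is not the actual patch; the function name is mine, and the proc root is a parameter so the logic can run unprivileged against a scratch directory (the real change runs inside the container's network namespace):

```python
import os
import tempfile

def enable_ipv6(ifname, proc_root="/proc/sys/net/ipv6/conf"):
    """Sketch of the workaround: write 0 to the interface's disable_ipv6.
    Hypothetical helper, not the actual CNI patch."""
    path = os.path.join(proc_root, ifname, "disable_ipv6")
    if not os.path.exists(path):   # IPv6 disabled at boot or compiled out
        return False
    with open(path, "w") as f:
        f.write("0")
    return True

# Demo: both eth0 and lo need the fix, per the kube-dns loopback finding.
root = tempfile.mkdtemp()
for ifname in ("eth0", "lo"):
    os.makedirs(os.path.join(root, ifname))
    with open(os.path.join(root, ifname, "disable_ipv6"), "w") as f:
        f.write("1\n")
    enable_ipv6(ifname, proc_root=root)

print(open(os.path.join(root, "lo", "disable_ipv6")).read())  # 0
```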
Hey Nyren, I am also hitting the same issue. I am so glad that someone other than me has found it. I also found that the pod has disable_ipv6=1 even though the host has disable_ipv6=0. But then I tried to work around it by entering the pod netns and changing /proc/...disable_ipv6 to 0, and that doesn't work, because the /proc filesystem in the pod is always read-only. How did you get your workaround to work? Your patch should hit the same read-only /proc error. Please let me know. Thanks
Just wondering: has anyone having trouble here tried telling Docker they want to use IPv6? I.e. run the daemon with --ipv6?
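For reference, enabling IPv6 in the Docker daemon usually means something like this in /etc/docker/daemon.json (the prefix below is an illustrative documentation range, not a recommendation):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```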
I have tried both with and without ipv6 "enabled" in the docker daemon config file. Unfortunately it makes no difference, since docker still sets disable_ipv6=1 in the container either way.
Yes, you cannot change disable_ipv6 from "within" the container, because those processes have had their capabilities dropped. Instead you need to enter only the container's network namespace and write to the per-namespace /proc/sys/net/ipv6/conf files from outside. However, even if you do this there are cases it does not cover. I will try to upload a patch with a workaround in a couple of days.
Even when I ran docker with --ipv6, docker expects to create the interface and assign the IPv6 address itself, which I don't want. I want the CNI plugin to do the work of assigning the IP address. Thanks
I see the same issue on Ubuntu 16.04 with docker 17.12.0-ce. I'm going to try older versions, as this was working previously.
FYI: I downgraded to 17.03.2-ce and the IP address assignment is now working.
Thanks, great to know. So not all 17.x releases were bad then. I just tried spinning up a container on Docker 18.01.0-ce with ipv6=true, and the issue still seems to be there, although loopback is OK now.
I.e. a new interface created by CNI gets disable_ipv6=1. Btw, if you would like to run on a newer Docker release, please try the patch in the pull request above and see if it helps.
I heard that 17.09 should also work. Has anyone raised an issue with docker on this regression?
The issue is linked above.
Testing with each Docker release; the relevant docker commit is linked above. Testing:

- 17.03:
- 17.06:
- 17.09:
- 17.12:
Thanks, great to know which Docker releases work and which do not. I can add the following as well:

- 18.01:
Excellent that you found the docker commit which introduced the issue. I have updated the CNI plugins pull request with the workaround; I hope it will be merged.
@abhijitherekar asked about how CNI can write the disable_ipv6 setting when /proc appears read-only inside the container. What happens here is that CNI does not run "inside" the container (i.e. the CNI process has not dropped its capabilities) and only hooks into the network namespace of the container. Therefore the read-only /proc restriction seen inside the container does not apply to CNI.
@nyren I am a little new to this, so I am trying to figure out why it failed.
@abhijitherekar You can check the network namespace of a given pid with ls -l /proc/&lt;pid&gt;/ns/net. The way CNI is run is quite similar to nsenter entering only the network namespace.
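On Linux, each process exposes its namespaces as symlinks under /proc/&lt;pid&gt;/ns; two processes share a network namespace exactly when the link targets (the net:[inode] values) match. A small runnable check, Linux-only:

```python
import os

# Read the current process's network-namespace link; compare this value
# across two pids to see whether they share a netns.
me = os.readlink(f"/proc/{os.getpid()}/ns/net")
print(me)  # e.g. net:[4026531992]
```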
CNI Plugins v0.7.0 includes the fix.
For the next one running into this: yes, docker 18.06 is still/also affected:
Docker 18.09 is on this list, as well (OS/Arch: linux/arm): |
@wuerzelchen Did you figure out any alternative way to get IPv6 running with k8s?
Not yet. Docker itself runs with IPv6, but now I'm stuck with those permission denied issues.
@telmich I'm back... did you upgrade CNI? I'm currently looking at my apt packages and see that kubernetes-cni is at v0.6.0-00 (not sure if that's the right place to check for the CNI version), and in this repo's master branch the most recent release is v0.6 as well. I assume I somehow need to upgrade to a newer version, if this kubernetes-cni package is related to this repo. I'm currently quite lost and it will take some effort to get where I want to be.
@wuerzelchen this repo is vendored into Kubernetes as a copy of the Go code.
... so just for my understanding: when/how will 0.7 be available?
Releases are at https://github.com/containernetworking/plugins/releases; 0.7.4 is the latest. |
So, another try. Thank you @bboreham for the hint. I downloaded the arm tarball and extracted it to /opt/cni/bin.
Then I restarted, and the same behavior persists.
What am I missing here? Documentation on how to debug such things on my own would be very much appreciated.
Fixes Kubernetes IPv6 problem on Docker containernetworking/cni#531
I wonder if it's really fixed?
the same busybox deployed in k8s:
weave image
any ideas?
Just curious, could you check if the container deployed in Kubernetes got an actual IPv6 address assigned to eth0? The fix to set disable_ipv6=0 only applies when CNI is asked to assign an IPv6 address. Oh, and the fix was first included in CNI plugins 0.7.0, so it is probably good to check your version of CNI just to be sure.
We were able to work around this by modifying the calico config map:
Another thing that needs to change is in the calico daemon set:
This was previously set to false. |
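The exact keys changed were not captured above. For illustration only, the Calico pieces that typically have to change for IPv6 assignment are the IPAM section of the CNI config in the config map (the assign_ipv6 option is from Calico's calico-ipam documentation; treat the snippet as an assumption about this setup, not the poster's actual diff):

```json
"ipam": {
    "type": "calico-ipam",
    "assign_ipv4": "true",
    "assign_ipv6": "true"
}
```

and, in the daemon set, the Felix IPv6 switch (the FELIX_IPV6SUPPORT environment variable set to "true"), which matches the "previously set to false" remark.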
@nyren I'm not sure how to check the cni version?
Does that mean cni version 0.3.0? (I guess yes...) But I don't know how to upgrade. I simply installed weave using:
I think that 0.3.0 is the CNI spec version, not the plugins release. Please check whether weave actually assigned an IPv6 address to eth0 in the container. My guess is that weave did not try to assign an IPv6 address, and therefore CNI left the disable_ipv6 setting alone. @eytan-avisror, it makes sense to me that you had to tell Calico to assign an IPv6 address to get rid of the error. The effect of Docker setting disable_ipv6=1 only matters when you actually want the container to have an IPv6 address.
same with docker
@olamy, I would guess that it is your configuration of weave networking in Kubernetes that does not assign an IPv6 address to the container. I.e. CNI is never told to assign an IPv6 address, and therefore it does not need to touch disable_ipv6.
@nyren That kind of helps, thanks! :) But no idea how to fix it; I use a very standard installation without anything custom.
Hi, I am struggling with the following error when trying to deploy a Kubernetes 1.9.2 cluster with IPv6 networking.
The same error occurs on both master (for kube-dns) and worker nodes (for any other pod). Deploying an IPv4 Kubernetes cluster with the CNI bridge driver works fine. I tried building the latest CNI plugins from master but still got the same error.
OS: CentOS 7.4.1708
Kubernetes: 1.9.2
CNI: 0.6.0
Docker: 17.12.0.ce-1.el7.centos
/etc/cni/net.d/10-bridge-v6.conf:
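The conf contents did not survive in this excerpt. For illustration, a representative IPv6 bridge configuration from the CNI 0.6.0 era with host-local IPAM looks like the following (the name, bridge, and ULA subnet are illustrative assumptions, not the poster's actual values):

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge-v6",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "fd00:10:244::/64",
    "routes": [{ "dst": "::/0" }]
  }
}
```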
If you have any ideas on things to try it would be very much appreciated. Thanks!