IPv6 e2e Tests #47666
@dcbw any info you can share regarding the OpenShift DIND e2e test setup would be appreciated. I would like to explore using OpenShift DIND as a reference model for the v6 e2e test.
/area ipv6
/sig network
/sig testing
@danehans the core script is https://github.com/openshift/origin/blob/master/hack/dind-cluster.sh . The workflow is basically:
If you want to test it out, just git clone the openshift/origin repo, cd into it, type 'make', and once that's done run hack/dind-cluster.sh start. Trivial to test.

Each "machine" or "node" is just a docker container, in which the given openshift process runs. Inside that container, another 'docker' also runs, which is the docker that actually creates the containers in which the pods run. Hence "docker in docker". Note that openshift builds combined binaries, much like hyperkube: openshift-node is a combined kubelet+kube-proxy, while openshift-master is a combined apiserver+controller-manager+etcd.

The "machines" are just normal docker containers attached to the docker0 bridge; nothing special. Your e2e script could set up a custom docker network that is IPv6-enabled, and then docker should assign the "machine" IPv6 address to eth0 inside the machine container. Then you're golden: inside the machine container you can do whatever you want.

The mechanics of the script itself are a bit more complicated, obviously, and on first run it builds some docker images that are based on Fedora 25. See https://github.com/openshift/origin/tree/master/images/dind for those. These images are then the basis of the "machine" containers for docker. The script then proceeds to configure a bunch of stuff, roughly analogous to what Kubernetes' local-up-cluster.sh does.

So some combination of dind-cluster.sh and local-up-cluster.sh would be the path forward here. It's going to be a bit messy to do the integration, but stripping dind-cluster.sh down to a bare minimum where it just starts the "machine" docker containers would be a good start, then proceeding to call out to bits (and maybe some refactoring) of local-up-cluster.sh after that.
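To make the custom-network idea above concrete, an IPv6-enabled docker network could be sketched roughly like this — the subnets, network name, container name, and image are all illustrative placeholders, not taken from the actual script:

```shell
# Create a user-defined bridge network with IPv6 enabled; docker then
# assigns each attached container an address from the given subnets.
# All names and prefixes below are made up for illustration.
docker network create --ipv6 \
  --subnet 172.18.0.0/16 \
  --subnet fd00:10::/64 \
  dind-v6

# Start a "machine" container on that network; --privileged so the
# inner docker daemon can run inside it.
docker run -d --privileged --name dind-node-1 --network dind-v6 \
  example/dind-node-image

# eth0 inside the container should now carry an fd00:10::/64 address.
docker exec dind-node-1 ip -6 addr show dev eth0
```

This is environment provisioning and needs a running docker daemon; treat it as a sketch of the approach rather than a tested recipe.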
@marun points out that https://github.com/Mirantis/kubeadm-dind-cluster might be a better place to start
cc: @ivan4th
I'll be glad to help with kubeadm-dind-cluster if it really can be used for this task. It used to have
@ivan4th thanks for the offer. The sig-network IPv6 Working Group needs to create an e2e test that runs on GCE. Since GCE does not support IPv6, kubeadm-dind-cluster may be a helpful solution to the problem. I've done a quick read of kubeadm-dind-cluster and have a few questions:
I see kubeadm-dind-cluster also supports a src option. This is the option we are most interested in because the working group has several outstanding PRs that need e2e testing. Have you successfully tested the src option recently?
Is ^ still an issue even though moby/moby#9939 and #38337 have been fixed?
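For context on the src option: a build-from-source run is driven by environment flags — a rough sketch, where the checkout paths are placeholders and the BUILD_KUBEADM/BUILD_HYPERKUBE flag names are the ones mentioned later in this thread (exact usage may differ by version):

```shell
# Hedged sketch: run kubeadm-dind-cluster against a local Kubernetes
# source tree, building kubeadm and hyperkube from it. Paths are
# placeholders; flag names are from this thread, not verified docs.
cd "$GOPATH/src/k8s.io/kubernetes"
BUILD_KUBEADM=y BUILD_HYPERKUBE=y \
  "$HOME/kubeadm-dind-cluster/dind-cluster.sh" up
```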
Looks like a good case for using kubeadm-dind-cluster with virtlet, to spawn Kubernetes on Kubernetes. Since it can run on clouds without nested virtualization enabled (falling back to plain emulation instead of virtualization), we could spawn the environment under test in a virtualized network, separated from the real one — which could be useful in the case of e2e tests.
@jellonek Well, on GCE Virtlet-based VMs will be slow (non-KVM). Although in the non-GCE case a k8s-on-k8s example may indeed be useful, here we're talking about a GCE env where DIND is much faster. @danehans Concerning btrfs, as far as I understand moby/moby#9939 is not completely fixed, i.e. orphan subvolumes will still cause
(typo in mention, wrote Jell instead of jellonek initially -- sorry for spam)
@ivan4th I successfully deployed a 3-node v1.6 k8s cluster using https://github.com/Mirantis/kubeadm-dind-cluster in my Docker for Mac (10.12.5) dev environment. I had to
I'm super impressed by how easy and fast this multi-node setup is. Nice work! I am starting to test the changes that are needed for IPv6. I first tried updating DIND_SUBNET to an IPv6 subnet and that failed:
I opened kubernetes-retired/kubeadm-dind-cluster#17, which is related to the above. Do you have bandwidth to help me fix issues I find in the kubeadm-dind-cluster project?
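For reference, the failed experiment amounted to roughly the following — the IPv6 prefix is a placeholder; DIND_SUBNET normally carries an IPv4 prefix:

```shell
# Hedged reproduction sketch of the failure described above: overriding
# the kubeadm-dind-cluster subnet variable with an IPv6 prefix did not
# work at the time of this comment. The prefix is illustrative.
DIND_SUBNET="fd00:20::" ./dind-cluster.sh up
```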
I moved kubeadm-dind-cluster testing from my Mac to a CentOS 7 box because Docker for Mac apparently does not support IPv6. I am now unable to create a cluster using kubeadm-dind-cluster due to the following docker daemon error:
Here are my Docker and OS details:
I'm going to upgrade Docker and try again.
@ivan4th I'm still having the same problem after the Docker upgrade. Can you share the details of your setup so I can replicate it? I would like to know your OS, kernel, and Docker versions. Thanks!
@danehans
(it needs to be sourced; also see the contents of the script for settings, etc.) As for IPv6, I haven't used it with DIND yet, but I'll try to see what I can do to make it possible to specify an IPv6 subnet.
@ivan4th I configured docker to use the overlay driver, but I am hitting the same error. I can successfully deploy k8s using kubeadm directly. Here are the details:
I'll try on Ubuntu 16.04.2 since that works for you.
I'll try DIND on CentOS 7 too & see what's going on there.
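For anyone following along, switching the storage driver as described above is typically done through the daemon config — a hedged sketch, assuming the standard /etc/docker/daemon.json location and a systemd-managed daemon:

```shell
# Point Docker at the overlay storage driver instead of devicemapper.
# "overlay" matches the driver named in the comments above; newer
# setups would generally use "overlay2". Requires a daemon restart.
cat >/etc/docker/daemon.json <<'EOF'
{ "storage-driver": "overlay" }
EOF
systemctl restart docker

# Confirm which driver the daemon is now using.
docker info --format '{{.Driver}}'
```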
Automatic merge from submit-queue (batch tested with PRs 47869, 48013, 48016, 48005) Adds IPv6 unit test cases to kubeadm **What this PR does / why we need it**: Adds IPv6 test cases to kubeadm. It's needed to ensure test cases cover IPv6-related networking scenarios for kubeadm-based k8s deployments. **Which issue this PR fixes** This PR is in support of Issue #1443. **Special notes for your reviewer**: Additional PRs may follow as e2e testing is developed under Issue #47666. **Release note**: ```NONE ```
xref #48227
@ivan4th I'm seeing the same issue. I get the same docker error with graphdriver when kubeadm init starts.
OS: cat /etc/centos-release
Kernel: 3.10.0-514.26.1.el7.x86_64
Docker version: Server: go version go1.8.1 linux/amd64
Docker storage driver: devicemapper
@ivan4th Any thoughts as to what I can try to get this working?
@ivan4th I saw another issue as well. When I use the BUILD_KUBEADM and BUILD_HYPERKUBE flags, it creates the image and then tries to get rid of the docker container (kill and wait), but there is no container running, so it hangs forever.
@pmichali Not worth discussing here -- open issues in the kubeadm-dind-cluster repo instead
Automatic merge from submit-queue Adds IPv6 test case to kubeadm bootstrap **What this PR does / why we need it**: Adds IPv6 test cases in support of kubeadm bootstrap functionality. It's needed to ensure test cases cover IPv6-related networking scenarios. **Which issue this PR fixes** This PR is in support of Issue #1443 and Issue #47666 **Special notes for your reviewer**: Additional PRs will follow to ensure kubeadm fully supports IPv6. **Release note**: ```NONE ``` /area ipv6
Automatic merge from submit-queue Updates Kubeadm Master Endpoint for IPv6 **What this PR does / why we need it**: Previously, kubeadm would use ip:port to construct a master endpoint. This works fine for IPv4 addresses, but not for IPv6. Per [RFC 3986](https://www.ietf.org/rfc/rfc3986.txt), IPv6 requires the IP to be encased in brackets when being joined to a port with a colon. This patch updates kubeadm to support wrapping a v6 address with [] to form the master endpoint URL. Since this functionality is needed in multiple areas, a dedicated util function was created for this purpose. **Which issue this PR fixes** Fixes Issue kubernetes/kubeadm#334 **Special notes for your reviewer**: Part of a bigger effort to add IPv6 support to Kubernetes: Issue #1443, Issue #47666 **Release note**: ```NONE ``` /area kubeadm /area ipv6 /sig network /sig cluster-ops
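The bracketing rule the PR describes can be illustrated with a small shell sketch — the endpoint function below is hypothetical, not kubeadm's actual util (which is a Go helper):

```shell
# Illustrative only: build a master endpoint URL from an IP and port,
# wrapping IPv6 literals in brackets as RFC 3986 requires. The
# function name is made up, not kubeadm's real helper.
endpoint() {
  local ip="$1" port="$2"
  case "$ip" in
    *:*) printf 'https://[%s]:%s\n' "$ip" "$port" ;;  # IPv6 literal
    *)   printf 'https://%s:%s\n' "$ip" "$port" ;;    # IPv4 / hostname
  esac
}

endpoint 10.0.0.1 6443         # https://10.0.0.1:6443
endpoint fd00::600d:f00d 6443  # https://[fd00::600d:f00d]:6443
```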
Another PR needed for IPv6 e2e testing (in addition to those listed above):
The Kubernetes sig/network IPv6 Working Group has put together an IPv6 Test Suite that is the start of a test plan for what we intend to (eventually) test as an upstream gating test. At some point, we're hoping to be running this set of test cases on a virtualized, multi-node, IPv6 cluster that's instantiated using kubeadm-dind-cluster with a bunch of IPv6 support added by @danehans and @pmichali, with help from @ivan4th. Appreciate any feedback/suggestions.
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
An effort is underway to add IPv6 functionality across Kubernetes. Currently, no tests exist to verify e2e functionality for Kubernetes when using IPv6.