
Fix multi node two pods getting same IP and nodespec not having PodCIDR #9875

Merged
merged 2 commits into kubernetes:master from sadlil:kindnet-podcidr
Dec 11, 2020

Conversation

@sadlil (Contributor) commented Dec 7, 2020

Fixes #9838
Ref #7538

With this fix enabled, in multi-node mode the nodes get the podCIDR set:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,SPEC:.spec
NAME           SPEC
minikube       map[podCIDR:10.244.0.0/24 podCIDRs:[10.244.0.0/24]]
minikube-m02   map[podCIDR:10.244.1.0/24 podCIDRs:[10.244.1.0/24]]
minikube-m03   map[podCIDR:10.244.2.0/24 podCIDRs:[10.244.2.0/24]]
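
The same can be confirmed per node with a jsonpath query (an equivalent spot check, not part of the PR):

$ kubectl get node minikube-m02 -o jsonpath='{.spec.podCIDR}'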

Pods running on a node get their IPs from that node's CIDR as intended, and no IP is reused:

$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
default       my-nginx-5b56ccd65f-dghzv          1/1     Running   0          43s     10.244.1.2     minikube-m02   <none>           <none>
default       my-nginx-5b56ccd65f-l5f2k          1/1     Running   0          43s     10.244.2.2     minikube-m03   <none>           <none>
default       my-nginx-5b56ccd65f-tlwx5          1/1     Running   0          43s     10.244.0.3     minikube       <none>           <none>
default       my-nginx-5b56ccd65f-xfhmp          1/1     Running   0          43s     10.244.1.4     minikube-m02   <none>           <none>
default       net-test-c4f9cfdd4-4w95m           1/1     Running   0          43s     10.244.1.3     minikube-m02   <none>           <none>
default       net-test-c4f9cfdd4-9bsks           1/1     Running   0          43s     10.244.2.3     minikube-m03   <none>           <none>
default       net-test-c4f9cfdd4-jxqt8           1/1     Running   0          43s     10.244.0.4     minikube       <none>           <none>
default       net-test-c4f9cfdd4-wkfrg           1/1     Running   0          43s     10.244.2.4     minikube-m03   <none>           <none>
kube-system   coredns-f9fd979d6-wcdm8            1/1     Running   0          3m1s    10.244.0.2     minikube       <none>           <none>
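
A quick sanity check for duplicate pod IPs (not part of the PR; note that host-network pods in kube-system legitimately share their node's IP and will show up here):

$ kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}' | sort | uniq -d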

Pods running on worker nodes are able to resolve DNS and connect:

$ kubectl exec -it net-test-c4f9cfdd4-9bsks -- /bin/bash
bash-5.0# curl google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
bash-5.0# curl my-nginx.default
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
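
Resolution can also be checked explicitly from inside the pod, assuming the image ships nslookup (not verified here):

bash-5.0# nslookup my-nginx.default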

Running with only a single node also has kindnet enabled as the CNI, and pods get IPs from the podCIDR:

$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE    IP             NODE       NOMINATED NODE   READINESS GATES
default       my-nginx-5b56ccd65f-56gnr          1/1     Running   0          38s    10.244.0.10    minikube   <none>           <none>
default       my-nginx-5b56ccd65f-8ntxh          1/1     Running   0          38s    10.244.0.7     minikube   <none>           <none>
default       my-nginx-5b56ccd65f-msxws          1/1     Running   0          38s    10.244.0.3     minikube   <none>           <none>
default       my-nginx-5b56ccd65f-sdm7v          1/1     Running   0          38s    10.244.0.8     minikube   <none>           <none>
default       net-test-c4f9cfdd4-8wbp8           1/1     Running   0          37s    10.244.0.9     minikube   <none>           <none>
default       net-test-c4f9cfdd4-d9hr9           1/1     Running   0          37s    10.244.0.6     minikube   <none>           <none>
default       net-test-c4f9cfdd4-klmdg           1/1     Running   0          38s    10.244.0.4     minikube   <none>           <none>
default       net-test-c4f9cfdd4-vrb2l           1/1     Running   0          37s    10.244.0.5     minikube   <none>           <none>
kube-system   coredns-f9fd979d6-25pwj            1/1     Running   0          106s   10.244.0.2     minikube   <none>           <none>
kube-system   etcd-minikube                      1/1     Running   0          119s   192.168.49.2   minikube   <none>           <none>
kube-system   kindnet-2klfc                      1/1     Running   0          106s   192.168.49.2   minikube   <none>           <none>
kube-system   kube-apiserver-minikube            1/1     Running   0          119s   192.168.49.2   minikube   <none>           <none>
kube-system   kube-controller-manager-minikube   1/1     Running   0          119s   192.168.49.2   minikube   <none>           <none>
kube-system   kube-proxy-txqpb                   1/1     Running   0          106s   192.168.49.2   minikube   <none>           <none>
kube-system   kube-scheduler-minikube            1/1     Running   0          119s   192.168.49.2   minikube   <none>           <none>
kube-system   storage-provisioner                1/1     Running   0          118s   192.168.49.2   minikube   <none>           <none>
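
The kindnet pod above suggests the CNI runs as a DaemonSet; listing the DaemonSets in kube-system (a generic check, the exact DaemonSet name is not confirmed here) should show it alongside kube-proxy:

$ kubectl -n kube-system get daemonsets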

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Dec 7, 2020
@k8s-ci-robot (Contributor):

Hi @sadlil. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 7, 2020
@minikube-bot (Collaborator):

Can one of the admins verify this patch?

@azhao155 (Contributor) left a comment:

Question about the change for disabling CNI

@medyagh (Member) commented Dec 8, 2020

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Dec 8, 2020
@minikube-pr-bot:

kvm2 Driver
error collecting results for kvm2 driver: timing run 0 with Minikube (PR 9875): timing cmd: [/home/performance-monitor/.minikube/minikube-binaries/9875/minikube start --driver=kvm2]: starting cmd: fork/exec /home/performance-monitor/.minikube/minikube-binaries/9875/minikube: exec format error
docker Driver
error collecting results for docker driver: timing run 0 with Minikube (PR 9875): timing cmd: [/home/performance-monitor/.minikube/minikube-binaries/9875/minikube start --driver=docker]: starting cmd: fork/exec /home/performance-monitor/.minikube/minikube-binaries/9875/minikube: exec format error

pkg/minikube/cni/cni.go (review thread: outdated, resolved)
@sharifelgamal (Collaborator):

I believe the none driver test failures are because of the change to kindnet by default. I think we need to disable CNI for baremetal no matter what.
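
For context, a generic way to check whether any CNI config has been written to a bare-metal host (not specific to this change) is to list the standard CNI config directory:

$ ls /etc/cni/net.d/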

@minikube-pr-bot:

kvm2 Driver
error collecting results for kvm2 driver: timing run 0 with minikube: timing cmd: [/home/performance-monitor/minikube/out/minikube start --driver=kvm2]: waiting for minikube: exit status 50
docker Driver
Times for minikube: 28.8s 30.2s 28.7s
Average time for minikube: 29.3s

Times for Minikube (PR 9875): 33.9s 29.3s 27.9s
Average time for Minikube (PR 9875): 30.4s

Averages Time Per Log

+--------------------------------+----------+--------------------+
|              LOG               | MINIKUBE | MINIKUBE (PR 9875) |
+--------------------------------+----------+--------------------+
| * minikube v1.15.1 on Debian   | 0.2s     | 0.2s               |
|                           9.11 |          |                    |
| * Using the docker driver      | 0.1s     | 0.1s               |
| based on user configuration    |          |                    |
| * Starting control plane node  | 0.1s     | 0.1s               |
| minikube in cluster minikube   |          |                    |
| * Creating docker container    | 9.2s     | 9.2s               |
| (CPUs=2, Memory=3700MB) ...    |          |                    |
| * Preparing Kubernetes v1.20.0 | 18.5s    |                    |
| on Docker 19.03.14 ...         |          |                    |
| * Verifying Kubernetes         | 1.1s     | 0.7s               |
| components...                  |          |                    |
| * Enabled addons:              | 0.1s     | 0.1s               |
| storage-provisioner,           |          |                    |
| default-storageclass           |          |                    |
| * Done! kubectl is now         | 0.0s     | 0.0s               |
| configured to use "minikube"   |          |                    |
| cluster and "default"          |          |                    |
| namespace by default           |          |                    |
+--------------------------------+----------+--------------------+

@minikube-pr-bot:

kvm2 Driver
error collecting results for kvm2 driver: timing run 0 with minikube: timing cmd: [./minikube start --driver=kvm2]: waiting for minikube: exit status 80
docker Driver
Times for minikube: 30.1s 28.7s 29.9s
Average time for minikube: 29.6s

Times for Minikube (PR 9875): 29.4s 28.9s 29.0s
Average time for Minikube (PR 9875): 29.1s

Averages Time Per Log

+--------------------------------+----------+--------------------+
|              LOG               | MINIKUBE | MINIKUBE (PR 9875) |
+--------------------------------+----------+--------------------+
| * minikube v1.15.1 on Debian   | 0.2s     | 0.2s               |
|                           9.11 |          |                    |
| * Using the docker driver      | 0.1s     | 0.1s               |
| based on user configuration    |          |                    |
| * Starting control plane node  | 0.1s     | 0.1s               |
| minikube in cluster minikube   |          |                    |
| * Creating docker container    | 9.4s     | 8.9s               |
| (CPUs=2, Memory=3700MB) ...    |          |                    |
| * Preparing Kubernetes v1.20.0 | 18.5s    |                    |
| on Docker 19.03.14 ...         |          |                    |
| * Verifying Kubernetes         | 1.2s     | 0.7s               |
| components...                  |          |                    |
| * Enabled addons:              | 0.1s     | 0.1s               |
| storage-provisioner,           |          |                    |
| default-storageclass           |          |                    |
| * Done! kubectl is now         | 0.0s     | 0.0s               |
| configured to use "minikube"   |          |                    |
| cluster and "default"          |          |                    |
| namespace by default           |          |                    |
+--------------------------------+----------+--------------------+

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 11, 2020
@medyagh (Member) left a comment:

This is a great PR and I can't thank you enough for fixing such a mysterious bug in minikube.

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: medyagh, sadlil, tstromberg

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@medyagh medyagh changed the title from "Fix Problematic Multi Node Pod Networking with KindNet CNI" to "Fix multi node two pods getting same IP and nodespec not having PodCIDR" Dec 11, 2020
@medyagh medyagh merged commit e96b05e into kubernetes:master Dec 11, 2020
@lingsamuel lingsamuel mentioned this pull request Dec 24, 2020
@sadlil sadlil deleted the kindnet-podcidr branch December 26, 2020 15:19
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
cncf-cla: yes: Indicates the PR's author has signed the CNCF CLA.
ok-to-test: Indicates a non-member PR verified by an org member that is safe to test.
size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Problematic Multi Node Networking with docker driver and kindnetd CNI
8 participants