
Node add networking errors with default Mac OS X (arm) / Docker 4.10.1 (82475) installation #14639

Closed
spurin opened this issue Jul 26, 2022 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

spurin commented Jul 26, 2022

What Happened?

If I create a 3-node cluster using the defaults on a standard macOS Monterey (arm64) build -

james@JamessMacStudio ~ % minikube start --nodes 3 -p multinode-demo
😄  [multinode-demo] minikube v1.26.0 on Darwin 12.4 (arm64)
✨  Automatically selected the docker driver
📌  Using Docker Desktop driver with root privileges
👍  Starting control plane node multinode-demo in cluster multinode-demo
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7802MB) ...
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting worker node multinode-demo-m02 in cluster multinode-demo
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7802MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
    ▪ env NO_PROXY=192.168.49.2
🔎  Verifying Kubernetes components...

👍  Starting worker node multinode-demo-m03 in cluster multinode-demo
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7802MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2,192.168.49.3
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
    ▪ env NO_PROXY=192.168.49.2
    ▪ env NO_PROXY=192.168.49.2,192.168.49.3
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "multinode-demo" cluster and "default" namespace by default

And then I add another node -

james@JamessMacStudio ~ % minikube node add -p multinode-demo
😄  Adding node m04 to cluster multinode-demo
👍  Starting worker node multinode-demo-m04 in cluster multinode-demo
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7802MB) ...
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
🔎  Verifying Kubernetes components...
🏄  Successfully added m04 to multinode-demo!
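
(Quick check, for reference: the node add itself succeeds and the new node should appear as Ready, since the DaemonSet pod does get scheduled onto it below; the failure only shows up once that pod tries to start.)

kubectl get nodes -o wide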

And then attempt to run a DaemonSet -

james@JamessMacStudio ~ % cat nginx_daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx

james@JamessMacStudio ~ % kubectl apply -f nginx_daemonset.yaml
daemonset.apps/nginx created

The pod fails to start on the 4th node, with the following error -

james@JamessMacStudio ~ % kubectl get pods -o wide
NAME          READY   STATUS              RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
nginx-7hd5r   0/1     ContainerCreating   0          2m38s   <none>       multinode-demo-m04   <none>           <none>
nginx-chsg8   1/1     Running             0          2m38s   10.244.0.3   multinode-demo       <none>           <none>
nginx-jlqbj   1/1     Running             0          2m38s   10.244.1.2   multinode-demo-m02   <none>           <none>
nginx-wmnnc   1/1     Running             0          2m38s   10.244.2.2   multinode-demo-m03   <none>           <none>

james@JamessMacStudio ~ % kubectl describe pod/nginx-7hd5r
<snip>
  Normal   SandboxChanged          2m44s (x12 over 2m55s)  kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  2m43s (x4 over 2m46s)   kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "c5abf90d8d509be26bbb09a6c9cd5deb508d870f65690b2e3f28e1536ce32e27" network for pod "nginx-7hd5r": networkPlugin cni failed to set up pod "nginx-7hd5r_default" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c5abf90d8d509be26bbb09a6c9cd5deb508d870f65690b2e3f28e1536ce32e27" network for pod "nginx-7hd5r": networkPlugin cni failed to teardown pod "nginx-7hd5r_default" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-3782a8c3e01ab34f8ff4098a -m comment --comment name: "crio" id: "c5abf90d8d509be26bbb09a6c9cd5deb508d870f65690b2e3f28e1536ce32e27" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3782a8c3e01ab34f8ff4098a':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
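
For reference, the usual way to dig further into this would be to inspect the CNI state on the affected node, along these lines (node and profile names as per this repro; output not captured here):

minikube ssh -p multinode-demo -n multinode-demo-m04
# inside the node:
ls /etc/cni/net.d/                     # which CNI config(s) the kubelet sees
ip addr show cni0                      # the bridge the plugin failed to configure
sudo iptables -t nat -S | grep CNI     # the CNI chains referenced in the error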

If I repeat the process with a different CNI, e.g. Calico, it works as expected -

james@JamessMacStudio ~ % minikube delete -p multinode-demo
🔥  Deleting "multinode-demo" in docker ...
🔥  Deleting container "multinode-demo" ...
🔥  Deleting container "multinode-demo-m02" ...
🔥  Deleting container "multinode-demo-m03" ...
🔥  Deleting container "multinode-demo-m04" ...
🔥  Removing /Users/james/.minikube/machines/multinode-demo ...
🔥  Removing /Users/james/.minikube/machines/multinode-demo-m02 ...
🔥  Removing /Users/james/.minikube/machines/multinode-demo-m03 ...
🔥  Removing /Users/james/.minikube/machines/multinode-demo-m04 ...
💀  Removed all traces of the "multinode-demo" cluster.



james@JamessMacStudio ~ % minikube start --nodes 3 -p multinode-demo --cni=calico
😄  [multinode-demo] minikube v1.26.0 on Darwin 12.4 (arm64)
✨  Automatically selected the docker driver
📌  Using Docker Desktop driver with root privileges
👍  Starting control plane node multinode-demo in cluster multinode-demo
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7802MB) ...
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Calico (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting worker node multinode-demo-m02 in cluster multinode-demo
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7802MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
    ▪ env NO_PROXY=192.168.49.2
🔎  Verifying Kubernetes components...

👍  Starting worker node multinode-demo-m03 in cluster multinode-demo
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7802MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2,192.168.49.3
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
    ▪ env NO_PROXY=192.168.49.2
    ▪ env NO_PROXY=192.168.49.2,192.168.49.3
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "multinode-demo" cluster and "default" namespace by default



james@JamessMacStudio ~ % minikube node add -p multinode-demo
😄  Adding node m04 to cluster multinode-demo
👍  Starting worker node multinode-demo-m04 in cluster multinode-demo
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7802MB) ...
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
🔎  Verifying Kubernetes components...
🏄  Successfully added m04 to multinode-demo!




james@JamessMacStudio ~ % cat nginx_daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
james@JamessMacStudio ~ % kubectl apply -f nginx_daemonset.yaml
daemonset.apps/nginx created



james@JamessMacStudio ~ % kubectl get daemonset -o wide
NAME    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES   SELECTOR
nginx   4         4         4       4            4           <none>          25s   nginx        nginx    app=nginx



james@JamessMacStudio ~ % kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE                 NOMINATED NODE   READINESS GATES
nginx-bdmzb   1/1     Running   0          37s   10.244.239.1     multinode-demo-m02   <none>           <none>
nginx-rj75g   1/1     Running   0          37s   10.244.12.1      multinode-demo-m04   <none>           <none>
nginx-sllqh   1/1     Running   0          37s   10.244.146.65    multinode-demo-m03   <none>           <none>
nginx-zbwc9   1/1     Running   0          37s   10.244.113.195   multinode-demo       <none>           <none>
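
So as a workaround, starting the cluster with an explicit CNI avoids the problem entirely, i.e.:

minikube start --nodes 3 -p multinode-demo --cni=calico
minikube node add -p multinode-demo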

Attach the log file

log.txt

Operating System

macOS (Default)

Driver

Docker

RA489 commented Jul 28, 2022

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Jul 28, 2022

klaases commented Sep 14, 2022

Hi @spurin, are you still experiencing this issue?

Also, have you tried reaching out to the minikube community on Slack or Groups?

https://minikube.sigs.k8s.io/community/

/triage needs-information

@k8s-ci-robot k8s-ci-robot added the triage/needs-information Indicates an issue needs more information in order to work on it. label Sep 14, 2022

spurin commented Sep 14, 2022

Hi @klaases

I've just filed it here to be picked up if anyone wishes to do so. I'm using the workaround that I've mentioned in the description.

What other information is needed?

Thanks

James


klaases commented Oct 5, 2022

Hi @spurin, sure, appreciate your report. We'll keep it open and available for others who might find the information helpful as a reference.

@klaases klaases removed kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. labels Oct 5, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 3, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 2, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Mar 4, 2023
@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
