Kind cluster can't connect after Docker restart #1685

Closed
tsubasaxZZZ opened this issue Jun 23, 2020 · 19 comments

tsubasaxZZZ commented Jun 23, 2020

I created a kind cluster with the following YAML:

# a cluster with 3 control-plane nodes and 3 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
tsunomur@VM:~$ kind create cluster --config kind-example-config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦 📦 📦
 ✓ Configuring the external load balancer ⚖️
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining more control-plane nodes 🎮
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊
tsunomur@VM:~$ kubectl cluster-info --context kind-kind
Kubernetes master is running at https://127.0.0.1:43185
KubeDNS is running at https://127.0.0.1:43185/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
tsunomur@VM:~$ docker ps -a
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS                       NAMES
03dad9ed89f2        kindest/node:v1.18.2           "/usr/local/bin/entr…"   8 minutes ago       Up 6 minutes                                    kind-worker2
cbd3f2c279a8        kindest/node:v1.18.2           "/usr/local/bin/entr…"   8 minutes ago       Up 6 minutes        127.0.0.1:44681->6443/tcp   kind-control-plane3
1531621e9806        kindest/node:v1.18.2           "/usr/local/bin/entr…"   8 minutes ago       Up 6 minutes                                    kind-worker3
1ceaa76b5149        kindest/haproxy:2.1.1-alpine   "/docker-entrypoint.…"   8 minutes ago       Up 8 minutes        127.0.0.1:43185->6443/tcp   kind-external-load-balancer
a8b8cc91893e        kindest/node:v1.18.2           "/usr/local/bin/entr…"   8 minutes ago       Up 6 minutes        127.0.0.1:43397->6443/tcp   kind-control-plane
5076541a963d        kindest/node:v1.18.2           "/usr/local/bin/entr…"   8 minutes ago       Up 6 minutes                                    kind-worker
e64b81636f9a        kindest/node:v1.18.2           "/usr/local/bin/entr…"   8 minutes ago       Up 6 minutes        127.0.0.1:33069->6443/tcp   kind-control-plane2

And then I restarted Docker (same as rebooting the machine):

$ sudo systemctl stop docker

Result: kind-external-load-balancer disappears, and even after forcibly rewriting the cluster URL to a control-plane node's IP address, Pod deployments stay Pending forever.

tsunomur@VM:~$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                       NAMES
03dad9ed89f2        kindest/node:v1.18.2   "/usr/local/bin/entr…"   10 minutes ago      Up 5 seconds                                    kind-worker2
cbd3f2c279a8        kindest/node:v1.18.2   "/usr/local/bin/entr…"   10 minutes ago      Up 5 seconds        127.0.0.1:44681->6443/tcp   kind-control-plane3
1531621e9806        kindest/node:v1.18.2   "/usr/local/bin/entr…"   10 minutes ago      Up 5 seconds                                    kind-worker3
a8b8cc91893e        kindest/node:v1.18.2   "/usr/local/bin/entr…"   10 minutes ago      Up 4 seconds        127.0.0.1:43397->6443/tcp   kind-control-plane
5076541a963d        kindest/node:v1.18.2   "/usr/local/bin/entr…"   10 minutes ago      Up 4 seconds                                    kind-worker
e64b81636f9a        kindest/node:v1.18.2   "/usr/local/bin/entr…"   10 minutes ago      Up 5 seconds        127.0.0.1:33069->6443/tcp   kind-control-plane2

Does kind not support restarting the machine?
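
For anyone who wants to try the same workaround, a rough sketch of pointing kubectl at a control-plane container directly might look like this (the IP address is only an illustration, the "kind" docker network name and the kind-kind cluster entry are assumptions for a default kind >= 0.8 setup, and TLS verification may still complain because the server certificate is not necessarily issued for that address):

# look up the current IP of the control-plane container on the "kind" docker network
docker inspect -f '{{.NetworkSettings.Networks.kind.IPAddress}}' kind-control-plane
# point the kind-kind cluster entry at that node's API server port directly (IP is hypothetical)
kubectl config set-cluster kind-kind --server=https://172.18.0.5:6443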

Ref:

tsubasaxZZZ added the kind/support label Jun 23, 2020
@BenTheElder (Member)

We need to know more details, like what version you're using.
kind does restart clusters on the latest version.

@BenTheElder (Member)

It would also be helpful to know if this happens with a simple kind create cluster (no config, no flags), and if so, more about what your host environment is like.

@tsubasaxZZZ (Author)

Thank you for your quick reply.

I use 0.8.1:

tsunomur@VM:~$ kind --version
kind version 0.8.1

When I created a simple cluster, the situation was not the same, but there were Pods ending up in an Error state.

create cluster and check health

tsunomur@VM:~$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋
tsunomur@VM:~$ k run nginx --image nginx --restart=Never
pod/nginx created
tsunomur@VM:~$ k get po
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          2s
tsunomur@VM:~$ k get po -w
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          3s
nginx   1/1     Running             0          17s
^Ctsunomur@VM:~$ k cluster-info
Kubernetes master is running at https://127.0.0.1:38413
KubeDNS is running at https://127.0.0.1:38413/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
tsunomur@VM:~$ k get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
tsunomur@VM:~$
tsunomur@VM:~$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                       NAMES
be19fb44893d        kindest/node:v1.18.2   "/usr/local/bin/entr…"   2 minutes ago       Up About a minute   127.0.0.1:38413->6443/tcp   kind-control-plane
tsunomur@VM:~$

Restart docker and check health

tsunomur@VM:~$ sudo systemctl stop docker
tsunomur@VM:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Tue 2020-06-23 17:54:16 UTC; 7s ago
     Docs: https://docs.docker.com
  Process: 31153 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=0/SUCCESS)
 Main PID: 31153 (code=exited, status=0/SUCCESS)

Jun 23 17:15:16 VM dockerd[31153]: time="2020-06-23T17:15:16.225369221Z" level=info msg="API listen on /var/run/docker.sock"
Jun 23 17:15:16 VM systemd[1]: Started Docker Application Container Engine.
Jun 23 17:16:43 VM dockerd[31153]: time="2020-06-23T17:16:43.583494164Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 23 17:54:02 VM systemd[1]: Stopping Docker Application Container Engine...
Jun 23 17:54:02 VM dockerd[31153]: time="2020-06-23T17:54:02.941309822Z" level=info msg="Processing signal 'terminated'"
Jun 23 17:54:12 VM dockerd[31153]: time="2020-06-23T17:54:12.957486326Z" level=info msg="Container be19fb44893d46e0e7800cd8af414b80fc5d4bccd0d050ce282a685dd93d3735 failed to exit within
Jun 23 17:54:15 VM dockerd[31153]: time="2020-06-23T17:54:15.089736440Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 23 17:54:16 VM dockerd[31153]: time="2020-06-23T17:54:16.254422134Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=m
Jun 23 17:54:16 VM dockerd[31153]: time="2020-06-23T17:54:16.254886136Z" level=info msg="Daemon shutdown complete"
Jun 23 17:54:16 VM systemd[1]: Stopped Docker Application Container Engine.
tsunomur@VM:~$ sudo systemctl start docker
tsunomur@VM:~$ k cluster-info
Kubernetes master is running at https://127.0.0.1:38413
KubeDNS is running at https://127.0.0.1:38413/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
tsunomur@VM:~$ k get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
tsunomur@VM:~$ k run nginx-after-restart --image nginx --restart=Never
pod/nginx-after-restart created
tsunomur@VM:~$ k get po
NAME                  READY   STATUS              RESTARTS   AGE
nginx                 0/1     Unknown             0          2m
nginx-after-restart   0/1     ContainerCreating   0          2s

But if only a standalone Pod (not managed by a Deployment) has Error status, I can just recreate it.

I won't use a multi-node cluster for now.

@BenTheElder (Member) commented Jun 23, 2020

Yeah, some errored pods are expected; not everything handles the IP switch well, etc.

The cluster not coming back up with multi-node is not expected.

What happens if you use:

# a cluster with 1 control-plane node and 2 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

It's possible we have a bug in the "HA" mode; it's not well tested or used for much currently.

BenTheElder self-assigned this Jun 23, 2020
@tsubasaxZZZ (Author) commented Jun 24, 2020

I tried a cluster with only one control-plane and multiple workers, then restarted dockerd, and it seems to be in good condition.
I'll create clusters with only one control-plane from now on.

Thank you.

@rolinh commented Jun 24, 2020

I think this issue should be re-opened. The problem occurs when more than one control-plane is used. I could reproduce it easily using this config (kind v0.8.1, docker 19.03.11-ce):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
$ docker ps -a
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS                     PORTS                       NAMES
9086b0999d6a        kindest/haproxy:2.1.1-alpine   "/docker-entrypoint.…"   5 minutes ago       Exited (0) 2 minutes ago                               kind-external-load-balancer
938b62548187        kindest/node:v1.18.2           "/usr/local/bin/entr…"   5 minutes ago       Up About a minute          127.0.0.1:39575->6443/tcp   kind-control-plane
d665bd9e5fe3        kindest/node:v1.18.2           "/usr/local/bin/entr…"   5 minutes ago       Up About a minute          127.0.0.1:34927->6443/tcp   kind-control-plane2

tsubasaxZZZ reopened this Jun 25, 2020
@BenTheElder (Member)

I don't think 2 control planes is a valid kubeadm configuration, @rolinh; only 3? I thought we validated this, but we must not.

That said, it does seem we have a bug here with multiple control planes.

I'm going to interject a brief note: I highly recommend testing with a single node cluster unless you have strong evidence that multi-node is relevant, doubly so for multi-control plane.

@rolinh commented Jun 25, 2020

@BenTheElder fwiw, the issue is the same with 3 control planes.

I'm going to interject a brief note: I highly recommend testing with a single node cluster unless you have strong evidence that multi-node is relevant, doubly so for multi-control plane.

Would you mind expanding on this? Why is this a problem? I've been testing things with clusters of up to 50 nodes without issues so far, except upon docker service restart (or machine reboot). As a single control-plane is sufficient, I'll stick to that, but I do need to test things in multi-node clusters.

@BenTheElder (Member)

50 nodes? Cool! That's actually the largest single kind cluster I've heard of so far :-)

Many (most?) apps are unlikely to gain anything testing-wise from multiple nodes, but running multi-node kind clusters overcommits the hardware (each node reports having the full host resources) while adding more overhead.

The "HA" mode is not actually HA due to etcd and due to running on top of one physical host ... it is somewhat useful for certain things where multiple api-servers matters.

Similarly, multi-node is used for testing where multi-node rolling behavior matters (we typically test Kubernetes itself with 1 control plane and 2 workers); outside of that, it's just extra complexity and overhead.

@rolinh commented Jun 25, 2020

50 nodes? Cool! That's actually the largest single kind cluster I've heard of so far :-)

I've tried to push it further just out of curiosity, but a 100-node cluster attempt brought my machine down to its knees with a ridiculous 2500+ load average at some point 😁

I work on Cilium (so I use kind with the Cilium CNI), and at the moment more specifically on Hubble Relay for cluster-wide observability, and being able to test things in a local multi-node cluster is just amazing. I used to have to run multiple VMs, but that process is much heavier. We're also able to test things like cluster mesh with kind, and we recently introduced kind as part of our CI to run smoke tests.

@BenTheElder (Member)

Cool, that's definitely one of those apps that will benefit from multi-node :-)
We see a lot of people going a bit nuts with nodes to run web-app-like services that don't benefit from this 😅

@BenTheElder (Member)

Tracking the HA restart issue with a bug here: #1689
Closing this one, but I will continue responding to comments 😅

@shashankpai

I'm facing this issue on a kind cluster with one control-plane and 2 worker nodes. When I start the cluster, everything works fine, but when I restart my machine the pods go into Pending state. Below are some of the outputs.

kubectl get nodes -o wide
NAME                STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
dev-control-plane   Ready    control-plane,master   4d6h   v1.20.7   172.18.0.2    <none>        Ubuntu 21.04   5.11.0-16-generic   containerd://1.5.2
dev-worker          Ready    <none>                 4d6h   v1.20.7   172.18.0.3    <none>        Ubuntu 21.04   5.11.0-16-generic   containerd://1.5.2
dev-worker2         Ready    <none>                 4d6h   v1.20.7   172.18.0.4    <none>        Ubuntu 21.04   5.11.0-16-generic   containerd://1.5.2

When I describe the nodes, there are no events recorded:

kubectl describe nodes 
Name:               dev-control-plane
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=dev-control-plane
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 09 Dec 2021 18:03:10 +0530
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  dev-control-plane
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Dec 2021 00:18:16 +0530
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 14 Dec 2021 00:15:15 +0530   Thu, 09 Dec 2021 18:03:07 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 14 Dec 2021 00:15:15 +0530   Thu, 09 Dec 2021 18:03:07 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 14 Dec 2021 00:15:15 +0530   Thu, 09 Dec 2021 18:03:07 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 14 Dec 2021 00:15:15 +0530   Thu, 09 Dec 2021 18:03:34 +0530   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.18.0.2
  Hostname:    dev-control-plane
Capacity:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
Allocatable:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
System Info:
  Machine ID:                 71683ce055cf4961b8a3ee1c84333375
  System UUID:                1afd3039-3bfc-4ae0-9f08-a07c860b766e
  Boot ID:                    71486fa9-9e88-4991-b71b-0cabb5682524
  Kernel Version:             5.11.0-16-generic
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.5.2
  Kubelet Version:            v1.20.7
  Kube-Proxy Version:         v1.20.7
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
ProviderID:                   kind://docker/dev/dev-control-plane
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-dev-control-plane                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         13h
  kube-system                 kindnet-kczjl                                100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      4d6h
  kube-system                 kube-apiserver-dev-control-plane             250m (2%)     0 (0%)      0 (0%)           0 (0%)         7h2m
  kube-system                 kube-controller-manager-dev-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         4h59m
  kube-system                 kube-proxy-zpqk9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
  kube-system                 kube-scheduler-dev-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         7h
  metallb-system              speaker-7sr4c                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (6%)   100m (0%)
  memory             150Mi (1%)  50Mi (0%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>


Name:               dev-worker
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=dev-worker
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 09 Dec 2021 18:03:38 +0530
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  dev-worker
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Dec 2021 00:18:16 +0530
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 14 Dec 2021 00:17:56 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 14 Dec 2021 00:17:56 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 14 Dec 2021 00:17:56 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 14 Dec 2021 00:17:56 +0530   Mon, 13 Dec 2021 17:51:06 +0530   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.18.0.3
  Hostname:    dev-worker
Capacity:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
Allocatable:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
System Info:
  Machine ID:                 850286600982428ba831af865181ed72
  System UUID:                8f9a2778-c12d-4c6f-89aa-e35a1ab4f630
  Boot ID:                    71486fa9-9e88-4991-b71b-0cabb5682524
  Kernel Version:             5.11.0-16-generic
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.5.2
  Kubelet Version:            v1.20.7
  Kube-Proxy Version:         v1.20.7
PodCIDR:                      10.244.2.0/24
PodCIDRs:                     10.244.2.0/24
ProviderID:                   kind://docker/dev/dev-worker
Non-terminated Pods:          (3 in total)
  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                ------------  ----------  ---------------  -------------  ---
  kube-system                 kindnet-vrqsz       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      4d6h
  kube-system                 kube-proxy-2j2jj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
  metallb-system              speaker-8c4rk       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (0%)  100m (0%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:              <none>


Name:               dev-worker2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=dev-worker2
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 09 Dec 2021 18:03:38 +0530
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  dev-worker2
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Dec 2021 00:18:16 +0530
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 14 Dec 2021 00:16:15 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 14 Dec 2021 00:16:15 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 14 Dec 2021 00:16:15 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 14 Dec 2021 00:16:15 +0530   Thu, 09 Dec 2021 18:03:48 +0530   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.18.0.4
  Hostname:    dev-worker2
Capacity:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
Allocatable:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
System Info:
  Machine ID:                 e68f0d5d1ff542d5a9e822d04a4c65ea
  System UUID:                68068b61-c0ce-449f-a60a-a7ebb54674b7
  Boot ID:                    71486fa9-9e88-4991-b71b-0cabb5682524
  Kernel Version:             5.11.0-16-generic
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.5.2
  Kubelet Version:            v1.20.7
  Kube-Proxy Version:         v1.20.7
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
ProviderID:                   kind://docker/dev/dev-worker2
Non-terminated Pods:          (3 in total)
  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                ------------  ----------  ---------------  -------------  ---
  kube-system                 kindnet-dqgpn       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      4d6h
  kube-system                 kube-proxy-bghtx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
  metallb-system              speaker-wd7ft       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (0%)  100m (0%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:              <none>

The same goes for describing the pod:

kubectl describe pod nginx3
Name:         nginx3
Namespace:    default
Priority:     0
Node:         <none>
Labels:       run=nginx3
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  nginx3:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g26tr (ro)
Volumes:
  default-token-g26tr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-g26tr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

All the pods in kube-system are OK:

kubectl get pods  -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
etcd-dev-control-plane                      1/1     Running   0          13h
kindnet-dqgpn                               1/1     Running   4          4d6h
kindnet-kczjl                               1/1     Running   4          4d6h
kindnet-vrqsz                               1/1     Running   4          4d6h
kube-apiserver-dev-control-plane            1/1     Running   0          7h5m
kube-controller-manager-dev-control-plane   1/1     Running   4          5h1m
kube-proxy-2j2jj                            1/1     Running   4          4d6h
kube-proxy-bghtx                            1/1     Running   4          4d6h
kube-proxy-zpqk9                            1/1     Running   4          4d6h
kube-scheduler-dev-control-plane            1/1     Running   4          7h3m

MetalLB is also deployed:

kubectl get pods -n  metallb-system
NAME            READY   STATUS    RESTARTS   AGE
speaker-7sr4c   1/1     Running   4          4d6h
speaker-8c4rk   1/1     Running   4          4d6h
speaker-wd7ft   1/1     Running   6          4d6h

I have observed a few suspicious log entries in the kube-scheduler pod complaining about connection refused errors:

E1213 18:49:11.055015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.18.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:18.462234       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://172.18.0.3:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:20.875619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.18.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:21.398879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.18.0.3:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:24.216140       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://172.18.0.3:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:30.550908       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.18.0.3:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:33.391076       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:40.663384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.18.0.3:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:42.069543       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://172.18.0.3:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:46.514854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.18.0.3:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:49.125063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.18.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:56.713987       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.18.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:57.622639       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:58.085948       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.18.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:04.678999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:07.131961       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.18.0.3:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:16.841002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.18.0.3:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:17.867246       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.18.0.3:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:18.073130       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://172.18.0.3:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:18.200523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://172.18.0.3:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:26.614618       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.18.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:28.496243       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.18.0.3:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:32.498903       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.18.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:32.996472       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://172.18.0.3:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:46.853901       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:52.536527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.18.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:53.265195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.18.0.3:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:55.692606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:59.711252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.18.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:59.819107       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://172.18.0.3:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:07.978263       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.18.0.3:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:10.578197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.18.0.3:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:12.975560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://172.18.0.3:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:14.441638       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.18.0.3:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:15.840423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://172.18.0.3:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:17.413701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.18.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:23.459527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.18.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:34.259630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:40.883489       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.18.0.3:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:42.076899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.18.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused

The kind config I am using:

# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9
- role: worker
  image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9
- role: worker
  image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9

And the kind version:

kind --version
kind version 0.11.1

@aojea (Contributor) commented Dec 14, 2021

Your containers have changed their IPs after the restart; the control plane is now 172.18.0.2:

dev-control-plane Ready control-plane,master 4d6h v1.20.7 172.18.0.2 Ubuntu 21.04 5.11.0-16-generic containerd://1.5.2

and before it must have been 172.18.0.3:

E1213 18:51:23.459527 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.18.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused

Docker assigns IPs randomly; either your containers restart with the same IPs they had before, or your cluster will not work.
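
A quick way to confirm this (a sketch, assuming the default kind docker network name "kind" and the node names from the output above):

# current IP of each node container on the kind network
for node in dev-control-plane dev-worker dev-worker2; do
  printf '%s ' "$node"
  docker inspect -f '{{.NetworkSettings.Networks.kind.IPAddress}}' "$node"
done
# compare with the InternalIP column reported by the cluster
kubectl get nodes -o wide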

@shashankpai

Yes @aojea, thanks. I just noticed that: after a few restarts the control plane returned to its original 172.18.0.3 and now things are fine, but on the next restart it will again assign IPs randomly and fail. What can be done about this? I thought this issue only affected HA control-plane clusters. Is this outcome expected, or can we change a few configs?

@aojea (Contributor) commented Dec 14, 2021

Is this outcome expected, or can we change a few configs?

That is how IP assignment works in docker; making it more predictable from KIND would require overcomplicating the code and would cause compatibility problems ... the ideal solution would be for docker's IP assignment to try to keep the same IPs after a restart.
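
For background, plain docker only guarantees a fixed address when a container is attached to a user-defined network with an explicit --ip, roughly like the sketch below; kind does not currently expose these options, so this is only to illustrate what "predictable" assignment would involve:

# illustrative only: pinning a container IP on a user-defined docker network
docker network create --subnet 172.30.0.0/16 pinned-net
docker run -d --name fixed-ip --network pinned-net --ip 172.30.0.10 alpine:3.14 sleep inf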

@TomHutter commented Feb 25, 2022

Hey guys, what do you think of this:

Assumption: when linking a docker container to another (docker run --link other), I assume docker has an internal mechanism, when restarting, to start my container after the other container it is linked to, because my container is supposed to be linked to the other, and that can only be achieved if docker knows the IP address of the other container.
I haven't checked the code, but here is what I figured out:

docker run --rm -d --name first alpine:3.14 sleep inf
docker run --rm -d --name second --link=first alpine:3.14 sleep inf

Checking /etc/hosts on first:

docker exec -it  first sh

/ # cat /etc/hosts
127.0.0.1       localhost
...
172.17.0.2      4834508c1103

Checking /etc/hosts on second:

docker exec -it second sh

/ # cat /etc/hosts
127.0.0.1       localhost
...
172.17.0.2      first 4834508c1103
172.17.0.3      4e7d766612d9

The second container, started with --link=first, has the IP address of the first container in /etc/hosts.

Trying to start the second container with --link=first results in an error if first is not running, which supports my theory.

docker run --rm -d --name second --link=first alpine:3.14 sleep inf
docker: Error response from daemon: could not get container for first: No such container: first.

What if we used this mechanism to order the start of kind containers during a docker restart?

When creating the nodes, we could add something like:

control-plane containers link to their predecessors:

args = append(args, "--link=kind-control-plane", "--link=kind-control-plane2")

worker container link to all control-plane containers:

args = append(args, "--link=kind-control-plane", "--link=kind-control-plane2", "--link=kind-control-plane3")

What do you think? Is it worth a shot?

@BenTheElder (Member)

Docker --link is deprecated.

@tnqn (Contributor) commented Mar 11, 2022

I faced the same issue as #1685 (comment) in a multi-node cluster with a single control-plane node. The only problem I saw was that kube-controller-manager and kube-scheduler could not connect to kube-apiserver. Since enable_network_magic already does some magic, I tried replacing the stale local node IP with the loopback address in the same place, and the cluster worked after that.

I created #2671 with the above change to see if it's an acceptable approach for fixing the restart case of a multi-node cluster with a single control-plane node.
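
For readers hitting the same symptom before such a change lands, a rough manual sketch of the same idea (not the actual patch in #2671; the stale address, node name, and file paths are assumptions based on a default kubeadm layout) is to rewrite the stale IP to loopback inside the control-plane container:

# replace the stale apiserver address with loopback for controller-manager and scheduler
# (172.18.0.3 is the old, illustrative address; the static pods may need a restart to pick this up)
docker exec kind-control-plane sh -c \
  'sed -i "s#https://172.18.0.3:6443#https://127.0.0.1:6443#g" /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf'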
