Kubelet tries to get ContainerStatus of non-existent containers when initializing a cluster #124407
Labels: kind/bug, kind/support, needs-triage, sig/node
What happened?
When initializing a cluster using
kubeadm init --pod-network-cidr 10.112.0.0/12 --service-cidr 10.16.0.0/12 --apiserver-advertise-address 172.X.X.X --v=5
, during the wait-control-plane phase the kubelet is launched and is expected to start the essential control plane pods. However, kubeadm times out waiting for the kubelet to become healthy. Taking a look at the kubelet journal logs (the provided logs are from after the node is registered):
At this point the required containers have been started according to containerd:
However, the containers whose ContainerStatus kubelet is trying to get are not among the containers that containerd has created!
The containerd logs confirm this as well:
This behavior results in failure when initializing the cluster.
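For reference, a minimal sketch of how the kubelet's view can be compared against containerd's view of the containers (assuming crictl is pointed at containerd's default CRI socket; socket paths and flags may differ on other setups):

```sh
# Containers containerd has actually created, as seen through the CRI
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a

# The same view from containerd directly, in the k8s.io namespace
ctr -n k8s.io containers list

# Kubelet log entries mentioning ContainerStatus lookups
journalctl -u kubelet --no-pager | grep ContainerStatus
```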
What did you expect to happen?
For the kubelet to properly verify the readiness of the created containers, and for kubeadm to confirm the kubelet's health and carry on with the rest of the initialization process.
How can we reproduce it (as minimally and precisely as possible)?
Run
kubeadm init
with the standard appropriate flags and wait for it to reach the wait-control-plane phase. Follow the kubelet and containerd logs, and also keep track of the containers containerd has created (a sketch of the relevant commands is below).
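A rough sketch of the reproduction, assuming the same flags as in the report above and a containerd-backed node with crictl available:

```sh
# Initialize the control plane (CIDRs and advertise address as in the report)
kubeadm init --pod-network-cidr 10.112.0.0/12 --service-cidr 10.16.0.0/12 \
  --apiserver-advertise-address 172.X.X.X --v=5

# In a second terminal, follow the kubelet journal while kubeadm waits
journalctl -u kubelet -f

# In a third terminal, watch which containers containerd has created
watch -n 2 'crictl ps -a'
```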
Anything else we need to know?
In case of concerns: `SystemdCgroup` is set to `true` in `/etc/containerd/config.toml`, and SELinux was put into permissive mode with `setenforce 0` (a quick way to verify both is sketched below).
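A minimal sketch for verifying both settings, assuming the default containerd config path:

```sh
# Confirm the runc runtime is configured to use the systemd cgroup driver
grep -n 'SystemdCgroup' /etc/containerd/config.toml

# Confirm SELinux is currently permissive (or disabled)
getenforce
```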
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)