CrashLoopBackOff - After update to containerd 1.5.5 #6009
/kind bug

Doesn't seem related to containerd.

@bmcentos could you show the result of /proc/$(pidof containerd)/status? Thanks
Dec 07 17:19:32 node-2 kubelet[18728]: E1207 17:19:32.302776 18728 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-spmfv_kube-system(929cc7ea-33d8-4a37-881c-1f6e8266a36f)\"" pod="kube-system/kube-proxy-spmfv" podUID=929cc7ea-33d8-4a37-881c-1f6e8266a36f
Dec 07 17:19:41 node-2 kubelet[18728]: I1207 17:19:41.284190 18728 scope.go:110] "RemoveContainer" containerID="d2871155e72ad981f5d241aad57a9d1a08ce80e6fbd445040719ca37553e9ab5"
Dec 07 17:19:41 node-2 kubelet[18728]: E1207 17:19:41.284820 18728 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-exporter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-exporter pod=node-exporter-j6qmk_monitor(7ccca029-7cce-47b7-8bb5-732b944fcb84)\"" pod="monitor/node-exporter-j6qmk" podUID=7ccca029-7cce-47b7-8bb5-732b944fcb84
Dec 07 17:19:44 node-2 kubelet[18728]: I1207 17:19:44.273365 18728 scope.go:110] "RemoveContainer" containerID="bc787d9132b0cf7072343972978d672e8fa3b683adfae8be36a9478edaeb13a0"
Dec 07 17:19:44 node-2 kubelet[18728]: E1207 17:19:44.274017 18728 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-spmfv_kube-system(929cc7ea-33d8-4a37-881c-1f6e8266a36f)\"" pod="kube-system/kube-proxy-spmfv" podUID=929cc7ea-33d8-4a37-881c-1f6e8266a36f
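The kubelet log only shows the back-off, not why the containers exit. A minimal sketch of pulling the exit output straight from the CRI, assuming crictl is installed and containerd is listening on its default socket (neither appears in this thread):

crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs <container-id>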
root@node-2:/tmp# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.3.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
root@node-2:/tmp# cat /proc/$(pidof containerd)/status
Name: containerd
Umask: 0022
State: S (sleeping)
Tgid: 18729
Ngid: 0
Pid: 18729
PPid: 1
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
FDSize: 128
Groups:
NStgid: 18729
NSpid: 18729
NSpgid: 18729
NSsid: 18729
VmPeak: 1355064 kB
VmSize: 1355064 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 52240 kB
VmRSS: 40780 kB
RssAnon: 27112 kB
RssFile: 13668 kB
RssShmem: 0 kB
VmData: 190036 kB
VmStk: 132 kB
VmExe: 17300 kB
VmLib: 1532 kB
VmPTE: 268 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
CoreDumping: 0
THP_enabled: 1
Threads: 11
SigQ: 0/3652
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: fffffffe3bfa2800
SigIgn: 0000000000000000
SigCgt: ffffffffffc1feff
CapInh: 0000000000000000
CapPrm: 000001ffffffffff
CapEff: 000001ffffffffff
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
NoNewPrivs: 0
Seccomp: 0
Seccomp_filters: 0
Speculation_Store_Bypass: thread vulnerable
Cpus_allowed: 00000000,00000000,00000000,00000001
Cpus_allowed_list: 0
Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 119
nonvoluntary_ctxt_switches: 203
root@node-2:/tmp# containerd --version
containerd github.com/containerd/containerd 1.4.5~ds1 1.4.5~ds1-2+deb11u1
root@node-2:/tmp# kubelet --version
Kubernetes v1.22.4
root@node-2:/tmp# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
The only way I was able to replicate this issue was with docker (20.10.12-0ubuntu4) + containerd (1.5.9-0ubuntu3) installed and K8s 1.24 running on top of it (1.24 now defaults to the containerd CRI instead of Dockershim). In this default state containerd kept shutting down the K8s containers. To fix it I created the containerd config file:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

Then I restarted containerd and the kubelet:

systemctl restart containerd
systemctl restart kubelet

I feel like this is a special edge case, as I was just testing what happens if I install Docker + containerd from the Ubuntu packages. It would be great if a containerd expert could explain why, when I install containerd out of the box alongside docker, I need to create the default config.toml myself to get things working.
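For reference, the generated config.toml contains a runc options section that commonly needs editing on systemd-based distros before the kubelet is happy. The thread does not show which value was changed, so the SystemdCgroup line below is an assumption, not the confirmed fix:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    # "containerd config default" emits SystemdCgroup = false; kubelet's
    # systemd cgroup driver expects it to be true
    SystemdCgroup = true

After editing, the same restart commands as above apply.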
Fixed for my vanilla F36 install. Same symptoms.
@padraigconnolly I don't know the reason. I think you can report it to the docker community, because the package is built by them :) It seems like there was a mismatch between kubelet and containerd. Closing.
Hi.
After updating my binaries with the package containerd-1.5.5-linux-amd64.tar.gz and restarting the kubelet and containerd services, everything ran fine. But after rebooting the node, the pods kube-proxy-56k4n and kube-flannel-ds-95422 went into a CrashLoopBackOff error:
Containerd version not running after reboot:
Containerd version running fine:
My node is:
After rolling back the binaries to 1.4.2 and restarting my services, everything came back up.
One log got my attention:
The log from the errored flannel pod is:
/var/log/message
So, has anything changed in capabilities or dependencies in the new version of containerd, or is there some limitation? Can anyone help me understand this error?
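One way to check whether the capability sets actually differ between the two containerd versions is to decode the Cap* masks from the /proc status output shown above. A sketch, assuming capsh (from the libcap tools) is installed, which the thread does not mention:

grep ^Cap /proc/$(pidof containerd)/status
# decode the effective set printed above, e.g.:
capsh --decode=000001ffffffffff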