Description
What happened:
A pod of my deployment suddenly fails to come up again and hangs in "CrashLoopBackOff".
What you expected to happen:
When a pod of a deployment is killed for some reason, I expect it to come up again.
How to reproduce it (as minimally and precisely as possible):
This only happens sometimes and cannot be reproduced reliably; I am still trying ...
Anything else we need to know?:
kubectl describe pod shows this error message:

```
OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:367: setting cgroup config for procHooks process caused \\\"failed to write 200000 to cpu.cfs_quota_us: write /sys/fs/cgroup/cpu,cpuacct/container.slice/kubepods/burstable/pod6ba8075b-132e-11e9-ab2e-246e9674888c/a5f752a5a36fafeab7f16beb4763521cf2370efc3ba961e85a8ac1faef721b48/cpu.cfs_quota_us: invalid argument\\\"\"": unknown
```
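For context on where the failing value comes from: the 200000 that the runtime tries to write to cpu.cfs_quota_us is the CFS bandwidth quota the kubelet derives from the container's CPU limit (quota = limit × period, with the default CFS period of 100000 µs). A minimal sketch of that relationship, purely illustrative and not the kubelet's actual code:

```python
# Illustrative sketch: how a pod's CPU limit maps to cpu.cfs_quota_us.
# The function name and structure are assumptions for demonstration only.

DEFAULT_CFS_PERIOD_US = 100_000  # default CFS bandwidth period: 100 ms

def cfs_quota_us(cpu_limit_cores: float, period_us: int = DEFAULT_CFS_PERIOD_US) -> int:
    """Microseconds of CPU time allowed per period for a given CPU limit."""
    return int(cpu_limit_cores * period_us)

print(cfs_quota_us(2))    # a limit of "2" (2000m) -> 200000, matching the error
print(cfs_quota_us(0.5))  # a limit of "500m"      -> 50000
```

So the 200000 in the error corresponds to a CPU limit of 2 cores on the crashing container; the write itself being rejected with "invalid argument" points at the node's cgroup state rather than the value.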
Environment:
- Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:46:57Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: Hardware
- OS (e.g. from /etc/os-release): CoreOS 1967.3.0
- Kernel (e.g. uname -a): 4.14.88
- Install tools: terraform
- Others: