pause containers getting oom-killed #1975
We see issues with the pause container's processes being killed by the oom_reaper when the pod limits are low:
Steps to reproduce the issue:
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: alpine
    imagePullPolicy: IfNotPresent
    name: test
    stdin: true
    tty: true
    resources:
      limits:
        memory: 10Mi
      requests:
        memory: 10Mi
EOF
Additional information you deem important (e.g. issue happens only occasionally):
It also means that tight limits previously set on pods will start being hit when migrating to cri-o.
Additional environment details (AWS, VirtualBox, physical, etc.):
We were on containerd before, and tight limits like that were set and working; they don't work anymore. Example use case: serving single-page apps with nginx uses very little memory. We used to set the limit at 20MiB; now we set it at 32MiB.
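For reference, the adjusted spec looks roughly like this. Only the 32Mi value comes from the numbers above; the pod name and image are illustrative:

```yaml
# Sketch of a pod spec with the raised limit; the 32Mi value is the
# one mentioned above, other fields are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: spa-nginx        # illustrative name
spec:
  containers:
  - name: web
    image: nginx:alpine  # illustrative image
    resources:
      requests:
        memory: 32Mi
      limits:
        memory: 32Mi     # was 20Mi under containerd; raised to absorb the extra per-pod cost
```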
I wonder what's best though: "hiding" this cost to stay compatible, or making the cost explicit and documenting it in some migration notes.
Hi @mrunalp, no, the shim is not accounted to the pod with containerd. That's what I called the "hidden cost" earlier.