use the cgroup aware OOM killer if available #117793
Conversation
/sig node
Thanks for making the change. I agree this would be nice to have; my only concern is that there may be some workloads that want to maintain the current behavior and avoid a cgroup OOM. I agree this is a better default for the majority of workloads, but does it make sense to have some way for a workload to opt out of this behavior? Some folks mention that some workloads (e.g. PostgreSQL, nginx) do handle this case correctly, xref: #50632 (comment) /cc @mrunalp @haircommander @giuseppe for thoughts
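(For context on the mechanism being debated: on cgroup v2 this change amounts to setting the per-cgroup memory.oom.group flag for container cgroups, so an OOM kill takes out the whole cgroup rather than a single process. A minimal sketch for inspecting the flag on a node, assuming the systemd cgroup driver so container cgroups live under /sys/fs/cgroup/kubepods.slice:

  # List the group-OOM setting for every container cgroup the kubelet manages.
  for f in $(find /sys/fs/cgroup/kubepods.slice -name memory.oom.group); do
    printf '%s => %s\n' "$f" "$(cat "$f")"
  done
  # 1: an OOM in the cgroup kills every process in it (the behavior this PR adds)
  # 0: only the process picked by the kernel OOM killer dies (the old behavior)
)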
@bobbypage: GitHub didn't allow me to request PR reviews from the following users: for, thoughts. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Yeah, at this point we have an established behavior of not doing this (even if it's the right thing). Unfortunately, if we want it tunable, we may need to introduce a knob to the pod API.
What's the consensus on enabling the behavior by default if it's tunable? E.g. something like:
Happily accepting better naming suggestions :)
I am happy to put it on by default. As another thought (which I worry will unnecessarily increase complexity): are there other cgroup fields we want to optionally enable? Would an opaque key-value field for memoryCgroupOptions be useful? I am not immediately seeing other options that the kubelet wouldn't already make opinionated decisions about (like memory.high), so maybe we don't need the complexity. I am just wary of adding too many fields to the pod API, so I want to consider that before adding one for just this field.
This feature affected several of our workloads and caused a lot of OOM killing in the cluster. One was an ML workload where, under the old behavior, the main process was not killed when a child process was OOM-killed, so other computations could continue. The second was interactive workloads, where patterns that used to work well broke. We wanted to disable this feature because it was difficult to fix those workloads immediately, but we could not, because there was no option to do so; we patched the kubelet to disable it. I would like an option to disable this feature. I understand that it works better for many workloads. (It would be better if it could be controlled on a per-pod basis, but from the discussion so far it appears that this is not going to be provided.) Many clusters are currently running on cgroup v1; the problem will become more apparent when they switch to cgroup v2.
We discussed this at the SIG Node meeting before the change went in. I added an agenda item to discuss this feedback at the meeting tomorrow.
We would like an option to disable this feature as well. We saw several containers restarting if one of the processes got OOMKilled. It impacted us on a mass level and we had to revert to cgroup v1 in the middle of k8s upgrades. We are trying to figure out what it would take for us to come up with a workaround if this goes live without an option to disable it.
In the SIG Node meeting, we decided to pursue a node-level kubelet config field that enables or disables this feature to allow an admin to opt in. Stay tuned in 1.30 for this.
@haircommander what's the best place to stay informed on progress of that feature? I don't see anything in the sig-node Google group or Slack channel that seems directly related.
It was mostly discussed in the meeting (the notes can be found here). As we enter the 1.30 cycle there will be additional documentation.
PR to add a flag to enable the old behavior: #122813
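(Once such a kubelet configuration knob ships, one way for an admin to confirm what a node's kubelet is actually running with is the configz endpoint. A sketch, assuming kubectl access to the API server and jq installed; it simply picks the first node:

  NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
  kubectl get --raw "/api/v1/nodes/${NODE}/proxy/configz" | jq '.kubeletconfig'
)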
Hey folks! A bit late to the party, but we upgraded to Kubernetes 1.28 last week and, like others [1, 2, 3], faced outages due to increased OOMs.

At Celonis, among other things, we run a compute- and memory-intensive workload (10k cores, 80 TiB memory) which loads data models (>100k) into memory to serve queries. Each data model is handled by a separate process, managed within a Pod by a coordinator process. Multiple Pods are managed by an orchestrator. The old behavior allowed serving tens of thousands of models with just a few nodes in a fairly dynamic way: if one process died because of OOM, the orchestrator would reschedule it. This had no repercussions on other models served in the same Pod, it's simple, and memory utilization is very good.

I'm sharing these details because I think this is a valid use case. Splitting each process into its own Pod is infeasible, given the sheer number of processes (up to 100k) and the variability in memory usage of a single process. Splitting them into coarser-grained Pods is not a long-term solution either, as it would only reduce the blast radius, not eliminate the problem of a single model taking down hundreds more; resource utilization would also be lower and result in a rather expensive bill. Staying on cgroups v1 also seems infeasible long term, as I'm seeing discussions to drop support in Kubernetes, in systemd, and in AWS Bottlerocket (which we're using). We were also already using cgroups v2, and downgrading to v1 seems to fix the problem in the wrong way.

I agree that the new behavior seems like a saner default. I'm however afraid of the conclusion that the previous use case is no longer valid, and this is the deeper question we would like to address. In summary, I have the following concerns:
Is there anything we could be doing to mitigate the impact of this change? For example, short term, backporting the new kubelet flag to 1.28 or 1.29, and longer term, expediting a subset of solution 3 targeted at OOMs only? These are just ideas, and I'm happy to take other suggestions back to our teams as well. I hope SIG Node can take this into consideration (cc @haircommander, @tzneal). Thanks!
Hello, would it be possible to mark the pod status as OOMKilled when any OOM kill occurs in the pod? It seems that the OOMKilled status is set only if the OOM kill is received by PID 1 in the container.
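(For anyone auditing this today, a small sketch, assuming kubectl and jq are available, that lists containers whose last termination was reported as OOMKilled, i.e. only the cases currently surfaced in the API, which per the comment above requires PID 1 to be the victim:

  kubectl get pods -A -o json | jq -r '
    .items[]
    | . as $p
    | .status.containerStatuses[]?
    | select(.lastState.terminated.reason == "OOMKilled")
    | "\($p.metadata.namespace)/\($p.metadata.name): \(.name)"'
)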
Hello, we migrated to k8s 1.28 this week on AWS EKS. I reached this issue after debugging for over a day; it is affecting us in prod. Do you have any update on when the option to opt out of this default will be backported? Would appreciate any solutions others have implemented as a temporary fix for this. Thanks
@zaheerabbas-prodigal please open an issue with AWS/EKS support.
@dims - Thanks for the response. Unfortunately, we do NOT have a support subscription for raising this as a priority issue with AWS/EKS support. We have mitigated this issue by increasing the memory limits of our service and reverting to cgroup v1 using Bottlerocket configs. That seemed to be the only thing we could do as a temporary fix, but it is a really bad state to be in. I wanted to reach out here and check whether the k8s team is looking to prioritize making this feature opt-in rather than the default behavior, or whether there are other techniques we could employ to mitigate it.
@zaheerabbas-prodigal please see the tail end of the discussion in #122813 ( someone's gotta step up to do the work ... )
@zaheerabbas-prodigal we have implemented a scrappy workaround on our side. A DaemonSet just goes through all containers' cgroups and periodically disables the group OOM killer. Note that this is done in a loop not because the kubelet will revert it, but because we need to do it after every container is created:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disable-memory-oom-group
spec:
  selector:
    matchLabels:
      app: disable-memory-oom-group
  template:
    metadata:
      labels:
        app: disable-memory-oom-group
    spec:
      containers:
      - name: disable-memory-oom-group
        image: busybox
        securityContext:
          privileged: true
        command:
        - /bin/sh
        - -c
        - |-
          while true; do
            for f in $(find /sys/fs/cgroup/kubepods.slice/ -name memory.oom.group); do
              echo "0" > "$f"
            done
            sleep 60
          done

This doesn't require a cgroup v1 or Kubernetes version downgrade. And before people get horrified, I should point out that we literally had no other option.
We have successfully tested the proposed "ugly" workaround, but we chose to convert it to a cronjob running directly on the host and defined in user data:
we think this makes it less ugly ;)
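(For illustration only, an equivalent node-local cron entry might look roughly like the following. This is a sketch, not the poster's actual user data; it assumes root's cron and the same systemd cgroup driver layout used in the DaemonSet above, and the file name and interval are made up:

  # /etc/cron.d/disable-memory-oom-group (hypothetical): every minute, clear
  # memory.oom.group for every container cgroup on the node.
  * * * * * root for f in $(find /sys/fs/cgroup/kubepods.slice -name memory.oom.group); do echo 0 > "$f"; done
)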
Just noting that this actually broke Kubernetes CI's own assumptions about controlling PID 1 and kill behavior. (We inject a tiny entrypoint wrapper in CI, and we don't expect it to be killed while the sidecar is still running, versus the test process underneath being killed.) Discussion in kubernetes-sigs/prow#210
Much like @mbrt and @dwyart, we created a Kyverno policy to work around the issue more precisely: label the pods for which you need to disable group OOM, and Kyverno injects a privileged sidecar. Something like:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disable-group-oom
  annotations:
    policies.kyverno.io/title: Disable group OOM for single pod
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/minversion: 1.6.0
    policies.kyverno.io/description: >-
      Injects a privileged container into the pod
      and sets memory.oom.group to 0 on the node filesystem
spec:
  rules:
  - name: disable-group-oom
    match:
      any:
      - resources:
          kinds:
          - Pod
          selector:
            matchLabels:
              sexygroup/kyverno-policies: "disable-group-oom"
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - name: disable-group-oom
            image: alpine:3.20.2
            imagePullPolicy: IfNotPresent
            securityContext:
              privileged: true
            resources:
              requests:
                cpu: 5m
                memory: 30Mi
              limits:
                memory: 30Mi
            command: ["sh", "-c"]
            args:
            - |
              sleep 15
              POD_UID=$(cat /proc/self/mountinfo | grep -m 1 '/pods/' | awk -F '/' '{print $4}' | tr '-' '_')
              while true; do
                for f in $(find /sys/fs/cgroup/kubepods.slice/ -name memory.oom.group | grep $POD_UID); do
                  echo 0 > $f
                  echo $f
                  cat $f
                done
                sleep 300
              done

Of course that's still a dirty hack, but at least you can control it per workload instead of disabling it for the whole cluster.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Sets the memory.oom.group bit for container cgroups on kernels with cgroups v2.
Which issue(s) this PR fixes:
Fixes #117070
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
https://github.com/torvalds/linux/blob/3c4aa44343777844e425c28f1427127f3e55826f/Documentation/admin-guide/cgroup-v2.rst?plain=1#L1280
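(The linked kernel documentation describes memory.oom.group as a per-cgroup 0/1 flag. A minimal sketch of the semantics on any cgroup v2 host, outside Kubernetes; run as root, the cgroup name "oomdemo" is made up, and it assumes the memory controller is enabled for children of the root cgroup, as it is on typical systemd hosts:

  # Create a throwaway cgroup, cap its memory, and opt it into group OOM kills.
  mkdir /sys/fs/cgroup/oomdemo
  echo 1 > /sys/fs/cgroup/oomdemo/memory.oom.group
  echo $((64 * 1024 * 1024)) > /sys/fs/cgroup/oomdemo/memory.max
  # Any process moved into this cgroup (echo <pid> > /sys/fs/cgroup/oomdemo/cgroup.procs)
  # that triggers an OOM now takes the whole cgroup down with it, which is the
  # behavior this PR enables for container cgroups.
)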