oom diagnostic logging
Disable the oom killer for a container's memory cgroup when the memory limit is set. If oom occurs later, the problem task (that is, the one which triggered oom) is suspended [1] and the oom notifier is driven. This gives us the opportunity to gather diagnostics showing details of the out-of-memory condition. In the oom notifier, log memory usage, memory limit, swap+memory usage, swap+memory limit, and memory statistics, then re-enable the oom killer (see the sketch after the footnotes below).

Re-enabling the oom killer is necessary even though the oom notifier proceeds to terminate the container. The current method of terminating the container can, at least in theory, deadlock when the container is out of memory: wshd can itself require more memory (e.g. for a stack frame) and be suspended due to lack of memory. Once re-enabled, the oom killer will kill a task (usually the application) in the container [2], which allows container termination to kill the remaining tasks via wshd. If wshd hits the container's memory limit while the oom killer is enabled, the oom killer will kill wshd, and that in turn kills every other process in the container (since wshd is the PID namespace parent).

IntelliJ IDEA .iml files are now ignored.

Footnotes:

[1] The Linux kernel's ./Documentation/cgroups/memory.txt states: "If OOM-killer is disabled, tasks under cgroup will hang/sleep in memory cgroup's OOM-waitqueue when they request accountable memory."

[2] Although memory.txt does not specify that re-enabling the oom killer in the oom notifier will cause it to kill a task, this seems like the only reasonable behaviour and it seems to work that way in practice.
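The disable/notify/re-enable sequence above maps onto the cgroup-v1 memory controller's eventfd interface described in memory.txt. The following is a minimal C sketch of that flow, not the actual warden notifier: the cgroup path "/sys/fs/cgroup/memory/instance-1", the dump() helper, and the omission of container termination are all assumptions for illustration, and most error handling is left out.

```c
/* Sketch only: block until the kernel signals oom for one memory
 * cgroup, dump its counters, then re-enable the oom killer.
 * Assumes a cgroup-v1 memory hierarchy (hypothetical path below). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Hypothetical helper: copy one cgroup control file to stderr. */
static void dump(const char *cg, const char *file)
{
    char path[512], buf[4096];
    ssize_t n;
    int fd;

    snprintf(path, sizeof(path), "%s/%s", cg, file);
    fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return; }
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        fprintf(stderr, "%s: %s", file, buf);
    }
    close(fd);
}

int main(void)
{
    const char *cg = "/sys/fs/cgroup/memory/instance-1"; /* assumed */
    char path[512], line[64];
    uint64_t event;
    int efd, ofd, cfd;

    /* 1. Disable the oom killer: tasks that exceed the limit now
     *    sleep on the cgroup's OOM-waitqueue instead of being killed. */
    snprintf(path, sizeof(path), "%s/memory.oom_control", cg);
    ofd = open(path, O_WRONLY);
    write(ofd, "1", 1);
    close(ofd);

    /* 2. Register an oom notifier: write "<eventfd> <oom_control fd>"
     *    to cgroup.event_control, per memory.txt. */
    efd = eventfd(0, 0);
    ofd = open(path, O_RDONLY);
    snprintf(path, sizeof(path), "%s/cgroup.event_control", cg);
    cfd = open(path, O_WRONLY);
    snprintf(line, sizeof(line), "%d %d", efd, ofd);
    write(cfd, line, strlen(line));
    close(cfd);

    /* 3. Block until the kernel reports an oom in the cgroup. */
    read(efd, &event, sizeof(event));

    /* 4. Log the diagnostics named in the commit message. */
    dump(cg, "memory.usage_in_bytes");
    dump(cg, "memory.limit_in_bytes");
    dump(cg, "memory.memsw.usage_in_bytes");
    dump(cg, "memory.memsw.limit_in_bytes");
    dump(cg, "memory.stat");

    /* 5. Re-enable the oom killer so the kernel can reap a task and
     *    unblock container termination (see footnote [2]). */
    snprintf(path, sizeof(path), "%s/memory.oom_control", cg);
    ofd = open(path, O_WRONLY);
    write(ofd, "0", 1);
    close(ofd);
    return 0;
}
```

In the commit itself the notifier would go on to drive container termination after step 5; the sketch stops at logging and re-enabling so the deadlock-avoidance ordering is the only point it makes.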