Custom root Cgroup for a pod #11986
I'm not sure if we want to expose this as a feature - we want to use parent …
Related to #5671. I think that the kubelet wants to create/be in control of that cgroup, though.
The use case is that some pods need different CPU affinity from other pods to avoid negative interactions. Cpusets in cgroups are one way to deal with that.
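For context, here is a minimal sketch of what that looks like when done by hand with the cgroup v1 cpuset controller. The cgroup name, CPU range, and PID are all hypothetical, and this requires root on a node with the cpuset hierarchy mounted; it is exactly this kind of manual setup that a per-pod cgroup option would replace.

```shell
# Create a cpuset cgroup pinned to CPUs 2-3 on memory node 0 (illustrative
# names; requires root and a mounted cgroup v1 cpuset hierarchy).
mkdir /sys/fs/cgroup/cpuset/latency-sensitive
echo 2-3 > /sys/fs/cgroup/cpuset/latency-sensitive/cpuset.cpus
echo 0   > /sys/fs/cgroup/cpuset/latency-sensitive/cpuset.mems

# Move a process (hypothetical PID) into the cgroup; it will then only be
# scheduled on CPUs 2-3.
echo "$PID" > /sys/fs/cgroup/cpuset/latency-sensitive/tasks
```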
We're deeply, deeply familiar with the problem. Handing control to users is …
/sub
Issues go stale after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing and/or kubernetes/test-infra.
Is this feature still under consideration at all? If not, what would be the best way to achieve this behavior in k8s?
I still think pod-level resources are a fun and interesting feature, but I don't know if anyone is working on it yet.
My use case:

|-- parent cgroup (memory limit 10 GB)
|---- cgroup1 (memory limit 2 GB)
|---- cgroup2 (memory limit 2 GB)
...
|---- cgroup10 (memory limit 2 GB)

- If all 10 child cgroups use exactly 1 GB each (a total of 10 GB, which is the limit enforced by the parent cgroup), then none of them will be able to exceed 1 GB; otherwise, they will be OOM-killed.
- In my use case, it is rare that a child cgroup consumes 1 GB of memory. Usually, their memory usage is around 200-300 MB. My cgroup structure allows a cgroup that temporarily needs more than 1 GB to "borrow" from other cgroups within the parent cgroup, provided that the total memory usage in the parent cgroup stays below 10 GB.
- Note that memory consumption in my system is unpredictable.
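The borrowing behavior described above can be sketched as simple arithmetic. This is not Kubernetes or kernel code, just an illustration of the two constraints the kernel enforces in that hierarchy: a child's own limit and the parent's aggregate limit. The function name and the megabyte figures are hypothetical.

```python
# Sketch of the memory-borrowing arithmetic in the hierarchy above
# (hypothetical helper; all numbers in MB).
PARENT_LIMIT = 10 * 1024  # limit enforced on the parent cgroup
CHILD_LIMIT = 2 * 1024    # limit enforced on each child cgroup

def can_grow(child_usage_mb, requested_mb, sibling_usage_mb):
    """A child may grow only if it stays under its own limit AND the
    whole subtree stays under the parent limit."""
    new_usage = child_usage_mb + requested_mb
    subtree_usage = new_usage + sum(sibling_usage_mb)
    return new_usage <= CHILD_LIMIT and subtree_usage <= PARENT_LIMIT

# Typical case: nine siblings at ~250 MB leave plenty of headroom, so one
# child can temporarily "borrow" and spike up to its own 2 GB limit.
print(can_grow(250, 1798, [250] * 9))   # -> True
# If every child already sits at 1 GB, the parent limit is exhausted and
# any further allocation is OOM-killed.
print(can_grow(1024, 1, [1024] * 9))    # -> False
```

In other words, the per-child limit caps the worst-case spike while the parent limit lets average-case usage stay far below 10 x 2 GB of reserved memory.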
See kubernetes/enhancements#1592
I looked in the source code, but it does not seem to be possible to specify the Cgroup in the Pod spec. I see that Kubelet allows for a custom root Cgroup, but it is applied to all Pods. I would like to have certain Pods startup in a different Cgroup so I can manage the CPU set independently from other Pods.
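For reference, the node-wide option mentioned here is the kubelet's `--cgroup-root` flag; a sketch of its use (the cgroup path is hypothetical, and the remaining kubelet arguments are elided):

```shell
# Sets ONE root cgroup under which the kubelet places ALL pods on the node.
# There is no per-pod equivalent in the Pod spec, which is what this issue
# asks for.
kubelet --cgroup-root=/kubepods-custom ...
```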