--kube-reserved and --system-reserved are not working #72762
/area kubelet
@y-koseki the reservation appears to have been properly applied, looking at the allocatable capacity reported back to the scheduler. what --cgroup-manager flag did you specify? is it possible for you to report the cgroupfs values you see under kubepods.slice for cpu and memory?
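For reference, a sketch of the cgroupfs files that request is asking about. The paths are assumptions, not from the issue: they presume the systemd cgroup driver on cgroup v1 (with --cgroup-driver=cgroupfs the parent group would be "kubepods" rather than "kubepods.slice"):

```shell
# Build the cgroupfs paths to inspect (hypothetical layout: systemd driver, cgroup v1).
CGROOT=/sys/fs/cgroup
for f in cpu/kubepods.slice/cpu.shares \
         cpu/kubepods.slice/cpu.cfs_quota_us \
         memory/kubepods.slice/memory.limit_in_bytes; do
  # On the node itself you would run: cat "$CGROOT/$f"
  echo "$CGROOT/$f"
done
```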
fyi @dashpole have you seen this? i am not aware of anyone that is enforcing node-allocatable in production for anything other than pods
Problem 1 looks like it is looking at the disk usage of the entire node, not the allocatable usage only. It is also worth pointing out that the kubelet only enforces allocatable for ephemeral storage through monitoring + response, so usage by pods can temporarily exceed allocatable.
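The gap being described is visible directly in the numbers reported later in this issue; a quick sketch of the comparison:

```shell
# Values copied from the issue's /stats/summary output (bytes).
ALLOCATABLE=24683826743
USED=27347832832
# How far pod usage ran past allocatable before the kubelet could respond:
OVER=$((USED - ALLOCATABLE))
echo "usage exceeds allocatable by $OVER bytes"
```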
Problem 2 looks like it might be a real bug. Although the problem with metrics from kubectl top is that they are 10s averages, so I am not 100% sure. I think I might have seen something like that before, but didn't dig into it. It very well could be a bug. My first thoughts are: Do we enforce that the kube-reserved cgroup and system-reserved cgroup have the same parent cgroup as kubepods? If not, I don't think cpu shares are correctly calculated, as cpu time is split proportionally to shares among cgroups with the same parent.
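The proportional-split point above can be sketched with arithmetic. The kubepods share count follows from the node's 13500m CPU allocatable reported in this issue (shares = millicores × 1024 / 1000); the sibling cgroup and its default 1024 shares are hypothetical, purely for illustration:

```shell
# cpu.shares split among sibling cgroups under CPU contention (sketch).
# kubepods: 13500m allocatable -> 13500 * 1024 / 1000 shares.
KUBEPODS_SHARES=$((13500 * 1024 / 1000))
# Hypothetical sibling cgroup left at the kernel default of 1024 shares.
SIBLING_SHARES=1024
TOTAL=$((KUBEPODS_SHARES + SIBLING_SHARES))
# Integer percentage of CPU time kubepods would receive under full contention:
PCT=$((100 * KUBEPODS_SHARES / TOTAL))
echo "kubepods gets ~${PCT}% of CPU under contention"
```

If a reserved cgroup sits under a different parent than kubepods, its shares never enter this ratio, which is the miscalculation the comment suspects.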
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
I ran kubelet with the following parameters:
The capacity of the k8s node VM is as follows.
Problem 1
Pods can use ephemeral-storage over Allocatable. The result of
curl https://${master_name}/api/v1/nodes/${node_name}/proxy/stats/summary | jq .node.fs
is as follows: Allocatable is 24683826743 bytes, while usedBytes is 27347832832 bytes.
Problem 2
Pods can use CPU over Allocatable. The result of
kubectl top pods
is as follows: Allocatable is 13500m, while CPU usage is 14771m. The result of
curl https://${master_name}/api/v1/nodes/${node_name}/proxy/stats/summary | jq .node.cpu
is as follows.
What you expected to happen:
I expected that Pods can NOT use ephemeral-storage and CPU over Allocatable. It seems that --kube-reserved and --system-reserved are not working. I have also tried to run kubelet with the following parameters:
However, it did not resolve the problems.
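For context on what --kube-reserved and --system-reserved are expected to do: the kubelet derives Allocatable by subtracting the reservations (and any hard eviction threshold) from node capacity. The exact flag values were stripped from this page, so every number below is hypothetical, chosen only so the result matches the 13500m CPU Allocatable reported above:

```shell
# Allocatable = capacity - kube-reserved - system-reserved - eviction-hard
# All values in millicores; the capacity and reservation split are hypothetical.
CAPACITY=16000
KUBE_RESERVED=1500
SYSTEM_RESERVED=1000
EVICTION_HARD=0
ALLOCATABLE=$((CAPACITY - KUBE_RESERVED - SYSTEM_RESERVED - EVICTION_HARD))
echo "allocatable cpu: ${ALLOCATABLE}m"
```

This reservation only shrinks what the scheduler sees and (for CPU) what kubepods' cgroup is granted; it is not a hard per-pod cap, which is why usage figures above Allocatable can still appear.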
How to reproduce it (as minimally and precisely as possible):
Run kubelet with the parameters above.
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
- Cloud provider or hardware configuration: Fujitsu Cloud Service for OSS IaaS
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):