Add "node-high" priority-level #101151
Add "node-high" priority-level #101151
Conversation
I'm definitely supportive of this change, but I would like to hear others' opinions too.
This PR may require API review. If so, when the changes are ready, complete the pre-review checklist and request an API review. Status of requested reviews is tracked in the API Review project.
```go
flowcontrol.PriorityLevelConfigurationSpec{
	Type: flowcontrol.PriorityLevelEnablementLimited,
	Limited: &flowcontrol.LimitedPriorityLevelConfiguration{
		AssuredConcurrencyShares: 100,
```
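For orientation, here is a self-contained sketch of what a complete priority-level spec of this shape looks like in the v1beta1 flowcontrol API. The LimitResponse and queuing values below are illustrative assumptions, not taken from this PR:

```go
package main

import (
	"fmt"

	flowcontrol "k8s.io/api/flowcontrol/v1beta1"
)

func main() {
	// Illustrative only: a "Limited" priority level with 100 assured
	// concurrency shares that queues excess requests rather than rejecting
	// them. The queuing parameters are assumed values for the sketch.
	spec := flowcontrol.PriorityLevelConfigurationSpec{
		Type: flowcontrol.PriorityLevelEnablementLimited,
		Limited: &flowcontrol.LimitedPriorityLevelConfiguration{
			AssuredConcurrencyShares: 100,
			LimitResponse: flowcontrol.LimitResponse{
				Type: flowcontrol.LimitResponseTypeQueue,
				Queuing: &flowcontrol.QueuingConfiguration{
					Queues:           64,
					HandSize:         6,
					QueueLengthLimit: 50,
				},
			},
		},
	}
	fmt.Printf("%+v\n", spec)
}
```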
Is the node health traffic O(number of nodes) in the cluster, with each node sending its health status to the apiserver on a regular interval? If so, 100 seems a bit high to me, especially when we don't have borrowing. With 100 added for node health, we are roughly allocating 1/3 of the global concurrency share to node-health traffic.
Some current stats:
- currently we allocate a concurrency share of 30 for all kubelet traffic.
- workload-low has the highest concurrency share of 100; this is pretty much for all in-cluster service-account traffic.
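As a rough check of that 1/3 figure, a minimal sketch of the arithmetic, assuming the pre-change global total of 205 shares quoted later in this thread:

```go
package main

import "fmt"

func main() {
	const globalShares = 205.0     // pre-change total of suggested concurrency shares (from below)
	const nodeHealthShares = 100.0 // value proposed in the diff above

	// Fraction of total concurrency the new level would receive: 100 / (205 + 100).
	frac := nodeHealthShares / (globalShares + nodeHealthShares)
	fmt.Printf("node-health fraction: %.2f\n", frac) // ~0.33, i.e. roughly 1/3
}
```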
> With 100 added for node health, we are roughly allocating 1/3 of the global concurrency share to node-health traffic.

And this 1/3 is roughly what we see in large clusters; hence the value.
[I will let @mborsz describe the reasoning behind this number in a bit more detail.]
If our numbers are visibly different from yours, I'm wondering how we should proceed.
I strongly believe that the rule itself is very important, because health reporting is more important than any other action done by kubelets (otherwise nodes may become unready, etc.). But I'm wondering how we should settle on the value here (I would really like to avoid exposing this as a knob of kube-apiserver).
Any ideas?
Sorry, it's a bug (I incorrectly overestimated the global concurrency share). Indeed, it should be a lower value, like ~30-50.
Reasoning behind this number: in an idle cluster (no pod changes), nearly all requests are for node health checking (~1.1K QPS at 5k-node scale), and we see approx. ~6 cores of kube-apiserver usage, so we can assume that node health checking consumes approx. 10-15% of total master capacity (depending on how exactly we calculate this).
The current global concurrency share is 205, so to get 15% we need 36 shares (36 / (205 + 36) ≈ 15%). Rounding up, I suggest using 40.
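A minimal sketch of that derivation, solving shares / (205 + shares) = 0.15 for shares (both inputs come from the comment above):

```go
package main

import "fmt"

func main() {
	const existingShares = 205.0 // global concurrency shares before this change
	const targetFraction = 0.15  // node health's observed share of master capacity

	// Solve s / (existingShares + s) = targetFraction for s.
	s := targetFraction * existingShares / (1 - targetFraction)
	fmt.Printf("required shares: %.1f\n", s) // ~36.2; the thread rounds this up to 40
}
```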
```go
resourceRule(
	[]string{flowcontrol.VerbAll},
	[]string{coordinationv1.GroupName},
	[]string{"leases"},
```
It's confusing to include leases in something named "node-health".
Maybe use "kubelet-high" instead?
Naming is hard :)
I think nodes aren't just about the kubelet - IIRC, NodeProblemDetector is also running under "NodeGroup". So I think "kubelet" is not the best.
I actually like "node", but in the sense of a node as a machine/VM where we run pods, not in the sense of the "Node" API object. I see where the confusion is coming from, though.
I wish I had a better proposal (node-heartbeats? [but that doesn't reflect the node status changes]).
Oh, if it's NPD also, then how about "node-high"? (but should NPD really be included?)
"node-high" sounds fine to me.
Re NPD: in my opinion it should, because if there is a problem with a node, reporting that status (to reflect the actual state and, e.g., block scheduling more pods onto that node in case of issues) is more important than other operations (e.g. reporting a pod's new status, or fetching the secrets to start one).
Thanks for the naming suggestions :) I renamed it to "node-high" :)
```
@@ -48,6 +48,11 @@ var (
	// cluster and the availability of those running pods in the cluster, including kubelet and
	// kube-proxy.
	SuggestedPriorityLevelConfigurationSystem,
	// "node-high" priority-level is for the kubelet health reporting. It is separated from "system"
```
s/kubelet/node/
[NPD might be in this group too.]
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Done
/sig api-machinery
/lgtm
@tkashem - does the new number (and Maciek's explanation) look reasonable to you? Are you OK with the current state?
/approve
We can always change the number again if we come up with a better estimate.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lavalamp, mborsz

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/retest
Review the full test history for this PR. Silence the bot with an `/lgtm cancel` comment for consistent failures.
/triage accepted
What type of PR is this?
/kind feature
What this PR does / why we need it:
It adds "node-high" priority-level that is used by kubelets to report their status.
It has two goal:
Which issue(s) this PR fixes:
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/assign @wojtek-t @MikeSpreitzer @lavalamp