
Add "node-high" priority-level #101151

Merged 1 commit into kubernetes:master on Apr 17, 2021

Conversation

@mborsz (Member) commented Apr 15, 2021

What type of PR is this?

/kind feature

What this PR does / why we need it:

It adds a "node-high" priority-level that is used by kubelets to report their status (a sketch of such an object follows below).
It has two goals:

  • making sure that kubelets are able to report their status even if the control plane is overloaded by high pod churn (e.g. pod creation events, fetching secrets, fetching pods);
  • increasing the total shares assigned to traffic that before this PR used "system" (in large clusters this is ~1K QPS, up to 90% of the traffic in the cluster).
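
As a rough illustration, here is what a suggested priority-level object of this shape could look like using the flowcontrol v1beta1 types. This is a sketch, not the exact bootstrap code: the queuing parameters are placeholders, and the shares value reflects where the review discussion below ends up.

package sketch

import (
    flowcontrol "k8s.io/api/flowcontrol/v1beta1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeHigh sketches a suggested "node-high" priority level. The queuing
// values are illustrative placeholders; AssuredConcurrencyShares was
// proposed as 100 and revised to 40 during review (see below).
var nodeHigh = flowcontrol.PriorityLevelConfiguration{
    ObjectMeta: metav1.ObjectMeta{Name: "node-high"},
    Spec: flowcontrol.PriorityLevelConfigurationSpec{
        Type: flowcontrol.PriorityLevelEnablementLimited,
        Limited: &flowcontrol.LimitedPriorityLevelConfiguration{
            AssuredConcurrencyShares: 40,
            LimitResponse: flowcontrol.LimitResponse{
                Type: flowcontrol.LimitResponseTypeQueue,
                Queuing: &flowcontrol.QueuingConfiguration{
                    Queues:           64,
                    HandSize:         6,
                    QueueLengthLimit: 50,
                },
            },
        },
    },
}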

Which issue(s) this PR fixes:

Special notes for your reviewer:

Does this PR introduce a user-facing change?

New "node-high" priority-level has been added to Suggested API Priority and Fairness configuration.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


/assign @wojtek-t @MikeSpreitzer @lavalamp

@k8s-ci-robot k8s-ci-robot added the release-note Denotes a PR that will be considered when it comes time to generate release notes. label Apr 15, 2021
@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API labels Apr 15, 2021
@wojtek-t (Member) commented:

I'm definitely supportive of this change, but I would like to hear others' opinions too.

@deads2k @tkashem @MikeSpreitzer

@fejta-bot commented:

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

flowcontrol.PriorityLevelConfigurationSpec{
    Type: flowcontrol.PriorityLevelEnablementLimited,
    Limited: &flowcontrol.LimitedPriorityLevelConfiguration{
        AssuredConcurrencyShares: 100,
Contributor:

Is the node-health traffic O(number of nodes) in the cluster, with each node sending health status to the apiserver at a regular interval? If so, 100 seems a bit high to me, especially since we don't have borrowing.

With 100 added for node health, we are roughly allocating 1/3 of the global concurrency share to node-health traffic.

Some current stats:

  • currently we allocate a concurrency share of 30 for all kubelet traffic;
  • workload-low has the highest concurrency share at 100, and that covers pretty much all in-cluster service account traffic.
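
(For reference, assuming the pre-PR suggested shares sum to roughly 205 — the figure quoted later in this thread — adding 100 gives 100 / (205 + 100) ≈ 0.33, i.e. about a third of the global concurrency.)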

Member:

> With 100 added for node health, we are roughly allocating 1/3 of the global concurrency share to node-health traffic.

And this 1/3 is roughly what we see in large clusters; hence the value.
[I will let @mborsz describe the reasoning behind this number in more detail.]

If our numbers are visibly different from yours, I'm wondering how we should proceed.
I strongly believe that the rule itself is very important, because health reporting is more important than any other action done by kubelets (otherwise nodes may become unready, etc.). But I'm wondering how we should settle on the value here (I would really like to avoid exposing this as a knob of kube-apiserver).
Any ideas?

Member (Author):

Sorry, it's a bug (I incorrectly overestimated the global concurrency share). Indeed, it should be a lower value, like ~30-50.

Reasoning behind this number: in an idle cluster (no pod churn), nearly all requests are node health checks (~1.1K QPS at 5k-node scale), and we see approx. ~6 cores of kube-apiserver usage, so we can assume that node health checking consumes approx. 10-15% of total master capacity (depending on how exactly we calculate this).

The current global concurrency share is 205, so to get 15% we need 36 shares (36 / (205 + 36) ≈ 0.15). Rounding up, I suggest using 40.
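
A minimal sketch of that arithmetic, assuming only the numbers quoted above (205 existing shares, a 15% target):

package main

import "fmt"

// Back-of-the-envelope check of the share calculation above; the 205
// baseline and 15% target come from the comment, not from new data.
func main() {
    const existing = 205.0 // sum of suggested concurrency shares before this PR
    const target = 0.15    // desired capacity fraction for node-high

    // Solve x / (existing + x) = target for x.
    needed := target * existing / (1 - target)
    fmt.Printf("shares needed: %.1f\n", needed)            // ~36.2, rounded up to 40
    fmt.Printf("fraction at 40: %.3f\n", 40/(existing+40)) // ~0.163
}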

resourceRule(
    []string{flowcontrol.VerbAll},
    []string{coordinationv1.GroupName},
    []string{"leases"},
Member:

It's confusing to include leases in something named "node-health".

Maybe use "kubelet-high" instead?

Member:

Naming is hard :)

I think nodes aren't just about the kubelet - IIRC, NodeProblemDetector also runs under "NodeGroup".
So I think "kubelet" is not the best.

I actually like "node", but in the sense of a node as a machine/VM where we run pods, not "node" as an API object. I see where the confusion is coming from, though.

I wish I had a better proposal (node-heartbeats? [but that doesn't reflect the node status changes]).

Member:

Oh, if it covers NPD too, then how about "node-high"? (But should NPD really be included?)

Member:

"node-high" sounds fine to me.

Re NPD - in my opinion it should, because if there is a problem with a node, I think reporting the status (to reflect the actual state and e.g. block scheduling more pods on that in case of issues) is more important than other operations (e.g. reporting new status of pod or fetching secrets to start it or more like that).

Member (Author):

Thanks for the naming suggestions :) I renamed it to "node-high" :)

@mborsz mborsz changed the title Add "node-health" priority-level Add "node-high" priority-level Apr 16, 2021
@@ -48,6 +48,11 @@ var (
 	// cluster and the availability of those running pods in the cluster, including kubelet and
 	// kube-proxy.
 	SuggestedPriorityLevelConfigurationSystem,
+	// "node-high" priority-level is for the kubelet health reporting. It is separated from "system"
Member:

s/kubelet/node/

[NPD might be in this group too.]

Member (Author):

Done

@wojtek-t (Member) commented:

/sig api-machinery

/lgtm
[I can safely do that because it requires api-approval anyway]

@tkashem - does the new number (and Maciek's explanation) look reasonable to you? Are you ok with the current state?

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. lgtm "Looks good to me", indicates that a PR is ready to be merged. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Apr 16, 2021
@lavalamp (Member) commented:

/approve

We can always change the number again if we come up with a better estimate.

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lavalamp, mborsz

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 16, 2021
@fejta-bot commented:

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

(2 similar /retest comments followed.)

@k8s-ci-robot k8s-ci-robot merged commit 09bd596 into kubernetes:master Apr 17, 2021
@k8s-ci-robot k8s-ci-robot added this to the v1.22 milestone Apr 17, 2021
@fedebongio (Contributor) commented:

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Apr 20, 2021
Swizzmaster pushed a commit to Swizzmaster/kubernetes that referenced this pull request on Feb 29, 2024:
create public patch for github.com/kubernetes/pull/101151
(See merge request eks-dataplane/eks-kubernetes-patches!55)