Add --concurrent-job-syncs flag to kube-controller-manager #117138
Conversation
Please note that we're already in Test Freeze. Fast forwards are scheduled to happen every 6 hours, whereas the most recent run was: Thu Apr 6 07:59:36 UTC 2023.
/test pull-kubernetes-integration
/assign @wojtek-t
/assign @soltysh I know there were some similar discussions about this recently (IIRC in the context of daemonsets?)
/triage accepted
/sig cli
This proposal is similar to #110433 and #111800, both of which were closed. In particular, the last comments in the latter propose increasing QPS for controllers, which would help in some scenarios, but there are still some issues holding us back. For now I'm going to close this as something we won't do in the near term.
/close
@soltysh: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks @soltysh for the review! The main difference between daemonsets (mentioned by you in #111800) and jobs is that it is a valid use case to have a lot of tiny jobs (while daemonsets will usually be larger). The problem is that even with the current controller QPS limits there are use cases where we fail to saturate the job controller's QPS limit due to insufficient concurrency (think of e.g. 1000 jobs, each with 1 pod to be created: with 5 workers we would have at most 5 concurrent pod creations). With pod creation latency around 10ms (based on performance tests in perf-dash, though technically the p99 SLO is 1s for such mutating calls), we will get ~50 QPS, which is far from the client-side limit of 100 QPS. The reason for adding the flag by @tosi3k is to support such use cases.

I agree that adding such a flag will require us to support it in the future, but it's actually hard to imagine an implementation where we won't have a concept of a worker (which is now used by nearly all controllers in kube-controller-manager).

You mentioned that we need community support to prove the value of the flag, and I think we already have quite a lot of use cases:
An alternative approach would be to increase the default worker count -- in most cases we would be throttled by QPS anyway, so there would be no difference, and the cost of a few additional goroutines is low. However, in some corner cases the default value of 5 may not be sufficient, which is why people are asking for the flag.
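To make the throughput reasoning above concrete, here is a minimal, self-contained sketch (not code from this PR) of the back-of-envelope math: with a fixed pool of workers, each handling one job sync at a time, achievable throughput is roughly the worker count divided by the per-sync latency, capped by the client-side QPS limit. The per-sync latency constant below is an assumed, illustrative value, not a measured one.

```go
package main

import (
	"fmt"
	"time"
)

// estimateSyncsPerSecond returns the rough throughput achievable with the
// given number of workers, assuming each worker processes one sync at a time
// and every sync takes perSyncLatency end to end. The result is capped by the
// client-side QPS limit of the controller's API client.
func estimateSyncsPerSecond(workers int, perSyncLatency time.Duration, clientQPSLimit float64) float64 {
	qps := float64(workers) / perSyncLatency.Seconds()
	if qps > clientQPSLimit {
		return clientQPSLimit
	}
	return qps
}

func main() {
	const (
		perSyncLatency = 100 * time.Millisecond // assumed end-to-end cost of one tiny-job sync (illustrative)
		clientQPSLimit = 100.0                  // client-side QPS limit mentioned in the comment above
	)
	for _, workers := range []int{5, 20, 50} {
		fmt.Printf("workers=%2d -> ~%.0f syncs/s (client-side limit %.0f QPS)\n",
			workers, estimateSyncsPerSecond(workers, perSyncLatency, clientQPSLimit), clientQPSLimit)
	}
}
```

With the assumed per-sync cost, 5 workers land around 50 syncs/s, well below the 100 QPS client-side limit, while a larger worker count would be throttled by QPS instead -- the scenario the flag is meant to address.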
/reopen
@mborsz: Reopened this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Per offline discussion, fixed the description and added validation for the flag value. @soltysh PTAL.
Based on the provided reasoning behind this change, and based on an offline discussion:
/lgtm
/approve
LGTM label has been added. Git tree hash: 0d882346c4427f2859ca312ef08ba639f031a840
/triage accepted
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: soltysh, tosi3k. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Cherry-picks kubernetes#117138 Change-Id: I886457d8b14fee268078cdf45d469bb6712e721c
What type of PR is this?
/kind feature
What this PR does / why we need it:
The number of job controller workers currently defaults to 5 but cannot be overridden through CLI flags, as one already can for many other controllers in KCM.
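As a purely illustrative sketch (the type and function names here are hypothetical, not the actual kube-controller-manager wiring), this is how such an option is typically surfaced as a CLI flag with pflag and validated before the controller starts its workers:

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
)

// jobControllerOptions is a stand-in for the options struct that would carry
// the worker count; it is not the real KCM configuration type.
type jobControllerOptions struct {
	ConcurrentJobSyncs int32
}

// addFlags registers the flag with a default of 5, matching the current
// hard-coded worker count for the job controller.
func (o *jobControllerOptions) addFlags(fs *pflag.FlagSet) {
	fs.Int32Var(&o.ConcurrentJobSyncs, "concurrent-job-syncs", 5,
		"Number of job objects that are allowed to sync concurrently. "+
			"Larger numbers mean more responsive job handling, but more CPU (and network) load.")
}

// validate rejects non-positive values, the kind of check this PR adds.
func (o *jobControllerOptions) validate() error {
	if o.ConcurrentJobSyncs <= 0 {
		return fmt.Errorf("--concurrent-job-syncs must be greater than 0, got %d", o.ConcurrentJobSyncs)
	}
	return nil
}

func main() {
	opts := &jobControllerOptions{}
	fs := pflag.NewFlagSet("kube-controller-manager", pflag.ExitOnError)
	opts.addFlags(fs)
	_ = fs.Parse(os.Args[1:])

	if err := opts.validate(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("job controller would start %d workers\n", opts.ConcurrentJobSyncs)
}
```

With this shape, passing e.g. `--concurrent-job-syncs=50` simply raises the number of worker goroutines the job controller runs; actual throughput is still bounded by the controller's client-side QPS settings.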
Which issue(s) this PR fixes:
Fixes #80397.
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: