Increase cache size for leases #85219
Conversation
The default size (100) is not enough for large clusters and results in unnecessary restarts of the kube-controller-manager watcher for leases; see http://perf-dash.k8s.io/#/?jobname=gce-5000Nodes&metriccategoryname=APIServer&metricname=LoadRequestCount&Resource=leases&Scope=cluster&Subresource=&Verb=LIST. This PR makes the leases cache size match what we already have for nodes.
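For context on why the cache size matters: when a resource's watch cache is too small, events are evicted before slow watchers catch up, the watcher gets a "too old resource version" error, and the client falls back to a full LIST, which is what shows up as the elevated LoadRequestCount for leases. Independent of the default changed by this PR, operators can override per-resource watch cache sizes on the API server with the `--watch-cache-sizes` flag. A sketch (the value 1000 is illustrative, not the value chosen in this PR):

```shell
# kube-apiserver flags: override the watch cache size for a specific
# resource using the resource[.group]#size syntax.
# The size below is an illustrative example, not the exact change in this PR.
kube-apiserver \
  --watch-cache=true \
  --watch-cache-sizes=leases.coordination.k8s.io#1000 \
  ...
```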
/sig scalability
/milestone v1.17
/test pull-kubernetes-node-e2e-containerd
/test pull-kubernetes-node-e2e-containerd
Looks like the failing node test fails consistently, not only for my PR: https://k8s-testgrid.appspot.com/sig-node-containerd#pull-node-e2e
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mm4tt, wojtek-t. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest |
/retest Review the full test history for this PR. Silence the bot with an
We believe it may help deflake the GCE 5000-node performance test: https://k8s-testgrid.appspot.com/sig-scalability-gce#gce-master-scale-performance
What type of PR is this?
/kind bug