kube-apiserver memory consumption during CRD creation #101755
Comments

/sig api-machinery
Can you enable profiling via the web interface at host:port/debug/pprof/heap to give more details about this issue?
I have tested your CRDs. The apiserver memory may rise for a while after a CRD is created, but it comes back down after some time. What is your apiserver memory limit? I think you should increase the apiserver's memory first and test again.

Yes, I can give you more details via the profiler.
Increasing memory is a workable mitigation, but it requires action at the IaaS tier. Also, the extra memory may only be needed around CRD creation.
Almost certainly a problem constructing the OpenAPI schema; it's not clear whether it's in the apiextensions-apiserver or the aggregator's merging, however. My money is on the former.
Hi guys, am I understanding correctly that this request is under assessment by the Kubernetes community?
Yeah, I investigated that; most of the memory allocation comes from
Hi guys! Please let us know if you have any new information about the issue. Thanks in advance.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
|
/remove-lifecycle stale
My suspicions were around the OpenAPI aggregation as well, but after digging into it a bit deeper, the bulk of the memory allocation comes from
kubernetes/kube-openapi#251 has been opened to mitigate this.
#101755 (comment) Do we know if it's UpdateSpec in the apiextensions-apiserver or the aggregation layer, or both?
I think we measured it in the aggregation layer, @Jefftree to confirm. @DangerOnTheRanger, could you look into it and make sure that your fix solves it?
Since the apiextensions-apiserver spec is a subset of the aggregated spec, and the memory consumption is proportional to the size of the spec, the aggregation layer strictly uses more memory than the apiextensions-apiserver. On a cluster with a couple of sample CRDs, the bulk of the memory was consumed in the aggregator. But if the number/size of the CRDs becomes large enough, the UpdateSpec in the apiextensions-apiserver would end up consuming a sizable amount of memory. @DangerOnTheRanger's PR targets the genericapiserver imported by all apiservers, so I think we should see benefits in both the aggregator and apiextensions?
Add metrics to report the traffic towards CRDs generated by the RTEs. Even though issues like kubernetes/kubernetes#105932 kubernetes/kubernetes#101755 should be solved, it's both cheap and useful to provide these metrics on RTE side, since we already have all the infrastructure in place. Signed-off-by: Francesco Romani <fromani@redhat.com>
What happened:
Creating CustomResourceDefinitions causes high kube-apiserver memory consumption. The CRD has multiple versions.
Several creations within a short period of time may cause the kube-apiserver to restart.
What you expected to happen:
kube-apiserver shouldn't consume an unreasonably large amount of memory while processing objects such as CRDs.
How to reproduce it (as minimally and precisely as possible):
Apply the YAML and watch kube-apiserver memory consumption:
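As a sketch of the shape described (a multi-version CRD; all names here are hypothetical, not the original manifest), such a definition could look like:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1alpha1
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

Applying several CRDs like this in quick succession while watching the apiserver's memory (for example with `kubectl top pod -n kube-system`) should reproduce the reported growth.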
Anything else we need to know?:
/sig api-machinery
Environment:
OS (e.g: cat /etc/os-release): CentOS Linux release 7.6.1810
Kernel (e.g. uname -a): 3.10.0-957.27.2.el7.x86_64