
feat: customize pod informer to reduce memory usage #46

Merged: 2 commits merged into kubewharf:main on May 11, 2023

Conversation

@JackZxj (Collaborator) commented on Apr 27, 2023

Use a custom pod informer to limit the number of concurrent pod list requests to member clusters, and simplify the pods' metadata before storing them in the cache.
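For illustration, the simplification step can be pictured as a client-go cache.TransformFunc that strips each pod down to a handful of fields before it enters the informer cache. This is a minimal sketch, not the PR's implementation; the exact fields kept here (names, labels, node name, phase, container resources) are assumptions.

```go
package podinformer

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

// prunePod is a hypothetical cache.TransformFunc that keeps only the pod
// fields useful for federated scheduling and resource accounting. The exact
// fields kept by the real podinformer.go may differ.
func prunePod(obj interface{}) (interface{}, error) {
	pod, ok := obj.(*corev1.Pod)
	if !ok {
		// Pass through non-pod objects (e.g. cache.DeletedFinalStateUnknown) unchanged.
		return obj, nil
	}
	pruned := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:            pod.Name,
			Namespace:       pod.Namespace,
			UID:             pod.UID,
			ResourceVersion: pod.ResourceVersion,
			Labels:          pod.Labels,
		},
		Spec:   corev1.PodSpec{NodeName: pod.Spec.NodeName},
		Status: corev1.PodStatus{Phase: pod.Status.Phase},
	}
	// Keep only container names and resource requirements, which drive capacity math.
	for _, c := range pod.Spec.Containers {
		pruned.Spec.Containers = append(pruned.Spec.Containers, corev1.Container{
			Name:      c.Name,
			Resources: c.Resources,
		})
	}
	return pruned, nil
}

// Compile-time check that prunePod satisfies cache.TransformFunc; a transform like
// this would be attached with informer.SetTransform(prunePod) before the informer runs.
var _ cache.TransformFunc = prunePod
```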

Reference conversation: #14

Results (memory-usage graph omitted from this transcript):
Test setup: 4 clusters, 100 kwok nodes per cluster, 10 deployments per cluster, 1000 pods per deployment, for a total of 10k pods per cluster.

  • case 1: simplified pods. (In the last three rounds, kwok failed to delete the pods of one cluster, so that cluster's pod count grew to 20k; some kwok nodes also became NotReady, so their available CPU/GPU dropped to zero while the pods kept running. The overall impact is not significant.)
  • case 2: regular pod informer
  • case 3: simplified pods, with at most 2 concurrent listers
  • case 4: simplified pods, Modified watch events ignored, and at most 2 concurrent listers (see the sketch after the summary)

Summary: the change saves at least half of the memory and slows the growth of memory usage over time.
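A minimal sketch of the other two levers follows, assuming golang.org/x/sync/semaphore for bounding concurrent list calls and watch.Filter from k8s.io/apimachinery for dropping Modified events; the PR's own podinformer.go may structure this differently.

```go
package podinformer

import (
	"context"

	"golang.org/x/sync/semaphore"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// At most 2 member clusters may run a full pod list at the same time (case 3 / case 4).
var listSemaphore = semaphore.NewWeighted(2)

// listPods blocks until a list slot is free, then performs the full pod list
// against the member cluster.
func listPods(ctx context.Context, client kubernetes.Interface, options metav1.ListOptions) (*corev1.PodList, error) {
	if err := listSemaphore.Acquire(ctx, 1); err != nil {
		return nil, err
	}
	defer listSemaphore.Release(1)
	return client.CoreV1().Pods(corev1.NamespaceAll).List(ctx, options)
}

// watchPods starts a pod watch and, as in case 4, filters out Modified events
// so the cache is only updated on Added/Deleted events.
func watchPods(ctx context.Context, client kubernetes.Interface, options metav1.ListOptions) (watch.Interface, error) {
	w, err := client.CoreV1().Pods(corev1.NamespaceAll).Watch(ctx, options)
	if err != nil {
		return nil, err
	}
	return watch.Filter(w, func(e watch.Event) (watch.Event, bool) {
		return e, e.Type != watch.Modified
	}), nil
}
```

Dropping Modified events means mid-lifecycle status changes only reach the cache on relist/resync, which is presumably why case 4 is reported as an experiment rather than the default.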

@CLAassistant commented on Apr 27, 2023

CLA assistant check: all committers have signed the CLA.

Review threads on pkg/controllers/util/federatedclient/client.go and pkg/controllers/util/federatedclient/podinformer.go were marked outdated and resolved.
@gary-lgy requested a review from limhawjia on May 2, 2023.
@JackZxj force-pushed the feat/customize-pod-informer branch 2 times, most recently from 65f566b to d75c33d, on May 8, 2023.
@JackZxj changed the title from "WIP: feat: customize pod informer to reduce memory usage" to "feat: customize pod informer to reduce memory usage" on May 8, 2023.
@JackZxj force-pushed the feat/customize-pod-informer branch from d75c33d to 329980d on May 8, 2023.

A review thread was opened on the following lines of the new list call:
	}
	defer semaphore.Release(1) // free the list slot when this list call returns
}
// Concurrent full pod lists across member clusters are gated by the semaphore acquired above.
pods, err := client.CoreV1().Pods(corev1.NamespaceAll).List(ctx, options)
A maintainer (Member) commented:

The client-go informers use context.TODO(). Intuitively I think it does make sense to shut down the ListWatch when the federatedclient is shutting down, but I can't be sure. @JackZxj @limhawjia @SOF3 @mrlihanbo WDYT?

@SOF3 (Member) commented on May 9, 2023:

I think we can reorganize all context usage in a separate PR; the scope is too big and doesn't need to be fixed here immediately.

Reply from a maintainer (Member):

Agree that context usage should be fixed in a separate PR. I'm just wondering whether using ctx instead of context.TODO here is correct. Using ctx would mean the ListWatch would shut down when ctx is cancelled.
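For context, here is a hedged sketch of a ListWatch wired to ctx instead of context.TODO(): once ctx is cancelled, new list calls fail and the watch connection is closed, which is the trade-off discussed above. The function name below is illustrative, not taken from the PR.

```go
package podinformer

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// newPodListWatch ties the ListWatch to ctx: once the federated client cancels
// ctx, in-flight lists fail and the watch connection is closed, instead of
// lingering the way a context.TODO()-based ListWatch would.
func newPodListWatch(ctx context.Context, client kubernetes.Interface) *cache.ListWatch {
	return &cache.ListWatch{
		ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
			return client.CoreV1().Pods(corev1.NamespaceAll).List(ctx, options)
		},
		WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
			return client.CoreV1().Pods(corev1.NamespaceAll).Watch(ctx, options)
		},
	}
}
```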

@JackZxj force-pushed the feat/customize-pod-informer branch from 329980d to 33c28c4 on May 9, 2023, and again from 33c28c4 to 27916af on May 10, 2023.
@gary-lgy (Member) commented:

@JackZxj Great enhancement. Thanks for the contribution!

@SOF3 enabled auto-merge on May 11, 2023.
@SOF3 merged commit 7892016 into kubewharf:main on May 11, 2023.
5 checks passed.
miraclejzd pushed a commit to miraclejzd/kubeadmiral referencing this pull request on Jan 3, 2024: "feat: customize pod informer to reduce memory usage".
Labels: none. Projects: none. Linked issues: none. 5 participants.