cluster-autoscaler clusterapi provider performance degrades when there are a high number of node groups #6784

Open
elmiko opened this issue Apr 30, 2024 · 4 comments
Labels
area/provider/cluster-api (Issues or PRs related to Cluster API provider)
kind/bug (Categorizes issue or PR as related to a bug)

Comments

elmiko (Contributor) commented Apr 30, 2024

Which component are you using?:

cluster-autoscaler

What version of the component are you using?:

Component version: all versions up to and including 1.30.0

What k8s version are you using (kubectl version)?:

This affects all Kubernetes versions that are compatible with the cluster autoscaler.

What environment is this in?:

clusterapi provider, with more than 50 node groups (e.g., MachineDeployments, MachineSets, MachinePools)

What did you expect to happen?:

I expected the cluster autoscaler to operate normally.

What happened instead?:

As the number of node groups increases, the performance of the autoscaler appears to degrade. Each scan interval takes longer and longer to process, and in some cases (when the number of node groups is in the hundreds) it can take more than 40 minutes to add a new node while pods are pending.

How to reproduce it (as minimally and precisely as possible):

  1. set up a cluster with Cluster API and the cluster autoscaler
  2. create 100 MachineDeployments
  3. configure the autoscaler to recognize all 100 MachineDeployments as node groups
  4. create a job whose pods remain pending (unschedulable on the existing nodes)
  5. observe the autoscaler behavior

Anything else we need to know?:

This problem appears to be related to how the clusterapi provider interacts with the API server. When assessing activity in the cluster, the provider will query the API server for all the node groups, then query again for the scalable resources, and potentially a third time for the infrastructure machine template. I have a feeling that this interaction is causing the issues.
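
To make that suspicion concrete, here is a minimal, purely illustrative sketch (not the provider's actual code; the function names and the 50ms latency are assumptions) that models one GET for the scalable resource and one for the infrastructure machine template per node group, showing how the time per pass grows linearly with the number of node groups:

```go
package main

import (
	"fmt"
	"time"
)

// Assumed per-request round-trip latency to the API server; the real value
// depends on the environment. All names here are hypothetical.
const apiLatency = 50 * time.Millisecond

// getScalableResource stands in for a per-node-group GET of the
// MachineDeployment/MachineSet/MachinePool.
func getScalableResource(name string) { time.Sleep(apiLatency) }

// getInfraMachineTemplate stands in for the optional extra GET of the
// infrastructure machine template.
func getInfraMachineTemplate(name string) { time.Sleep(apiLatency) }

func main() {
	for _, n := range []int{10, 50, 100} {
		start := time.Now()
		for i := 0; i < n; i++ {
			name := fmt.Sprintf("md-%d", i)
			getScalableResource(name)
			getInfraMachineTemplate(name)
		}
		fmt.Printf("%3d node groups: %v per pass\n", n, time.Since(start))
	}
}
```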

I think it's possible that extending the scan interval might alleviate some of the issues, but I have not confirmed anything yet.

elmiko added the kind/bug label on Apr 30, 2024
enxebre (Member) commented Apr 30, 2024

/area provider/cluster-api

k8s-ci-robot added the area/provider/cluster-api label on Apr 30, 2024
elmiko (Contributor, Author) commented Apr 30, 2024

I've been hacking on a PR to add some timing metrics around the NodeGroups interface function. I believe we spend the most time in this function, and I have been trying to prove out how the number of node groups affects the time that this call takes.

elmiko@1a5d9cd
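
For context, the general shape of that kind of measurement is sketched below. This is not the code in the linked commit, just a hypothetical wrapper around the CloudProvider interface that logs the duration of each NodeGroups() call:

```go
package instrumentation

import (
	"time"

	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider"
	"k8s.io/klog/v2"
)

// timedNodeGroups wraps a CloudProvider's NodeGroups() call and logs how long
// it took and how many node groups were returned. This is a sketch of the
// kind of measurement described above, not the actual change in the commit.
func timedNodeGroups(provider cloudprovider.CloudProvider) []cloudprovider.NodeGroup {
	start := time.Now()
	groups := provider.NodeGroups()
	klog.Infof("NodeGroups() returned %d node groups in %v", len(groups), time.Since(start))
	return groups
}
```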

enxebre (Member) commented May 7, 2024

I don't think kube-apiserver calls are the main bottleneck, but rather the cloudprovider NodeGroups() implementation. Currently it takes ~20 seconds with ~90 MachineSets. #6796 avoids the expensive loop and copies pointers instead, bringing each NodeGroups call down to ~5 seconds.
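
For readers following along, the general idea behind that kind of change is sketched below (an illustration of the pattern, not the actual diff in #6796): build the node group list from objects that were already listed, appending pointers instead of copying each large Cluster API struct on every iteration. The type and function names here are hypothetical.

```go
package main

import "fmt"

// scalableResource stands in for a Cluster API object (MachineDeployment,
// MachineSet, MachinePool); the real structs are much larger, which is what
// makes per-iteration copies expensive.
type scalableResource struct {
	name     string
	replicas int32
}

// buildNodeGroups appends pointers to elements of an already-listed slice
// rather than copying each struct, so the cost per node group stays small.
func buildNodeGroups(resources []scalableResource) []*scalableResource {
	groups := make([]*scalableResource, 0, len(resources))
	for i := range resources {
		groups = append(groups, &resources[i])
	}
	return groups
}

func main() {
	resources := []scalableResource{
		{name: "md-0", replicas: 3},
		{name: "md-1", replicas: 5},
	}
	for _, g := range buildNodeGroups(resources) {
		fmt.Printf("%s: %d replicas\n", g.name, g.replicas)
	}
}
```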

elmiko (Contributor, Author) commented May 7, 2024

It seems we might have multiple areas for improvement. When I observed behavior with 50 to 75 node groups, I could see the performance becoming worse over time. It appeared that we might have inefficiencies in the way we handle all the various Cluster API CRs.
