GCE Node Controller is very inefficient with multiple zones #59893

Looking at getInstanceByName (https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/gce/gce_instances.go#L461), we do a very inefficient search for instances by name: each zone is queried in turn until the instance is found. This is free when there is only one zone, and cheap if the product

(number of zones) x (number of nodes)

is small, but it can get out of hand quickly with large, multi-zone clusters. It not only wastes effort, it also generates a noisy signal on the cloud provider end, as kube-controller-manager starts racking up huge numbers of 404s.

Can we make this more efficient? At the very least, can we please stop logging these lookups as errors? They are expected:

https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/gce/gce_instances.go#L483

cc @cheftako
/kind bug
/sig node
/area nodecontroller
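To make the cost concrete, here is a minimal sketch of the lookup pattern the issue describes, written against the google.golang.org/api/compute/v1 client; searchAllZones and its signature are illustrative stand-ins, not the actual kubernetes code:

```go
package main

import (
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

// searchAllZones mimics the getInstanceByName pattern: probe every zone
// until one of them returns the instance.
func searchAllZones(svc *compute.Service, project string, zones []string, name string) (*compute.Instance, error) {
	for _, zone := range zones {
		inst, err := svc.Instances.Get(project, zone, name).Do()
		if err == nil {
			return inst, nil
		}
		// Every zone except the one the instance actually lives in answers
		// with a 404 here; these are the expected "errors" the issue asks to
		// stop logging. API calls scale with (number of zones) x (number of nodes).
	}
	return nil, fmt.Errorf("instance %q not found in any of %d zones", name, len(zones))
}
```

A real implementation would also need to distinguish a genuine 404 from a transient API failure; the sketch glosses over that.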
Comments
The error message was already removed by #54720 in 1.9. As for making this more efficient: I looked at the code briefly, and it seems the best way is to stop relying on the node name to find information about the GCE instance in the cloudprovider code. The preferred way should be to use the ProviderID (aka InstanceID), which embeds the zonal information. Most of the code (if not all) in the node "lifecycle" controller only falls back to using names on error. I did notice that the node IPAM controller relies on the node name more. +@bowei for the code in the IPAM controller.
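For reference, a GCE ProviderID has the form gce://<project>/<zone>/<instance-name>, so the zone can be read straight out of it and the cross-zone search disappears. A minimal sketch assuming that format (parseGCEProviderID is a hypothetical helper, not an existing function in the tree):

```go
package main

import (
	"fmt"
	"strings"
)

// parseGCEProviderID splits "gce://<project>/<zone>/<name>" into its parts.
// Knowing the zone up front means a single instances.get call instead of
// probing every zone in the cluster.
func parseGCEProviderID(providerID string) (project, zone, name string, err error) {
	const prefix = "gce://"
	if !strings.HasPrefix(providerID, prefix) {
		return "", "", "", fmt.Errorf("not a GCE providerID: %q", providerID)
	}
	parts := strings.Split(strings.TrimPrefix(providerID, prefix), "/")
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("malformed GCE providerID: %q", providerID)
	}
	return parts[0], parts[1], parts[2], nil
}
```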
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/reopen
@mml: Reopening this issue. In response to this: /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
@cheftako relevant to your interests.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle frozen
/cc @cheftako do you have plans to improve this by storing the instanceID? I'm asking from the perspective of better supporting preemptible VMs (PVMs) on GKE: when an instance is recreated quickly with the same name, it creates some confusion for the cluster.
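To sketch the confusion (illustrative only, not an existing mechanism in the controller): a recreated preemptible instance keeps its name but receives a fresh numeric instance ID, so a controller that stored the ID could notice that the same node name now points at a different machine:

```go
package main

import (
	compute "google.golang.org/api/compute/v1"
)

// sameInstance reports whether the instance currently answering to this
// name and zone is still the one whose numeric ID was recorded earlier.
// A preemptible VM recreated under the same name gets a different ID.
func sameInstance(svc *compute.Service, project, zone, name string, knownID uint64) (bool, error) {
	inst, err := svc.Instances.Get(project, zone, name).Do()
	if err != nil {
		return false, err
	}
	return inst.Id == knownID, nil
}
```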
This issue looks like it is more related to GCE/GKE and the cloud provider.
This issue has not been updated in over 1 year and should be re-triaged. For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.