Investigate usage/requirements for ClusterID #12
So on AWS the ClusterID is used as a tag to allow multiple clusters to coexist in the same "scope". AWS doesn't have a project concept like GCP does, so originally this was needed to allow multiple clusters to exist in the same VPC at all. Drilling down, there are two use cases:

- Cleanup is a nice-to-have: in theory, if you shut down k8s cleanly you don't need it, but it is nice.
- Isolation is trickier:
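The tag-based isolation and cleanup described above can be sketched roughly as follows. This is a hedged Python sketch, not the in-tree AWS provider (which is Go); the tag key shape `kubernetes.io/cluster/<clusterID>` with values `owned`/`shared` mirrors the AWS provider's resource-tagging convention, but the resource representation here is hypothetical:

```python
# Sketch of tag-based cluster isolation/cleanup, modeled on the AWS
# provider's kubernetes.io/cluster/<clusterID> resource tag.

def owned_by_cluster(tags: dict, cluster_id: str) -> bool:
    """Return True if a resource's tags claim it for this cluster."""
    return tags.get(f"kubernetes.io/cluster/{cluster_id}") in ("owned", "shared")

def filter_cluster_resources(resources: list, cluster_id: str) -> list:
    """Isolation: only operate on resources tagged for our cluster.
    Cleanup: the same filter identifies what to delete on teardown."""
    return [r for r in resources if owned_by_cluster(r["tags"], cluster_id)]

resources = [
    {"name": "sg-prod", "tags": {"kubernetes.io/cluster/prod": "owned"}},
    {"name": "sg-dev",  "tags": {"kubernetes.io/cluster/dev": "owned"}},
]
print([r["name"] for r in filter_cluster_resources(resources, "prod")])
# → ['sg-prod']
```

Without some cluster-scoped marker like this, two clusters sharing one VPC cannot tell whose load balancer or security group is whose, which is exactly the coexistence problem described above.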
AFAICT, what GCE calls the cluster-id is currently only used in the load balancer, as part of the name for backend rules. I'm not sure if that is for cleanup, isolation, or both.
BTW, I put up a WIP PR to show what is required to move to passing around v1.Node: kubernetes/kubernetes#74304. It's just one call, and it already looks bad, but the only complicated thing was the desiredStateOfWorld map on node update; the volume reconciler is where I've always gotten stuck before.
@justinsb is it safe to say that if we replace the places where we pass in only the node name with the entire node object, this would remove the need for ClusterID? Seems like a reasonable thing to do to me, because there are a lot of places where we have to "search node by name" when we could just get the node ID from the node object.
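The point about getting the node ID directly from the node object can be illustrated with the `providerID` field that nodes already carry. A minimal sketch, assuming the conventional `<provider>://<path>/<instance-id>` shape (e.g. `aws:///us-east-1a/i-0123456789abcdef0`); this is illustrative parsing, not code from any provider:

```python
def instance_id_from_provider_id(provider_id: str) -> str:
    """Extract the cloud-specific instance ID from a node's providerID.

    providerID conventionally looks like "<provider>://<path>/<instance-id>",
    e.g. "aws:///us-east-1a/i-0123456789abcdef0"; the instance ID is the
    last path segment, so no name-based cloud lookup is needed.
    """
    _, sep, rest = provider_id.partition("://")
    if not sep or not rest:
        raise ValueError(f"malformed providerID: {provider_id!r}")
    return rest.rstrip("/").rsplit("/", 1)[-1]

print(instance_id_from_provider_id("aws:///us-east-1a/i-0123456789abcdef0"))
# → i-0123456789abcdef0
```

If the cloud provider receives the whole node object, it can read this field directly instead of searching the cloud API for an instance matching the node name, which is the "search node by name" pattern the comment is objecting to.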
For the GCE case (and please correct me if I'm wrong), it seems like it supports cluster ID, but not in the same way as AWS. GCE generates its own cluster ID dynamically and stores the value in a ConfigMap. Its main purpose is isolation, given the cluster ID value is propagated to LB resources like you mentioned. For what it's worth, it seems like cluster ID is an internal implementation detail and not something that requires configuration on the control plane, in which case removing "Cluster ID" as a feature wouldn't really change things for GCE.
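The generate-once-and-persist behavior described for GCE could be sketched like this. A hedged sketch only: the ConfigMap is stood in for by a plain dict, and the key and name-suffix shapes are hypothetical; the real provider's Go code and ConfigMap layout will differ:

```python
import secrets

def ensure_cluster_id(config_map_data: dict) -> str:
    """Generate a cluster ID once and persist it; later calls reuse the
    stored value, so all LB resources get a consistent identifier."""
    if "cluster-uid" not in config_map_data:
        config_map_data["cluster-uid"] = secrets.token_hex(4)
    return config_map_data["cluster-uid"]

store = {}                      # stands in for the ConfigMap
uid = ensure_cluster_id(store)
assert ensure_cluster_id(store) == uid   # stable across calls
print(f"lb-backend-{uid}")      # e.g. how the ID might suffix an LB resource name
```

The key property is idempotence: because the ID is persisted on first use, every controller restart sees the same value, and resources named with it remain attributable to this cluster.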
TODO: investigate if other providers need this
/milestone Next
FWIW, DigitalOcean is similar to AWS in the sense that we do not have a dedicated notion of a cluster scope / project. We currently use a cluster ID (injected via an environment variable, since we weren't certain about the future of the interface-provided cluster ID) for billing / grouping purposes only, translating it to tags on DO resources like load balancers. The isolation and cleanup goals could help us as well to better deal with certain edge cases. That said, a more reliable way to look up nodes, as suggested by @justinsb, may already suffice for us. Given that approach works, I feel that DO does not require a SIG-blessed cluster ID.
/assign @nckturner
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale |
@nckturner Did you have a chance to investigate this? |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Resolves warning 2 from k3s-io#2471. As per kubernetes/cloud-provider#12 the ClusterID requirement was never really followed through on, so the flag is probably going to be removed in the future along with the warning it currently triggers. Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Resolves warning 2 from k3s-io#2471. As per kubernetes/cloud-provider#12 the ClusterID requirement was never really followed through on, so the flag is probably going to be removed in the future. One side-effect of this is that the core k8s cloud-controller-manager also wants to watch nodes, and needs RBAC to do so. Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
As per kubernetes/cloud-provider#12 the ClusterID requirement was never really followed through on, so the flag is probably going to be removed in the future. See also brandond/k3s@e109f13 I tested this and I didn't notice any side-effects. Signed-off-by: Mateusz Gozdek <mateusz@kinvolk.io>
In kubernetes/kubernetes#48954 & kubernetes/kubernetes#49215 we made `ClusterID` a requirement, and added a flag `--allow-untagged-cloud` on the kube-controller-manager. The intention there was to allow clusters to get away with not setting ClusterID for a few releases, but eventually make it a requirement. It seems we never followed through with cleaning up the `--allow-untagged-cloud` flag.

More interestingly, it's not exactly clear how ClusterID is being consumed by both in-tree and out-of-tree cloud providers. It seems it's critical to AWS/GCE but not really used by others. Do we still need ClusterID? Should we use a more generic approach with labels/annotations? If we need it, should we go ahead and remove the `--allow-untagged-cloud` flag?

If the plan is to continue to support ClusterID, we should at least add better documentation for how this works.
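The gate that `--allow-untagged-cloud` controls amounts to a startup check along these lines. A hedged Python sketch of the behavior, not the actual kube-controller-manager code (which is Go); the function name and messages are illustrative:

```python
import sys

def check_cluster_id(cluster_id: str, allow_untagged_cloud: bool) -> None:
    """Startup gate: a missing ClusterID is fatal, unless
    --allow-untagged-cloud was passed, in which case only a
    warning is emitted."""
    if cluster_id:
        return
    if allow_untagged_cloud:
        print("WARNING: detected a cluster without a ClusterID; "
              "a ClusterID may be required in the future",
              file=sys.stderr)
    else:
        raise SystemExit("no ClusterID found; a ClusterID is required "
                         "(override with --allow-untagged-cloud)")

check_cluster_id("my-cluster", allow_untagged_cloud=False)  # passes silently
```

In other words, the flag was meant as a temporary escape hatch: the warn-only branch would eventually be deleted, leaving only the fatal path, which is the cleanup this issue notes never happened.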
cc @justinsb @rrati