Change mechanism of zone selection for dynamic provisioning #48504
Comments
What the Cinder provisioner does is list nodes from the API server. They have region+zone labels, and we can be sure they belong to the right cluster. On the other hand, this list is not cached... Perhaps we could enhance
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a `/lifecycle frozen` comment. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
FWIW, for GCE we added a call to the cloud provider to list the nodes from the informer and cache the zones.
ref #52322
/remove-lifecycle stale
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Using AWS as an example, the current zone selection mechanism works as follows (assuming no zone is specified in the StorageClass): the cloud provider lists the running instances in the region, collects their zones, and picks one of those zones for the new volume.
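A minimal sketch of that selection step, assuming a zone set already gathered from running instances (the `chooseZoneForVolume` helper below mirrors the spirit of the upstream hash-based zone spreading, but is a simplified stand-in, not the actual provider code):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// chooseZoneForVolume picks a zone deterministically by hashing the
// claim name across the sorted zone list, so volumes spread across
// zones. In the real AWS provider the zone set would come from
// listing instances in the region, which is exactly the problem
// discussed in this issue.
func chooseZoneForVolume(zones map[string]struct{}, pvcName string) string {
	sorted := make([]string, 0, len(zones))
	for z := range zones {
		sorted = append(sorted, z)
	}
	sort.Strings(sorted)
	h := fnv.New32a()
	h.Write([]byte(pvcName))
	return sorted[h.Sum32()%uint32(len(sorted))]
}

func main() {
	// Hypothetical zones discovered from running instances in the region.
	zones := map[string]struct{}{
		"us-east-1a": {}, "us-east-1b": {}, "us-east-1c": {},
	}
	fmt.Println(chooseZoneForVolume(zones, "pvc-database-0"))
}
```

Because the hash only depends on the claim name, the same claim always lands in the same zone for a fixed zone set, but adding unrelated instances to the region changes the candidate set.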
This creates a bunch of problems if there is more than one cluster running in the region, or if there are instances in the region that don't participate in the Kubernetes cluster.
Our current fix for this problem is to tag all instances participating in the cluster with some sort of cluster-id (we have two distinct ways of tagging instances right now, but leaving that aside). The tagging mechanism is very much tool-specific, and relying on it completely leads to surprising behaviour.
Instead of relying on running instances, can we not keep a set of zones derived from the node information present in the node informer, and use this set for determining the zone of the PV?
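The proposed zone set could be derived from node labels delivered by the informer. The sketch below assumes the legacy `failure-domain.beta.kubernetes.io/zone` label and uses a pared-down `Node` type in place of `v1.Node`; `zonesFromNodes` is a hypothetical helper, not an upstream API:

```go
package main

import "fmt"

// Node is a pared-down stand-in for a v1.Node delivered by a shared
// informer; only the labels matter for zone discovery.
type Node struct {
	Labels map[string]string
}

// zonesFromNodes builds the candidate zone set from the nodes the
// cluster actually contains, instead of from all instances in the
// region, so other clusters' instances cannot pollute the set.
func zonesFromNodes(nodes []Node) map[string]struct{} {
	zones := map[string]struct{}{}
	for _, n := range nodes {
		if z, ok := n.Labels["failure-domain.beta.kubernetes.io/zone"]; ok {
			zones[z] = struct{}{}
		}
	}
	return zones
}

func main() {
	nodes := []Node{
		{Labels: map[string]string{"failure-domain.beta.kubernetes.io/zone": "us-east-1a"}},
		{Labels: map[string]string{"failure-domain.beta.kubernetes.io/zone": "us-east-1b"}},
		{Labels: map[string]string{}}, // an unlabeled node contributes nothing
	}
	fmt.Println(len(zonesFromNodes(nodes))) // prints 2
}
```

Since the informer cache is kept up to date by watches, this avoids the uncached API-server list mentioned for the Cinder provisioner above.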
This will most likely result in a change to the `Provision` interface, because for the most part volume plugins don't initialize or have access to informers, so the zone has to be selected before making the `Provision` call, or some other related change has to be made.

cc @jsafrane @kubernetes/sig-storage-api-reviews
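One shape such a change could take: the controller selects the zone up front and passes it through the options struct, so plugins never touch the informer. Everything below (the `Zone` field, the simplified `VolumeOptions` and `Provisioner` types) is an illustrative sketch, not the upstream signatures:

```go
package main

import "fmt"

// VolumeOptions is a simplified stand-in for the options struct passed
// to a plugin's Provision call; the Zone field is a hypothetical
// addition carrying a zone chosen by the controller beforehand.
type VolumeOptions struct {
	PVCName string
	Zone    string // selected from the node-informer zone set, not by the plugin
}

// Provisioner mirrors the general shape of the volume plugin
// provisioning interface for illustration only.
type Provisioner interface {
	Provision(opts VolumeOptions) (pvName string, err error)
}

type fakeAWSProvisioner struct{}

func (fakeAWSProvisioner) Provision(opts VolumeOptions) (string, error) {
	// The plugin just consumes the pre-selected zone; it no longer
	// needs to list instances or reach the informer itself.
	return fmt.Sprintf("pv-%s-%s", opts.PVCName, opts.Zone), nil
}

func main() {
	var p Provisioner = fakeAWSProvisioner{}
	name, _ := p.Provision(VolumeOptions{PVCName: "claim1", Zone: "us-east-1a"})
	fmt.Println(name) // prints pv-claim1-us-east-1a
}
```

The design choice here is that only the controller, which already holds the informer, makes zone decisions; plugins stay informer-free.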