Dynamic PVs are always created in the master zone #23330
@kubernetes/rh-storage
I'm in this exact boat now, needing persistent storage for my pods. I might be able to help out, since I either need to get this working across zones or set up GlusterFS (not sure which is more work).
@stevesloka It would be great if you could help out on this. Let me or @justinsb know if you need any pointers as to how to get it done.
@quinton-hoole I would love some pointers! I just started to look into everything to see what might need to be done. It would be great to help outline the pieces and parts that need to be touched. I've done some integrations with the k8s client, but never yet in the core components.
@stevesloka #22602 is a good place to start, to get a feel for how the volume creation code is structured and where you'll likely need to make your changes. @saad-ali might have additional pointers.
@stevesloka hi! There isn't a dynamic provisioner for GlusterFS. I can point you to some e2e examples for this. Are you also looking for documentation on setting up Gluster, or just the PV/PVC portion?
@erinboyd I don't want to muddy this issue, but I can quickly explain my needs. I need to persist some data for MySQL and Redis, and I need this to work across AWS AZs. Now that EBS mounts work on 1.2, I can either use Ubernetes-lite to scale my pods across AZs with EBS mounts (once this PVC issue is fixed), or use Gluster to accomplish the same, but then I'd need a way to get a Gluster client onto my CoreOS instances, using a DaemonSet or something as my mounter. I'm thinking the EBS route might be simpler, as my team doesn't have much production experience with Gluster.
Dynamic volume provisioning was released in v1.2 as an experimental feature, triggered via an annotation on the PersistentVolumeClaim. For v1.3 we'd like to complete this feature by finalizing the UX/API. One of our goals is to introduce an abstraction layer (preliminarily called "storage classes") that enables users to request different grades of storage without having to worry about the specifics of any one storage implementation. At the same time, we still need to expose some knobs to the user (for example, the zone, as requested by this issue). We've had detailed discussions in the storage SIG about possible implementations, but those discussions are on hold at the moment as we are prioritizing other work. I'd suggest we not tackle dynamic PV zoning as a one-off, but as part of that holistic design. If we do want a quick-and-dirty solution, however, introducing an experimental annotation should work.
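For readers unfamiliar with the v1.2-era alpha trigger, a claim carrying the experimental storage-class annotation looked roughly like the sketch below (the class name "fast" and the size are made-up placeholders). Note there is no way to request a zone here, which is exactly what this issue asks for:

```yaml
# Hedged sketch of an alpha-era PVC that triggers dynamic provisioning
# via the experimental annotation; names and sizes are illustrative.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    volume.alpha.kubernetes.io/storage-class: "fast"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```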
Just for reference, there is a PR that makes dynamic provisioning configurable via ConfigMaps; zones were one of the use cases it should solve. As @saad-ali pointed out, we got distracted by other work.
Seeing as we need more discussion and there is some work in flight, I'll just roll with manual PVs and set up my RCs to target specific AWS AZs. I may join the next SIG hangout to follow along on the discussion, but I do think auto-provisioning would eliminate a bunch of upfront effort, so I'm in favor. Thanks everyone!
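The manual workaround described above could look something like this sketch (the PV name, EBS volume ID, zone value, and size are all placeholders): a pre-created PV bound to an EBS volume that lives in a single AZ, with a zone label so the scheduler can keep consuming pods on nodes in that zone:

```yaml
# Hedged sketch of a manually created, zone-pinned PV backed by an
# existing EBS volume; all values here are illustrative placeholders.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv
  labels:
    failure-domain.beta.kubernetes.io/zone: us-east-1a
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0  # must already exist in us-east-1a
    fsType: ext4
```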
Has there been any work done on the dynamic provisioning of claims? I'd like to tackle a simple approach to help my use case, but didn't want to step on any WIP.
@stevesloka, there is PR #25413 to design how various storage properties incl. availability zone should be handled in PersistentVolumes and the same approach will then be reused in dynamic provisioning. |
DP enhancements are targeted for 1.4 |
On a small note to future readers, I believe this is partially fixed by #27256, in 1.3.0+. |
I think we should not bother to fix this for the alpha annotation; it should be fixed as part of the beta provisioning change, right @jsafrane?
This is fixed "in practice": dynamic PVs are distributed across zones (based on a hash of their name), and with PetSets they round-robin around the zones (hash + offset).
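The hash-plus-offset behavior described above can be illustrated with a small sketch (this is not the actual Kubernetes implementation; the hash function and name parsing are assumptions for illustration). A plain volume name is hashed to pick a zone; a PetSet-style name ending in a numeric ordinal (e.g. "db-0", "db-1") hashes the base name and adds the ordinal, so consecutive replicas round-robin across zones:

```python
# Sketch of hash-based zone spreading for dynamic PVs.
# NOT the real Kubernetes code; crc32 stands in for whatever
# hash the provisioner actually uses.
import zlib

def choose_zone(pv_name, zones):
    zones = sorted(zones)  # stable order so the choice is deterministic
    base, _, suffix = pv_name.rpartition("-")
    if base and suffix.isdigit():
        # PetSet-style name: hash the base, offset by the ordinal,
        # so "db-0", "db-1", "db-2" land in consecutive zones.
        index = zlib.crc32(base.encode()) + int(suffix)
    else:
        # Plain name: hash the whole name.
        index = zlib.crc32(pv_name.encode())
    return zones[index % len(zones)]
```

With three zones, `choose_zone("db-0", zones)` and `choose_zone("db-1", zones)` always land in different (adjacent) zones, while a standalone claim name maps to one stable zone.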
When running with multi-zone (ubernetes-lite), a dynamic persistent volume is always created in the zone of the master. It would be nice to fix this, likely through an annotation.
Brought up in code review of the docs: kubernetes/website#140
cc @quinton-hoole @saad-ali