
Dynamic PVs are always created in the master zone #23330

Closed
justinsb opened this issue Mar 22, 2016 · 16 comments
Labels: priority/important-soon (must be staffed and worked on either currently or very soon, ideally in time for the next release), sig/storage
Milestone: v1.4

Comments

@justinsb (Member)

When running with multi-zone (ubernetes-lite), a dynamic persistent volume is always created in the zone of the master. It would be nice to fix this, likely through an annotation.

Brought up in code review of the docs: kubernetes/website#140

cc @quinton-hoole @saad-ali

@thockin added the sig/storage label Mar 22, 2016
@thockin (Member) commented Mar 22, 2016

@kubernetes/rh-storage

@stevesloka (Contributor)

I'm in this exact boat right now, needing persistent storage for my pods. I might be able to help out, since I either need to get this working across zones or set up GlusterFS (not sure which is more work).

@ghost commented Mar 22, 2016

@stevesloka It would be great if you could help out on this. Let me or @justinsb know if you need any pointers as to how to get it done.

@stevesloka (Contributor)

@quinton-hoole I would love some pointers! I just started looking into everything to see what might need to be done. It would be great to help outline the pieces and parts that need to be touched. I've done some integrations with the k8s client, but never yet in the core components.

@ghost commented Mar 22, 2016

@stevesloka #22602 is a good place to start, to get a feel for how the volume creation code is structured, and where you'll likely need to make your changes. @saad-ali might have additional pointers.

@erinboyd

Hi @stevesloka! There isn't a dynamic provisioner for GlusterFS. I can point you to some e2e examples for this. Are you also looking for documentation on setting up Gluster, or just the PV/PVC portion?

@stevesloka (Contributor)

@erinboyd I don't want to muddy this issue, but I can quickly explain my needs. I need to persist some data for MySQL and Redis, spread across AWS AZs. Now that EBS mounts work on 1.2, I can either use ubernetes-lite to scale my pods across AZs with EBS mounts (which depends on fixing this issue for PVCs), or use Gluster to accomplish the same, but then I'd need a way to get a Gluster client onto my CoreOS instances (via a DaemonSet or something) to act as my mounter. I'm thinking the EBS route might be simpler, as my team doesn't have much production experience with Gluster.

@saad-ali (Member)

Dynamic volume provisioning was released in v1.2 as an experimental feature. It is triggered via a volume.alpha.kubernetes.io/storage-class annotation on the PersistentVolumeClaim. See the README for details.
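
For illustration, a claim using that alpha annotation would look roughly like the sketch below (the claim name and the class value "fast" are made-up placeholders; in the alpha feature the annotation's value was largely opaque to the provisioner):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim          # illustrative name
  annotations:
    # Presence of this annotation triggers alpha dynamic provisioning (v1.2/v1.3).
    volume.alpha.kubernetes.io/storage-class: "fast"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```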

For v1.3 we'd like to complete this feature by finalizing the UX/API. One of our goals is to introduce an abstraction layer (preliminarily called "storage classes") that enables users to request different grades of storage without having to worry about the specifics of any one storage implementation. At the same time, we still need to expose some knobs to the user (for example, as requested by this issue, zone).

We've had detailed discussions in the storage-SIG about possible implementations. But those discussions are on hold at the moment as we are prioritizing other work.

I'd suggest we not tackle dynamic PV zoning as a one off, but as part of that holistic design. If we do want a quick and dirty solution, however, introducing an experimental annotation should work.

CC @childsb @thockin @kubernetes/sig-storage

@jsafrane (Member)

Just for reference, there is a PR that makes dynamic provisioning configurable via ConfigMaps. Zone selection was one of the use cases it was meant to solve:
https://github.com/kubernetes/kubernetes/pull/17056/files#diff-75765e8f566753da9b95d2077b1d7418R77

As @saad-ali pointed out, we got distracted by other work.

@stevesloka (Contributor)

Seeing as we need more discussion and there is some work in flight, I'll just roll with manual PVs and set up my RCs to target specific AWS AZs. I may join the next SIG hangout to follow along with the discussion; I do think auto-provisioning would eliminate a bunch of upfront effort, so I'm in favor.

Thanks everyone!
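
A manually pre-created, zone-pinned PV along the lines of that workaround might look roughly like this (the volume ID and zone are illustrative placeholders; the EBS volume must already exist in the target AZ, since EBS volumes are zone-bound):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv                              # illustrative name
  labels:
    # Zone label so the scheduler co-locates consuming pods with the volume.
    failure-domain.beta.kubernetes.io/zone: us-east-1a
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0           # placeholder; a real EBS volume in us-east-1a
    fsType: ext4
```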

@stevesloka (Contributor)

Has there been any work done on the dynamic provisioning of claims? I'd like to tackle a simple approach to help my use case, but didn't want to step on any work in progress.

@jsafrane (Member)

@stevesloka, there is PR #25413 to design how various storage properties, including availability zone, should be handled in PersistentVolumes; the same approach will then be reused in dynamic provisioning.

@childsb added this to the v1.4 milestone Jul 14, 2016
@childsb (Contributor) commented Jul 14, 2016

Dynamic provisioning enhancements are targeted for 1.4.

@mikekap (Contributor) commented Jul 22, 2016

On a small note to future readers, I believe this is partially fixed by #27256, in 1.3.0+.

@matchstick added the priority/important-soon label Aug 12, 2016
@thockin (Member) commented Aug 12, 2016

I think we should not bother fixing this for the alpha annotation; it should be fixed as part of the beta provisioning change, right @jsafrane?

@thockin closed this as completed Aug 12, 2016
@justinsb (Member, Author)

This is "works in practice" fixed; dynamic PVs are distributed across zones (based on a hash of their name), and with PetSets they round-robin around the zones (hash + offset)).

10 participants