
Extend Persistent Volume Claims to select on labels #14908

Closed · tobad357 opened this issue Oct 1, 2015 · 9 comments

@tobad357 (Contributor) commented Oct 1, 2015

We are currently evaluating the use of a large number of volumes with different characteristics for different apps. These volumes could, for example, be SSD vs. HDD, or have different snapshot policies, different replication strategies, etc., and are intended for different uses: web server vs. database.

What I would like is to extend the claim with a selector that is matched against the persistent volumes' labels. This would be in line with how a Service selects Pods.

If this is an acceptable solution I would be happy to see if I can implement it.
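To make the idea concrete, a sketch of what this could look like; the `selector` field on the claim is the proposed addition, and all names, label keys, and the iSCSI details are made up for illustration:

```yaml
# Sketch only: an admin-labeled PV and a claim with the proposed selector.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ssd-0001
  labels:
    disktype: ssd
    snapshot-policy: hourly
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.1:3260
    iqn: iqn.2015-10.com.example:volumes
    lun: 0
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  selector:              # proposed field, matched against PV labels
    matchLabels:
      disktype: ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```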

@davidopp (Member) commented Oct 2, 2015

@thockin @kubernetes/rh-storage

@markturansky (Contributor) commented Oct 2, 2015

@tobad357 Thanks for the suggestion.

Users definitely need better ways to select PVs. The approach we're currently implementing is additional "quality of service" annotations on a claim that are used by the various PV components (binder, provisioner, etc.).

The goal is to allow easy customization of the components via config, so that you have your snapshot policies, SSD vs. HDD volumes, etc.
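For illustration, an annotation-driven claim might look something like the following; the annotation key and value are hypothetical placeholders, since the actual scheme was still being designed at the time:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-content
  annotations:
    # Hypothetical QoS annotation consumed by the binder/provisioner;
    # not a real key, shown only to sketch the shape of the approach.
    volume.kubernetes.io/qos-tier: fast-ssd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```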

@tobad357 (Contributor, Author) commented Oct 2, 2015

@markturansky
So this is already in the works? If so, is there an issue number I can follow? Sorry if this is off topic.
Additionally, in the new implementation is there any work toward a claim requesting X PVs, where X is the number of replicas in a replication controller? This would be needed for non-shared storage such as iSCSI, where each replica should have its own volume (clustered DB servers, Cassandra, ZooKeeper).

@markturansky (Contributor) commented Oct 2, 2015

See #260 for more discussion about each RC replica getting a distinct PV.

I'm starting with QoS tiers for provisioning, but I realize now that I have to make all the pieces pluggable so that admins can customize this behavior. See #14537.

@pmorie (Member) commented Oct 3, 2015

@kubernetes/rh-storage

@bgrant0607 (Member) commented Nov 6, 2015

I could maybe imagine a volumeSelector as an escape hatch, similar to nodeSelector. If we do that, we should use the new-style selector we used for Job.
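For context, the new-style selector (as introduced for Job) supports set-based `matchExpressions` in addition to `matchLabels` equality, so a hypothetical `volumeSelector` might take this shape; the field name and placement are speculative:

```yaml
# Speculative sketch of a claim-side volumeSelector using the
# new-style LabelSelector; the field name is not a settled API.
spec:
  volumeSelector:
    matchLabels:
      disktype: ssd
    matchExpressions:
      - key: snapshot-policy
        operator: In
        values:
          - hourly
          - daily
```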

@thockin (Member) commented Nov 23, 2015

I think in the longer term we can do better than this, but I am not against this.


@wkruse commented Apr 27, 2017

@tobad357
Copy link
Contributor Author

@tobad357 tobad357 commented Apr 28, 2017

Yes it is, I'll close it.

@tobad357 closed this Apr 28, 2017