Extend Persistent Volume Claims to select on labels #14908

Closed
tobad357 opened this Issue Oct 1, 2015 · 9 comments

@tobad357 (Contributor) commented Oct 1, 2015

We are currently evaluating using a large number of volumes with different characteristics for different apps. These volumes could, for example, be SSD vs. HDD, or have different snapshot policies, different replication strategies, etc. The volumes are then intended for different workloads, e.g. web server vs. database.

What I would like is to extend the claim to have a selector, and then match that selector against the persistent volumes' labels. This would be in line with how Services select Pods.
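As a rough sketch of what I mean (the `selector` field on the claim is the proposed addition, not existing API; the labels and iSCSI details are made up for illustration):

```yaml
# A PV labeled by the admin with its characteristics
# (labels and iSCSI details are invented for this example).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ssd-0001
  labels:
    disktype: ssd
    snapshot-policy: hourly
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.1:3260
    iqn: iqn.2015-10.com.example:storage
    lun: 0
    fsType: ext4
---
# A claim that would only bind to PVs carrying matching labels.
# The "selector" field is the proposed addition, not existing API.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  selector:
    disktype: ssd
    snapshot-policy: hourly
```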

If this is an acceptable solution I would be happy to see if I can implement it.

@davidopp (comment minimized)
@markturansky (Member) commented Oct 2, 2015

@tobad357 Thanks for the suggestion.

Users definitely need better ways to select PVs. The approach we're currently implementing is additional "quality of service" annotations on a claim that are used by the various PV components (binder, provisioner, etc.).

The goal is to allow easy customization of the components via config so that you have your snapshot policies, SSD vs. HDD volumes, etc.
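As a rough illustration, a claim might carry something like the following (the annotation key and value here are purely hypothetical; the real names were not settled in this discussion):

```yaml
# Illustrative only: a hypothetical QoS annotation on a claim that the
# binder/provisioner could consult; the key name is invented.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
  annotations:
    volume.kubernetes.io/quality-of-service: fast-ssd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```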

@tobad357 (Contributor) commented Oct 2, 2015

@markturansky
So this is already in the works? If so, is there an issue number I can follow? Sorry if off topic.
Additionally, in the new implementation, is there any work toward letting a claim request X PVs, where X is the number of replicas in a replication controller? This would be needed for non-shared storage such as iSCSI, where each replica should have its own volume (clustered DB servers, Cassandra, ZooKeeper).

@markturansky (Member) commented Oct 2, 2015

See #260 for more discussion about each RC replica getting a distinct PV.

I'm starting with QoS tiers for provisioning, but I realize now that I have to make all the pieces pluggable so that admins can customize all of this behavior. See #14537.

@pmorie (Member) commented Oct 3, 2015 (comment minimized)

@bgrant0607 (Member) commented Nov 6, 2015

I could maybe imagine a volumeSelector as an escape hatch, similar to nodeSelector. If we do that, we should use the new-style selector we used for Job.
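For illustration, a hypothetical volumeSelector using the new-style selector (matchLabels/matchExpressions, as used by Job) might look like this; the field name and its placement are illustrative only, not existing API:

```yaml
# Hypothetical "volumeSelector" on a claim, reusing the new-style
# LabelSelector (matchLabels / matchExpressions) introduced for Job.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  volumeSelector:
    matchLabels:
      disktype: ssd
    matchExpressions:
      - key: snapshot-policy
        operator: In
        values: ["hourly", "daily"]
```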

@thockin (Member) commented Nov 23, 2015

I think in the longer term we can do better than this, but I am not against this.

On Fri, Nov 6, 2015 at 9:10 AM, Brian Grant notifications@github.com wrote:

> I could maybe imagine a volumeSelector as an escape hatch, similar to nodeSelector. If we do that, we should use the new-style selector we used for Job.



@wkruse (comment minimized)
@tobad357 (Contributor) commented Apr 28, 2017

Yes it is; will close it.

tobad357 closed this Apr 28, 2017
