Specify PersistentVolumeClaimSource by Selector, not Name #9712

Closed · justinsb opened this issue Jun 12, 2015 · 10 comments
@justinsb (Member)

My understanding is that when we want a cluster of e.g. 3 pods with persistent storage, the best we can do today is to launch 3 RCs, specifying a different persistent volume for each. We can still group those into one service.

I think it would be nice to be able to just specify one RC, and have it map to a group of persistent volumes.

One way to do that would be to specify a list of persistent volumes in the RC, with some sort of parameterization to say "this PV is for pod 1, this PV is for pod 2, etc."

An alternative that would be "more k8s" would be to allow specifying a selector, which would match labels on our PersistentVolumeClaims. The "identity" of a pod would be determined not by the RC passing down an identifier to the pod, but by whatever PV it ended up with.

For unaware databases, this might require a smarter management process in the pod, but I'm thinking about that separately ;-)

One downside would be that it would be hard to specify groupings; for example we might want a data-volume and a log-volume to be assigned together. We could specify a further list of fields that must match across all volume allocations to a single pod, but there may be neater solutions (and this may not be that important).

Is this a good idea? Or is there a trick I am missing to do this today?
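
For illustration only, here is a rough Go sketch of what the selector-based claim source floated above might look like; no such field exists in the Kubernetes API, and the name ClaimSelector is hypothetical:

```go
// Hypothetical API sketch for the proposal above; this field does not exist
// in Kubernetes. It shows a PVC volume source that could reference claims
// either by name (today's behavior) or by a label selector (the proposed
// behavior), where each replica would bind to a distinct, unclaimed match.
package api

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type PersistentVolumeClaimVolumeSource struct {
	// ClaimName is the existing, name-based reference to a single claim.
	ClaimName string `json:"claimName,omitempty"`
	// ClaimSelector (hypothetical) would match labels on PersistentVolumeClaims;
	// the controller would hand each pod a different claim from the matching set.
	ClaimSelector *metav1.LabelSelector `json:"claimSelector,omitempty"`
	// ReadOnly forces the mount to be read-only.
	ReadOnly bool `json:"readOnly,omitempty"`
}
```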

@justinsb justinsb changed the title Specify PersistentVolumeClaim by Selector, not Name Specify PersistentVolumeClaimSource by Selector, not Name Jun 12, 2015
@markturansky (Contributor)

+1.

Yes, we need a way for replicas from a replication controller to get their own volume while using the same single claim reference in the RC. That was out of scope for the original implementation, but it is a known limitation that will be worked on post-1.0.

A good use case is MongoDB where each node in the mongo cluster requires its own volume.

@ArtfulCoder ArtfulCoder added this to the v1.0-post milestone Jun 12, 2015
@thockin (Member) commented Jun 16, 2015

It's the right direction, we just (as you know :) ran out of time for 1.0.

@smarterclayton (Contributor)

xref #260

@saad-ali saad-ali added priority/backlog Higher priority than priority/awaiting-more-evidence. team/cluster labels Jun 18, 2015
@bgrant0607 bgrant0607 removed this from the v1.0-post milestone Jul 24, 2015
@fd commented Sep 14, 2016

What is the status of this feature?

/cc @markturansky @thockin

@thockin (Member) commented Sep 15, 2016

Nobody has worked on speccing this any further yet. The most immediate problem I see is that we don't currently track which claims are used by pods, so we need a way to back-link and not select PVCs that are "in use".
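
For context, the missing back-link can be approximated by scanning pod specs for persistentVolumeClaim references. A minimal sketch with modern client-go (kubeconfig loading simplified, function name purely illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// inUseClaims returns the names of PVCs referenced by any pod in the namespace,
// i.e. the claims a selector-based binder would have to skip.
func inUseClaims(ctx context.Context, client kubernetes.Interface, namespace string) (map[string]bool, error) {
	used := map[string]bool{}
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	for _, pod := range pods.Items {
		for _, vol := range pod.Spec.Volumes {
			if vol.PersistentVolumeClaim != nil {
				used[vol.PersistentVolumeClaim.ClaimName] = true
			}
		}
	}
	return used, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	used, err := inUseClaims(context.Background(), client, "default")
	if err != nil {
		panic(err)
	}
	fmt.Println("claims in use:", used)
}
```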

@fd commented Sep 15, 2016

How hard would the back-link be to implement? I might want to give that a shot.

@thockin (Member) commented Sep 15, 2016

I think it's more a case of whether we want to build that back-link, or find a different solution. Inter-object linkages are tricky to get right and keep right.

@0xmichalis 0xmichalis added sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed team/cluster (deprecated - do not use) labels Mar 20, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 21, 2018
@justinsb (Member, Author)

Closing; we did PetSets instead.
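
For readers landing here later: PetSets (renamed StatefulSets in Kubernetes 1.5) address this with volumeClaimTemplates, which stamp out one PVC per replica rather than selecting pre-existing claims. A rough Go sketch using the apps/v1 types (names and image are examples; the storage request and storage class are omitted for brevity):

```go
// Illustrative sketch: a StatefulSet whose volumeClaimTemplates give each
// replica (mongo-0, mongo-1, mongo-2) its own PersistentVolumeClaim
// (data-mongo-0, data-mongo-1, data-mongo-2).
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "mongo"}
	replicas := int32(3)

	sts := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "mongo"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "mongo", // headless Service providing stable per-pod DNS
			Replicas:    &replicas,
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "mongo",
						Image: "mongo:4.4",
						VolumeMounts: []corev1.VolumeMount{{
							Name:      "data",
							MountPath: "/data/db",
						}},
					}},
				},
			},
			// The controller creates one claim per replica from this template;
			// a real template would also set spec.resources.requests.storage
			// and usually a storageClassName.
			VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
				ObjectMeta: metav1.ObjectMeta{Name: "data"},
				Spec: corev1.PersistentVolumeClaimSpec{
					AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
				},
			}},
		},
	}

	fmt.Printf("%s: %d replicas, each with its own %q claim\n",
		sts.Name, *sts.Spec.Replicas, sts.Spec.VolumeClaimTemplates[0].Name)
}
```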
