Specify PersistentVolumeClaimSource by Selector, not Name #9712
Comments
+1. Yes, we need a way for replicas from a replication controller to get their own volume but use the same single claim in the RC. That was out of scope for the original implementation, but it is a known limitation that will be worked on post-1.0. A good use case is MongoDB, where each node in the mongo cluster requires its own volume.
It's the right direction, we just (as you know :) ran out of time for 1.0.
xref #260
What is the status of this feature?
Nobody has worked on speccing this any further yet. The most immediate problem I see is that we don't currently track which claims are used by pods, so we need a way to back-link and not select PVCs that are "in use".
How hard would the back-link be to implement? I might want to give that a shot.
I think it's more a case of whether we want to build that backlink, or find …
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
Closing, we did PetSets instead |
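For reference, PetSets (since renamed StatefulSets) address this with volumeClaimTemplates: the controller stamps out one claim per replica (data-mongo-0, data-mongo-1, ...) rather than selecting pre-labeled claims. A minimal sketch, with illustrative names and sizes:

```yaml
# One StatefulSet replaces the N near-identical RCs; each replica gets
# its own PVC generated from the template below.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi   # illustrative size
```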
My understanding is that when we want a cluster of e.g. 3 pods with persistent storage, the best we can do today is to launch 3 RCs, specifying a different persistent volume for each. We can still group those into one service.
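A minimal sketch of that workaround, using the MongoDB example from the comments above (names are illustrative): one of three near-identical RCs, each pinned by name to its own claim.

```yaml
# RC "mongo-1"; mongo-2 and mongo-3 are copies that differ only in
# their instance label and claimName.
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-1
spec:
  replicas: 1
  selector:
    app: mongo
    instance: "1"
  template:
    metadata:
      labels:
        app: mongo
        instance: "1"
    spec:
      containers:
      - name: mongo
        image: mongo
        volumeMounts:
        - name: data
          mountPath: /data/db
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-mongo-1   # the only per-RC difference
```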
I think it would be nice to be able to just specify one RC, and have it map to a group of persistent volumes.
One way to do that would be to specify a list of persistent volumes in the RC, with some sort of parameterization to say "this PV is for pod 1, this PV for pod 2, etc"
An alternative which would be "more k8s" would be to allow specifying a selector, which would match labels on our PersistentVolumeClaims. The "identity" of a pod would be determined not by the RC passing down an identifier to the pod, but by whatever PV it ended up with. A hypothetical sketch of what that could look like follows.
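This is hypothetical API, not something that exists today; the claimSelector field is invented here purely for illustration (the real PersistentVolumeClaimVolumeSource only has claimName):

```yaml
# Hypothetical: the pod would bind to any unbound PVC matching the
# labels, and the claim it lands on would define the pod's identity.
apiVersion: v1
kind: Pod
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  containers:
  - name: mongo
    image: mongo
    volumeMounts:
    - name: data
      mountPath: /data/db
  volumes:
  - name: data
    persistentVolumeClaim:
      claimSelector:        # invented field; real API only has claimName
        matchLabels:
          app: mongo
```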
For unaware databases, this might require a smarter management process in the pod, but I'm thinking about that separately ;-)
One downside would be that it would be hard to specify groupings; for example we might want a data-volume and a log-volume to be assigned together. We could specify a further list of fields that must match across all volume allocations to a single pod, but there may be neater solutions (and this may not be that important).
Is this a good idea? Or is there a trick I am missing to do this today?