
PersistentVolume Config & Provisioning proposal #17056

Closed

Conversation

@markturansky (Contributor) commented Nov 10, 2015

This proposal ties together several pieces of the PersistentVolume feature that are currently missing or sub-optimal.

See provisioning.md for details.

Summary of proposed changes:

  1. Structured config data -- file config? extended API?
  2. Allow many recyclers and provisioners (we have 1 today) -- #13338
  3. Use selectors for everything (#14908)
  4. Refactor provisioning code from volumes plugin code (#14217)
  5. executable driver model for recyclers and provisioners

Tasks:

  1. Add pvc.Spec.PersistentVolumeSelector
  2. Expose more volume attributes (e.g., EBS volumeType). A map of arguments was added to config and plugins; varying attribute types can be set there and used by the plugin.
  3. Add config data and initialize controller w/ config
  4. Controller consolidation: PRs #14537 and #16432 (issue #15632)
  5. Make controllers work with selectors per the proposal.
  6. Refactor current binding/recycling behavior into plugins and use in default config
  7. Bonus: executable drivers -- not required immediately

@thockin @saad-ali @kubernetes/rh-storage

A tech design session to discuss this would be helpful. Many of the tasks will be parallelizable given enough review bandwidth.


Binders contain two selectors against which they match claims for binding. One selector matches claims and the other matches volumes. Both must be true in order to bind, in addition to other matching semantics (e.g., capacity). Claims can remain unbound indefinitely.

Provisioners create resources on demand for claims matching their selector. PersistentVolumeClaims with matching labels will be dynamically provisioned by a plugin named in the config. Volume attributes are copied from a template. A specific security context can be assigned to the provisioner. Volumes created by the provisioner automatically bind to the claim for which they were provisioned.
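
A minimal sketch of the two-selector rule described above, using plain Go maps (illustrative only; capacity and access-mode checks are omitted, and the helper name is an assumption):

```go
package main

import "fmt"

// selectorMatches reports whether every key/value pair in the selector is
// present on the object's labels; the object may carry additional labels.
func selectorMatches(selector, objectLabels map[string]string) bool {
	for k, v := range selector {
		if objectLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	claimLabels := map[string]string{"storage-class": "gold", "zone": "us-east"}
	volumeLabels := map[string]string{"storage-class": "gold", "zone": "us-east", "backend": "ebs"}

	// A binder carries one selector for claims and one for volumes.
	binderClaimSelector := map[string]string{"storage-class": "gold"}
	binderVolumeSelector := map[string]string{"storage-class": "gold", "zone": "us-east"}

	// Both selectors must match; otherwise the claim stays unbound.
	if selectorMatches(binderClaimSelector, claimLabels) &&
		selectorMatches(binderVolumeSelector, volumeLabels) {
		fmt.Println("binder may bind this claim to this volume")
	}
}
```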

@deads2k (Contributor) commented Nov 11, 2015

I think the system will be easier to understand if provisioners simply add the PV and allow a Binder to handle the binding step.

@markturansky (Author, Contributor) commented Nov 11, 2015

That's exactly what happens now. I will make this text clearer.

The provisioner creates the PV w/ a ClaimRef to the Claim that was provisioned. The ClaimRef on the PV automatically binds to the PVC.
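
For illustration, a rough sketch of the pre-bound PV creation described here, written against present-day core/v1 types (the proposal predates these import paths; the function name is an assumption):

```go
package provisioning

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newPreBoundPV sketches a PV created by a provisioner with its ClaimRef
// already pointing at the claim that triggered provisioning.
func newPreBoundPV(claim *v1.PersistentVolumeClaim, pvName string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: pvName},
		Spec: v1.PersistentVolumeSpec{
			// The ClaimRef is written in the same create call that makes the
			// PV, so the PV either exists pre-bound or does not exist at all.
			ClaimRef: &v1.ObjectReference{
				Kind:      "PersistentVolumeClaim",
				Namespace: claim.Namespace,
				Name:      claim.Name,
				UID:       claim.UID,
			},
			// Capacity, access modes, and the volume source are omitted here.
		},
	}
}
```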

@deads2k (Contributor) commented Nov 11, 2015

The provisioner creates the PV w/ a ClaimRef to the Claim that was provisioned. The ClaimRef on the PV automatically binds to the PVC.

I don't know that I'm pro-claimRef. Having a provisioner create things seems great, but specifically tagging them feels more questionable. If its only job is to create PVs and the binder can decide what to do with the PV, it seems like a cleaner separation of concerns.

@markturansky (Author, Contributor) commented Nov 11, 2015

Making a ClaimRef is the transaction that forms a bind. If a provisioner is using a claim from which it creates a PV, why not also add the ClaimRef on the very transaction that makes the PV?

It succeeds and is bound or it fails and there is no PV.

The binding order part of the doc explains the precedence when binding. Volumes first match their ClaimRef.


Administrators map binders, provisioners, and recyclers to specific types of persistent claims and volumes by creating a PersistentStorageConfig file. The path to the config file is passed as a CLI flag.
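
A loose sketch of what a PersistentStorageConfig might contain, pieced together from the type fragments quoted later in this review; all field and type names beyond those fragments are assumptions, and later comments replace this file-based config with a ConfigMap:

```go
package config

// RecyclerConfig and ProvisionerConfig loosely mirror the fragments quoted
// later in this review (Driver, Args); selectors are shown as plain maps.
type RecyclerConfig struct {
	// Driver is the name of the plugin or executable driver to use.
	Driver string
	// Args is a map of key/value pairs passed to the driver as arguments.
	Args map[string]string
	// VolumeSelector chooses which PersistentVolumes this recycler handles.
	VolumeSelector map[string]string
}

type ProvisionerConfig struct {
	Driver        string
	Args          map[string]string
	ClaimSelector map[string]string
}

type BinderConfig struct {
	ClaimSelector  map[string]string
	VolumeSelector map[string]string
}

// PersistentStorageConfig is loaded from the file named by the CLI flag.
type PersistentStorageConfig struct {
	Binders      []BinderConfig
	Provisioners []ProvisionerConfig
	Recyclers    []RecyclerConfig
}
```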


@deads2k (Contributor) commented Nov 11, 2015

Will a Binder look for a Provisioner to kick if no matching claims are available? Also, will a binder update the status of a PVC to indicate that it looked for a match, but couldn't find one? That could be used to indirectly kick the Provisioner and it would easily allow for adding a provisioner after the fact.

@markturansky (Author, Contributor) commented Nov 11, 2015

I'd love your help reviewing that logic in the controller when I implement this functionality :) Your feedback on the last one was very helpful.

@smarterclayton (Contributor) commented Nov 19, 2015

It should be possible to define a default provisioner and a default binder. To me this proposal is roughly analogous to schedulers - there must be a default (even if the default is do nothing, or bind anything).

@markturansky (Author, Contributor) commented Nov 19, 2015

The current binding behavior would be the default. I will make that into a plugin and add it to the default configuration. Same with the recycler.

@markturansky (Author, Contributor) commented Nov 19, 2015

Am I wrong to think of the binder and provisioner as the same? They both work in reaction to a claim's selector.

One implementation might create a PersistentVolume object pre-populated with a ClaimRef. Another implementation might look for PV's without a ClaimRef and check for a match. Either way, the return value is a PersistentVolume w/ ClaimRef bound to the claim (return value can also be nil)

A different process reacts to a newly created PV with phase Provisioning and creates the resource in the infrastructure. After fulfillment in the infrastructure, the volume is Available and the current claim sync period would bring the claim to Bound phase and ready for use.

OOTB behavior is the existing "binder" as a plugin, but I don't think I need to introduce a BinderPlugin when the provisioner can do whatever it wants to find/create a PV for a PVC.
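
The shape being described -- one call that returns either a PersistentVolume already carrying a ClaimRef, or nil -- might look roughly like this hypothetical interface (not code from this PR):

```go
package provisioning

import v1 "k8s.io/api/core/v1"

// Provisioner is a hypothetical rendering of the behavior described above:
// one implementation may create a new PV pre-populated with a ClaimRef,
// another may find an existing unbound PV and set its ClaimRef.
type Provisioner interface {
	// ProvisionFor returns a PersistentVolume whose ClaimRef points at the
	// claim, or nil if the plugin chooses not to (or cannot) satisfy it.
	ProvisionFor(claim *v1.PersistentVolumeClaim) (*v1.PersistentVolume, error)
}
```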

// Driver is the name of the plugin to use for this Recycler, or the name of the driver located beneath
// /usr/libexec/kubernetes/controllermanager-plugins/provisioning/exec.
// A Recycler must specify a driver/plugin name.
Driver string

@deads2k (Contributor) commented Nov 11, 2015

We're currently using the RecyclerPodTemplate right? If so, does this have to be in this particular pull? It seems like it would add something to argue about without providing a lot of power.

@markturansky (Author, Contributor) commented Nov 11, 2015

You're referencing the driver in this comment? If so, I agree and yes, just the pod template would suffice for today's functionality.

// A Recycler must specify a driver/plugin name.
Driver string
// Args is a map of key/value pairs passed to the driver (but not plugin) as arguments
Args *map[string]string

@deads2k (Contributor) commented Nov 11, 2015

If you're going to have a Driver and this is part of its config, I'd create a struct to hold and clearly relate these two pieces. That would also provide an easier way to indicate mutual exclusivity with RecyclerPodTemplate

@markturansky (Author, Contributor) commented Nov 11, 2015

This is probably correct, but for the short term, I'm thinking of dropping the executable driver model for the first pass. So long as we don't preclude it in the future.

The current recycler just needs a pod, so a template w/ labels to match specific volumes makes sense.

The provisioners need to work for Cloud APIs, and they do today. These, too, can be configured with what is proposed.

I think an executable driver is v2 of this feature.

Args *map[string]string
}
type ProvisionerConfig struct {

@deads2k (Contributor) commented Nov 11, 2015

Would it be easier for people to produce and debug PersistentVolumeProvisioner plugins if they were simply pods that were invoked similarly to recyclers? It seems like people may be more comfortable writing bash. That would allow a company to create their own custom procedure layered on top, and it would allow easy debugging.

@markturansky (Author, Contributor) commented Nov 11, 2015

Not sure what you mean about "custom procedure" on top, unless you're referring to the driver model. I think the driver model is not strictly needed. It's a good enhancement, but existing functionality can be implemented without it.

When/if we do the driver model, the pattern is clearly laid out in Networking in a way that makes it sane and safe. That pattern is being replicated in FlexVolume. It's the right way to go if we need this feature now, which I don't think we strictly do.

@rootfs (Member) commented Nov 11, 2015

A custom recycler may choose snapshotting to recycle the volume: once the claim is bound, the volume is snapshotted; during recycling, the snapshot is restored. In this case, both the PV claim and the recycler have to be stateful: the recycler must know the name of the snapshot that was created when the claim was bound.


Provisioning and Recycling both follow a plugin model where the plugin accepts generic arguments. A future enhancement can scan for drivers by name and consume the same arguments (following a similar pattern in [Networking](../../pkg/kubelet/network/exec/exec.go)).

The plugin model is similar to volume plugins, where each has a unique name and implements various interfaces. Arguments are passed as string key/value pairs to a plugin in `Init(host, args)`, which will be called exactly once before any plugin operation is invoked. Many provisioners can be configured using the same plugin with different arguments, such as "volumeType" for a Cloud API.
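
A rough Go rendering of the plugin contract described in this paragraph; only `Init(host, args)` comes from the text, the other names are assumptions:

```go
package provisioning

// ProvisioningPlugin sketches the plugin model described above: each plugin
// has a unique name and is initialized exactly once with string arguments
// before any other operation is invoked.
type ProvisioningPlugin interface {
	// Name returns the unique plugin name referenced from config.
	Name() string
	// Init is called exactly once, before any provisioning operation, with
	// the host interface and the key/value arguments from config.
	Init(host Host, args map[string]string) error
}

// Host is a placeholder for whatever the controller exposes to plugins
// (API client, recorder, etc.); the real proposal would define its own type.
type Host interface{}

// Two provisioners can share one plugin with different arguments,
// e.g. {"volumeType": "gp2"} vs {"volumeType": "io1"} for a cloud API.
```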

@smarterclayton (Contributor) commented Nov 19, 2015

I'm not yet sold on the model proposed here. Volumes suffer from a serious problem today - they aren't extensible except via checking code into Kube. A provisioner and a binder are controllers - the details of how they bind are really their own domain. An admin configures their applicability - that problem is solely the domain of administrators.

I'm going to describe an alternate model that I think is superior:

  1. We treat binders and provisioners as sets of controllers
  2. The implementation of those controllers is shared, and may be in the Kube source tree or another
  3. It must be easy for someone to glue together their own provisioner or binder with or without writing Go code
  4. When someone wants to write a new provisioner, they simply watch for PVCs based on some criteria
  5. If they are going to provision for the PVC, they annotate the PVC first (to take ownership) and then create a PV, then satisfy the PVC
  6. There should be a generic provisioner controller that can shell out to bash. The arguments to bash are details about the PVC. The arguments to the controller are a set of PVC labels to watch. The bash command is responsible for provisioning a PV and then returning 0 and the name of the new PV, or 1 and the exit code.
  7. There can be in-tree provisioners that replace the call out to bash with "call out to Go code compiled with me" via the interfaces described here.

The model above defines how anyone (not just a Kube programmer) can easily extend the logic. An admin can separate those controllers by their label matchers.

Kubernetes is intended to be a compositional system - whenever we add new abstractions, we have to clear a pretty high bar that the abstraction is easily composable. I think the proposal here does not go far enough to focus on that composability and extensibility.
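
A sketch of point 6 from the list above (the generic controller that shells out to bash); the script path, its arguments, and the function name are assumptions, not anything defined in this PR:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// provisionViaScript invokes an admin-supplied script with details about the
// claim and returns the name of the PV the script reports it created.
func provisionViaScript(scriptPath, claimNamespace, claimName, size string) (string, error) {
	out, err := exec.Command(scriptPath, claimNamespace, claimName, size).Output()
	if err != nil {
		return "", fmt.Errorf("provisioning script failed: %v", err)
	}
	pvName := strings.TrimSpace(string(out))
	if pvName == "" {
		return "", fmt.Errorf("provisioning script returned no PV name")
	}
	return pvName, nil
}

func main() {
	// Hypothetical usage; the script would create the backing storage and a PV.
	pv, err := provisionViaScript("/usr/local/bin/provision-pv.sh", "myns", "myclaim", "10Gi")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("provisioned:", pv)
}
```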

@markturansky (Author, Contributor) commented Nov 19, 2015

I believe the driver model in the networking plugins allows that, by calling out safely to a binary at a known location. That's your bash model.

Plugins require compilation into Kube, of course. If admins configure no plugins, PVCs are pending indefinitely and require outside help.

This leads me to think we need examples of !Go languages watching the apiserver and responding in kind. In the end, my suggested plugins running in a controller are also just working against the API. All controllers are the same that way.

@markturansky (Author, Contributor) commented Nov 19, 2015

Also, to your point about volume extensibility generally, there's FlexVolume (#13840) in the works to allow the same driver model to exec safely on a node to setup $FOO volume that's not compiled into Kube.


`PersistentVolumeRecycler` is a plugin that reclaims discarded resources.

### Goals

@pmorie (Member) commented Dec 1, 2015

I suggest making this the first part of the TLDR section, since it's the actual TLDR content. The list of types is not the kind of thing I expect to read after reading TLDR.

// Args is a map of key/value pairs passed to the driver (but not plugin) as arguments
Args *map[string]string
// SecurityContextName is the name of the security context to use when performing recycling operations
SecurityContextName string

@pmorie (Member) commented Dec 1, 2015

@markturansky I don't understand what this represents, since security contexts are not resources and cannot be looked up by name.

@pmorie (Member) commented Dec 1, 2015

I think the actual proposal would be a lot easier to review if the types were not part of this PR. IMO there's probably a fair amount of discussion left for this, and my experience has been that it's significantly easier for everyone involved if you don't try to carry API changes for an evolving API while a proposal is being discussed.

@markturansky (Author, Contributor) commented Dec 1, 2015

I agree, Paul. I included types in the beginning only to know that what I was writing in the proposal actually made sense and worked. I'll remove them because it is clutter.



## New Kinds:

`PersistentVolumeProvisioner` -- is labeled by admins and requested by users via pvc.Spec.PersistentVolumeProvisionerSelector. Some provisioners fulfill user requests by creating volumes on demand while others may seeks to match claims against existing, unbound volumes.

@ncdc (Member) commented Dec 8, 2015

s/may seeks/may seek/



`PersistentVolumeRecycler` -- contains pvr.Spec.PersistentVolumeSelector and recycles volumes matching the selector. PVs are labeled by admins to match.

@ncdc (Member) commented Dec 8, 2015

s/pvr/pvc/ ?

@markturansky (Author, Contributor) commented Dec 9, 2015

This requires a rewrite now with the latest from Storage SIG.

  1. I'll replace the entire config section with the impending ConfigMap feature.
  2. I'll clarify the plugin approach: some built-ins, a built-in for exec on master, and a built-in for "run and watch this pod until completion".
@markturansky (Author, Contributor) commented Feb 9, 2016

I added "description" to the configmap and left a placeholder for "parameterization" of plugins to avoid the combinatorial increase in number of provisioners by attribute count. I don't think we are blocked on that.

As for selectors, I think it stands that selectors on the claim must match the labels on the volume (first), then match labels on a provisioner that can provide a volume. In both cases, all $N selector requirements on pvc.Spec.Selector must be found among the $V labels on a volume. The PV may have more labels. Similarly, a provisioner may have more labels, such as "kubernetes.io/type":"provisioner". This follows Pod's NodeSelector.

@eparis (Member) commented Feb 10, 2016

I'm ok with this PR as it stands. I think we need to put serious thought into 'labels on selectors as parameters.' It's ok if the dynamic provisioner/configmap has 10 labels and the selector only 3. My problem comes when we start thinking about having 10 labels on the PVSelector and only 3 on the DP. And maybe, somehow, magically, 7 of those become parameters to the DP or something....

Since this PR doesn't talk about that, and requires exact matching, I think it is good as it stands...


# Abstract

This document proposes a model for the configuration and management of dynamically provisioned persistent volumes in Kubernetes. Familiarity [Persistent Volumes](../user-guide/persistent-volumes/) is assumed.

@saad-ali (Member) commented Feb 16, 2016

...Familiarity with...

example usage:
kubectl --namespace=system-namespace get configmap -l type=provisioner-config

@saad-ali (Member) commented Feb 16, 2016

Shouldn't the namespace be foobar, not system-namespace?

CLI flags:
--pv-provisioner-namespace=foobar
--pv-provisioner-label="type=provisioner-config

@saad-ali (Member) commented Feb 16, 2016

Missing closing quote

"labels": {
"type": "provisioner-config",
"storage-class": "gold",
"zone": "us-east"

@saad-ali (Member) commented Feb 16, 2016

Why do storage-class and zone need to be labels on the config map? The only match that will happen is on the config map name, which should be the storage class name.

@markturansky (Author, Contributor) commented Feb 16, 2016

We're not matching by configMap.Name. We're matching labels in the claim's PVSelector with configMap.Labels. This allows us to have "gold" mean one thing, instead of requiring the user to add "gold-east" and "gold-west" to their selector. Using separate labels ("storage-class"="gold" and "zone"="east") allows that.
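
As a concrete illustration of matching the claim's PVSelector against ConfigMap labels (using the modern apimachinery labels package purely for demonstration; the ConfigMap names are assumptions):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// The claim's PVSelector, expressed as a label selector.
	claimSelector := labels.SelectorFromSet(labels.Set{
		"storage-class": "gold",
		"zone":          "us-east",
	})

	// Labels on two provisioner ConfigMaps; each may carry extra labels
	// (e.g. "type": "provisioner-config") beyond what the claim asks for.
	configs := map[string]labels.Set{
		"gold-east": {"type": "provisioner-config", "storage-class": "gold", "zone": "us-east"},
		"gold-west": {"type": "provisioner-config", "storage-class": "gold", "zone": "us-west"},
	}

	for name, configLabels := range configs {
		if claimSelector.Matches(configLabels) {
			fmt.Println("claim selects provisioner config:", name) // gold-east
		}
	}
}
```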

"persistentVolumeSelector":{
"matchLabels":{
"storage-class":"gold",
"zone":"us-east",

@saad-ali (Member) commented Feb 16, 2016

This does not follow the example above. This should demonstrate the case where the cluster admin decided to bake the zone into his storage classes.

So zone should not be specified, and the storage-class should be gold-east.

The example below with selectors should show the counter example where the cluster admin did not bake the zone into the storage classes, and it is configurable via label.

@markturansky (Author, Contributor) commented Feb 16, 2016

Both are possible with the current model.

The admin could use "storage-class": "gold-east" and have that param map create a "gold" volume in the east zone.

The admin could also use "storage-class":"gold" and "zone":"east" as separate labels and the user would put both in their selector. The param map on the config is still set by the admin to be gold and east, respectively.

If we parameterize those config's params maps, the separate labels (for "zone") allow us to do that.

@eparis (Member) commented Feb 16, 2016

@saad-ali I know on the SIG call 2 weeks ago we talked a little bit about using the LabelSelector as a form of parameterization but I'm highly skeptical of that concept. I don't think we do anything like it elsewhere in kube. I kinda feel like 'well known' parameters to the DP should live in the API of the PVC itself (like size is a resource/request) and DP specific parameters should live maybe as annotations.

If we do decide to use the LabelSelector as a mechanism for parameterization of the ConfigMap/DP, I think that should be a follow-on PR to loosen the 'Must match everything' nature of this proposed Selector, and it should be given special thought in and of itself...

The same claim above (gold+east) can match on a PersistentVolume that is labelled similarly.

```
apiVersion: v1
```

@saad-ali (Member) commented Feb 16, 2016

We should add in the ability for a cluster admin to decide what labels get passed through to the provisioner. This can be done by adding a whitelist in the config map of labels to pass through to the provisioner.

Therefore if an admin, for example, wants to retain control of "zone", he can exclude that from the whitelisted labels. If a user passes it in, we can either fail the request or just silently not pass it through to the provisioner. In either case this is nice because the cluster admin is guaranteed control: even if a particular provisioner exposes powerful options A, B, and C, an end user can't use them unless the cluster admin explicitly enables them.

We can discuss this further in the Storage SIG meeting in the morning.
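
A sketch of the whitelist idea described in this comment; the function name and the choice to return the dropped keys are assumptions:

```go
package main

import "fmt"

// filterLabels keeps only the claim labels that the admin has whitelisted in
// the provisioner's config, and returns the dropped keys so the caller can
// decide whether to fail the request or silently ignore them.
func filterLabels(claimLabels map[string]string, whitelist map[string]bool) (passed map[string]string, dropped []string) {
	passed = map[string]string{}
	for k, v := range claimLabels {
		if whitelist[k] {
			passed[k] = v
		} else {
			dropped = append(dropped, k)
		}
	}
	return passed, dropped
}

func main() {
	claimLabels := map[string]string{"storage-class": "gold", "zone": "us-east-c1"}
	whitelist := map[string]bool{"storage-class": true} // admin keeps control of "zone"

	passed, dropped := filterLabels(claimLabels, whitelist)
	fmt.Println("passed to provisioner:", passed)  // storage-class only
	fmt.Println("dropped (or rejected):", dropped) // zone
}
```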

@markturansky (Author, Contributor) commented Feb 16, 2016

I understand. It seems easy to accommodate.

What about "zone" and relying on the end user for the correct value "us-east-c1"?

Override is disallowed by default (empty whitelist). It's up to the admin to accept responsibility? Do we need any validation now on inputs?

How much of this is "must have" versus "good next enhancement"? I don't think we're blocked on adding this.

I am open to it if it doesn't increase scope too much.

@eparis (Member) commented Feb 16, 2016

This is (to my knowledge) the first introduction of security policy and controls in kube... I think such a whitelist flies in the face of 'simple and obvious first.' There is no distinction between 'admin' and 'user' when we are talking about this stuff. I can just change the ConfigMap as a 'user.'

So really it's about keeping them from being able to shoot themselves in the foot? It seems like a future topic (and not necessarily a bad one) but not something we need to get moving (especially if the LabelSelector must match all labels, as I think this currently reads, in which case all labels must be on the whitelist)

@markturansky (Author, Contributor) commented Feb 17, 2016

Per our SIG discussion, the admin has the guaranteed fine-grained control you mention as a requirement, to the point of tedium. Parameterization and other tools to manage ConfigMaps were agreed to be not blocked by the current proposal.

The example above contains `"params": "${OPAQUE-JSON-MAP}"`, which is a string representing a JSON map of parameters. Each provisionable volume plugin will document and publish the parameters it accepts.

Plugin params can vary by whatever attributes are exposed by a storage provider. An AWS EBS volume, for example, can be a general purpose SSD (gp2), provisioned IOPS (io1) for speed and throughput, or magnetic (standard) for low cost. Additionally, EBS volumes can optionally be encrypted. Configuring each volume type with optional encryption would require 6 distinct provisioners. Administrators can mix and match any attributes exposed by the provisioning plugin to create distinct storage classes.
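
A small sketch of how a plugin might consume the opaque `params` JSON string; the keys shown are EBS-flavored assumptions, not a published schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The ConfigMap carries params as one opaque JSON string; each plugin
	// documents the keys it understands.
	raw := `{"volumeType": "io1", "iopsPerGB": "50", "encrypted": "true"}`

	params := map[string]string{}
	if err := json.Unmarshal([]byte(raw), &params); err != nil {
		fmt.Println("bad params:", err)
		return
	}

	// One plugin, many storage classes: a "gold" class might set io1 with
	// encryption, while a "bronze" class sets only {"volumeType": "standard"}.
	fmt.Println("volumeType:", params["volumeType"], "encrypted:", params["encrypted"])
}
```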

@saad-ali (Member) commented Feb 16, 2016

Should add in that plugins will be responsible for handling labels as well, and should fail(?) if they are not able to handle all the labels passed in. And that these labels should be propagated to the provisioned PV.

@markturansky (Author, Contributor) commented Feb 17, 2016

Added.

@markturansky force-pushed the markturansky:pv_config branch from f9cbcde to e32cdf2 on Feb 16, 2016
@markturansky (Author, Contributor) commented Feb 16, 2016

Just pushed another revision of the document. Some of the feedback above is implemented. More detail added.

@k8s-teamcity-mesosphere commented on e32cdf2, Feb 16, 2016

TeamCity OSS :: Kubernetes Mesos :: 4 - Smoke Tests Build 15996 outcome was SUCCESS
Summary: Tests passed: 1, ignored: 225 Build time: 00:08:48

@k8s-bot commented Feb 17, 2016

Can one of the admins verify that this patch is reasonable to test? (reply "ok to test", or if you trust the user, reply "add to whitelist")

If this message is too spammy, please complain to ixdy.

@kangarlou (Contributor) commented Feb 17, 2016

After yesterday's call, I had a chance to go over the design document and the discussion in this thread. It appears that there are two sets of parameters for dynamic provisioning:

  1. Parameters to select a dynamic provisioner: These parameters are defined in pvc.Spec.PersistentVolumeSelector and need to match with the labels defined in a provisioner’s ConfigMap.
  2. Arguments for the dynamic provisioner plugin:
    • Generic arguments (e.g., volume size).
    • Plugin-specific arguments as defined by the parameters field in ConfigMap.

The current design is flexible enough to support the following two use cases:

  1. Provider-agnostic provisioning: An abstract parameter in PersistentVolumeClaim (e.g., storage-class) gets mapped to a specific configuration on the storage backend by the provisioner.
  2. Provider-aware provisioning: By using provider-specific labels (say by using provisioner-name label) in ConfigMap and pvc.Spec.PersistentVolumeSelector, one can force selecting a certain provisioner to take advantage of the full capabilities of that backend.

I believe the above capabilities are sufficient for many use cases. However, as pointed out by @eparis, they lead to a “combinatorial explosion of DPs/configMaps” for user-supplied, numeric parameters. For instance, if one needs to set the IOPS limit, snapshot frequency, numbers of replicas, etc. for a volume, we end up with a large number of ConfigMaps/provisioners.

I would put IOPS in the same category as volume size as it's a generic parameter that denotes a required bound, so pvc.Spec.Resources may be the right place to set IOPS. However, I’m less clear about how we want to handle plugin-specific arguments like snapshot frequency, number of replicas, etc. Perhaps there has to be a field in PersistentVolumeClaim to set such parameters for provider-aware provisioning. The parameters field in ConfigMap seems most suitable for setting the default configuration as defined by the storage admin but not for setting user-supplied parameters.

The whitelist discussion seems to stem from the fact that not all labels are equal. Some labels denote a requirement and need to match across PVs, PVCs, and ConfigMaps, while others are desired and can potentially be ignored by the provisioner. Some labels may denote an absolute value (e.g., number of replicas, snapshot frequency) while some are limits (e.g., capacity, IOPS). In the absence of a mechanism to properly support user-supplied parameters, some labels may just be arguments to the provisioner, so matching across those labels may not make sense. We probably need to formalize different types of labels and the label matching criteria.

@markturansky (Author, Contributor) commented Feb 17, 2016

@kangarlou I think you eloquently highlighted the mismatch between some things that are labels and other things that are part of the request. Capacity is part of the request and other numerical values should be as well. Expressing a range of numerical values in ConfigMap would be bad.

@eparis and I have spoken a little bit about this mismatch. We can give prudent advice in our docs to keep these labels simple if we have a good way to enhance what the request for the resource looks like.

@markturansky (Author, Contributor) commented Apr 19, 2016

@jsafrane @childsb is this PR still needed?

@livelace commented Apr 20, 2016

Labels are needed. The ability to choose by labels is needed.

@k8s-bot commented May 16, 2016

GCE e2e build/test failed for commit e32cdf2.

Please reference the list of currently known flakes when examining this failure. If you request a re-test, you must reference the issue describing the flake.

k8s-github-robot added a commit that referenced this pull request May 23, 2016
Automatic merge from submit-queue

Proposal: persistent volume selector

Partially replaces #17056.  Another proposal will follow dealing with dynamic provisioning on top of storage classes.

@kubernetes/sig-storage
@eparis (Member) commented Jun 6, 2016

I'm going to close this PR, hopefully all discussion and thought have been taken into account in #26908

@eparis closed this Jun 6, 2016
k8s-github-robot added a commit that referenced this pull request Jul 14, 2016
Automatic merge from submit-queue

dynamic provisioning proposal

Proposal for dynamic provisioning using storage classes; supercedes #17056

@kubernetes/sig-storage
xingzhou pushed a commit to xingzhou/kubernetes that referenced this pull request Dec 15, 2016
Automatic merge from submit-queue

Proposal: persistent volume selector

Partially replaces kubernetes#17056.  Another proposal will follow dealing with dynamic provisioning on top of storage classes.

@kubernetes/sig-storage
xingzhou pushed a commit to xingzhou/kubernetes that referenced this pull request Dec 15, 2016
Automatic merge from submit-queue

dynamic provisioning proposal

Proposal for dynamic provisioning using storage classes; supercedes kubernetes#17056

@kubernetes/sig-storage