
Issue with /docs/tasks/configure-pod-container/configure-persistent-volume-storage/ #2803

Closed
1 of 2 tasks
shufflingB opened this issue Mar 14, 2017 · 6 comments

Comments

@shufflingB

This is a...

  - [ ] Feature Request
  - [x] Bug Report

Problem:
The example no longer works with minikube version v0.17.1

With the existing PVC config, the current version of minikube automatically provisions a new volume and ignores the manually created one with the index.html file in it.

This causes the pod to mount the wrong volume, so the rest of the example doesn't work.

E.g. I end up seeing:

kubectl get pvc
NAME            STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
task-pv-claim   Bound     pvc-2f9a66f1-082f-11e7-ace3-7a0eb8dc83f6   3Gi        RWO           11h

Instead of what I should be seeing, which is:

kubectl get pvc
NAME            STATUS    VOLUME           CAPACITY   ACCESSMODES   AGE
task-pv-claim   Bound     task-pv-volume   10Gi       RWO           11h

This worked for me when I tried it about two weeks ago. The minikube provisioning config is at its default (as it was before), but I have picked up new versions of minikube and Docker since then; my guess is that something in that set of updates changed the default provisioning policy.

Proposed Solution:
The example can be made to work again by telling the system not to provision automatically, using an empty storage-class annotation in the PVC, e.g.:

...
metadata:
  name: task-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
...
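For context, the full claim from the task page would then look something like this (a sketch: the name, access mode, and 3Gi request follow the outputs above, and the beta annotation is the pre-1.6 form):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
  annotations:
    # Empty storage class disables dynamic provisioning for this claim,
    # so it binds to the manually created task-pv-volume instead
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```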

Page to Update:
http://kubernetes.io/...

Kubernetes Version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

@shufflingB
Author

I'm not quite sure where the correct fix lies here, i.e. whether it's a docs issue or a problem with minikube. Either way, the example https://kubernetes.io/docs/tutorials/stateful-application/run-stateful-application/ might benefit from being updated to explain, or reference, the correct thing to do with minikube.

@dlorenc
Contributor

dlorenc commented Mar 14, 2017

So I think that this is the "correct" behavior, but it definitely did change. What happens if you try this example in something like GKE instead of minikube?

@shufflingB
Author

In GKE, the example as it currently stands, i.e. without the annotation, seems to work correctly: the PVC binds to the expected PV and no unexpected PVs are dynamically created.

@dlorenc
Contributor

dlorenc commented Mar 14, 2017

Thanks, it looks like this behavior is a bit under-specified. GKE uses a different mechanism for dynamic provisioning right now. I created a cluster and don't see any storageclass objects:

$ kubectl get storageclass --all-namespaces
No resources found.

Which I think means it's going through this codepath: https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-controller-manager/app/plugins.go#L148

From the docs here it looks like users might have to explicitly disable dynamic provisioning:
https://kubernetes.io/docs/user-guide/persistent-volumes/#dynamic

Claims that request the class "" effectively disable dynamic provisioning for themselves

I'll keep digging around.

@shufflingB
Author

shufflingB commented Mar 14, 2017

I think what happens with the example in GKE is that, because the PV uses the hostPath plugin (as opposed to GCEPersistentDisk), it ends up being created only on the node where the pod that uses it (via the claim) is scheduled (that's what it looked like when I checked with gcloud compute ssh gke-cluster-*).

I've not really played with storage classes yet, but I would guess this might be why kubectl get storageclass --all-namespaces does not show anything, i.e. the disk space is just coming out of what is already allocated to the nodes and visible with gcloud compute disks list.
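For reference, the PV in that task is a hostPath volume along these lines (reproduced from memory of the tutorial, so treat it as a sketch; the path is illustrative, while the name and 10Gi capacity match the expected output above):

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # A directory on the node's local filesystem; it exists only on the
    # node where it was created, hence the per-node behaviour described above
    path: "/mnt/data"
```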

Thanks for investigating this.

@prydonius
Contributor

@dlorenc what version of Kubernetes was your GKE cluster on? In 1.5 and lower, GKE used an alpha dynamic provisioning feature that didn't use storage classes. However, in 1.6 GKE now installs a default storage class and beta dynamic provisioning is used.

I can confirm that in GKE with Kubernetes 1.6, the PVC in the current example gets a dynamically provisioned volume instead of getting bound to task-pv-volume. To bind to task-pv-volume, storageClassName: "" must be set in the PVC.
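Concretely, with the 1.6-style field the claim would look something like this (a sketch based on the comment above; name, access mode, and size follow the earlier outputs):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  # An explicit empty string (not an omitted field) requests the "" class,
  # disabling dynamic provisioning so the claim binds to a pre-created PV
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```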

prydonius pushed a commit to prydonius/kubernetes.github.io that referenced this issue Jun 2, 2017
When binding to a manually created PersistentVolume, the claim must
disable dynamic provisioning by specifying an empty storage class.

Fixes kubernetes#2803
prydonius pushed a commit to prydonius/kubernetes.github.io that referenced this issue Jun 9, 2017
When binding to a manually created PersistentVolume, the claim must
disable dynamic provisioning by specifying an empty storage class.

Fixes kubernetes#2803
chenopis pushed a commit that referenced this issue Jun 14, 2017
* update pod persistent volume example storage class

When binding to a manually created PersistentVolume, the claim must
disable dynamic provisioning by specifying an empty storage class.

Fixes #2803

* use specific storageclass

* revert gibibytes -> gigabytes change