
Dynamic volume provisioning support #118

Open
rimusz opened this issue Nov 16, 2018 · 46 comments


@rimusz
Contributor

commented Nov 16, 2018

Dynamic volume provisioning support would be handy to have to test apps which need persistence.

@BenTheElder

Member

commented Nov 16, 2018

So we do have a default storage class (host-path), though I haven't really tested it out yet. This is required for some conformance tests.

const defaultStorageClassManifest = `# host-path based default storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/host-path`

@BenTheElder

Member

commented Nov 16, 2018

I think we just need to document this
/kind documentation

If not, I'll update this issue with what else is required and follow-up.
/help
/priority important-longterm

@alejandrox1

Contributor

commented Nov 16, 2018

I can help document this and test it out

@rimusz

Contributor Author

commented Nov 17, 2018

Yes, there is a default storage class, but it doesn't work for dynamic volume provisioning; deployments get stuck waiting for PVCs.
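
A minimal sketch of the symptom (names and output here are illustrative, not taken from this thread): a PVC against the stock standard class never gets a volume and just sits in Pending.

# Illustrative only: create a small PVC, which falls through to the default "standard" class
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc test-pvc
# NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
# test-pvc   Pending                                      standard       10s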


@davidz627

Member

commented Nov 19, 2018

hostPath doesn't have dynamic provisioning, as a hostPath volume is tightly tied to the final location of the pod (since you're exposing the host machine's storage). Therefore it is only available for use in pre-provisioned/inline volumes.

For Dynamic Volume Provisioning without a cloud provider you can try nfs: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs

If you're on a cloud provider it would probably be easiest to use the cloud volumes.
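
For reference, the pre-provisioned route looks roughly like this (a sketch; the path and names are assumptions, not something from this thread): you create a hostPath PV by hand and a PVC that binds to it, with no dynamic provisioning involved.

# Sketch (path/names are assumptions): a hand-made hostPath PV plus a PVC that binds to it
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/manual-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-pvc
spec:
  storageClassName: ""   # empty string opts out of the default class so the claim binds to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF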

/cc @msau42


@msau42


commented Nov 19, 2018

Sorry, can someone explain some context to me? Is this for testing only, or do we actually want to run real production workloads? If it's testing only, there's a hostPath dynamic provisioner that uses the new volume topology feature to schedule correctly to nodes. However, it doesn't handle anything like capacity isolation or accounting.

I forgot: someone at Rancher was working on a project for this, but I can't remember the name at the moment :(


@rimusz

Contributor Author

commented Nov 20, 2018

OK, I've been messing with storage:

  1. https://github.com/kubernetes-incubator/external-storage/tree/master/nfs fails to work in kind.
  2. https://github.com/rancher/local-path-provisioner / https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume didn't go well for me either.

I had luck with https://github.com/rimusz/hostpath-provisioner, which is based on https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/tree/master/examples/hostpath-provisioner

  1. deleted the default storage class which comes with kind
  2. installed the hostpath-provisioner Helm chart, which installs a new default storage class for hostPath
    Then installed 3 releases of PostgreSQL with Helm to test that it can handle multiple pods, did the same for MySQL, and all worked fine:
mysql         mysql-5dbd494d67-fw7g6                         1/1     Running   0          4m
mysql2        mysql2-67976cdbc9-zd59h                        1/1     Running   0          4m
mysql3        mysql3-c79b9d5dd-tfkgp                         1/1     Running   0          4m
mysql4        mysql4-66d69d4ffc-l2c87                        1/1     Running   0          38s
pg            pg-postgresql-0                                1/1     Running   0          34m
pg2           pg2-postgresql-0                               1/1     Running   0          31m
pg3           pg3-postgresql-0                               1/1     Running   0          28m
$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pvc-14f6a7e0-ecc8-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    pg3/data-pg3-postgresql-0   hostpath                27m
pvc-46e3201b-ecc7-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    pg/data-pg-postgresql-0     hostpath                33m
pvc-6116f05a-eccb-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    mysql/mysql                 hostpath                4m
pvc-7027dff3-eccb-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    mysql2/mysql2               hostpath                3m
pvc-7668b7ae-eccb-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    mysql3/mysql3               hostpath                3m
pvc-99b0f805-ecc7-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    pg2/data-pg2-postgresql-0   hostpath                31m
pvc-f191c00d-eccb-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    mysql4/mysql4               hostpath                11s

How it looks inside the kind container:

$ docker exec -it a21a27399140 bash
root@kind-1-control-plane:/# ls -alh /var/kubernetes
total 36K
drwxr-xr-x 9 root root   4.0K Nov 20 13:55 .
drwxr-xr-x 1 root root   4.0K Nov 20 13:14 ..
drwxrwxrwx 5  999 docker 4.0K Nov 20 13:51 mysql-mysql-pvc-6116f05a-eccb-11e8-906e-02425a7d6bdf
drwxrwxrwx 5  999 docker 4.0K Nov 20 13:52 mysql2-mysql2-pvc-7027dff3-eccb-11e8-906e-02425a7d6bdf
drwxrwxrwx 5  999 docker 4.0K Nov 20 13:52 mysql3-mysql3-pvc-7668b7ae-eccb-11e8-906e-02425a7d6bdf
drwxrwxrwx 5  999 docker 4.0K Nov 20 13:55 mysql4-mysql4-pvc-f191c00d-eccb-11e8-906e-02425a7d6bdf
drwxrwxrwx 3 1001   1001 4.0K Nov 20 13:22 pg-data-pg-postgresql-0-pvc-46e3201b-ecc7-11e8-906e-02425a7d6bdf
drwxrwxrwx 3 1001   1001 4.0K Nov 20 13:24 pg2-data-pg2-postgresql-0-pvc-99b0f805-ecc7-11e8-906e-02425a7d6bdf
drwxrwxrwx 3 1001   1001 4.0K Nov 20 13:28 pg3-data-pg3-postgresql-0-pvc-14f6a7e0-ecc8-11e8-906e-02425a7d6bdf

I think for the time being I will stick with this solution; it's easy to install and it works very well :-)

It should not be too difficult to port it to kind; @munnerz could do it in the blink of an eye :-)

Also, docker4mac uses a hostPath-based provisioner, which is easier to implement compared to the local-volume one.
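
Roughly, the steps above boil down to something like the following (a sketch only; Helm v2 syntax is assumed, and the chart repo setup follows the rimusz/hostpath-provisioner README, not reproduced here):

# Sketch of the steps (Helm v2 syntax assumed; <repo> is whatever name you gave the chart repo)
kubectl delete storageclass standard                                    # drop the non-working default class shipped with kind
helm install --name hostpath-provisioner <repo>/hostpath-provisioner   # installs a new default "hostpath" class
helm install --name pg stable/postgresql                                # a test release whose PVC should now be provisioned
kubectl get pv                                                          # the dynamically created PVs show up as Bound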

@yasker


commented Nov 20, 2018

@rimusz What's the issue with https://github.com/rancher/local-path-provisioner? Just curious.

@rimusz

Contributor Author

commented Nov 20, 2018

It did not work for me; the PV did not get created, so I did not spend too much time digging into why.
hostpath-provisioner worked for me straight away :-)

@yasker


commented Nov 20, 2018

@rimusz Weird... If you get time, can you open an issue with the log? You can see how to get the log using kubectl -n local-path-storage logs -f local-path-provisioner-d744ccf98-xfcbk (as seen in the doc). Though if you don't have time, I totally understand.

@rimusz

Contributor Author

commented Nov 21, 2018

@yasker next time I get free cycles I will take a look at local-path-provisioner again.
But when kind supports multi-node we will need something else, or maybe local-path-provisioner can be used there too :)

@msau42


commented Nov 21, 2018

Oh, if you are currently only supporting single node, then the in-tree hostPath provisioner should have worked fine. It's the same one that local-up-cluster.sh uses.

@BenTheElder

Member

commented Nov 21, 2018

Ah, yeah, currently only single-node, and it should indeed look like hack/local-up-cluster.sh's default storage. Multi-node will happen in the near future, I suspect; an implementation exists but is not in currently and may take a bit.

@BenTheElder BenTheElder added this to the 2019 goals milestone Dec 18, 2018

@phisco

Contributor

commented Jan 10, 2019

Confirming that @rimusz's solution is working for me too.

@BenTheElder

Member

commented Jan 10, 2019

Exciting! Perhaps we should ship this by default then :-)

@ks2211


commented Jan 17, 2019

Confirming this solution works for me as well. If anyone is interested in using this solution without going through Helm, I converted the chart to plain Kubernetes resource YAML:

kubectl create -f filebelow.yaml

---
# Source: hostpath-provisioner/templates/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: hostpath

---
# Source: hostpath-provisioner/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
---
# Source: hostpath-provisioner/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: release-name-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
# Source: hostpath-provisioner/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: release-name-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: release-name-hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: release-name-hostpath-provisioner
    namespace: default
---
# Source: hostpath-provisioner/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: release-name-hostpath-provisioner-leader-locking
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["list", "watch", "create"]
---
# Source: hostpath-provisioner/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: release-name-hostpath-provisioner-leader-locking
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: release-name-hostpath-provisioner-leader-locking
subjects:
  - kind: ServiceAccount
    name: release-name-hostpath-provisioner
    namespace: default
---
# Source: hostpath-provisioner/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: release-name-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: hostpath-provisioner
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hostpath-provisioner
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: release-name-hostpath-provisioner
      containers:
        - name: hostpath-provisioner
          image: "quay.io/rimusz/hostpath-provisioner:v0.2.1"
          imagePullPolicy: IfNotPresent
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: pv-volume
              mountPath: /mnt/hostpath
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
            
      volumes:
        - name: pv-volume
          hostPath:
            path: /mnt/hostpath
@BenTheElder

Member

commented Jan 17, 2019

Very cool! I haven't managed to dig too deep into this yet (just starting to look more now). Is it feasible to adapt to multi-node at all? (I'd guess not so much...)
If not, perhaps we should check in documentation / an example config for how to do this in the user guide(s).

@BenTheElder

Member

commented Jan 17, 2019

I do really think we should try to offer a solution to this, and FWIW I think single-node clusters will be most common, but multi-node exists in limited capacity now and will be important for some CI scenarios.

@BenTheElder

Member

commented Mar 10, 2019

I would like to see a solution ship by default in 0.3. I will try to do some more testing with Local Path Provisioner soon, thank you @yasker 🙏

@jdolitsky


commented Mar 21, 2019

Using kind for the first time tonight (...wow, and thank you) and I came across this issue.

Just wanted to confirm that rancher/local-path-provisioner does indeed successfully and dynamically provision volumes with a fresh install of both kind and local-path-provisioner.

Had to make a few tweaks after install in order to toggle the default storage class:

# Install local-path-provisioner
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

# Set local-path as the default storage class (and unset the current default, i.e. standard)
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false", "storageclass.beta.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true", "storageclass.beta.kubernetes.io/is-default-class":"true"}}}'
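
You can then confirm the swap took effect (the output below is illustrative, not copied from my cluster):

# Quick check that the default class flipped
kubectl get storageclass
# NAME                   PROVISIONER               AGE
# local-path (default)   rancher.io/local-path     1m
# standard               kubernetes.io/host-path   5m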

I was then able to install the HackMD helm chart with default values, which requires 2 persistent volumes, and the PVs were auto-created and the PVCs became bound pretty much immediately:

$ helm install stable/hackmd
NAME:   loping-dingo
...
==> v1/PersistentVolumeClaim
NAME                        STATUS   VOLUME      CAPACITY  ACCESS MODES  STORAGECLASS  AGE
loping-dingo-postgresql     Pending  local-path  0s
loping-dingo-hackmd         Pending  local-path  0s
...


$ kubectl get pvc
NAME                         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
loping-dingo-hackmd          Bound     pvc-1fc9f202-4ba9-11e9-9396-0242d8d39a56   2Gi        RWO            local-path     6s
loping-dingo-postgresql      Bound     pvc-1fc933d4-4ba9-11e9-9396-0242d8d39a56   8Gi        RWO            local-path     6s
...


$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS   REASON   AGE
pvc-1fc933d4-4ba9-11e9-9396-0242d8d39a56   8Gi        RWO            Delete           Bound    default/loping-dingo-postgresql       local-path              17s
pvc-1fc9f202-4ba9-11e9-9396-0242d8d39a56   2Gi        RWO            Delete           Bound    default/loping-dingo-hackmd           local-path              16s
...

$ kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
loping-dingo-hackmd-85f8b9cbf9-xm9rh          1/1     Running   0          55s
loping-dingo-postgresql-6ffbf7d5ff-98w26      1/1     Running   0          55s

Would love to see the next release ship with local-path-provisioner baked-in and as default ❤️

@joejulian

Contributor

commented Mar 22, 2019

Volumes are not being automatically provisioned for the "standard" host-path StorageClass because kube-controller-manager doesn't have --enable-hostpath-provisioner set.
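
A sketch of how that flag could be passed through kind's kubeadm config patches (the API versions below are assumptions for the kind/kubeadm versions of that era; adjust for your setup):

# Sketch (apiVersions are assumptions): pass --enable-hostpath-provisioner to kube-controller-manager
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta1
  kind: ClusterConfiguration
  metadata:
    name: config
  controllerManager:
    extraArgs:
      enable-hostpath-provisioner: "true"
EOF
kind create cluster --config kind-config.yaml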

@BenTheElder

Member

commented Mar 25, 2019

#397 will enable the Kubernetes built-in host-path provisioner, but for multi-node we probably still want local-path-provisioner. Today we meet about kind 0.3 planning 😅

@BenTheElder

Member

commented Mar 25, 2019

Should be fixed for out-of-the-box clusters (thanks @joejulian!); re-opening to track for multi-node clusters.

@BenTheElder

Member

commented Mar 25, 2019

/remove-help
/assign
/lifecycle active

@BenTheElder

Member

commented May 1, 2019

It looks like we will need ARM and PPC images, punting to the next milestone since we're overdue for 0.3

@armab


commented Jun 19, 2019

Just a relevant heads-up here: switching to rancher/local-path-provisioner as the default will fix #622, where some official Helm charts failed to work in kind with the existing hostPath provisioner due to its limitations, such as volume/directory write permissions.

@aojea

Contributor

commented Aug 12, 2019

It looks like we will need ARM and PPC images, punting to the next milestone since we're overdue for 0.3

Seems that's WIP rancher/local-path-provisioner#24

@meyerbro


commented Aug 14, 2019

Hey @yasker, I'm having the same issue as @rimusz had... No PV gets created... Tried with 2 clusters (v1.14.1 and v1.13.5)... I created an issue there: rancher/local-path-provisioner#39
