Storage

Pods by themselves are useful, but many workloads require exchanging data between containers, or persisting some form of data.

For this task we have Volumes, Persistent Volumes, Persistent Volume Claims, and Storage Classes.

Index


Before you Begin

Kind comes with a default storage class provisioner that can get in the way when trying to explore how storage is used within a Kubernetes cluster. For these exercises, it should be disabled.

$ kubectl annotate --overwrite sc standard  storageclass.kubernetes.io/is-default-class="false"

When done with the exercises, re-enable it by setting the annotation back to "true".

$ kubectl annotate --overwrite sc standard  storageclass.kubernetes.io/is-default-class="true"

Volumes

Volumes within Kubernetes are storage that is tied to the Pod’s lifecycle.

A pod can have one or more types of volumes attached to it. These volumes are consumable by any of the containers within the pod.

They can survive Pod restarts; however, their durability beyond that depends on the Volume Type.
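For example, an emptyDir volume is removed along with its Pod, while a hostPath volume persists on the node's filesystem. A minimal sketch contrasting the two (the names and paths here are illustrative, not from the exercises):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-types-example   # illustrative name
spec:
  containers:
  - name: app
    image: alpine:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
    - name: node-data
      mountPath: /data
  volumes:
  # emptyDir: created when the Pod is scheduled, deleted when the Pod is removed
  - name: scratch
    emptyDir: {}
  # hostPath: a directory on the node; its contents outlive the Pod
  - name: node-data
    hostPath:
      type: DirectoryOrCreate
      path: "/tmp/node-data"
```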


Exercise: Using Volumes with Pods

Objective: Understand how to add and reference volumes to a Pod and their containers.


  1. Create a Pod from the manifest manifests/volume-example.yaml or the yaml below.

manifests/volume-example.yaml

apiVersion: v1
kind: Pod
metadata:
  name: volume-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: content
    image: alpine:latest
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          echo $(date)"<br />" >> /html/index.html;
          sleep 5;
        done
  volumes:
  - name: html
    emptyDir: {}

Command

$ kubectl create -f manifests/volume-example.yaml

Note the relationship between volumes in the Pod spec, and the volumeMounts directive in each container.

  1. Exec into the content container within the volume-example Pod, and cat the /html/index.html file.
$ kubectl exec volume-example -c content -- /bin/sh -c "cat /html/index.html"

You should see a list of date time-stamps. This is generated by the script being used as the entrypoint (args) of the content container.

  1. Now do the same within the nginx container, using cat to view /usr/share/nginx/html/index.html.
$ kubectl exec volume-example -c nginx -- /bin/sh -c "cat /usr/share/nginx/html/index.html"

You should see the same file.

  1. Now try to append "nginx" to index.html from the nginx container.
$ kubectl exec volume-example -c nginx -- /bin/sh -c "echo nginx >> /usr/share/nginx/html/index.html"

It should error out and complain about the file being read only. The nginx container has no reason to write to the file, and mounts the same Volume as read-only. Writing to the file is handled by the content container.


Summary: Pods may have multiple volumes using different Volume types. Those volumes in turn can be mounted to one or more containers within the Pod by adding them to the volumeMounts list. This is done by referencing their name and supplying their mountPath. Additionally, volumes may be mounted both read-write or read-only depending on the application, enabling a variety of use-cases.


Clean Up Command

kubectl delete pod volume-example

Back to Index



Persistent Volumes and Claims

Persistent Volumes and Claims work in conjunction to serve as the direct method by which a Pod consumes persistent storage.

A PersistentVolume (PV) is a representation of a cluster-wide storage resource that is linked to a backing storage provider - NFS, GCEPersistentDisk, RBD, etc.

A PersistentVolumeClaim (PVC) acts as a namespaced request for storage that satisfies a set of requirements instead of mapping to the storage resource directly.

This separation of PV and PVC ensures that an application's 'claim' for storage is portable across numerous backends or providers.


Exercise: Understanding Persistent Volumes and Claims

Objective: Gain an understanding of the relationship between Persistent Volumes, Persistent Volume Claims, and the multiple ways they may be selected.


  1. Create PV pv-sc-example from the manifest manifests/pv-sc-example.yaml or use the yaml below. Note that it is labeled with type=hostpath, its storageClassName is set to mypvsc, and it uses Delete as its persistentVolumeReclaimPolicy.

manifests/pv-sc-example.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-sc-example
  labels:
    type: hostpath
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: mypvsc
  hostPath:
    type: DirectoryOrCreate
    path: "/data/mypvsc"

Command

$ kubectl create -f manifests/pv-sc-example.yaml
  1. Once created, list the available Persistent Volumes.
$ kubectl get pv

You should see the single PV pv-sc-example with the status Available, meaning no claim has been issued that targets it.

  1. Create PVC pvc-selector-example from the manifest manifests/pvc-selector-example.yaml or the yaml below.

manifests/pvc-selector-example.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-selector-example
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: hostpath

Command

$ kubectl create -f manifests/pvc-selector-example.yaml

Note that the selector targets type=hostpath.

  1. Then describe the newly created PVC
$ kubectl describe pvc pvc-selector-example

The PVC pvc-selector-example should be in a Pending state with the Error Event FailedBinding and the message no Persistent Volumes available for this claim and no storage class is set. If a PV is given a storageClassName, ONLY PVCs that request that Storage Class may use it, even if the selector has a valid target.
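A claim can combine both mechanisms. As a hedged sketch (not part of the exercises), a PVC like the following would bind to pv-sc-example by requesting its Storage Class and matching its label:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-combined-example   # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: mypvsc     # must match the PV's storageClassName
  selector:
    matchLabels:
      type: hostpath           # must also match the PV's labels
  resources:
    requests:
      storage: 1Gi
```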

  1. Now create the PV pv-selector-example from the manifest manifests/pv-selector-example.yaml or the yaml below.

manifests/pv-selector-example.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-selector-example
  labels:
    type: hostpath
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    type: DirectoryOrCreate
    path: "/data/mypvselector"

Command

$ kubectl create -f manifests/pv-selector-example.yaml
  1. Give it a few moments and then look at the Persistent Volumes once again.
$ kubectl get pv

The PV pv-selector-example should now be in a Bound state, meaning that a PVC has been mapped or "bound" to it. Once bound, NO other PVCs may make a claim against the PV.

  1. Create the pvc pvc-sc-example from the manifest manifests/pvc-sc-example.yaml or use the yaml below.

manifests/pvc-sc-example.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sc-example
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: mypvsc
  resources:
    requests:
      storage: 1Gi

Command

$ kubectl create -f manifests/pvc-sc-example.yaml

Note that this PVC has a storageClassName reference and no selector.

  1. Give it a few seconds and then view the current PVCs.
$ kubectl get pvc

The pvc-sc-example should be bound to the pv-sc-example Volume. It consumed the PV with the corresponding storageClassName.

  1. Delete both PVCs.
$ kubectl delete pvc pvc-sc-example pvc-selector-example
  1. Then list the PVs once again.
$ kubectl get pv

The pv-sc-example will not be listed. It was created with a persistentVolumeReclaimPolicy of Delete, meaning that as soon as its PVC was deleted, the PV itself was deleted.

PV pv-selector-example was created without specifying a persistentVolumeReclaimPolicy, and so received the default for PVs: Retain. Its state of Released means that its associated PVC has been deleted. In this state no other PVCs may claim it, even if pvc-selector-example were created again; the PV must be manually reclaimed or deleted. This preserves the Volume's contents in the event its PVC is accidentally deleted, giving an administrator time to do something with the data before reclaiming it.
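One way an administrator might make a Released PV claimable again, once the data has been dealt with, is to remove the stale claimRef left behind by the deleted PVC. A sketch (not run in these exercises):

```shell
# Clear the old binding; the PV returns to the Available state.
kubectl patch pv pv-selector-example --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```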

  1. Delete the PV pv-selector-example.
$ kubectl delete pv pv-selector-example

Summary: Persistent Volumes and Persistent Volume Claims, when bound together, provide the primary method of attaching durable storage to Pods. Claims may reference PVs by specifying a storageClassName, targeting them with a selector, or a combination of both. Once a PV is bound to a PVC the relationship is exclusive: no further PVCs may issue a claim against the PV, even if the binding PVC is deleted. How PVs are reclaimed is configured via the PV attribute persistentVolumeReclaimPolicy: they are either deleted automatically when set to Delete, or require manual intervention when set to Retain as a data-preservation safeguard.


Exercise: Using PersistentVolumeClaims

Objective: Learn how to consume a Persistent Volume Claim within a Pod, and explore some of the ways they may be used.


  1. Create PV and associated PVC html using the manifest manifests/html-vol.yaml

manifests/html-vol.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: html
  labels:
    type: hostpath
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: html
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    type: DirectoryOrCreate
    path: "/tmp/html"

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: html
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: html
  resources:
    requests:
      storage: 1Gi

Command

$ kubectl create -f manifests/html-vol.yaml
  1. Create Deployment writer from the manifest manifests/writer.yaml or use the yaml below. It is similar to the volume-example Pod from the first exercise, but now uses a persistentVolumeClaim Volume instead of an emptyDir.

manifests/writer.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: writer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: writer
  template:
    metadata:
      labels:
        app: writer
    spec:
      containers:
      - name: content
        image: alpine:latest
        volumeMounts:
        - name: html
          mountPath: /html
        command: ["/bin/sh", "-c"]
        args:
        - while true; do
          date >> /html/index.html;
          sleep 5;
          done
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: html

Command

$ kubectl create -f manifests/writer.yaml

Note that the claimName references the previously created PVC defined in the html-vol manifest.

  1. Create a Deployment and Service reader from the manifest manifests/reader.yaml or use the yaml below.

manifests/reader.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: reader
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reader
  template:
    metadata:
      labels:
        app: reader
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: html

---

apiVersion: v1
kind: Service
metadata:
  name: reader
spec:
  selector:
    app: reader
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Command

$ kubectl create -f manifests/reader.yaml
  1. With the reader Deployment and Service created, use kubectl proxy to view the reader Service.
$ kubectl proxy

URL

http://127.0.0.1:8001/api/v1/namespaces/default/services/reader/proxy/

The reader Pods can reference the same Claim as the writer Pod. This is possible because the PV and PVC were created with the access mode ReadWriteMany.

  1. Now try to append "nginx" to index.html from one of the reader Pods.
$ kubectl exec reader-<pod-hash>-<pod-id> -- /bin/sh -c "echo nginx >> /usr/share/nginx/html/index.html"

The reader Pods have mounted the Volume as read-only. As in the first exercise, the command should error out with a message complaining about a read-only filesystem.


Summary: Using Persistent Volume Claims with Pods is straightforward. The attribute persistentVolumeClaim.claimName in the Pod's Volume definition simply references the name of the desired PVC. Multiple Pods may reference the same PVC as long as the access mode supports it.
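Read-only access can also be requested where the claim itself is referenced, in addition to the readOnly flag on each container's volumeMounts entry. A sketch of such a volume definition:

```yaml
volumes:
- name: html
  persistentVolumeClaim:
    claimName: html
    readOnly: true   # every mount of this volume in the Pod is read-only
```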


Clean Up Command

kubectl delete -f manifests/reader.yaml -f manifests/writer.yaml -f manifests/html-vol.yaml

Back to Index



Storage Classes

Storage Classes are an abstraction on top of an external storage provider. They work directly with the external storage system to enable dynamic provisioning of Persistent Volumes, removing the need for a cluster admin to pre-provision them.
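A Storage Class is itself a cluster-scoped object. A hedged sketch using the local-path provisioner that kind ships with (the name and field values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-local-path                      # illustrative name
provisioner: rancher.io/local-path
reclaimPolicy: Delete                      # dynamically provisioned PVs inherit this policy
volumeBindingMode: WaitForFirstConsumer    # delay binding until a Pod consumes the claim
```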


Exercise: Exploring StorageClasses

Objective: Understand how it's possible for a Persistent Volume Claim to consume dynamically provisioned storage via a Storage Class.


  1. Re-enable the kind default-storageclass, and wait for it to become available
$ kubectl annotate --overwrite sc standard  storageclass.kubernetes.io/is-default-class="true"
  1. Describe the new Storage Class
$ kubectl describe sc standard

Note the fields IsDefaultClass, Provisioner, and ReclaimPolicy. The Provisioner attribute references the "driver" for the Storage Class. kind comes with its own driver, rancher.io/local-path, which simply mounts a host path from the node as a Volume.

  1. Create PVC pvc-standard from the manifest manifests/pvc-standard.yaml or use the yaml below.

manifests/pvc-standard.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-standard
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi

Command

$ kubectl create -f manifests/pvc-standard.yaml
  1. Describe the PVC pvc-standard
$ kubectl describe pvc pvc-standard

The Events list the actions that occurred when the PVC was created: the external provisioner creates a Volume for the claim default/pvc-standard and assigns it the name pvc-<pvc-standard uid>.

  1. List the PVs.
$ kubectl get pv

The PV pvc-<pvc-standard uid> will be the exact size of the associated PVC.

  1. Now create the PVC pvc-selector-example from the manifest manifests/pvc-selector-example.yaml or use the yaml below.

manifests/pvc-selector-example.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-selector-example
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: hostpath

Command

$ kubectl create -f manifests/pvc-selector-example.yaml
  1. List the PVCs.
$ kubectl get pvc

The PVC pvc-selector-example was bound to a PV automatically, even without a valid selector target. The standard Storage Class is configured as the default, meaning that any PVC that does not request a specific Storage Class is assigned the default Storage Class.

  1. Delete both PVCs.
$ kubectl delete pvc pvc-standard pvc-selector-example
  1. List the PVs once again.
$ kubectl get pv

The PVs were automatically reclaimed following the ReclaimPolicy that was set by the Storage Class.


Summary: Storage Classes provide a method of dynamically provisioning Persistent Volumes from an external storage system. The Volumes they provision behave like any other PV and are garbage collected according to the class's ReclaimPolicy. A class may be targeted by name using storageClassName within a Persistent Volume Claim, or a Storage Class may be configured as the default, ensuring that Claims can be fulfilled even when there is no valid selector target.
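Conversely, a claim can opt out of the default class entirely by setting storageClassName to the empty string, restricting it to statically provisioned PVs that have no class. A sketch (the name is illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-no-class-example   # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # "" disables dynamic provisioning for this claim
  resources:
    requests:
      storage: 1Gi
```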


Back to Index



Helpful Resources


Back to Index