diff --git a/workshop/Lab0/README.md b/workshop/Lab0/README.md
index ed0d7f1..b967e2a 100644
--- a/workshop/Lab0/README.md
+++ b/workshop/Lab0/README.md
@@ -1,9 +1,40 @@
-# Pre-work
-
+# Lab 0: Pre-work
## 1. Setup Kubernetes environment
Run through the instructions listed [here](https://github.com/IBM/kube101/tree/master/workshop/Lab0)
-## 2. Download or clone the repo
+## 2. Cloud shell login
+
+## 3. Docker Hub account
+
+Create a [dockerhub](https://hub.docker.com/) user and set the environment variable:
+```
+DOCKERUSER=
+```
+
+## 4. Set the cluster name
+
+```
+ibmcloud ks clusters
+
+OK
+Name               ID                     State    Created        Workers   Location   Version                 Resource Group Name   Provider
+user001-workshop   bseqlkkd0o1gdqg4jc10   normal   3 months ago   5         Dallas     4.3.38_1544_openshift   default               classic
+
+
+CLUSTERNAME=user001-workshop
+```
+or
+
+```
+CLUSTERNAME=`ibmcloud ks clusters | grep Name -A 1 | awk '{print $1}' | grep -v Name`
+
+echo $CLUSTERNAME
+user001-workshop
+```
+
+
+
+
diff --git a/workshop/Lab1/README.md b/workshop/Lab1/README.md
index 7921ac0..bf1d5f4 100644
--- a/workshop/Lab1/README.md
+++ b/workshop/Lab1/README.md
@@ -1,13 +1,306 @@
-# Lab 1. Container storage and Kubernetes
+# Lab 1: Non-persistent storage with Kubernetes
-Expolore how local storage works on the pods.
-Mount it on Docker.
+Storing data in containers or on worker nodes is considered a [non-persistent](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage) form of data storage.
+In this lab, we will explore storage options on the IBM Kubernetes worker nodes. Follow this [lab](https://github.com/remkohdev/docker101/tree/master/workshop/lab-3) if you are interested in learning more about container-based storage.
-The application is the [Guestbook App](https://github.com/IBM/guestbook), which is a sample multi-tier web application.
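The `CLUSTERNAME` pipeline in Lab 0 above can be sanity-checked offline against the captured `ibmcloud ks clusters` listing (a sketch; the sample output is the one shown in Lab 0, and on a live cluster you would pipe the real command instead of the saved text):

```shell
# Captured `ibmcloud ks clusters` output from the Lab 0 sample.
listing='OK
Name               ID                     State    Created        Workers   Location   Version                 Resource Group Name   Provider
user001-workshop   bseqlkkd0o1gdqg4jc10   normal   3 months ago   5         Dallas     4.3.38_1544_openshift   default               classic'

# Same pipeline as in Lab 0: the header line plus the line after it,
# first column only, with the header word itself filtered out.
CLUSTERNAME=$(printf '%s\n' "$listing" | grep Name -A 1 | awk '{print $1}' | grep -v Name)
echo "$CLUSTERNAME"   # -> user001-workshop
```

Testing the pipeline against saved output like this catches column-layout surprises before you depend on it in later labs.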
+The lab covers the following topics: +- Create and claim IBM Kubernetes [non-persistent](https://cloud.ibm.com/docs/containers?topic=containers-storage_planning#non_persistent_overview) storage based on the primary and secondary storage available on the worker nodes. +- Make the volumes available in the `Guestbook` application. +- Use the volumes to store application cache and debug information. +- Access the data from the guestbook container using the Kubernetes CLI. +- Assess the impact of losing a pod on data retention. +- Claim back the storage resources and clean up. -## Scenario 1: Deploy the application using `kubectl` + +The primary storage maps to the volume type `hostPath` and the secondary storage maps to `emptyDir`. Learn more about Kubernetes volume types [here](https://Kubernetes.io/docs/concepts/storage/volumes/). + +## Reserve Persistent Volumes + +From the cloud shell prompt, run the following commands to get the guestbook application and the kubernetes configuration needed for the storage labs. ```bash -git clone https://github.com/IBM/workshop-template -cd workshop-template +cd $HOME +git clone --branch fs https://github.com/IBM/guestbook-nodejs.git +git clone --branch storage https://github.com/rojanjose/guestbook-config.git +cd $HOME/guestbook-config/storage +``` + +Let's start with reserving the Persistent volume from the primary storage. +Review the yaml file `pv-hostpath.yaml`. Note the values set for `type`, `storageClassName` and `hostPath`. 
+
+```console
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: guestbook-primary-pv
+  labels:
+    type: local
+spec:
+  storageClassName: manual
+  capacity:
+    storage: 10Gi
+  accessModes:
+    - ReadWriteOnce
+  hostPath:
+    path: "/mnt/data"
+```
+
+Create the persistent volume as shown in the command below:
+```
+kubectl create -f pv-hostpath.yaml
+persistentvolume/guestbook-primary-pv created
+
+kubectl get pv
+NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
+guestbook-primary-pv   10Gi       RWO            Retain           Available           manual                  13s
+```
+
+Next, review the persistent volume claim (PVC) yaml file:
+```
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: guestbook-local-pvc
+spec:
+  storageClassName: manual
+  accessModes:
+    - ReadWriteMany
+  resources:
+    requests:
+      storage: 3Gi
+```
+
+Create the PVC:
+
+```
+kubectl create -f guestbook-local-pvc.yaml
+persistentvolumeclaim/guestbook-local-pvc created
+
+kubectl get pvc
+NAME                  STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+guestbook-local-pvc   Bound    guestbook-local-pv   10Gi       RWX            manual         6s
+```
+
+
+## Guestbook application using storage
+
+The application is the [Guestbook App](https://github.com/IBM/guestbook-nodejs), which is a simple multi-tier web application built using the LoopBack framework.
+
+Change to the guestbook application source directory:
+
+```
+cd $HOME/guestbook-nodejs/src
+```
+Review the source `common/models/entry.js`. The application uses storage allocated using `hostPath` to store a data cache in the file `data/cache.txt`. The file `logs/debug.txt` records debug messages provisioned using the `emptyDir` storage type.
+
+```source
+module.exports = function(Entry) {
+
+    Entry.greet = function(msg, cb) {
+
+      // console.log("testing " + msg);
+      fs.appendFile('logs/debug.txt', "Received message: "+ msg +"\n", function (err) {
+        if (err) throw err;
+        console.log('Debug stagement printed');
+      });
+
+      fs.appendFile('data/cache.txt', msg+"\n", function (err) {
+        if (err) throw err;
+        console.log('Saved in cache!');
+      });
+
+...
+```
+
+Run the commands listed below to build the guestbook image and push it to the Docker Hub registry:
+
+```
+cd $HOME/guestbook-nodejs/src
+docker build -t $DOCKERUSER/guestbook-nodejs:storage .
+docker login -u $DOCKERUSER
+docker push $DOCKERUSER/guestbook-nodejs:storage
+```
+
+Review the deployment yaml file `guestbook-deployment.yaml` prior to deploying the application into the cluster:
+
+```
+cd $HOME/guestbook-config/storage/lab1
+cat guestbook-deployment.yaml
+```
+
+Replace the first part of the `image` name with your Docker Hub user id.
+The section `spec.volumes` lists the `hostPath` and `emptyDir` volumes. The section `spec.containers.volumeMounts` lists the mount paths that the application uses to write into the volumes.
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: guestbook-v1
+  labels:
+    app: guestbook
+  ...
+  spec:
+    containers:
+    - name: guestbook
+      image: rojanjose/guestbook-nodejs:storage
+      imagePullPolicy: Always
+      ports:
+      - name: http-server
+        containerPort: 3000
+      volumeMounts:
+      - name: guestbook-primary-volume
+        mountPath: /home/node/app/data
+      - name: guestbook-secondary-volume
+        mountPath: /home/node/app/logs
+    volumes:
+    - name: guestbook-primary-volume
+      persistentVolumeClaim:
+        claimName: guestbook-primary-pvc
+    - name: guestbook-secondary-volume
+      emptyDir: {}
+
+
+...
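# Annotation (not part of the original yaml): guestbook-primary-volume is
# backed by the hostPath PV through its PVC, so files written under
# /home/node/app/data outlive any single pod; guestbook-secondary-volume
# is an emptyDir, created fresh for each pod, so /home/node/app/logs is
# discarded when the pod is deleted.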
+``` + +Deploy the Guestbook application: + +``` +kubectl create -f guestbook-deployment.yaml +deployment.apps/guestbook-v1 created + +kubectl get pods +NAME READY STATUS RESTARTS AGE +guestbook-v1-6f55cb54c5-jb89d 1/1 Running 0 14s + +kubectl create -f guestbook-service.yaml +service/guestbook created +``` + +Find the URL for the guestbook application by joining the worker node external IP and service node port. + +``` +HOSTNAME=`ibmcloud ks workers --cluster $CLUSTERNAME | grep Ready | head -n 1 | awk '{print $2}'` +SERVICEPORT=`kubectl get svc guestbook -o=jsonpath='{.spec.ports[0].nodePort}'` +echo "http://$HOSTNAME:$SERVICEPORT" +``` + +Open the URL in a browser and create guest book entries. + +![Guestbook entries](images/lab1-guestbook-entries.png) + +Log into the pod: + +``` +kubectl exec -it guestbook-v1-6f55cb54c5-jb89d bash + +kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead. + +root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# ls -al +total 256 +drwxr-xr-x 1 root root 4096 Nov 11 23:40 . +drwxr-xr-x 1 node node 4096 Nov 11 23:20 .. 
+-rw-r--r-- 1 root root 12 Oct 29 21:00 .dockerignore +-rw-r--r-- 1 root root 288 Oct 29 21:00 .editorconfig +-rw-r--r-- 1 root root 8 Oct 29 21:00 .eslintignore +-rw-r--r-- 1 root root 27 Oct 29 21:00 .eslintrc +-rw-r--r-- 1 root root 151 Oct 29 21:00 .gitignore +-rw-r--r-- 1 root root 30 Oct 29 21:00 .yo-rc.json +-rw-r--r-- 1 root root 105 Oct 29 21:00 Dockerfile +drwxr-xr-x 2 root root 4096 Nov 11 03:40 client +drwxr-xr-x 3 root root 4096 Nov 10 23:04 common +drwxr-xr-x 2 root root 4096 Nov 11 23:16 data +drwxrwxrwx 2 root root 4096 Nov 11 23:44 logs +drwxr-xr-x 439 root root 16384 Nov 11 23:20 node_modules +-rw-r--r-- 1 root root 176643 Nov 11 23:20 package-lock.json +-rw-r--r-- 1 root root 830 Nov 11 23:20 package.json +drwxr-xr-x 3 root root 4096 Nov 10 23:04 server + +root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# cat data/cache.txt +Hello Kubernetes! +Hola Kubernetes! +Zdravstvuyte Kubernetes! +Nǐn hǎo Kubernetes! +Goedendag Kubernetes! + +root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# cat logs/debug.txt +Received message: Hello Kubernetes! +Received message: Hola Kubernetes! +Received message: Zdravstvuyte Kubernetes! +Received message: Nǐn hǎo Kubernetes! +Received message: Goedendag Kubernetes! + + +root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# df -h +Filesystem Size Used Avail Use% Mounted on +overlay 98G 3.5G 90G 4% / +tmpfs 64M 0 64M 0% /dev +tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup +/dev/mapper/docker_data 98G 3.5G 90G 4% /etc/hosts +shm 64M 0 64M 0% /dev/shm +/dev/xvda2 25G 3.6G 20G 16% /home/node/app/data +tmpfs 7.9G 16K 7.9G 1% /run/secrets/kubernetes.io/serviceaccount +tmpfs 7.9G 0 7.9G 0% /proc/acpi +tmpfs 7.9G 0 7.9G 0% /proc/scsi +tmpfs 7.9G 0 7.9G 0% /sys/firmware + +``` + +Kill the pod to see the impact of deleting the pod on data. 
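Pod names are generated and change on every restart, so the name used in the delete and exec commands below has to be re-read each time. The extraction can be sketched offline against the sample listing (with a live cluster you would pipe `kubectl get pods --no-headers` instead of the saved line):

```shell
# Sample line from the `kubectl get pods` transcript above.
pods='guestbook-v1-6f55cb54c5-jb89d   1/1     Running   0          12m'

# First column of the first guestbook pod line (assumes one matching pod).
POD=$(printf '%s\n' "$pods" | awk '/^guestbook/ {print $1; exit}')
echo "$POD"   # -> guestbook-v1-6f55cb54c5-jb89d
```

Capturing the name in a variable avoids copy-pasting a stale pod name after the ReplicaSet replaces it.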
+
+```
+kubectl get pods
+NAME                            READY   STATUS    RESTARTS   AGE
+guestbook-v1-6f55cb54c5-jb89d   1/1     Running   0          12m
+
+kubectl delete pod guestbook-v1-6f55cb54c5-jb89d
+pod "guestbook-v1-6f55cb54c5-jb89d" deleted
+
+kubectl get pods
+NAME                            READY   STATUS    RESTARTS   AGE
+guestbook-v1-5cbc445dc9-sx58j   1/1     Running   0          86s
+```
+
+![Guestbook delete data](images/lab1-guestbook-data-deleted.png)
+
+Enter new data:
+![Guestbook reload](images/lab1-guestbook-data-reload.png)
+
+Log into the pod to view the state of the data.
+
+```
+kubectl get pods
+NAME                            READY   STATUS    RESTARTS   AGE
+guestbook-v1-5cbc445dc9-sx58j   1/1     Running   0          86s
+
+kubectl exec -it guestbook-v1-5cbc445dc9-sx58j bash
+kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
+
+root@guestbook-v1-5cbc445dc9-sx58j:/home/node/app# cat data/cache.txt
+Hello Kubernetes!
+Hola Kubernetes!
+Zdravstvuyte Kubernetes!
+Nǐn hǎo Kubernetes!
+Goedendag Kubernetes!
+Bye Kubernetes!
+Aloha Kubernetes!
+Ciao Kubernetes!
+Sayonara Kubernetes!
+
+root@guestbook-v1-5cbc445dc9-sx58j:/home/node/app# cat logs/debug.txt
+Received message: Bye Kubernetes!
+Received message: Aloha Kubernetes!
+Received message: Ciao Kubernetes!
+Received message: Sayonara Kubernetes!
+root@guestbook-v1-5cbc445dc9-sx58j:/home/node/app#
+```
+
+This shows that the `emptyDir` storage type loses its data when a pod is deleted, whereas `hostPath` data lives until the worker node or cluster is deleted.
+
+
+## Clean up
+
+```
+cd $HOME/guestbook-config/storage/lab1
+kubectl delete -f .
+```
\ No newline at end of file
diff --git a/workshop/Lab1/images/lab1-guestbook-data-deleted.png b/workshop/Lab1/images/lab1-guestbook-data-deleted.png
new file mode 100644
index 0000000..6f24cd7
Binary files /dev/null and b/workshop/Lab1/images/lab1-guestbook-data-deleted.png differ
diff --git a/workshop/Lab1/images/lab1-guestbook-data-reload.png b/workshop/Lab1/images/lab1-guestbook-data-reload.png
new file mode 100644
index 0000000..0d62830
Binary files /dev/null and b/workshop/Lab1/images/lab1-guestbook-data-reload.png differ
diff --git a/workshop/Lab1/images/lab1-guestbook-entries.png b/workshop/Lab1/images/lab1-guestbook-entries.png
new file mode 100644
index 0000000..272a799
Binary files /dev/null and b/workshop/Lab1/images/lab1-guestbook-entries.png differ
diff --git a/workshop/Lab2/README.md b/workshop/Lab2/README.md
index b9f67a4..ca36a68 100644
--- a/workshop/Lab2/README.md
+++ b/workshop/Lab2/README.md
@@ -1,10 +1,18 @@
-# Lab 2 File storage with kubernetes
+# Lab 2: File storage with Kubernetes
+This lab demonstrates the use of cloud-based file storage with Kubernetes. It uses IBM Cloud File Storage, which is persistent, fast, and flexible NFS-based network-attached storage with capacity ranging from 25 GB to 12,000 GB and up to 48,000 IOPS.
+The following topics are covered in this exercise:
+- Claim a classic file storage volume.
+- Make the volume available in the `Guestbook` application.
+- Copy media files such as images into the volume using the Kubernetes CLI.
+- Use the `Guestbook` application to view the images.
+- Claim back the storage resources and clean up.
-The application is the [Guestbook App](https://github.com/IBM/guestbook), which is a sample multi-tier web application.
-## Review the storage classes for file storage
+## Claim file storage volume
+
+Review the [storage classes](https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_storageclass_reference) for file storage.
In addition to the standard set of storage classes, [custom storage classes](https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_custom_storageclass) can be defined to meet the storage requirements.
```bash
kubectl get storageclasses | grep file
@@ -28,19 +36,310 @@ ibmc-file-silver ibm.io/ibmc-file Delete Immediate
ibmc-file-silver-gid ibm.io/ibmc-file Delete Immediate false 27m
```
-This lab uses the default `ibmc-file-gold`
+This lab uses the storage class `ibmc-file-silver`. Note that the default class `ibmc-file-gold` is allocated if a storage class is not explicitly defined.
```
-kubectl describe storageclass ibmc-file-gold
-Name: ibmc-file-gold
-IsDefaultClass: Yes
-Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true"},"name":"ibmc-file-gold"},"parameters":{"billingType":"hourly","classVersion":"2","iopsPerGB":"10","sizeRange":"[20-4000]Gi","type":"Endurance"},"provisioner":"ibm.io/ibmc-file","reclaimPolicy":"Delete"}
-,storageclass.kubernetes.io/is-default-class=true
+$ kubectl describe storageclass ibmc-file-silver
+
+Name: ibmc-file-silver
+IsDefaultClass: No
+Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true"},"name":"ibmc-file-silver"},"parameters":{"billingType":"hourly","classVersion":"2","iopsPerGB":"4","sizeRange":"[20-12000]Gi","type":"Endurance"},"provisioner":"ibm.io/ibmc-file","reclaimPolicy":"Delete"}
+
Provisioner: ibm.io/ibmc-file
-Parameters: billingType=hourly,classVersion=2,iopsPerGB=10,sizeRange=[20-4000]Gi,type=Endurance
+Parameters: billingType=hourly,classVersion=2,iopsPerGB=4,sizeRange=[20-12000]Gi,type=Endurance
AllowVolumeExpansion:
MountOptions:
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events:
-```
\ No newline at
end of file
+```
+
+File Silver provides 4 IOPS per GB and a maximum capacity of 12,000 GB.
+
+## Claim a file storage volume
+
+Review the yaml for the file storage `PersistentVolumeClaim`:
+
+```
+cd guestbook-config/storage/lab2
+cat pvc-file-silver.yaml
+
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: guestbook-pvc
+  labels:
+    billingType: hourly
+    region: us-south
+    zone: dal10
+spec:
+  accessModes:
+    - ReadWriteMany
+  resources:
+    requests:
+      storage: 20Gi
+  storageClassName: ibmc-file-silver
+```
+
+Create the PVC:
+```bash
+kubectl apply -f pvc-file-silver.yaml
+```
+Expected output:
+```
+$ kubectl create -f pvc-file-silver.yaml
+persistentvolumeclaim/guestbook-filesilver-pvc created
+```
+
+Verify the PVC is created with the status `Bound`:
+```bash
+kubectl get pvc guestbook-filesilver-pvc
+```
+Expected output:
+
+```
+$ kubectl get pvc guestbook-filesilver-pvc
+NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
+guestbook-filesilver-pvc   Bound    pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9   20Gi       RWX            ibmc-file-silver   2m
+```
+
+View the details associated with the `pv`:
+```bash
+kubectl get pv pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9
+```
+Expected output:
+```
+$ kubectl get pv pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS       REASON   AGE
+pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9   20Gi       RWX            Delete           Bound    default/guestbook-filesilver-pvc   ibmc-file-silver            90s
+```
+
+## Use the volume in the Guestbook application
+
+Change to the guestbook application source directory and review the html files `images.html` and `index.html`. `images.html` has the code to display the images stored in the file storage.
+
+```
+cd $HOME/guestbook-nodejs/src
+cat client/images.html
+cat client/index.html
+```
+
+Run the commands listed below to build the guestbook image and push it to the Docker Hub registry.
+(Skip this step if you have already completed lab 1.)
+
+```
+cd $HOME/guestbook-nodejs/src
+docker build -t $DOCKERUSER/guestbook-nodejs:storage .
+docker login -u $DOCKERUSER
+docker push $DOCKERUSER/guestbook-nodejs:storage
+```
+
+Review the deployment yaml file `guestbook-deployment.yaml` prior to deploying the application into the cluster:
+
+```
+cd $HOME/guestbook-config/storage/lab2
+cat guestbook-deployment.yaml
+```
+
+Replace the first part of the `image` name with your Docker Hub user id.
+The section `spec.volumes` references the `file` volume PVC. The section `spec.containers.volumeMounts` has the mount path used to store images in the volume.
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: guestbook-v1
+...
+  spec:
+    containers:
+    - name: guestbook
+      image: rojanjose/guestbook-nodejs:storage
+      imagePullPolicy: Always
+      ports:
+      - name: http-server
+        containerPort: 3000
+      volumeMounts:
+      - name: guestbook-file-volume
+        mountPath: /app/public/images
+    volumes:
+    - name: guestbook-file-volume
+      persistentVolumeClaim:
+        claimName: guestbook-filesilver-pvc
+```
+
+Deploy the Guestbook application:
+
+```bash
+kubectl create -f guestbook-deployment.yaml
+kubectl create -f guestbook-service.yaml
+```
+Verify the Guestbook application is running:
+```
+$ kubectl get all
+NAME                                READY   STATUS    RESTARTS   AGE
+pod/guestbook-v1-5bd76b568f-cdhr5   1/1     Running   0          13s
+pod/guestbook-v1-5bd76b568f-w6h6h   1/1     Running   0          13s
+
+NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
+service/guestbook    LoadBalancer   172.21.238.40   150.238.30.150   3000:31986/TCP   6s
+service/kubernetes   ClusterIP      172.21.0.1                       443/TCP          9d
+
+NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
+deployment.apps/guestbook-v1   2/2     2            2           13s
+
+NAME                                      DESIRED   CURRENT   READY   AGE
+replicaset.apps/guestbook-v1-5bd76b568f   2         2         2       13s
+```
+
+Check the mount path inside the pod container. Get the pod listing.
+
+```
+$ kubectl get pods
+NAME                            READY   STATUS    RESTARTS   AGE
+guestbook-v1-5bd76b568f-cdhr5   1/1     Running   0          78s
+guestbook-v1-5bd76b568f-w6h6h   1/1     Running   0          78s
+```
+Log into one of the pods:
+
+```bash
+kubectl exec -it guestbook-v1-5bd76b568f-cdhr5 bash
+```
+
+Run the commands `ls -alt`, `ls -alt client/images`, and `df -h` to view the volume and files. Review the mount for the new volume. Note that the images folder is empty.
+
+```
+$ kubectl exec -it guestbook-v1-5bd76b568f-cdhr5 bash
+kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
+root@guestbook-v1-5bd76b568f-cdhr5:/home/node/app# ls -alt
+total 252
+drwxr-xr-x   1 root root   4096 Nov 13 03:17 client
+drwxr-xr-x   1 root root   4096 Nov 13 03:15 .
+drwxr-xr-x 439 root root  16384 Nov 13 03:15 node_modules
+-rw-r--r--   1 root root 176643 Nov 13 03:15 package-lock.json
+drwxr-xr-x   1 node node   4096 Nov 13 03:15 ..
+-rw-r--r--   1 root root    830 Nov 11 23:20 package.json
+drwxr-xr-x   3 root root   4096 Nov 10 23:04 common
+drwxr-xr-x   3 root root   4096 Nov 10 23:04 server
+-rw-r--r--   1 root root     12 Oct 29 21:00 .dockerignore
+-rw-r--r--   1 root root    288 Oct 29 21:00 .editorconfig
+-rw-r--r--   1 root root      8 Oct 29 21:00 .eslintignore
+-rw-r--r--   1 root root     27 Oct 29 21:00 .eslintrc
+-rw-r--r--   1 root root    151 Oct 29 21:00 .gitignore
+-rw-r--r--   1 root root     30 Oct 29 21:00 .yo-rc.json
+-rw-r--r--   1 root root    105 Oct 29 21:00 Dockerfile
+
+root@guestbook-v1-5bd76b568f-cdhr5:/home/node/app# ls -alt client/images
+total 8
+drwxr-xr-x 1 root   root       4096 Nov 13 03:17 ..
+drwxr-xr-x 2 nobody 4294967294 4096 Nov 13 02:02 .
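# Aside (sketch, not part of the transcript): the NFS mount point can also be
# pulled out of captured `df` output; with this layout, field 6 is the mount
# point. On a live pod you would pipe `df -h` instead of the saved line.
dfline='fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01 20G 0 20G 0% /home/node/app/client/images'
MOUNTPOINT=$(printf '%s\n' "$dfline" | awk '/networklayer/ {print $6}')
echo "$MOUNTPOINT"   # -> /home/node/app/client/images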
+
+root@guestbook-v1-5bd76b568f-cdhr5:/home/node/app# df -h
+Filesystem                                                        Size  Used Avail Use% Mounted on
+overlay                                                            98G  4.9G   89G   6% /
+tmpfs                                                              64M     0   64M   0% /dev
+tmpfs                                                             7.9G     0  7.9G   0% /sys/fs/cgroup
+shm                                                                64M     0   64M   0% /dev/shm
+/dev/mapper/docker_data                                            98G  4.9G   89G   6% /etc/hosts
+tmpfs                                                             7.9G   16K  7.9G   1% /run/secrets/kubernetes.io/serviceaccount
+fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01  20G     0   20G   0% /home/node/app/client/images
+tmpfs                                                             7.9G     0  7.9G   0% /proc/acpi
+tmpfs                                                             7.9G     0  7.9G   0% /proc/scsi
+tmpfs                                                             7.9G     0  7.9G   0% /sys/firmware
+```
+
+Note that the filesystem `fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01` is mounted on the path `/home/node/app/client/images`.
+
+Find the URL for the guestbook application by joining the worker node external IP and the service node port:
+
+```
+HOSTNAME=`ibmcloud ks workers --cluster $CLUSTERNAME | grep Ready | head -n 1 | awk '{print $2}'`
+SERVICEPORT=`kubectl get svc guestbook -o=jsonpath='{.spec.ports[0].nodePort}'`
+echo "http://$HOSTNAME:$SERVICEPORT"
+```
+
+Verify that the images are missing by viewing the data from the Guestbook application. Click on the hyperlink labeled `images` at the bottom of the guestbook home page. The `images.html` page shows images with broken links.
+
+![Guestbook broken images](images/lab2-guestbook-images-broken.png)
+
+## Load the file storage with images
+
+Run the `kubectl cp` command to move the images into the mounted volume:
+```bash
+cd $HOME/guestbook-config/storage/lab2
+kubectl cp images guestbook-v1-5bd76b568f-cdhr5:/home/node/app/client/
+```
+
+Refresh the `images.html` page in the guestbook application to view the uploaded images.
+
+![Guestbook fixed images](images/lab2-guestbook-images.png)
+
+## Shared storage across pods
+
+Log into the other pod, `guestbook-v1-5bd76b568f-w6h6h`, to verify the volume mount.
+
+```
+kubectl exec -it guestbook-v1-5bd76b568f-w6h6h bash
+kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version.
Use kubectl kubectl exec [POD] -- [COMMAND] instead.
+
+root@guestbook-v1-5bd76b568f-w6h6h:/home/node/app# ls -alt /home/node/app/client/images
+total 160
+-rw-r--r-- 1 501    staff      56191 Nov 13 03:44 gb3.jpg
+drwxr-xr-x 2 nobody 4294967294  4096 Nov 13 03:44 .
+-rw-r--r-- 1 501    staff      21505 Nov 13 03:44 gb2.jpg
+-rw-r--r-- 1 501    staff      58286 Nov 13 03:44 gb1.jpg
+drwxr-xr-x 1 root   root        4096 Nov 13 03:17 ..
+
+root@guestbook-v1-5bd76b568f-w6h6h:/home/node/app# df -h
+Filesystem                                                        Size  Used Avail Use% Mounted on
+overlay                                                            98G  4.2G   89G   5% /
+tmpfs                                                              64M     0   64M   0% /dev
+tmpfs                                                             7.9G     0  7.9G   0% /sys/fs/cgroup
+shm                                                                64M     0   64M   0% /dev/shm
+/dev/mapper/docker_data                                            98G  4.2G   89G   5% /etc/hosts
+tmpfs                                                             7.9G   16K  7.9G   1% /run/secrets/kubernetes.io/serviceaccount
+fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01  20G  128K   20G   1% /home/node/app/client/images
+tmpfs                                                             7.9G     0  7.9G   0% /proc/acpi
+tmpfs                                                             7.9G     0  7.9G   0% /proc/scsi
+tmpfs                                                             7.9G     0  7.9G   0% /sys/firmware
+```
+
+Note that the volume and the data are available on all the pods running the Guestbook application.
+
+
+## Optional exercises
+
+- Back up the data.
+- Delete pods to confirm that this does not impact the data used by the application.
+- Delete the Kubernetes cluster.
+- Create a new cluster and reuse the volume.
+
+## Clean up
+
+List all the PVCs and PVs:
+```
+kubectl get pvc
+kubectl get pv
+```
+
+Delete all the pods using the PVC, then delete the PVC:
+```
+kubectl delete pvc guestbook-pvc
+
+persistentvolumeclaim "guestbook-pvc" deleted
+```
+List the PVs to ensure that the PV is removed as well.
+Cancel the physical storage volume from the cloud account. (Note: this may require admin permissions.)
+ +``` +ibmcloud sl file volume-list --columns id --columns notes | grep pvc-6362f614-258e-48ee-a596-62bb4629cd75 + +183223942 {"plugin":"ibm-file-plugin-7b9db9c79f-86x8w","region":"us-south","cluster":"bugql3nd088jsp8iiagg","type":"Endurance","ns":"default","pvc":"guestbook-pvc","pv":"pvc-6362f614-258e-48ee-a596-62bb4629cd75","storageclass":"ibmc-file-silver","reclaim":"Delete"} + + +ibmcloud sl file volume-cancel 183223942 + +This will cancel the file volume: 183223942 and cannot be undone. Continue?> yes +Failed to cancel file volume: 183223942. +No billing item is found to cancel. +``` diff --git a/workshop/Lab2/images/lab2-guestbook-images-broken.png b/workshop/Lab2/images/lab2-guestbook-images-broken.png new file mode 100644 index 0000000..321d221 Binary files /dev/null and b/workshop/Lab2/images/lab2-guestbook-images-broken.png differ diff --git a/workshop/Lab2/images/lab2-guestbook-images.png b/workshop/Lab2/images/lab2-guestbook-images.png new file mode 100644 index 0000000..93e0aae Binary files /dev/null and b/workshop/Lab2/images/lab2-guestbook-images.png differ