Here are the steps to upgrade Kadalu to the latest version on Kubernetes.

First, upgrade the kubectl-kadalu plugin and verify its version:
$ curl -fsSL https://github.com/kadalu/kadalu/releases/latest/download/install.sh | sudo bash -x
$ kubectl-kadalu version
[...]
- Download the relevant Kadalu Operator file definition according to your Kubernetes cluster:
  - Generic Kubernetes: kadalu-operator.yaml
  - MicroK8s: kadalu-operator-microk8s.yaml
  - OpenShift: kadalu-operator-openshift.yaml
- Apply the file definition to the Kubernetes cluster:

  kubectl apply -f kadalu-operator.yaml
Example for a generic Kubernetes cluster in a single command line:
curl -Ls https://github.com/kadalu/kadalu/releases/latest/download/kadalu-operator.yaml | sed -e 's/"no"/"yes"/g' | kubectl apply -f -
namespace/kadalu unchanged
serviceaccount/kadalu-operator unchanged
serviceaccount/kadalu-csi-nodeplugin unchanged
serviceaccount/kadalu-csi-provisioner unchanged
serviceaccount/kadalu-server-sa unchanged
customresourcedefinition.apiextensions.k8s.io/kadalustorages.kadalu-operator.storage configured
clusterrole.rbac.authorization.k8s.io/pod-exec unchanged
clusterrole.rbac.authorization.k8s.io/kadalu-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/kadalu-operator unchanged
deployment.apps/operator configured
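The sed expression in the one-liner rewrites every quoted "no" in the downloaded manifest to "yes" (typically to enable the manifest's verbose flag). The expression itself can be sanity-checked locally on a sample line (the line below is hypothetical; the real manifest contains similar quoted flags):

```shell
# Feed a sample manifest line through the same substitution used above.
echo 'verbose: "no"' | sed -e 's/"no"/"yes"/g'
# prints: verbose: "yes"
```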
Wait for the operator and csi-provisioner pods in the kadalu namespace to reach the Running state.
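The wait can also be scripted; a minimal sketch, assuming kubectl is pointed at the upgraded cluster and that pod readiness is an acceptable proxy for Running:

```
# Block until every pod in the kadalu namespace reports Ready (up to 5 minutes).
kubectl -n kadalu wait pod --all --for=condition=Ready --timeout=300s
```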
- Download the relevant Kadalu CSI nodeplugin file definition according to your Kubernetes cluster:
  - Generic Kubernetes: csi-nodeplugin.yaml
  - MicroK8s: csi-nodeplugin-microk8s.yaml
  - OpenShift: csi-nodeplugin-openshift.yaml
- Apply the file definition to the Kubernetes cluster:

  kubectl apply -f csi-nodeplugin.yaml
Example for a generic Kubernetes cluster in a single command line:
curl -Ls https://github.com/kadalu/kadalu/releases/latest/download/csi-nodeplugin.yaml | sed -e 's/"no"/"yes"/g' | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/kadalu-csi-nodeplugin unchanged
clusterrolebinding.rbac.authorization.k8s.io/kadalu-csi-nodeplugin unchanged
daemonset.apps/kadalu-csi-nodeplugin configured
As the .spec.updateStrategy.type of the nodeplugin DaemonSet is OnDelete, the onus is on the admin to drain the nodes where the kadalu-nodeplugin is running and delete the nodeplugin pods one at a time. After a nodeplugin pod is deleted, the new version of the nodeplugin will run on that specific node and new workloads can be scheduled to run on that node again.
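The update strategy can be confirmed on the live DaemonSet; a minimal check, assuming the DaemonSet name from the apply output above:

```
# Should print: OnDelete
kubectl -n kadalu get daemonset kadalu-csi-nodeplugin \
  -o jsonpath='{.spec.updateStrategy.type}'
```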
Any one of the below three options can be followed for a successful update. Assume we have three worker nodes running the kadalu nodeplugin, i.e., node-a, node-b and node-c.

Option 0:
- Include a liveness probe in your containers that lists the mounted volume/device: the nodeplugin now remounts corrupted mounts, and pods with such a liveness probe can recover the mount on their end via a container restart triggered by the liveness failure on the mounted volume.
Option 1:
- Drain node-a and delete the nodeplugin pod running on node-a.
- Uncordon node-a and wait for the new version of the nodeplugin to come up on node-a.
- Repeat the previous two steps for the remaining nodes; by the end, the new version of the nodeplugin should be in Running state on all nodes.
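The drain/delete/uncordon cycle above can be sketched as a loop. This is a hedged sketch, not the project's official tooling: the node names are the placeholders from the example, and the nodeplugin pods are located by the DaemonSet's generated name prefix rather than by label:

```
for node in node-a node-b node-c; do
  # Evict workloads from the node; DaemonSet-managed pods are skipped by drain.
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data

  # Find and delete the nodeplugin pod scheduled on this node so the
  # OnDelete strategy replaces it with the new version.
  pod=$(kubectl -n kadalu get pods --field-selector spec.nodeName="$node" \
        -o name | grep kadalu-csi-nodeplugin)
  kubectl -n kadalu delete "$pod"

  kubectl uncordon "$node"

  # Wait for the replacement nodeplugin (and other kadalu pods) to become Ready.
  kubectl -n kadalu wait pod --all --for=condition=Ready --timeout=300s
done
```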
Option 2:
Option 1 evicts all the pods running on a node; however, the kubectl-evict plugin can be used to evict only a specific deployment.
- Cordon one of the nodes, say node-a, and evict a deployment running on node-a which is using a kadalu PVC.
- Delete the nodeplugin pod running on node-a and uncordon node-a.
- Repeat the previous two steps for the remaining nodes; by the end, the new version of the nodeplugin should be in Running state on all nodes.
This will be helpful if the nodes are running workloads with little headroom, where for various reasons an evicted workload may never be re-scheduled on another node.