-
Create a namespace with name `denver`.
steps
kubectl create namespace denver
result
namespace/denver created
-
Label namespace `denver` with label `type: ckad`.
steps
kubectl label namespace denver type=ckad
verify
kubectl get namespace denver --show-labels
result
NAME     STATUS   AGE   LABELS
denver   Active   22m   kubernetes.io/metadata.name=denver,type=ckad
-
Annotate namespace `denver` with annotation `description: for ckad lab`.
steps
kubectl annotate namespace denver description='for ckad lab'
verify
kubectl get namespace denver -o jsonpath='{.metadata.annotations}'
result
{"description":"for ckad lab"}
-
Run a pod named `game` with image `dguyhasnoname/game2048:latest` in namespace `denver`.
steps
kubectl run game -n denver --image=dguyhasnoname/game2048:latest --restart=Never
verify
kubectl get po -n denver
NAME   READY   STATUS    RESTARTS   AGE
game   1/1     Running   0          6m32s
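As a rough declarative equivalent of the `kubectl run` command above, the pod can be sketched as a manifest (the `run: game` label mirrors what `kubectl run` adds automatically):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: game
  namespace: denver
  labels:
    run: game          # kubectl run adds this label automatically
spec:
  restartPolicy: Never # from --restart=Never
  containers:
  - name: game
    image: dguyhasnoname/game2048:latest
```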
-
Get the yaml for the pod `game` in namespace `denver`.
steps
kubectl get po game -n denver -o yaml
result
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-10-03T05:45:26Z"
  labels:
    run: game
  name: game
  namespace: denver
  resourceVersion: "240869"
  uid: 9f79d237-c090-4cdc-a422-48ba266f3497
spec:
  containers:
  - image: dguyhasnoname/game2048:latest
    imagePullPolicy: Always
    name: game
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-l4b8x
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: minikube
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-l4b8x
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-10-03T05:45:26Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-10-03T05:45:59Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-10-03T05:45:59Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-10-03T05:45:26Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://a22afea1c9fef2c8f0d7c171d71c44901858db40105c53fd6722e3a66a768467
    image: dguyhasnoname/game2048:latest
    imageID: docker-pullable://dguyhasnoname/game2048@sha256:d39bc83cd36b5179e547ce172332e687f83cbe3e4b5ee24f4714073068663708
    lastState: {}
    name: game
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-10-03T05:45:59Z"
  hostIP: 192.168.49.2
  phase: Running
  podIP: 172.17.0.23
  podIPs:
  - ip: 172.17.0.23
  qosClass: BestEffort
  startTime: "2021-10-03T05:45:26Z"
-
Execute a simple shell on the `game` pod in namespace `denver`.
steps
kubectl exec -it game -n denver -- /bin/sh
verify
kubectl exec -it game -n denver -- /bin/sh
# hostname
game
-
Create a busybox pod with name `bb` in the default namespace that echoes all the environment variables inside the pod and then exits.
steps
kubectl run bb --image=busybox --restart=Never -it -- env
verify
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=bb
TERM=xterm
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
HOME=/root
kubectl get po
NAME   READY   STATUS      RESTARTS   AGE
bb     0/1     Completed   0          86s
-
Create a pod with name `leaf` in the default namespace that runs the command `echo This is leaf pod.`. The output should be visible on stdout, after which the pod should delete itself. The pod should use image `busybox`.
steps
kubectl run leaf --image=busybox -it --rm --restart=Never -- /bin/sh -c 'echo This is leaf pod.'
verify
kubectl run leaf --image=busybox -it --rm --restart=Never -- /bin/sh -c 'echo This is leaf pod.'
This is leaf pod.
pod "leaf" deleted
-
Generate yaml for a deployment named `running` in namespace `exercise` that uses image `alpine:latest` and 2 replicas. The deployment should have a label `app=running-daily`. The container should run the command `echo "Hello from running"`. The name of the container should be `running-daily`. Do not run the deployment.
steps
kubectl create ns exercise
kubectl create deploy running --image=alpine:latest --replicas=2 --namespace=exercise --dry-run=client -o yaml -- /bin/sh -c 'echo "Hello from running"'
Edit the container name to `running-daily` and add the label `app=running-daily`.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: running-daily
  name: running
  namespace: exercise
spec:
  replicas: 2
  selector:
    matchLabels:
      app: running-daily
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: running-daily
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - echo "Hello from running"
        image: alpine:latest
        name: running-daily
        resources: {}
status: {}
-
Convert the pod `game` (deployed in question 4) to a deployment `game` in namespace `denver`. Use the same image and container name as `game`, and set the number of replicas to 2.
steps
kubectl create deployment game -n denver --image=dguyhasnoname/game2048:latest --replicas=2 --dry-run=client -o yaml > game_deploy.yaml
vi game_deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: game
  name: game
  namespace: denver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: game
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: game
    spec:
      containers:
      - image: dguyhasnoname/game2048:latest
        name: game2048
        resources: {}
status: {}
kubectl apply -f game_deploy.yaml
verify
kubectl get po -n denver
NAME                    READY   STATUS    RESTARTS   AGE
game                    1/1     Running   0          20m
game-6dbf688b5f-jqfhn   1/1     Running   0          12s
game-6dbf688b5f-qm7gm   1/1     Running   0          12s
kubectl get deployment -n denver
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
game   2/2     2            2           82s
-
Scale the deployment `game` (created in question 10) to 3 replicas.
steps
kubectl scale deployment game -n denver --replicas=3
verify
kubectl get po -n denver
NAME                    READY   STATUS              RESTARTS   AGE
game                    1/1     Running             0          68m
game-6dbf688b5f-jqfhn   1/1     Running             0          48m
game-6dbf688b5f-qm7gm   1/1     Running             0          48m
game-6dbf688b5f-xbmwt   0/1     ContainerCreating   0          5s
-
Expose the deployment `game` in namespace `denver` to the outside world. Run an `nginx:alpine` pod and test with curl to see if the deployment is accessible.
steps
kubectl expose deployment game -n denver --port=4444 --target-port=80 --type=ClusterIP --name=games
verify
kubectl get svc -n denver
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
games   ClusterIP   10.111.182.190   <none>        4444/TCP   2m40s
kubectl run curl --rm -it --image=nginx:alpine --restart=Never -n denver -- /bin/sh -c 'curl -I http://10.111.182.190:4444'
HTTP/1.1 200 OK
Server: nginx/1.15.12
Date: Sun, 03 Oct 2021 07:09:06 GMT
Content-Type: text/html
Content-Length: 3378
Last-Modified: Wed, 26 Sep 2018 18:37:14 GMT
Connection: keep-alive
ETag: "5babd1da-d32"
Accept-Ranges: bytes
pod "curl" deleted
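The `kubectl expose` command above is roughly equivalent to the following Service manifest. The `app: game` selector is an assumption based on the label `kubectl create deployment` applies by default:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: games
  namespace: denver
spec:
  type: ClusterIP
  selector:
    app: game        # assumed: default label from kubectl create deployment
  ports:
  - port: 4444       # service port (--port)
    targetPort: 80   # container port (--target-port)
```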
-
Create a service `game` in namespace `denver` that exposes the deployment `game` to the outside world. The type of the service should be NodePort. Log in to the node to verify that the service is accessible using the node port.
steps
kubectl expose deployment game -n denver --port=3333 --target-port=80 --type=NodePort --name=game
verify
kubectl get svc -n denver
NAME   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
game   NodePort   10.98.205.242   <none>        3333:32105/TCP   9m16s
kubectl get po -n denver -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
game-6dbf688b5f-jqfhn   1/1     Running   0          72m   172.17.0.24   minikube   <none>           <none>
game-6dbf688b5f-qm7gm   1/1     Running   0          72m   172.17.0.25   minikube   <none>           <none>
game-6dbf688b5f-xbmwt   1/1     Running   0          23m   172.17.0.26   minikube   <none>           <none>
minikube ssh
Last login: Sun Oct 3 07:03:31 2021 from 192.168.49.1
docker@minikube:~$
In this example, minikube is used. If you are using another cluster, log in to the node where the game pod is running and run the following command (replace minikube with your node name) to verify that the service is accessible.
docker@minikube:~$ curl -I http://minikube:32105
HTTP/1.1 200 OK
Server: nginx/1.15.12
Date: Sun, 03 Oct 2021 07:15:15 GMT
Content-Type: text/html
Content-Length: 3378
Last-Modified: Wed, 26 Sep 2018 18:37:14 GMT
Connection: keep-alive
ETag: "5babd1da-d32"
Accept-Ranges: bytes
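The NodePort service created above can be sketched declaratively as follows. The selector is assumed to match the deployment's default `app=game` label, and the node port (32105 in the output above) is assigned by the cluster unless pinned explicitly:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: game
  namespace: denver
spec:
  type: NodePort
  selector:
    app: game       # assumed: default label from kubectl create deployment
  ports:
  - port: 3333      # service port (--port)
    targetPort: 80  # container port (--target-port)
    # nodePort is cluster-assigned unless you pin it to a value
    # in the 30000-32767 range, e.g. nodePort: 32105
```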
-
Deploy a `tri-color` helm release in the `delhi` namespace using helm. Use the bitnami/nginx helm chart to deploy the release, with default values. Check the rollout history of the deployment created by the helm release.
steps
Download the helm chart from the bitnami/nginx repository.
helm repo add bitnami https://charts.bitnami.com/bitnami
Update the helm repo
helm repo update
Install the release
helm install tri-color bitnami/nginx --create-namespace --namespace delhi
Check the deployment created and its rollout status.
kubectl get deploy -n delhi
kubectl rollout status deploy tri-color-nginx -n delhi
verify
helm install tri-color bitnami/nginx --create-namespace --namespace delhi
NAME: tri-color
LAST DEPLOYED: Sun Oct 3 13:14:45 2021
NAMESPACE: delhi
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
NGINX can be accessed through the following DNS name from within your cluster:
    tri-color-nginx.delhi.svc.cluster.local (port 80)
To access NGINX from outside the cluster, follow the steps below:
1. Get the NGINX URL by running these commands:
   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
         Watch the status with: 'kubectl get svc --namespace delhi -w tri-color-nginx'
    export SERVICE_PORT=$(kubectl get --namespace delhi -o jsonpath="{.spec.ports[0].port}" services tri-color-nginx)
    export SERVICE_IP=$(kubectl get svc --namespace delhi tri-color-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "http://${SERVICE_IP}:${SERVICE_PORT}"
helm list -n delhi
NAME        NAMESPACE   REVISION   UPDATED                                STATUS     CHART         APP VERSION
tri-color   delhi       1          2021-10-03 13:14:45.184267 +0530 IST   deployed   nginx-9.5.5   1.21.3
kubectl get po -n delhi
NAME                              READY   STATUS    RESTARTS   AGE
tri-color-nginx-c987dd577-f4lg6   1/1     Running   0          110s
Check the rollout status of the deployment created.
kubectl get deploy -n delhi
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
tri-color-nginx   1/1     1            1           3m45s
kubectl rollout status deploy tri-color-nginx -n delhi
deployment "tri-color-nginx" successfully rolled out
-
Change the tri-color helm release to use a different version of the chart. Check the rollout status of the deployment created by the helm release.
steps
Search available versions of the bitnami/nginx chart.
helm search repo bitnami/nginx --versions
Update the helm release
tri-color
helm upgrade --install tri-color --version=9.5.0 bitnami/nginx --create-namespace --namespace delhi
Check the rollout history of the deployment.
kubectl rollout history deploy tri-color-nginx -n delhi
verify
Check the new release. List the helm releases.
helm upgrade --install tri-color --version=9.5.0 bitnami/nginx --create-namespace --namespace delhi
Release "tri-color" has been upgraded. Happy Helming!
NAME: tri-color
LAST DEPLOYED: Sun Oct 3 13:26:24 2021
NAMESPACE: delhi
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
NGINX can be accessed through the following DNS name from within your cluster:
    tri-color-nginx.delhi.svc.cluster.local (port 80)
To access NGINX from outside the cluster, follow the steps below:
1. Get the NGINX URL by running these commands:
   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
         Watch the status with: 'kubectl get svc --namespace delhi -w tri-color-nginx'
    export SERVICE_PORT=$(kubectl get --namespace delhi -o jsonpath="{.spec.ports[0].port}" services tri-color-nginx)
    export SERVICE_IP=$(kubectl get svc --namespace delhi tri-color-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "http://${SERVICE_IP}:${SERVICE_PORT}"
helm list -n delhi
NAME        NAMESPACE   REVISION   UPDATED                                STATUS     CHART         APP VERSION
tri-color   delhi       2          2021-10-03 13:26:24.203328 +0530 IST   deployed   nginx-9.5.0   1.21.1
Check the rollout history of the deployment created.
kubectl rollout history deploy tri-color-nginx -n delhi
deployment.apps/tri-color-nginx
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
-
Delete the tri-color helm release.
steps
Delete the helm release `tri-color`.
helm uninstall tri-color -n delhi
verify
helm uninstall tri-color -n delhi
release "tri-color" uninstalled
-
Create yaml for a statefulset with name `nginx` in the default namespace. Use image `k8s.gcr.io/nginx-slim:0.8`. Label the statefulset with `app=nginx-sts`. It should run 3 replicas. Configure the container name as `nginx-sts`, and a pod should take no more than 5s to shut down when it is terminated. Use PVCs for the sts in `RWO` accessMode and mount them at `/usr/share/nginx/html` in the statefulset pods. Use the `default` storage class to create the PVCs for the statefulset. Check the statefulset yaml and deploy it. Verify the sts, pods and PVCs created and their mappings.
steps
Prepare the yaml for the statefulset.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx-sts
  serviceName: "nginx"
  replicas: 3 # default is 1
  template:
    metadata:
      labels:
        app: nginx-sts
    spec:
      terminationGracePeriodSeconds: 5
      containers:
      - name: nginx-sts
        image: k8s.gcr.io/nginx-slim:0.8
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 200Mi
Deploy the statefulset.
kubectl apply -f nginx-sts.yaml
Check the statefulset, pods and PVCs.
kubectl get statefulset,po,pvc
verify
kubectl get statefulset,po,pvc -o wide
NAME                     READY   AGE     CONTAINERS   IMAGES
statefulset.apps/nginx   3/3     5m11s   nginx-sts    k8s.gcr.io/nginx-slim:0.8

NAME          READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
pod/nginx-0   1/1     Running   0          5m11s   172.17.0.27   minikube   <none>           <none>
pod/nginx-1   1/1     Running   0          5m9s    172.17.0.28   minikube   <none>           <none>
pod/nginx-2   1/1     Running   0          5m8s    172.17.0.31   minikube   <none>           <none>

NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/www-nginx-0   Bound    pvc-2b120e12-53e6-40b4-bb5b-6bb5c1aa0698   200Mi      RWO            standard       2m59s
persistentvolumeclaim/www-nginx-1   Bound    pvc-a34cf366-061c-4ef3-932f-ea2274d9154e   200Mi      RWO            standard       2m56s
persistentvolumeclaim/www-nginx-2   Bound    pvc-29417f58-6048-457a-a30f-ae3194be4e39   200Mi      RWO            standard       2m53s
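Note that `serviceName: "nginx"` in the statefulset refers to a governing headless Service that the steps above do not create; stable per-pod DNS names (nginx-0.nginx, nginx-1.nginx, ...) only resolve once such a service exists. A minimal sketch of that service, under the assumption it serves port 80:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx        # must match .spec.serviceName in the statefulset
spec:
  clusterIP: None    # headless: gives each sts pod a stable DNS entry
  selector:
    app: nginx-sts   # matches the statefulset pod labels
  ports:
  - port: 80         # assumed port for nginx
```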
-
Create a deployment with name `nginx-deployment` in the `denver` namespace. Use image `nginx:1.14.2` with container name `nginx-deployment`. Label the deployment with `app=nginx-deployment`. It should run 3 replicas. The deployment should also run another container with image busybox and command `sleep 3600`. Once the pods for the deployment are up, update the image from `nginx:1.14.2` to `nginx:1.16.1`, verify the pods are updated, and check the rollout status. Once the pods are updated, roll the image back to the previous version, ensure the pods are running fine after the rollback, and make sure no further accidental rollout happens.
steps
Create the initial deployment yaml and store it in nginx-deployment.yaml.
kubectl create deploy nginx-deployment -n denver --image=nginx:1.14.2 --replicas=3 --dry-run=client -o yaml > nginx-deployment.yaml
Edit the deployment yaml to add the busybox container.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deployment
  name: nginx-deployment
  namespace: denver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deployment
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: busybox
        name: busybox
        command: ["/bin/sh", "-c", "sleep 3600"]
      - image: nginx:1.14.2
        name: nginx-deployment
        resources: {}
status: {}
Apply the deployment yaml.
kubectl apply -f nginx-deployment.yaml
Update the image to nginx:1.16.1.
kubectl set image deploy nginx-deployment nginx-deployment=nginx:1.16.1 -n denver
Verify the pods are updated and check the rollout status.
kubectl rollout status deploy nginx-deployment -n denver
Rollback the image to nginx:1.14.2.
kubectl rollout undo deploy nginx-deployment -n denver --to-revision=1
Pause the rollout to avoid any further release.
kubectl rollout pause deploy nginx-deployment -n denver
result
Verify that the deployment pods are running fine.
kubectl get po -n denver -l app=nginx-deployment
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6774947b7d-cprzd   2/2     Running   0          36s
nginx-deployment-6774947b7d-g9wfm   2/2     Running   0          59s
nginx-deployment-6774947b7d-v4zzm   2/2     Running   0          80s
Verify that the image is updated.
kubectl get po -n denver -l app=nginx-deployment -o jsonpath='{.items[*].spec.containers[?(@.name=="nginx-deployment")].image}'
Verify that the rollout is paused. Try updating to a new image and see if the change gets rolled out.
kubectl set image deploy nginx-deployment nginx-deployment=nginx:1.16.4 -n denver
deployment.apps/nginx-deployment image updated
kubectl rollout status deploy nginx-deployment -n denver
Waiting for deployment "nginx-deployment" rollout to finish: 0 out of 3 new replicas have been updated...
Because the rollout is paused, it will not proceed and the old pods keep running. The rollout has to be resumed for the change to take effect.
-
Update the image to nginx:1.21.3 in deployment `nginx-deployment` (created in question 18). Resume the rollout so that the new image change can be propagated.
steps
Update the image.
kubectl set image deploy nginx-deployment nginx-deployment=nginx:1.21.3 -n denver
Resume the rollout.
kubectl rollout resume deploy nginx-deployment -n denver
result
kubectl rollout resume deploy nginx-deployment -n denver
deployment.apps/nginx-deployment resumed
Verify that the pods are updated.
kubectl get po -n denver -l app=nginx-deployment -o jsonpath='{.items[*].spec.containers[?(@.name=="nginx-deployment")].image}'
nginx:1.21.3 nginx:1.21.3 nginx:1.21.3
-
Team Mumbai wants to set up a stateful MySQL db in namespace `mumbai`. Create a deployment `mysql` in the `mumbai` namespace with image `mysql:5.6`. Store the password of mysql in secret `mysql-secret` and pass it to the container as an environment variable named `MYSQL_ROOT_PASSWORD`. Run the MySQL instance on port `3306`. Mount a PVC named `mysql-pv-claim` in the `mysql` container at path `/var/lib/mysql`. The PVC should be bound to a PV named `mysql-pv-volume` with storageClass `manual`. The PV and PVC should have `ReadWriteOnce` access mode. The PVC should request `1Gi` of storage. The PV capacity should be `3Gi`. Expose the `mysql` deployment on port `3306` using a headless service.
steps
Create the namespace.
kubectl create ns mumbai
Create the storageClass, PV and PVC using storage.yaml.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: k8s.io/minikube-hostpath # this is for minikube clusters. Use the appropriate provisioner if working on other clusters.
reclaimPolicy: Retain
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: mumbai
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Generate the basic mysql deployment yaml.
kubectl create deploy mysql -n mumbai --image=mysql:5.6 --port=3306 --dry-run=client -o yaml > mysql-deploy.yaml
Edit the deployment yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: mumbai
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Create the secret mysql-secret.
kubectl create secret generic mysql-secret --from-literal=MYSQL_ROOT_PASSWORD=password -n mumbai
Create the headless service mysql using the data below in mysql-service.yaml.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mumbai
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
Apply the storage, deployment and service yaml.
kubectl apply -f storage.yaml
kubectl apply -f mysql-deploy.yaml
kubectl apply -f mysql-service.yaml
result
Check if the storage is created.
kubectl get sc,pv,pvc
NAME                                             PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/manual               k8s.io/minikube-hostpath   Retain          Immediate           true                   6s
storageclass.storage.k8s.io/standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  28h

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                   STORAGECLASS   REASON   AGE
persistentvolume/mysql-pv-volume                            3Gi        RWO            Retain           Bound       mumbai/mysql-pv-claim   manual                  6s
persistentvolume/pvc-ebf0372d-f5c4-4446-b023-175d22ee1ab1   1Gi        RWO            Retain           Available   mumbai/mysql-pv-claim   manual                  6s
Check if the mysql pod is up and the mysql service exists.
kubectl get po,svc,secret -n mumbai
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-756f767845-tsrqq   1/1     Running   0          8m10s
pod/mysql-client             0/1     Error     0          50s

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/mysql   ClusterIP   None         <none>        3306/TCP   6m51s

NAME                         TYPE                                  DATA   AGE
secret/default-token-55rs4   kubernetes.io/service-account-token   3      11m
secret/mysql-secret          Opaque                                1      7m43s
Check if mysql is accessible using the headless service.
kubectl run -n mumbai -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
mysql>
mysql> exit
Bye