
Lightning Lab 1

Solution to LL-1

  1. Upgrade the current version of kubernetes from 1.26.0 to 1.27.0 exactly using the kubeadm utility.

    There is currently an issue with this lab which requires an extra step. This may be addressed in the near future.

    On controlplane node

    1. Drain node

      kubectl drain controlplane --ignore-daemonsets
      
    2. Upgrade kubeadm

      apt-get update
      apt-mark unhold kubeadm
      apt-get install -y kubeadm=1.27.0-00
      
    3. Plan and apply upgrade

      kubeadm upgrade plan
      kubeadm upgrade apply v1.27.0
      
    4. Remove taints from the controlplane node. This is the issue described above: as part of the upgrade (specifically at 1.26), taints are added to all controlplane nodes. These will prevent the gold-nginx pod from being rescheduled to the controlplane node later on.

      kubectl describe node controlplane | grep -i -A 3 taint
      

      Output:

      Taints:   node-role.kubernetes.io/control-plane:NoSchedule
                node.kubernetes.io/unschedulable:NoSchedule
      

      Let's remove them

      kubectl taint node controlplane node-role.kubernetes.io/control-plane:NoSchedule-
      kubectl taint node controlplane node.kubernetes.io/unschedulable:NoSchedule-
      
    5. Upgrade the kubelet

      apt-mark unhold kubelet
      apt-get install -y kubelet=1.27.0-00
      systemctl daemon-reload
      systemctl restart kubelet
      
    6. Reinstate controlplane node

      kubectl uncordon controlplane
      
    7. Upgrade kubectl

      apt-mark unhold kubectl
      apt-get install -y kubectl=1.27.0-00
      
    8. Re-hold packages

      apt-mark hold kubeadm kubelet kubectl
      
    9. Drain the worker node

      kubectl drain node01 --ignore-daemonsets
      
    10. Go to worker node

      ssh node01
      
    11. Upgrade kubeadm

      apt-get update
      apt-mark unhold kubeadm
      apt-get install -y kubeadm=1.27.0-00
      
    12. Upgrade node

      kubeadm upgrade node
      
    13. Upgrade the kubelet

      apt-mark unhold kubelet
      apt-get install -y kubelet=1.27.0-00
      systemctl daemon-reload
      systemctl restart kubelet
      
    14. Re-hold packages

      apt-mark hold kubeadm kubelet
      
    15. Return to controlplane

      exit
      
    16. Reinstate worker node

      kubectl uncordon node01
      
    17. Verify gold-nginx is scheduled on controlplane node

      kubectl get pods -o wide | grep gold-nginx
      
  2. Print the names of all deployments in the admin2406 namespace in the following format...

    This is a job for the custom-columns output format of kubectl.

    kubectl -n admin2406 get deployment \
      -o custom-columns=DEPLOYMENT:.metadata.name,CONTAINER_IMAGE:.spec.template.spec.containers[].image,READY_REPLICAS:.status.readyReplicas,NAMESPACE:.metadata.namespace \
      --sort-by=.metadata.name > /opt/admin2406_data
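
    Given the column names in the command above, the first line of /opt/admin2406_data should be this header row, with one row per deployment following, sorted by deployment name:

    ```
    DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
    ```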
    
  3. A kubeconfig file called admin.kubeconfig has been created in /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.

    First, let's test this kubeconfig

    kubectl get pods --kubeconfig /root/CKA/admin.kubeconfig
    

    Notice the error message.

    Now look at the default kubeconfig for the correct setting.

    cat ~/.kube/config
    

    Make the correction

    vi /root/CKA/admin.kubeconfig
    

    Test

    kubectl get pods --kubeconfig /root/CKA/admin.kubeconfig
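
    When a kubeconfig fails like this, a good first check is the cluster entry's server: line against the working ~/.kube/config — kubeadm clusters expose kube-apiserver on port 6443 by default, so a mismatched host or port is a likely culprit. A healthy cluster entry looks roughly like this (certificate data elided):

    ```yaml
    clusters:
    - cluster:
        certificate-authority-data: <base64 CA data>
        server: https://controlplane:6443
      name: kubernetes
    ```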
    
  4. Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Next, upgrade the deployment to version 1.17 using a rolling update.
    kubectl create deployment nginx-deploy --image=nginx:1.16
    kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record
    

    You may ignore the deprecation warning.

  5. A new deployment called alpha-mysql has been deployed in the alpha namespace. However, the pods are not running. Troubleshoot and fix the issue.

    The deployment should make use of the persistent volume alpha-pv to be mounted at /var/lib/mysql and should use the environment variable MYSQL_ALLOW_EMPTY_PASSWORD=1 to make use of an empty root password.

    Important: Do not alter the persistent volume.

    Inspect the deployment to check that the environment variable is set. Here I'm using yq, which is like jq but for YAML, so we can view just the section beneath containers in the deployment spec rather than the entire deployment YAML.

    kubectl get deployment -n alpha alpha-mysql -o yaml | yq e .spec.template.spec.containers -
    

    Find out why the deployment does not have minimum availability. We'll have to find out the name of the deployment's pod first, then describe the pod to see the error.

    kubectl get pods -n alpha
    kubectl describe pod -n alpha alpha-mysql-xxxxxxxx-xxxxx
    

    We find that the requested PVC isn't present, so we need to create it. First, examine the persistent volume to find the required values for access modes, capacity (storage), and storage class name.

    kubectl get pv alpha-pv
    

    Now use vi to create a PVC manifest

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-alpha-pvc
      namespace: alpha
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: slow
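
    Apply the manifest with kubectl apply -f and the pod should start once the claim binds to alpha-pv. For reference, the wiring inside the deployment's pod template that consumes the claim looks like the sketch below; the container and volume names here are hypothetical placeholders, but the claim name, mount path, and environment variable follow the question.

    ```yaml
    # Sketch only: container and volume names are placeholders.
    spec:
      containers:
      - name: mysql                    # placeholder name
        image: mysql                   # placeholder image
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        volumeMounts:
        - name: mysql-data             # placeholder volume name
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-alpha-pvc   # the PVC created above
    ```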
  6. Take a backup of etcd at the location /opt/etcd-backup.db on the controlplane node.

    This question is a bit poorly worded. It requires us to make a backup of etcd and store the backup at the given location.

    Note that the certificates we need to authenticate etcdctl are located in /etc/kubernetes/pki/etcd

    ETCDCTL_API='3' etcdctl snapshot save \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      /opt/etcd-backup.db
    

    Whilst we could also pass the argument --endpoints=127.0.0.1:2379, it is not necessary here because we are on the controlplane node, the same host as etcd itself, and the default endpoint is localhost.

  7. Create a pod called secret-1401 in the admin1401 namespace using the busybox image....

    The container within the pod should be called secret-admin and should sleep for 4800 seconds.

    The container should mount a read-only secret volume called secret-volume at the path /etc/secret-volume. The secret being mounted has already been created for you and is called dotfile-secret.

    1. Use imperative command to get a starter manifest

      kubectl run secret-1401 -n admin1401 --image busybox --dry-run=client -o yaml --command -- sleep 4800 > admin.yaml
      
    2. Edit this manifest to add in the details for mounting the secret

      vi admin.yaml
      

      Add in the volume and volume mount sections seen below

      apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: null
        labels:
          run: secret-1401
        name: secret-1401
        namespace: admin1401
      spec:
        volumes:
        - name: secret-volume
          secret:
            secretName: dotfile-secret
        containers:
        - command:
          - sleep
          - "4800"
          image: busybox
          name: secret-admin
          volumeMounts:
          - name: secret-volume
            readOnly: true
            mountPath: /etc/secret-volume
    3. And create the pod

      kubectl create -f admin.yaml