Building a Kubernetes 1.23 Cluster with Kubeadm

  • This lab gives you hands-on practice building a new Kubernetes cluster. You will be given a set of Linux servers, and you will turn them into a functioning cluster, building the skills you need to create your own Kubernetes clusters in the real world.

  • Log in to the lab server using the credentials provided:

 $ ssh user@<PUBLIC_IP_ADDRESS> 

Install Packages

    1. Log in to the Control Plane Node. (Note: The following steps must be performed on all three nodes, or on every node you intend to include in the cluster.)
    1. Create configuration file for containerd:
    $ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
    overlay
    br_netfilter
    EOF
    1. Load modules:
    $ sudo modprobe overlay
    $ sudo modprobe br_netfilter
    1. Set system configurations for Kubernetes networking:
    $ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    1. Apply new settings:
    $ sudo sysctl --system 
    1. Install containerd:
    $ sudo apt-get update && sudo apt-get install -y containerd 
    1. Create default configuration file for containerd:
    $ sudo mkdir -p /etc/containerd 
    1. Generate default containerd configuration and save to the newly created default file:
    $ sudo containerd config default > /etc/containerd/config.toml 
    1. Restart containerd to ensure new configuration file usage:
    $ sudo systemctl restart containerd 
    1. Verify that containerd is running:
    $ sudo systemctl status containerd 
    1. Disable swap:
    $ sudo swapoff -a
    1. Disable swap on startup in /etc/fstab:
    $ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    1. Install dependency packages:
    $ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
    1. Download and add GPG key:
    $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - 
    1. Add Kubernetes to repository list:
    $ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    1. Update package listings:
    $ sudo apt-get update
    1. Install Kubernetes packages (Note: If you get a dpkg lock message, just wait a minute or two before trying the command again):
    $ sudo apt-get install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00
    1. Turn off automatic updates:
    $ sudo apt-mark hold kubelet kubeadm kubectl
    1. Log in to both Worker Nodes and perform the previous steps. (Quick verification sketches for these steps follow this list.)
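
Before moving on to cluster initialization, it can help to confirm that the container runtime prerequisites took effect on each node. The checks below are a suggested verification sketch, not a required part of the lab; exact output will vary by host.

    # Confirm the kernel modules are loaded
    $ lsmod | grep -E 'overlay|br_netfilter'
    # Confirm the sysctl settings were applied
    $ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables
    # Confirm containerd is active and swap is off (swapon prints nothing when no swap is in use)
    $ sudo systemctl is-active containerd
    $ swapon --show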
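
Likewise, a quick sketch for confirming that the Kubernetes packages are installed at the intended version and held back from automatic upgrades (output formats may differ slightly between versions):

    # Confirm the installed versions report 1.23.0
    $ kubeadm version
    $ kubectl version --client
    $ kubelet --version
    # Confirm the packages are held
    $ sudo apt-mark showhold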

Initialize the Cluster

    1. Initialize the Kubernetes cluster on the control plane node using kubeadm (Note: This is only performed on the Control Plane Node):
    $ sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.23.0 
    1. Set kubectl access:
    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
    1. Test access to cluster:
    $ kubectl get nodes
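
Right after initialization, the control plane node typically reports a NotReady status because no pod network add-on has been installed yet; that is expected and is resolved in the next section. If you want to follow along, a simple way to watch the status is:

    # Watch the node status; expect NotReady until the Calico add-on is installed
    $ kubectl get nodes --watch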

Install the Calico Network Add-On

    1. On the Control Plane Node, install Calico Networking:
    $ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml 
    1. Check the status of the control plane node:
    $ kubectl get nodes
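
It can take a minute or two for the Calico images to pull and the pods to start; once they are running, the control plane node should transition to Ready. A quick way to watch the rollout (pod names will carry generated suffixes specific to your cluster):

    # Watch the Calico pods come up in the kube-system namespace
    $ kubectl get pods -n kube-system --watch
    # Once calico-node and calico-kube-controllers are Running, check the node again
    $ kubectl get nodes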

Join the Worker Nodes to the Cluster

    1. On the Control Plane Node, create the token and copy the kubeadm join command (NOTE: The join command can also be found in the output of the kubeadm init command):
    $ kubeadm token create --print-join-command 
    1. On both Worker Nodes, paste the kubeadm join command to join the cluster (its general shape is sketched after this list). Use sudo to run it as root:
    $ sudo kubeadm join ...
    1. On the Control Plane Node, view the cluster status (Note: You may have to wait a few moments for all nodes to become ready):
    $ kubectl get nodes
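
For reference, the join command printed in step 1 generally takes the shape sketched below; the address, token, and hash are placeholders rather than values from this lab, and your actual command should be copied verbatim from the kubeadm output.

    $ sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <TOKEN> \
        --discovery-token-ca-cert-hash sha256:<HASH>

Once both workers have joined, kubectl get nodes on the Control Plane Node should eventually list all three nodes with a STATUS of Ready.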