
CKA-k8s-buildCluster : Building a Kubernetes Cluster with Kubeadm for CKA exam

Create 3 Virtual Machines for your Kubernetes cluster

You have been given the task of quickly spinning up a working Kubernetes cluster with one master and two worker nodes.

Dependencies

You should install VirtualBox and Vagrant before you start.

You can spin up the 3 nodes by creating a Vagrantfile.

Creating a Vagrantfile

Use the text editor of your choice and create a file named Vagrantfile, inserting the code below. The value of N denotes the number of worker nodes in the cluster (the master is defined separately) and can be modified as needed. In the example below, N is set to 2.

IMAGE_NAME = "bento/ubuntu-16.04"
N = 2

Vagrant.configure("2") do |config|
    config.ssh.insert_key = false

    config.vm.provider "virtualbox" do |v|
        v.memory = 1024
        v.cpus = 2
    end

    config.vm.define "k8s-master" do |master|
        master.vm.box = IMAGE_NAME
        master.vm.network "private_network", ip: "192.168.50.10"
        master.vm.hostname = "k8s-master"
        end
    end

    (1..N).each do |i|
        config.vm.define "node-#{i}" do |node|
            node.vm.box = IMAGE_NAME
            node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
            node.vm.hostname = "node-#{i}"
            end
        end
    end

You can create the cluster with:

$ vagrant up

You can delete the cluster with:

$ vagrant destroy
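
Before moving on, it is worth confirming that the three machines actually came up and are reachable. As a quick sanity check (using the machine names defined in the Vagrantfile above), you can run:

$ vagrant status
$ vagrant ssh k8s-master

Type exit to leave the SSH session once you have confirmed the VM responds.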

Install Docker on all three nodes.

  • Do the following on all three nodes:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
sudo apt-mark hold docker-ce
  • Verify that Docker is up and running with:
sudo systemctl status docker

Make sure the Docker service status is active (running)!
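
If you want a smoke test beyond the service status, a throwaway container works well; this assumes each node has outbound internet access to pull the image:

sudo docker run --rm hello-world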

Install Kubeadm, Kubelet, and Kubectl on all three nodes.

  • Install the Kubernetes components by running this on all three nodes:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
sudo apt-mark hold kubelet kubeadm kubectl
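
As a quick sanity check that the pinned versions landed on every node, you can print the installed versions:

kubeadm version
kubectl version --client
kubelet --version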

Bootstrap the cluster on the Kube master node.

  • On the Kube master node, do this:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

That command may take a few minutes to complete.

  • When it is done, set up the local kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Take note that the kubeadm init command printed a long kubeadm join command to the screen. You will need that kubeadm join command in the next step!
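
If you do lose that output, you do not need to re-run kubeadm init; kubeadm can regenerate a join command (with a fresh token) on the master node. This is an optional convenience, not part of the original steps:

sudo kubeadm token create --print-join-command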

  • Run the following command on the Kube master node to verify it is up and running:
kubectl version

This command should return both a Client Version and a Server Version.
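
You can also take a first look at the control plane pods that kubeadm started. At this point the DNS pods will typically sit in Pending because no pod network is installed yet; that is addressed in the flannel step below:

kubectl get pods -n kube-system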

Join the two Kube worker nodes to the cluster.

  • Copy the kubeadm join command that was printed by the kubeadm init command earlier, with the token and hash. Run this command on both worker nodes, but make sure you add sudo in front of it:
sudo kubeadm join $some_ip:6443 --token $some_token --discovery-token-ca-cert-hash $some_hash
  • Now, on the Kube master node, make sure your nodes joined the cluster successfully:
kubectl get nodes

Verify that all three of your nodes are listed. It will look something like this:

NAME            STATUS     ROLES    AGE   VERSION
ip-10-0-1-101   NotReady   master   30s   v1.12.2
ip-10-0-1-102   NotReady   <none>   8s    v1.12.2
ip-10-0-1-103   NotReady   <none>   5s    v1.12.2

Note that the nodes are expected to be in the NotReady state for now.

Set up cluster networking with flannel.

  • Turn on iptables bridge calls on all three nodes:
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
  • Next, run this only on the Kube master node:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Now flannel is installed! Make sure it is working by checking the node status again:

kubectl get nodes

After a short time, all three nodes should be in the Ready state. If they are not all Ready the first time you run kubectl get nodes, wait a few moments and try again. It should look something like this:

NAME            STATUS   ROLES    AGE   VERSION
ip-10-0-1-101   Ready    master   85s   v1.12.2
ip-10-0-1-102   Ready    <none>   63s   v1.12.2
ip-10-0-1-103   Ready    <none>   60s   v1.12.2
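
Beyond the node status, you can optionally confirm that the flannel pods themselves are healthy. Pod names will vary, but you should see one flannel pod per node in the Running state:

kubectl get pods -n kube-system -o wide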