cha2ranga/k8s-installation

This README provides instructions on manually setting up a Kubernetes cluster. It also covers how to use MetalLB for load balancing and the Kubernetes Dashboard for monitoring and managing the cluster. The goal is to give readers a basic understanding of Kubernetes and the tools to create their own playground for testing and simulating scenarios.

Activity Plan

[Diagram: activity workflow]

Create a Kubernetes cluster with a @Rocky Linux minimal installation in a vSphere environment.

This cluster consists of a single master node and two worker nodes. We will use @kubeadm to initialize the cluster.

In a later phase, we will add @metallb for a load balancer IP range and the @dashboard for a GUI.

The basic idea here is to map the fundamental container components, namely compute, network, and storage (in a later phase), together.

Compute - CRI (Container Runtime Interface): @containerd

Network - CNI (Container Network Interface): @calico

Storage - CSI (Container Storage Interface): @Dell PowerStore (future blog)

Network Diagram

[Diagram: network layout]

VMs Installation

Set up the VMs with a minimal installation (2 vCPU, 4 GB memory, 30 GB disk). Once the installation is finished, set up static IPs (preferred) for all three VMs. Then upgrade the OS on all three VMs to the latest patch version.

yum install -y wget
yum update -y && reboot

Once the nodes are back online, set up the /etc/hosts file as follows. If you have proper DNS in your environment, feel free to use it instead.

Since we are following a non-production deployment, add the host file entries:

 cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.0.201    master
172.18.0.202    worker1
172.18.0.203    worker2
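If the hostnames were not already set during installation, you can set them to match the host file entries and confirm that each name resolves (an optional check; the names and addresses below are the ones used throughout this guide):

# run the matching line on each node
hostnamectl set-hostname master     # on 172.18.0.201
hostnamectl set-hostname worker1    # on 172.18.0.202
hostnamectl set-hostname worker2    # on 172.18.0.203

# quick resolution check from any node
getent hosts master worker1 worker2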

Configure passwordless SSH

Configure ssh-key files

On the master node:

ssh-keygen

[Screenshot: ssh-keygen output]

Once the key is ready, copy it to worker1 and worker2.

ssh-copy-id root@worker1
ssh-copy-id root@worker2

[Screenshot: ssh-copy-id output]

Now, from the master node, you will be able to SSH into both worker nodes without a password.
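A quick check that passwordless login works (each command should print the worker's hostname without prompting for a password):

ssh root@worker1 hostname
ssh root@worker2 hostname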

Run the four steps below on all three VMs.

01 - VM preparation for k8s installation

There are multiple parameters that need to be configured before we start the k8s installation. Here we are going to use @containerd as the CRI.

The rest of the prerequisites, such as swap configuration, additional packages, the firewall, and additional kernel modules, will be configured using basic bash scripts.

If you want a specific version of k8s, manually edit the scripts and define the version number. Otherwise, the installation will follow the latest release.
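For orientation, the node preparation these scripts automate typically looks like the sketch below (a hedged outline of the standard kubeadm prerequisites, not the exact contents of the repo's scripts):

# disable swap (required by kubelet)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# load kernel modules used by containerd and Kubernetes networking
modprobe overlay
modprobe br_netfilter

# make bridged traffic visible to iptables and enable IP forwarding
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system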

Create a directory and download scripts

cd ~ && mkdir k8s_installation && cd k8s_installation
wget https://raw.githubusercontent.com/cha2ranga/k8s-installation/main/scripts/1_k8s_install_part1.bash
wget https://raw.githubusercontent.com/cha2ranga/k8s-installation/main/scripts/2_k8s_install_part2.bash
chmod +x 1_k8s_install_part1.bash
chmod +x 2_k8s_install_part2.bash

02 - Run Script 1_k8s_install_part1.bash

Script 1 will adjust/install the firewall, additional kernel modules, the @containerd container runtime, etc.

./1_k8s_install_part1.bash

The script will automatically adjust the /etc/containerd/config.toml config, changing the cgroup driver setting from "SystemdCgroup = false" to "SystemdCgroup = true". This enables the systemd cgroup driver for the containerd container runtime.

[Screenshot: containerd configuration]
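The script handles this change for you; if you ever need to make it by hand, one common approach (assuming a default config.toml generated by containerd) is:

# generate a default config, switch the runc cgroup driver to systemd, and restart
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd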

03 - Run Script 2_k8s_install_part2.bash

Script 2 will install/configure kubelet, kubeadm, kubectl, and additional packages like multipath.

./2_k8s_install_part2.bash

04 - Configure socket path for containerd

Once you properly configure the containerd socket path, you will be able to list the containers running on each individual node.

crictl config runtime-endpoint unix:///run/containerd/containerd.sock
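To confirm the endpoint is picked up, you can list containers right away (the list may still be empty on a node that has not joined a cluster yet):

crictl ps -a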

Now the prerequisites are complete.

Let's create a Kubernetes cluster using the @kubeadm tool.

Use kubeadm to Create the Kubernetes Cluster

Run the commands below from your master node.

We are going to use the following CIDR for the container (pod) network: 10.244.0.0/16.

It's always better to perform a dry run before the actual kubeadm init process. You can do the dry run as follows:

kubeadm init --pod-network-cidr=10.244.0.0/16 --dry-run

[Screenshot: kubeadm dry run output]

Once you verify there are no errors, you can run "kubeadm init":

kubeadm init --pod-network-cidr=10.244.0.0/16

[Screenshot: kubeadm init output]
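If the master node has more than one network interface, it can also help to pin the API server to the management address used in /etc/hosts (an optional variation; 172.18.0.201 is the master IP from this guide):

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.18.0.201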

If you open another SSH session to your master node and list the running containers, you will see the initial containers being created:

watch -n 1 crictl ps

[Screenshot: running containers]

Copy the Kubernetes config file to the .kube directory:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, you can export the config file as follows:

export KUBECONFIG=/etc/kubernetes/admin.conf

[Screenshot: kubeconfig setup]
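Note that the export only lasts for the current shell session; to keep it across logins, you could append it to the root user's shell profile (a convenience step, not required if you copied the file to ~/.kube/config):

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc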

After kubeadm has initialized the master node, you will see the base containers up and running on it:

crictl ps

[Screenshot: crictl ps output]

Now you can enable pod network communication by installing a container network plugin. Here we are using the @calico manifest file.

As explained in the "kubeadm init" output, we now need to apply the CNI and then join the worker nodes to the cluster using the join token.

[Screenshot: kubeadm init output with join instructions]

Download the YAML manifest for the Calico CNI:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml -O

Then apply the CNI:

kubectl apply -f calico.yaml

[Screenshot: Calico apply output]
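You can watch the Calico pods come up before joining the workers (the k8s-app label below is the one the upstream manifest normally uses; adjust it if your version differs):

kubectl -n kube-system get pods -l k8s-app=calico-node -w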

Join Worker Nodes to cluster (Run on worker nodes)

Get the token and use it to join the worker nodes to the cluster.

kubeadm join 172.xx.xx.xx:6443 --token z1v1jp.pmxxxxxxxxxxxcmqt3 \
        --discovery-token-ca-cert-hash sha256:0924f8614xxxxxxxxxxxxxxxxxx56a52a1bf5a2b1fbc575e8

In case you missed that token, you can re-create it using the following command:

kubeadm token create --print-join-command

Go to each worker node and join it to the cluster.

[Screenshot: worker node join output]
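Alternatively, since passwordless SSH from the master was configured earlier, you can push the join command to each worker directly from the master node (a convenience sketch; adjust the hostnames if yours differ):

JOIN_CMD=$(kubeadm token create --print-join-command)
ssh root@worker1 "$JOIN_CMD"
ssh root@worker2 "$JOIN_CMD"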

Now you can go back to the master node and check the status:

kubectl get nodes
kubectl get nodes -o wide

[Screenshot: kubectl get nodes output]

Use the following commands to set up autocompletion and an alias:

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
alias k=kubectl
complete -o default -F __start_kubectl k

MetalLB Installation (LoadBalancer)

Here we are going to use MetalLB in L2 mode, with the IP range 172.18.0.220-172.18.0.229.

Refer to this URL for the @metallb manifest installation.

Apply the manifest:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

The manifest will create a "metallb-system" namespace.
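You can watch the MetalLB controller and speaker pods come up before continuing (these are the pods the upstream manifest normally creates):

kubectl -n metallb-system get pods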

Now you can refer to the @L2 configuration.

First, create an IP address pool for your load balancer. Add the content below to your l2ip_pool.yml file:

touch l2ip_pool.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.0.220-172.18.0.229

Then create the IP pool:

kubectl apply -f l2ip_pool.yml
ipaddresspool.metallb.io/first-pool created

Verify the load balancer IP pool:

kubectl -n metallb-system get ipaddresspools.metallb.io
NAME         AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
first-pool   true          false             ["172.18.0.220-172.18.0.229"]

Let's advertise the IP pool. First, create "l2adv.yml":

touch l2adv.yml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

Then apply the advertisement:

kubectl create -f l2adv.yml
l2advertisement.metallb.io/example created

Verify the advertised IP pool:

kubectl -n metallb-system get l2advertisements.metallb.io
NAME      IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
example   ["first-pool"]

Create a quick nginx deployment with two replicas:

kubectl create deployment web --image nginx --replicas=2

Now you can expose the sample web deployment through a LoadBalancer service:

kubectl expose deployment web --port=80 --name=websvc --type=LoadBalancer

Verify:

kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>         443/TCP        143m
websvc       LoadBalancer   10.100.238.200   172.18.0.220   80:30427/TCP   8s
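A quick way to confirm the assigned EXTERNAL-IP is actually serving traffic (using the address from the output above) is to fetch the nginx default page headers:

curl -I http://172.18.0.220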

Metrics Server Installation

The Kubernetes Metrics Server is a cluster-wide aggregator of resource usage data.

Even though the installation is simple, it often fails due to certificate and other configuration errors. There are additional parameters that need to be added based on the documentation.

Here is how to work through the common challenges with the @Metrics-server installation.

1. Create a config map for the "front-proxy" certificate:

kubectl -n kube-system create configmap front-proxy-ca --from-file=front-proxy-ca.crt=/etc/kubernetes/pki/front-proxy-ca.crt -o yaml | kubectl -n kube-system replace configmap front-proxy-ca -f -

output: configmap/front-proxy-ca replaced

Verify:

kubectl -n kube-system get cm

2. Now you can apply the metrics-server manifest file:

wget https://raw.githubusercontent.com/cha2ranga/k8s-installation/main/metrics-server/components_custom.yaml
kubectl apply -f components_custom.yaml

3. Now you can run the following commands to verify the metrics server. Note: wait at least 30 seconds.

kubectl top nodes
kubectl top pods

[Screenshot: kubectl top output]

Kubernetes Dashboard Installation

Let's enable the @kubernetes dashboard for our cluster. In this case, we will use a load balancer IP to publish the dashboard service via HTTPS. This is not recommended for a production installation; however, since this is a demo setup, we can still expose the dashboard service over a load balancer IP.

[Diagram: Kubernetes Dashboard access]

You can apply the following manifest file:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

You can list the newly created pods in the "kubernetes-dashboard" namespace:

kubectl -n kubernetes-dashboard get pods -o wide

By default, the Kubernetes dashboard uses a ClusterIP for its service.

kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.101.138.205   <none>        8000/TCP   1m
kubernetes-dashboard        ClusterIP   10.97.185.20     <none>        443/TCP    1m

Since we have configured MetalLB, we can use a load balancer IP for the Kubernetes dashboard.

kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard

Set the service type from ClusterIP to LoadBalancer, as shown in the screenshot. [Screenshot: edit service]

After editing the file, save and exit:

:wq
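If you prefer a non-interactive change, the same edit can be applied with a patch (this assumes the default service name created by the recommended manifest):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"LoadBalancer"}}'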

Now you can verify the service status.

kubectl -n kubernetes-dashboard get svc

The output looks as follows:

NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP      10.101.138.205   <none>        8000/TCP        1h
kubernetes-dashboard        LoadBalancer   10.97.143.73     172.27.1.61   443:31058/TCP   14s

Now we can access the services over the load balancer IP address, in this case https://172.27.1.61

[Screenshot: Kubernetes Dashboard login page]

Instead of a kubeconfig file, we can create admin and read-only users to access the Kubernetes dashboard.

Let's create a service account, cluster role, and cluster role binding.

vim admin-user.yaml

Add the following settings to admin-user.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Create the admin user:

kubectl apply -f admin-user.yaml

Run the following command to generate a token for the admin user:

kubectl -n kubernetes-dashboard create token admin-user
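Tokens created this way are short-lived by default; for a demo you can request a longer validity with the --duration flag (for example, 24 hours):

kubectl -n kubernetes-dashboard create token admin-user --duration=24h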

Now you can use this token to authenticate to the Kubernetes dashboard. [Screenshot: Kubernetes Dashboard login]

Create a read-only user

vim read-only-user.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: read-only-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: read-only-clusterrole
rules:
- apiGroups:
  - ""
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-binding
roleRef:
  kind: ClusterRole
  name: read-only-clusterrole
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: read-only-user
  namespace: kubernetes-dashboard

Apply the manifest:

kubectl apply -f read-only-user.yaml

Run the following command to generate a token for the read-only user:

kubectl -n kubernetes-dashboard create token read-only-user

Example output:

kubectl -n kubernetes-dashboard create token read-only-user
eyJhbGciOiJSUzI1NiIsImtpZCIxxxxxxxxxxxxxxxI3j9cnqMfUbqHlELpFegaPw
