
Kubernetes Ecosystem installation on Ubuntu 22.04 server


Kubernetes setup with kubeadm

Step 1: Disable Swap Space

Disable swap space on the Linux server with the following command:

sudo swapoff -a

Run the following command to open the fstab file under the /etc directory:

sudo vim /etc/fstab

It should look like the following snippet.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-bQpQVGAVGOn3j9xMxnZtcLpXEXrvKqaLo1oB32yyvQaUf4ch14r3Z0QtG4j0Tg1E / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/cdaba124-8b1f-4579-9d2f-bd82f921cf66 /boot ext4 defaults 0 1
/swap.img       none    swap    sw      0       0

Comment out the swap line by prefixing it with # so that it reads:

#/swap.img       none    swap    sw      0       0

The final file should look like the following snippet.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-yiV1ENGt384BlUdDC6Ie02TJ1qT3E9wmNe2WShFPo5RPeoEtPyZdqMUUqH0PRQ1Q / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/a5b2d4a1-daee-43b4-96c1-9ac203582eea /boot ext4 defaults 0 1
#/swap.img      none    swap    sw      0       0

Check if swap has been disabled by running the free command:

free -h

The output should look like the snippet below:

master@master:~$ free -h
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       345Mi       2.9Gi       1.0Mi       537Mi       3.2Gi
Swap:             0B          0B          0B

To apply the fstab change and confirm that swap stays disabled, run the following commands:

sudo mount -a
free -h
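
If you prefer not to edit /etc/fstab by hand, the swap entry can also be commented out with a one-line sed command. This is a sketch that assumes the swap line contains a whitespace-separated swap field, as in the snippet above:

# Comment out any fstab line whose filesystem type field is "swap" (idempotent)
sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab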

Step 2

Add the kernel settings required by Kubernetes to sysctl:

sudo tee /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

The terminal should look like the following snippet:

master@master:~$ sudo tee /etc/sysctl.d/kubernetes.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
[sudo] password for master: 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
master@master:~$

Reload sysctl to apply the settings:

sudo sysctl --system
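
To spot-check a value after reloading, you can query it directly; it should print 1. (The net.bridge.* keys only become visible once the br_netfilter module is loaded in Step 4.)

sysctl net.ipv4.ip_forward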

Step 3

Install curl, apt-transport-https, and ca-certificates with the following command:

sudo apt-get install curl apt-transport-https ca-certificates

Download the Google Cloud public signing key:

sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
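
On some installations the /etc/apt/keyrings directory does not exist yet; if the curl command above complains about it, create the directory first and re-run the download:

sudo mkdir -p /etc/apt/keyrings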

Add the Kubernetes apt repository:

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the package index with the following command:

sudo apt-get update

Then install kubelet, kubeadm, and kubectl:

sudo apt-get install -y kubelet kubeadm kubectl

Then pin their versions so apt does not upgrade them automatically:

sudo apt-mark hold kubelet kubeadm kubectl
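
To confirm the tools were installed and pinned, you can check their versions and the hold status:

kubeadm version
kubectl version --client
apt-mark showhold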

Step 4: Installing Containerd

Run the command below to ensure the required kernel modules are loaded on boot:

cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

The terminal should show the snippet below:

master@master:~$ cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
> overlay
> br_netfilter
> EOF
overlay
br_netfilter
master@master:~$ 

Load the kernel modules now with the following commands:

sudo modprobe overlay
sudo modprobe br_netfilter
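
You can verify that both modules are loaded with lsmod; each command should print a matching line:

lsmod | grep overlay
lsmod | grep br_netfilter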

Add the Docker repository and install containerd:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt install -y containerd.io

Configure containerd and start service

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo vim /etc/containerd/config.toml

In the config.toml file, find the SystemdCgroup option under the runc runtime settings and change it from SystemdCgroup = false to SystemdCgroup = true.
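
If you would rather not edit the file interactively, the same change can be made with a sed one-liner. This is a sketch assuming the default config generated above, where SystemdCgroup appears exactly once:

# Switch containerd's runc runtime to the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml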

Restart containerd

sudo systemctl restart containerd
sudo systemctl enable containerd
systemctl status containerd
sudo systemctl enable kubelet

Step 5

Pull the Kubernetes container images through containerd's CRI socket:

sudo kubeadm config images pull --cri-socket /run/containerd/containerd.sock

Terminal output should show the following snippet:

master@master:~$ sudo kubeadm config images pull --cri-socket /run/containerd/containerd.sock
W0205 05:59:11.447837    6213 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.1
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.1
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.1
[config/images] Pulled registry.k8s.io/kube-proxy:v1.26.1
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.6-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3

Initialize the control plane with kubeadm:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

The output should look like the following snippet:

master@master:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.0.148]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.0.148 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.0.148 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.001959 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: oklou6.ux9s09zdfc4huias
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.148:6443 --token oklou6.ux9s09zdfc4huias \
	--discovery-token-ca-cert-hash sha256:0ac1e5c4696fad0a9c875c070f9f15264eb93a582a93f535c41a2fab3b67955b 
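
Note that the bootstrap token in the join command above expires after 24 hours by default. If you add worker nodes later, generate a fresh join command on the control-plane node with:

sudo kubeadm token create --print-join-command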

Now run the following as a regular user to start using your cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

To check the cluster status, run the following command:

kubectl cluster-info

The output should look like the following:

master@master:~$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.148:6443
CoreDNS is running at https://192.168.0.148:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
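
At this point you can also list the nodes. Until a Pod network add-on is installed (Step 6), the control-plane node will typically show a status of NotReady:

kubectl get nodes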

Step 6

Download the Calico manifest with the following command:

wget https://docs.projectcalico.org/manifests/calico.yaml

In this guide we'll use the Calico network plugin, but you can choose any other supported network plugin (Flannel, for example, is a simple way to configure a layer 3 network fabric designed for Kubernetes). Open calico.yaml and find the Pod network IP address range setting, CALICO_IPV4POOL_CIDR, and adjust it if needed so that the Pod network range does not overlap with other networks in your infrastructure. Edit the file with the following command:

vim calico.yaml

As you can see, the value for CALICO_IPV4POOL_CIDR is 192.168.0.0/16, which matches the --pod-network-cidr we passed to kubeadm init, and all Pods will be allocated IPs from that range. Make sure this range does not overlap with any other network in your infrastructure. In this setup nothing needs to change, so exit without modifying the file.
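
If you only want to inspect the setting without opening the editor, a quick check like the following prints the CALICO_IPV4POOL_CIDR entry and the line after it (in some versions of the manifest the entry is commented out, in which case Calico falls back to its default pool):

grep -n -A 1 CALICO_IPV4POOL_CIDR calico.yaml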

Then apply with:

kubectl apply -f calico.yaml

The output should show the following snippet:

master@master:~$ kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

Watch for all the system pods and Calico pods to change to Running. The DNS (CoreDNS) pods stay in Pending until the Pod network is deployed and Running.

kubectl get pods --all-namespaces
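
The same command can be run with --watch to follow the pods as they come up; once the calico-node and coredns pods are Running, the node should report Ready:

kubectl get pods --all-namespaces --watch
kubectl get nodes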