LAB: K8s Cluster Setup with Kubeadm and Containerd

This scenario shows how to create a K8s cluster on virtual machines (Multipass, kubeadm, containerd).

IMPORTANT: When creating a K8s cluster with Ubuntu and Windows nodes, use Ubuntu 20.04 and Windows Server 2019; this combination has been tested and runs stably.

The steps below show an easy way to create a K8s cluster with Ubuntu 20.04 (control plane, workers) and Windows Server 2019.


Table of Contents

1. Creating Cluster With Kubeadm, Containerd
2. Joining New K8s Worker Node to Existing Cluster
3. IP Address Changes on the Kubernetes Master Node
4. Removing the Worker Node from Cluster
5. Installing Docker on an Existing Cluster & Running a Local Registry
6. Pulling Images from the Local Docker Registry and Configuring Containerd
7. NFS Server Connection for Persistent Volume

1. Creating Cluster With Kubeadm, Containerd

1.1 Multipass Installation - Creating VM

  • "Multipass is a mini-cloud on your workstation using native hypervisors of all the supported plaforms (Windows, macOS and Linux)"
  • Fast to install and to use.
  • Link: https://multipass.run/
# creating master, worker1
# -c => cpu, -m => memory, -d => disk space
multipass launch --name master -c 2 -m 2G -d 10G   
multipass launch --name worker1 -c 2 -m 2G -d 10G


# get shell on master 
multipass shell master
# get shell on worker1
multipass shell worker1
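
  • Optionally verify that both VMs are up before continuing (a quick sanity check; multipass list is part of the standard CLI):
multipass list    # both VMs should show State = Running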


1.2 iptables Bridged Traffic Configuration

  • Run on ALL nodes:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
  • Run on ALL nodes:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


  • Run on ALL nodes:
sudo sysctl --system
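
  • Optionally verify that the settings took effect (a sanity check, not part of the original lab; br_netfilter is loaded explicitly in section 1.3):
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables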


This part is optional:
  • Disable swap on the OS. This is required if you run Kubernetes directly on the OS (on-premise) instead of in a VM:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
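  • A quick check that swap is really off (free is standard on Ubuntu; Swap should report 0B):
free -h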
  • If you install your cluster behind a proxy, define the http_proxy, https_proxy, ftp_proxy and no_proxy environment variables in /etc/environment.
  • Add ::6443 and the Master node IP to no_proxy:
export no_proxy="192.168.*.*, ::6443, <yourMasterIP>:6443, 172.24.*.*, 172.25.*.*, 10.*.*.*, localhost, 127.0.0.1"

1.3 Install Containerd

  • Run on ALL nodes:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
  • Run on ALL nodes:
sudo modprobe overlay
sudo modprobe br_netfilter
  • Run on ALL nodes:
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
  • Run on ALL nodes:
sudo sysctl --system


  • Run on ALL nodes:
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install containerd -y
sudo mkdir -p /etc/containerd
sudo su -
containerd config default | tee /etc/containerd/config.toml
exit
sudo systemctl restart containerd
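
  • Optionally confirm that containerd came back up (a sanity check, not part of the original lab):
sudo systemctl status containerd --no-pager
  • Note: if the kubelet later logs cgroup-driver errors, the usual fix is to set SystemdCgroup = true under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] in /etc/containerd/config.toml and restart containerd.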


1.4 Install Kubeadm

  • Run on ALL nodes:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
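
  • The installed versions can be checked with (optional):
kubeadm version
kubectl version --client
kubelet --version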


1.5 Install Kubernetes Cluster

  • Run on ALL nodes:
sudo kubeadm config images pull


  • From worker1, ping the master to learn the master's IP:
ping master


  • Run on Master:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<ip> --control-plane-endpoint=<ip>
# sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=172.31.45.74 --control-plane-endpoint=172.31.45.74


  • After the kubeadm init command finishes, the master node responds with the kubeconfig instructions and the join commands used below.

  • On the Master node run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


  • On the worker node, run the join command (tokens differ in your case; take them from your own kubeadm init response):
sudo kubeadm join 172.31.45.74:6443 --token w7nntd.7t6qg4cd418wzkup \
        --discovery-token-ca-cert-hash sha256:1f03886e5a28fb9716e01794b4a01144f362bf431220f15ca98bed2f5a44e91b
  • If another master node is required, copy the control-plane join line (again, tokens differ in your case; take them from your kubeadm init response):
sudo kubeadm join 172.31.45.74:6443 --token w7nntd.7t6qg4cd418wzkup \
        --discovery-token-ca-cert-hash sha256:1f03886e5a28fb9716e01794b4a01144f362bf431220f15ca98bed2f5a44e91b \
        --control-plane
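
  • Note: joining an additional control-plane node also requires a certificate key; if the init output did not include one, a new key can be generated and uploaded on the existing master (a standard kubeadm command, not shown in the original output):
sudo kubeadm init phase upload-certs --upload-certs   # prints the certificate key to pass via --certificate-key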


  • On the Master node, kubectl get nodes now lists both nodes; they remain NotReady until a network plugin is installed (next section).

1.6 Install Kubernetes Network Infrastructure

  • Calico is used as the network plugin for K8s. Others (Flannel, Weave) could also be used.
  • Run only on the Master; in this example, Calico is used instead of Flannel:
    • Calico:
    kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
    kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
    
    • Flannel:
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
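
  • You can watch the Calico pods come up before checking node status (the calico-system namespace is created by the Tigera operator):
watch kubectl get pods -n calico-system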
    


  • After the network plugin is deployed, the nodes become Ready. Cluster information is queried from the Master node only.


1.6.1 If You Have a Windows Node to Add to Your Cluster:
  • Instead of installing Calico as above, install it this way on the Master node:
# Download Calico CNI
curl https://docs.projectcalico.org/manifests/calico.yaml > calico.yaml
# Apply Calico CNI
kubectl apply -f ./calico.yaml

Run on the Master Node:

# required to add windows node
sudo -i
cd /usr/local/bin/
curl -o calicoctl -O -L  "https://github.com/projectcalico/calicoctl/releases/download/v3.19.1/calicoctl" 
chmod +x calicoctl
exit  
        
# Disable "IPinIP":        
calicoctl get ipPool default-ipv4-ippool  -o yaml > ippool.yaml
nano ippool.yaml  # set ipipMode: Never
calicoctl apply -f ippool.yaml
    
kubectl get felixconfigurations.crd.projectcalico.org default  -o yaml -n kube-system > felixconfig.yaml
nano felixconfig.yaml #Set: "ipipEnabled: false"
kubectl apply -f felixconfig.yaml     

# This is required to prevent Linux nodes from borrowing IP addresses from Windows nodes:
calicoctl ipam configure --strictaffinity=true
sudo reboot

kubectl cluster-info
kubectl get nodes -o wide
ssh <username>@<WindowsIP> 'mkdir c:\k'
scp -r $HOME/.kube/config <username>@<WindowsIP>:/k/        # send the kubeconfig from the master node to the Windows PC; required for the Calico install

(Optional) If You Need a Windows Node: Creating the Windows Node

New-NetFireWallRule -DisplayName "Allow All Traffic" -Direction OutBound -Action Allow  
New-NetFireWallRule -DisplayName "Allow All Traffic" -Direction InBound -Action Allow 

Install-WindowsFeature -Name containers    # install the containers feature (prerequisite for Docker)
Restart-Computer -Force 

# run the Docker CE install script (download install-docker-ce.ps1 beforehand, e.g. from Microsoft's Windows container tooling)
.\install-docker-ce.ps1

Set-Service -Name docker -StartupType 'Automatic' 
 
#Install additional Windows networking components 

Install-WindowsFeature RemoteAccess 
Install-WindowsFeature RSAT-RemoteAccess-PowerShell 
Install-WindowsFeature Routing 
Restart-Computer -Force 
Install-RemoteAccess -VpnType RoutingOnly 
Set-Service -Name RemoteAccess -StartupType 'Automatic' 
start-service RemoteAccess 

# Install Calico
mkdir c:\k 
# Copy the Kubernetes kubeconfig file from the master node (default location: $HOME/.kube/config) to c:\k\config.

Invoke-WebRequest https://docs.projectcalico.org/scripts/install-calico-windows.ps1 -OutFile c:\install-calico-windows.ps1 

c:\install-calico-windows.ps1 -KubeVersion 1.23.5 
 
#Verify that the Calico services are running. 
Get-Service -Name CalicoNode 
Get-Service -Name CalicoFelix 

#Install and start kubelet/kube-proxy service. Execute following PowerShell script/commands. 
C:\CalicoWindows\kubernetes\install-kube-services.ps1 
Start-Service -Name kubelet 
Start-Service -Name kube-proxy 

# Copy kubectl.exe, kubeadm.exe, etc. to the folder below, which is on the PATH:
cp C:\k\*.exe C:\Users\<username>\AppData\Local\Microsoft\WindowsApps 
 
#Test Win node##################################### 
#List all cluster nodes 
kubectl get nodes -o wide     
 
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://<ProxyIP>:3128", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://<ProxyIP>:3128", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("NO_PROXY", "192.168.*.*, ::6443, <MasterNodeIP>:6443, 172.24.*.*, 172.25.*.*, 10.*.*.*, localhost, 127.0.0.1, 0.0.0.0/8", [EnvironmentVariableTarget]::Machine)
Restart-Service docker

2. Joining New K8s Worker Node to Existing Cluster

2.1 Brute-Force Method

  • If we lose the token, the discovery token CA cert hash, and the API server address, we need to recover them to join a new node to the cluster.
  • To add a new node to the existing cluster above, we need the join token, the discovery token CA cert hash, and the API server advertise address. With this information, we build the join command for each node.
  • Run on Master to get certificate and token information:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
kubeadm token list
kubectl cluster-info


  • In this example, the token TTL has 3 hours left (tokens normally expire after 24 hours), so we don't need to create a new token.
  • If the token is expired, generate a new one with the command:
sudo kubeadm token create
kubeadm token list
  • Create join command for worker nodes:
kubeadm join \
  <control-plane-host>:<control-plane-port> \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
  • In our case, we run the following command on both workers (worker2, worker3):
sudo kubeadm join 172.31.32.27:6443 --token 39g7sx.v589tv38nxhus74k --discovery-token-ca-cert-hash sha256:1db5d45337803e35e438cdcdd9ff77449fef3272381ee43784626f19c873d356


2.2 Easy Way to Get Join Command

  • Run on the master node:
kubeadm token create --print-join-command 
  • Copy the join command above and paste it on ALL worker nodes.
  • Then verify that the nodes are Ready; run on the master:
kubectl get nodes


3. IP Address Changes on the Kubernetes Master Node

  • After restarting the Master node, its IP address may change while the cluster's API server is still configured with the old IP. In that case, you must reconfigure the K8s cluster with the new IP.

  • kubectl commands can no longer reach the API server:


  • If you installed Docker for the local registry, you can remove the exited containers first:
sudo docker rm $(sudo docker ps -a -f status=exited -q)

On Master Node:

sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  • After kubeadm reset, if an error shows that some ports are still in use, kill the owning process with the commands below, then run kubeadm init again:
sudo netstat -lnp | grep <PortNumber>
sudo kill <PID>


  • The init output shows the command to use for joining the cluster:
sudo kubeadm join 172.31.40.125:6443 --token 07vo3z.q2n2qz6bd07ipdnf \
        --discovery-token-ca-cert-hash sha256:46c7dcb092ca091e71ab39bd542e73b90b3f7bdf0c486202b857a678cd9879ba


  • Network Configuration with new IP:
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml


On Worker Nodes:

sudo kubeadm reset
sudo kubeadm join 172.31.40.125:6443 --token 07vo3z.q2n2qz6bd07ipdnf \
        --discovery-token-ca-cert-hash sha256:46c7dcb092ca091e71ab39bd542e73b90b3f7bdf0c486202b857a678cd9879ba


  • On the Master node, verify that worker1 has now joined the cluster:

kubectl get nodes


4. Removing the Worker Node from Cluster

  • Run these commands on the Master node to remove a specific worker node:
kubectl get nodes
kubectl drain worker2
kubectl delete node worker2
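  • If the drain is blocked by DaemonSet-managed pods or pods using local storage, these standard kubectl flags are commonly needed (not shown in the original output):
kubectl drain worker2 --ignore-daemonsets --delete-emptydir-data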


  • Run on the removed node itself (worker2):
sudo kubeadm reset


5. Installing Docker on an Existing Cluster & Running a Local Registry for Storing Local Images

5.1 Installing Docker

  • Run the following commands on the Master node to install Docker:
 sudo apt-get update
 sudo apt-get install ca-certificates curl gnupg lsb-release
 curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
 echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo docker run hello-world

For more information, see: https://docs.docker.com/engine/install/ubuntu/


  • Run on ALL nodes to switch Docker's cgroup driver to systemd (so it matches the kubelet):
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
sudo docker image ls
kubectl get nodes


  • If the nodes sit behind a proxy, configure it for the Docker service:
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://<ProxyIP>:3128"
Environment="HTTPS_PROXY=http://<ProxyIP>:3128"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl show --property=Environment docker
sudo docker run hello-world
  • To use the docker command without sudo:
sudo groupadd docker
sudo usermod -aG docker <non-root-user>
# log out and log back in for this to take effect

5.2 Running Docker Registry

  • Run on Master to pull registry:
sudo docker image pull registry
  • Run a container using the 'registry' image (-p: port binding [hostPort]:[containerPort], -d: detached mode (run in background), -e: set environment variables):
sudo docker container run -d -p 5000:5000 --restart always --name localregistry -e REGISTRY_STORAGE_DELETE_ENABLED=true registry
  • Alternatively, run the registry container with a bind mount (-v) and with validation disabled (REGISTRY_VALIDATION_DISABLED=true) to avoid error 500:
sudo docker run -d -p 5000:5000 --restart=always --name registry -v /home/docker_registry:/var/lib/registry -e REGISTRY_STORAGE_DELETE_ENABLED=true -e REGISTRY_VALIDATION_DISABLED=true -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 registry


  • Open in a browser or run curl to list the registry catalog:
curl http://127.0.0.1:5000/v2/_catalog


6. Pulling Images from the Local Docker Registry and Configuring Containerd

  • In this scenario, docker local registry already runs on the Master node (see Section 5)
  • First, add the insecure registry to /etc/docker/daemon.json on ALL nodes:
sudo nano /etc/docker/daemon.json
# add the "insecure-registries" entry:
{
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries":["192.168.219.64:5000"]
}
sudo systemctl restart docker.service


  • Pull an image from DockerHub, tag it, and push it to the local registry on the master node:
sudo docker image pull nginx:latest
ifconfig                           # to get master IP
sudo docker image tag nginx:latest 192.168.219.64:5000/nginx:latest
sudo docker image push 192.168.219.64:5000/nginx:latest
curl http://192.168.219.64:5000/v2/_catalog
sudo docker image pull 192.168.219.64:5000/nginx:latest
  • Log in with Docker to create a config file, then base64-encode the credentials:
sudo docker login       # this creates /root/.docker/config.json
sudo cat /root/.docker/config.json | base64 -w0   # copy the base64-encoded output
  • Create my-secret.yaml and paste the base64 encoded key:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
  • Create secret. Kubelet uses this secret to pull image:
kubectl create -f my-secret.yaml && kubectl get secrets
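  • Alternatively, the same secret can be created directly from the Docker config file instead of hand-writing the YAML (standard kubectl usage; sudo may be needed to read the root-owned file):
kubectl create secret generic registrypullsecret --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson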
  • Create nginx_pod.yaml. The image name shows where the image is pulled from. In addition, "imagePullSecrets" must name the secret used to pull images from the local Docker registry:
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
    - name: private
      image: 192.168.219.64:5000/nginx:latest
  imagePullSecrets:
    - name: registrypullsecret


  • On ALL nodes, the registry IP and port must be defined:
sudo nano /etc/containerd/config.toml   # when containerd is the runtime; with Docker, add insecure-registries to /etc/docker/daemon.json as on the master
# copy and paste (our IP: 192.168.219.64; replace it with your IP):
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.219.64:5000"]
          endpoint = ["http://192.168.219.64:5000"]
    [plugins."io.containerd.grpc.v1.cri".registry.configs]
      [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.219.64:5000".tls]
        insecure_skip_verify = true
# restart containerd.service
sudo systemctl restart containerd.service
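
  • Optionally verify registry access from a node (crictl ships with the cri-tools package pulled in by kubeadm; the endpoint flag is only needed if crictl is not yet configured):
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull 192.168.219.64:5000/nginx:latest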


  • If the registry IP and port are not defined, you will get this error: "http: server gave HTTP response to HTTPS client".
  • If the pod's status is ImagePullBackOff (Error), it can be inspected with the describe command:
kubectl describe pods my-private-pod


  • On Master:
kubectl apply -f nginx_pod.yaml
kubectl get pods -o wide


7. NFS Server Connection for Persistent Volume

sudo apt install nfs-common
sudo apt install cifs-utils
sudo mkdir /data                                           # create the /data directory under root; the NFS share is mounted here
sudo mount -t nfs <NFSServerIP>:/share /data/              # the /share directory was created when setting up the NFS server
sudo chmod 777 /data                                       # grant permissions to access the mounted shared area
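
  • With the share mounted, a PersistentVolume can reference the NFS export directly. A minimal sketch (the name nfs-pv and the 5Gi capacity are illustrative assumptions):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv              # hypothetical name
spec:
  capacity:
    storage: 5Gi            # assumed size
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFSServerIP>
    path: /share            # the directory exported by the NFS server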
