How to set up the cluster

Prerequisites

This setup currently works with Kubernetes 1.14 and above. Earlier versions of Kubernetes might work, but are not guaranteed.

Sample multi-node Vagrant setup

To test this tool, you can create a 3-node Vagrant setup. This tutorial uses libvirt, but you can use any hypervisor you are familiar with.

Install vagrant

Follow instructions in the Vagrant docs

Or, follow our detailed steps

After vagrant up --provider=libvirt completes, you have a 3-node cluster up and running. Each node has 4 vCPUs, 8 GB of memory, two 10 GB disks, and one additional private network. Customize the setup using environment variables, e.g. NODES=2 MEMORY=16384 CPUS=8 vagrant up --provider=libvirt

To log in to the master node and change to this directory:

vagrant ssh clr-01
cd clr-k8s-examples

Setup the nodes in the cluster

Run setup_system.sh once on every node (master and workers) to ensure Kubernetes works on it.

This script ensures the following:

  • Installs the bundles Clear Linux needs to support Kubernetes, CRI-O and Kata
  • Customizes the system to ensure correct defaults are set up (IP forwarding, swap off, ...)
  • Ensures all the dependencies (kernel modules) are loaded on boot

NOTE: This step is done automatically if using vagrant. The setup_system.sh script uses the runtime specified in the RUNNER environment variable and defaults to crio. To use the containerd runtime, set the RUNNER environment variable to containerd.
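The runtime selection can be sketched in plain shell. This is illustrative only: it assumes setup_system.sh uses the standard default-expansion shown here to fall back to crio when RUNNER is unset.

```shell
# Fall back to crio when RUNNER is not set (standard shell default-expansion).
RUNNER="${RUNNER:-crio}"
echo "runtime: ${RUNNER}"
```

To select containerd instead, run RUNNER=containerd ./setup_system.sh on the node.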

In the case of vagrant, if you want to spin up VMs using environment variables different from those declared in setup_system.sh, specify them when running vagrant up. E.g., RUNNER=containerd vagrant up

Specify a version of Clear Linux

To use a particular version of Clear Linux, set the CLRK8S_CLR_VER environment variable to the desired version before running setup_system.sh (e.g. CLRK8S_CLR_VER=31400 ./setup_system.sh).

Configuration for high numbers of pods per node

To enable running more than 110 pods per node, set the environment variable HIGH_POD_COUNT to any non-empty value.

NOTE: Use this configuration when utilizing the metrics tooling in this repo.
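The "any non-empty value" check can be sketched as plain shell; this is illustrative, and the actual test inside the scripts may differ.

```shell
# -n is true for any non-empty string, so any value enables the feature.
HIGH_POD_COUNT=1
if [ -n "${HIGH_POD_COUNT}" ]; then
  echo "high pod count enabled"
fi
```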

Enabling experimental firecracker support

EXPERIMENTAL: Optionally run setup_kata_firecracker.sh to use the Firecracker VMM with Kata.

The Firecracker setup switches to a sparse-file-backed loop device for devicemapper storage. This should not be used in production.

NOTE: This step is done automatically if using vagrant.

Bring up the master

Run create_stack.sh on the master node. This sets up the master and uses the kubelet configuration in kubeadm.yaml to propagate cluster-wide kubelet settings to all workers. Customize it if you need to set other cluster-wide properties.

There are different flavors to install; run ./create_stack.sh help for more information.

NOTE: Before running the create_stack.sh script, export any environment variables you need to change. By default, CLRK8S_CNI is canal and CLRK8S_RUNNER is crio. Cilium is tested only with the Vagrant setup.

# default shows help
./create_stack.sh <subcommand>
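For example, the documented defaults can be exported explicitly before the run. The values shown are the defaults named above; the fallback syntax is standard shell.

```shell
# canal and crio are the documented defaults for these variables.
export CLRK8S_CNI="${CLRK8S_CNI:-canal}"
export CLRK8S_RUNNER="${CLRK8S_RUNNER:-crio}"
echo "${CLRK8S_CNI} ${CLRK8S_RUNNER}"
```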

As with setup_system.sh, set the environment variable HIGH_POD_COUNT to any non-empty value to enable running more than 110 pods per node.

Join Workers to the cluster

kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash <hash> --cri-socket=/run/crio/crio.sock

Note: Remember to append --cri-socket=/run/crio/crio.sock to the join command generated by the master.

On workers, just use the join command that the master prints. There is nothing else you need to run on the workers; all other Kubernetes customizations are pushed from the master via the values set in the kubeadm.yaml file.

So if you want to customize the kubelet on the master or the workers (things like resource reservations, etc.), update this file before creating the cluster. The master will push this configuration automatically to every worker node that joins.
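As a sketch, a kubelet resource reservation in kubeadm.yaml could look like the fragment below. The values are illustrative, and the KubeletConfiguration API shown is upstream Kubernetes, not specific to this repo.

```yaml
# Illustrative fragment: reserve resources for system daemons on every node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: 500m
  memory: 512Mi
```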

Running Kata Workloads

The cluster supports Kata out of the box via a runtime class. Clear Linux also sets up Kata automatically on all nodes, so running a workload with its runtime class set to "kata" launches the pod/Deployment with Kata.

An example is

kubectl apply -f tests/deploy-svc-ing/test-deploy-kata-qemu.yaml
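For reference, a minimal manifest using the runtime class might look like this. The pod name and image are hypothetical, not taken from this repo; only the runtimeClassName field is what selects Kata.

```yaml
# Hypothetical pod: runtimeClassName selects the Kata runtime class.
apiVersion: v1
kind: Pod
metadata:
  name: kata-example
spec:
  runtimeClassName: kata
  containers:
    - name: app
      image: nginx
```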

Running Kata Workloads with Firecracker

EXPERIMENTAL: If the Firecracker setup has been enabled, setting the runtime class to "kata-fc" launches the pod/Deployment with Firecracker as the isolation mechanism for Kata.

An example is

kubectl apply -f tests/deploy-svc-ing/test-deploy-kata-fc.yaml

Making Kata the default runtime using admission controller

If you want to run a cluster where Kata is used by default (except for workloads known for sure not to work with Kata) via an admission webhook and the sample admission controller, follow the admit-kata README.md.

Accessing control plane services

Pre-req

You need the cluster credentials on the computer from which you will access the control plane services. If they are not under $HOME/.kube, set the KUBECONFIG environment variable so kubectl can find them.
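For example (the path below is a placeholder for wherever you copied the cluster's admin kubeconfig, not a file this repo creates):

```shell
# Placeholder path: substitute the location of your copied kubeconfig.
export KUBECONFIG="${HOME}/clr-k8s-admin.conf"
echo "${KUBECONFIG}"
```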

Dashboard

kubectl proxy # starts serving on 127.0.0.1:8001

The dashboard is available at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

Kibana

Start the proxy as above. Kibana is available at http://localhost:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana

Grafana

kubectl -n monitoring port-forward svc/grafana 3000

Grafana is available at http://localhost:3000. The default credentials are admin/admin. Upon first login you will be asked to choose a new password.

Cleaning up the cluster (Hard reset to a clean state)

Run reset_stack.sh on all the nodes.
