3-Node kind cluster with Cilium CNI + Hubble enabled
Cilium supports two methods of installation:
The CLI tool makes it easy to get started with Cilium, especially when you’re first learning about it. It uses the Kubernetes API directly to examine the cluster corresponding to an existing kubectl context and choose appropriate install options for the Kubernetes implementation detected. We’ll be using the Cilium CLI install method for most of the labs in the course.
The Helm chart method is meant for advanced installation and production environments where you want granular control of your Cilium installation. It requires you to manually select the best datapath and IPAM mode for your particular Kubernetes environment. You can learn more about the Helm chart installation method in the Cilium documentation resources. We’ll use the Helm chart install method in a later chapter when getting familiar with some advanced capabilities.
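For comparison, a minimal Helm-based install might look like the following sketch (the repository URL is the official Cilium chart repo; the value flags are illustrative assumptions chosen to mirror the Hubble setup used later, not a production-ready configuration):

```shell
# Add the official Cilium Helm repository and install the chart.
# NOTE: the --set values below are illustrative assumptions; consult the
# Cilium Helm documentation to pick the datapath and IPAM options that
# fit your environment.
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
```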
We will need a Kubernetes cluster appropriately configured and ready for an external CNI to be installed. We will use a kind cluster for this. Install kind:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.19.0/kind-linux-amd64
sudo chmod +x ./kind
sudo mv ./kind /usr/local/bin
To verify:
which kind
kind version
Note: kind version >= v0.7.0 does not require kubectl, but we will not be able to perform some of the operations without it. Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
To verify:
kubectl version --client
Install Docker:
sudo apt-get update
sudo apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker ${USER}
su - ${USER}
sudo chmod 666 /var/run/docker.sock
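Before creating the cluster, it is worth confirming Docker is usable by your user (hello-world is Docker's standard smoke-test image):

```shell
docker version          # client and server versions should both print
docker run --rm hello-world   # pulls and runs Docker's smoke-test container
```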
Here is the YAML configuration file for a 3-node kind cluster with default CNI disabled. Save this locally to your workstation as kind-config.yaml
with the contents:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
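The same configuration can optionally pin the node image so the Kubernetes version is reproducible across machines (the tag below is an assumption; match it to the node image tags published for your kind release):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.27.3   # example tag; check your kind release notes
  - role: worker
  - role: worker
networking:
  disableDefaultCNI: true
```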
Now create a new kind cluster using this configuration:
kind create cluster --config=kind-config.yaml
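kind can list the clusters it manages at any time, which is handy if you end up with more than one:

```shell
kind get clusters   # prints one cluster name per line, e.g. "kind"
```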
Kind will create the cluster and configure an associated kubectl context. Confirm your new kind cluster is the default kubectl context:
kubectl config current-context
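With the default cluster name (kind), the context created is named kind-kind. If you have several contexts, you can select it explicitly:

```shell
kubectl config use-context kind-kind   # context name is "kind-" + cluster name
```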
Now you should be able to use kubectl and the Cilium CLI tool and interact with your newly minted kind cluster.
kubectl get nodes
Note: Because you have created the cluster without a default CNI, the Kubernetes nodes are in a NotReady state.
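You can confirm that the missing CNI is the cause by reading each node's Ready condition message (the jsonpath filter below queries the standard node status fields):

```shell
# Print each node name with its Ready condition message; without a CNI,
# the message reports an uninitialized network plugin.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].message}{"\n"}{end}'
```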
Download and install the Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
To verify:
cilium version
We’ll be installing the default Cilium image into the Kubernetes cluster we’ve prepared.
cilium install
To verify:
cilium status --wait
cilium hubble enable --ui
To verify:
cilium status
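With Hubble enabled, the Cilium CLI can port-forward the Hubble UI service and open it in your browser:

```shell
cilium hubble ui   # port-forwards the Hubble UI service and opens it locally
```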
The Cilium CLI tool also provides a command to install a set of connectivity tests in a dedicated Kubernetes namespace. We can run these tests to validate that the Cilium install is fully operational:
cilium connectivity test --request-timeout 30s --connect-timeout 10s
Note: The connectivity tests require at least two worker nodes to successfully deploy in a cluster. The connectivity test pods will not be scheduled on nodes operating in the control-plane role. If you did not provision your cluster with two worker nodes, the connectivity test command may stall waiting for the test environment deployments to complete.
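The connectivity tests leave their deployments behind in a dedicated namespace (commonly cilium-test; check kubectl get namespaces if your CLI version names it differently). To clean up after the run:

```shell
kubectl delete namespace cilium-test   # namespace name may vary by CLI version
```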
With Cilium now installed, we can use kubectl to confirm that the nodes are now ready and the required Cilium operational components are present in the cluster:
kubectl get nodes
kubectl get daemonsets --all-namespaces
kubectl get deployments --all-namespaces
The cilium daemonset is running on all 3 nodes in the cluster, and the cilium-operator
deployment is running on a single node.
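You can also inspect the Cilium pods directly; the agent pods carry the k8s-app=cilium label and the operator pods the name=cilium-operator label in kube-system:

```shell
kubectl -n kube-system get pods -l k8s-app=cilium         # one agent pod per node
kubectl -n kube-system get pods -l name=cilium-operator   # operator pod(s)
```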
Cilium is now successfully installed.