Inception-of-Things is a 42 school project aimed at deepening our knowledge of technologies from the Kubernetes ecosystem, such as k3s, k3d, Vagrant, and ArgoCD.
This project is divided into 3 parts, one per folder: p1, p2, and p3. All of these parts were completed inside a virtual machine.
The first part introduces us to the use of k3s and vagrant, with the goal of creating a cluster containing a server node and a worker node.
It is therefore necessary to create two VMs from a Vagrantfile:
```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "base"
  config.vm.provider :libvirt do |lv|
    lv.memory = "1024"
    lv.cpus = "1"
  end

  config.vm.define "sleleuS" do |server|
    server.vm.box = "debian/bookworm64"
    server.vm.hostname = "sleleuS"
    server.vm.network "private_network", ip: "192.168.56.110"
    server.vm.provision "shell", path: "scripts/server.sh"
  end

  config.vm.define "sleleuSW" do |worker|
    worker.vm.box = "debian/bookworm64"
    worker.vm.hostname = "sleleuSW"
    worker.vm.network "private_network", ip: "192.168.56.111"
    worker.vm.provision "shell", path: "scripts/worker.sh"
  end
end
```
Each virtual machine launches its own provisioning script, which installs k3s and connects the node to the cluster.

On the server side: the Vagrant user must be told where the configuration file that k3s generates during installation is located, so that `kubectl` can be used without any issues: `echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> /home/vagrant/.profile`. The permissions of this file must also be changed so that it can be accessed, since provisioning runs as root: `K3S_KUBECONFIG_MODE=644`.

Finally, since the flannel service listens on the eth0 network interface by default, the eth1 interface must be specified to obtain a correct internal IP: `INSTALL_K3S_EXEC='--flannel-iface=eth1'`
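Putting these server-side steps together, the provisioning script could look roughly like this (a sketch only; the exact contents of `scripts/server.sh` may differ, and the `--node-ip` flag is an assumption based on the IP defined in the Vagrantfile):

```shell
#!/bin/bash
# Sketch of scripts/server.sh (assumed contents, not the project's exact script)

# Install k3s in server mode, binding flannel to eth1 and making the
# kubeconfig world-readable so the vagrant user can run kubectl
curl -sfL https://get.k3s.io | \
    INSTALL_K3S_EXEC='--flannel-iface=eth1' \
    K3S_KUBECONFIG_MODE=644 \
    sh -s - --node-ip=192.168.56.110

# Point kubectl at the generated kubeconfig for the vagrant user
echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> /home/vagrant/.profile

# Wait for the token, then share it with the worker via the /vagrant synced folder
while [ ! -f /var/lib/rancher/k3s/server/token ]; do sleep 1; done
cp /var/lib/rancher/k3s/server/token /vagrant/token
```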
On the worker side: creating the worker is even simpler, and only requires installing and launching k3s, passing the server's IP and port in the `K3S_URL` environment variable and the token generated by the server in the `K3S_TOKEN` environment variable. By default, this token is generated at the server's creation in the file `/var/lib/rancher/k3s/server/token`. It is therefore enough to wait for this token to be created, then copy it to the `/vagrant` directory, which is shared by default between the different virtual machines launched from a Vagrant box.
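The worker-side provisioning could then be sketched as follows (assumed contents; the real `scripts/worker.sh` may differ):

```shell
#!/bin/bash
# Sketch of scripts/worker.sh -- joins the cluster using the token shared in /vagrant

# Wait for the server to have copied its token into the shared folder
while [ ! -f /vagrant/token ]; do sleep 1; done

# Install k3s in agent mode, pointing it at the server node
curl -sfL https://get.k3s.io | \
    K3S_URL=https://192.168.56.110:6443 \
    K3S_TOKEN="$(cat /vagrant/token)" \
    INSTALL_K3S_EXEC='--flannel-iface=eth1' \
    sh -s -
```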
If everything is configured correctly, it is possible to verify from the server via SSH that the worker is correctly connected with the command `kubectl get nodes -o wide`:
This second part introduces us to the deployment of Kubernetes applications with k3s. From a single virtual machine with k3s installed in server mode, the goal is to deploy 3 web applications following this diagram:
Accessible from the IP 192.168.56.110, the different applications should be reachable depending on the Host header used: app1.com gives access to app1, app2.com to app2, etc. If no matching host is given, requests fall back to application 3.
Application 2 must have 3 replicas. The number of replicas defines the number of pods of the application, each pod being a clone of the site, which in practice allows it to handle more traffic.
Kubernetes can also adjust these replicas automatically based on the site's load, using the `kubectl autoscale` command.
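For instance, an autoscaler for the second application could be created like this (the deployment name and thresholds here are hypothetical, not taken from the project):

```shell
# Scale the app2 deployment between 3 and 6 replicas based on CPU usage
kubectl autoscale deployment app2 --min=3 --max=6 --cpu-percent=70
```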
To accomplish this deployment, the different hosts must first be added to the `/etc/hosts` file so that each name resolves to the server's IP:
```shell
for i in {1..3}; do
  line="192.168.56.110 app$i.com"
  if ! grep -q "$line" /etc/hosts; then
    echo "$line" >> /etc/hosts
    echo "Added HOST $line"
  else
    echo "$line already exists"
  fi
done
```
Then, for deployment, Kubernetes will use several configuration files, including ingress, service, and deployment types.
Following a request's path from the user to the application: the request is first handled by the ingress, which acts as a reverse proxy and defines how our pods are reachable from outside. The service then exposes the port on which the application's pods listen. Finally, the deployment file specifies which image the application is built from, as well as its number of replicas, potential environment variables, and so on.
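As an illustration, a minimal deployment and service pair for the second application might look like the following (the names, image tag, and ports are assumptions based on the description above, not the project's actual manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
spec:
  replicas: 3            # app2 must run 3 clones of the site
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: paulbouwer/hello-kubernetes:1.10   # assumed image/tag
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app2-service
spec:
  selector:
    app: app2
  ports:
  - port: 80
    targetPort: 8080
```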
For example, a basic ingress will take this form:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
```
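The host-based routing described earlier can be expressed with `host` rules and a default backend for the no-match case. A sketch, where the service names are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
  - host: app1.com            # Host header app1.com -> app1
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
  - host: app2.com            # Host header app2.com -> app2
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
  defaultBackend:             # any other host falls back to app3
    service:
      name: app3-service
      port:
        number: 80
```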
It's worth noting that with the use of configuration files, the imperative approach gives way to a declarative one. The configuration files establish an additional layer of abstraction, allowing Kubernetes to set up the services we describe by itself. This simplifies versioning as well as updating services, rather than managing them by hand from the CLI.
Once the services are launched, the applications can be reached with a simple curl, or from a web browser:
For this project, we used the paulbouwer/hello-kubernetes image available on DockerHub.
By connecting via SSH to the VM, we can verify that the services are working as expected with `kubectl get all`:
All seems to be fine!
This time, moving away from Vagrant, we're setting up a small infrastructure taking this form:
We need to set up a k3d cluster featuring an ArgoCD service and an application. ArgoCD will manage the deployment of this application, whose configuration file is located in a public GitHub repository, thus automatically updating the application present in the Kubernetes cluster with every change to the configuration file.
K3d is a wrapper for k3s that lets clusters run inside Docker containers, so Docker must be installed in addition to k3d.
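Creating the cluster then comes down to a single command. In this sketch, the cluster name and port mapping are assumptions (the 8888 mapping matches the access described later):

```shell
# Create a k3d cluster, mapping host port 8888 to the cluster's load balancer
k3d cluster create iot -p "8888:80@loadbalancer"
```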
Once the tools are installed, it will be necessary to create a namespace for argocd and a dev namespace containing the application to be deployed from ArgoCD:
```shell
kubectl create namespace argocd
kubectl create namespace dev
```
It's possible to create the services, controllers, and deployment of ArgoCD from its configuration file available on GitHub:
```shell
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
A configuration file can also be provided to ArgoCD in order to set up an application in a declarative way:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wil
spec:
  project: default
  source:
    repoURL: https://github.com/sleleu/iot-dev-sleleu.git
    targetRevision: HEAD
    path: config
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
```
This configuration file points to the public repo iot-dev-sleleu, which contains a DockerHub image proposed by the school's subject: https://hub.docker.com/r/wil42/playground. This image has two versions, which makes it possible to test the automatic update of the application after a commit on the GitHub repo.
Access to ArgoCD has been configured via an ingress, to reach it from the host `argocd-server.com:8888`, while the application is accessed by path:
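Such an ingress might be sketched as follows (assuming the `argocd-server` service's plain-HTTP port is used and TLS redirection has been disabled on the ArgoCD side; this is an illustration, not the project's exact manifest):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-ingress
  namespace: argocd
spec:
  rules:
  - host: argocd-server.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server   # service created by the install manifest
            port:
              number: 80
```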
Once the application is created, the admin panel will request the login `admin`, as well as the password automatically generated when the ArgoCD service is created in the cluster, which can be obtained with this command: `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
A panel showing the state of the service will then be available:
Switching the image from v1 to v2, ArgoCD will retrieve the latest commit and update the application accessible from `localhost:8888` in real time:
```shell
# Before commit
➜ p3 git:(main) ✗ curl localhost:8888
{"status":"ok", "message": "v1"}

# After commit
➜ p3 git:(main) ✗ curl localhost:8888
{"status":"ok", "message": "v2"}
```
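The version switch itself is just a change to the manifest tracked by ArgoCD in the Git repo; for instance (the manifest path and exact tag strings here are assumptions based on the image's two versions):

```shell
# Bump the image tag in the manifest tracked by ArgoCD, then push:
# ArgoCD detects the new commit and syncs the dev namespace
sed -i 's|wil42/playground:v1|wil42/playground:v2|' config/deployment.yaml
git add config/deployment.yaml
git commit -m "Switch playground image from v1 to v2"
git push
```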
And that concludes part 3!
Each part of this project has a Makefile; just run the `make` command to set up the architecture.
However, some tools must be installed on your machine (or VM) to launch the project.
For parts 1 and 2, it is necessary to install Vagrant:
```shell
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt update && sudo apt install vagrant
```
as well as the libvirt provider: https://vagrant-libvirt.github.io/vagrant-libvirt/
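Once the dependencies listed in the vagrant-libvirt documentation are in place, the provider plugin itself can be added with:

```shell
vagrant plugin install vagrant-libvirt
```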
An installation script named `install.sh` is included in part 3 to help install the rest of the tools.
- https://www.youtube.com/playlist?list=PLn6POgpklwWqfzaosSgX2XEKpse5VY2v5
- https://www.youtube.com/watch?v=s_o8dwzRlu4
- https://blog.stephane-robert.info/docs/conteneurs/orchestrateurs/kubernetes/introduction/
- https://cours.brosseau.ovh/tp/ci/kubernetes/deploy-container-in-kubernetes.html
- https://blog.stephane-robert.info/docs/conteneurs/orchestrateurs/k3s/introduction/
- https://blog.stephane-robert.info/docs/infra-as-code/provisionnement/vagrant/introduction/
- https://blog.filador.fr/a-la-decouverte-de-k3s/
- https://lpenaud.github.io/vagrant.html