
GitOps, a slightly realistic situation on Kubernetes with FluxCD

Click the button below to start a new development environment:

Open in Gitpod

👓 Abstract

You're tired of talks that deploy hello-worlds to demonstrate the relevance of the you-name-it tool.
That's a good thing: what we're interested in is trying out a slightly realistic DevSecOps situation!
We will therefore build a step-by-step enterprise scenario with a dev team that deploys/updates/rolls back WebApps on Kubernetes via Helm charts. A second dev team will use Kustomize for the same purpose.
On the Ops side, we will also be concerned with the platform's security issues: segregation of team rights, WebApp network flows, transparent patch management of the technical stack, metrology, and control of activities on the cluster.
We will see how these teams collaborate with each other on a daily basis in a GitOps workflow that relies on Kubernetes, FluxCD, Azure DevOps, and many other things…

👓 Synopsis

This is a hands-on workshop, documented in this very GitHub repository.
We have 3 different people: dev1, ops, and dev2. Both devs build very simple Web apps that display a Pokémon ID card, one Pokémon per app.
Once the Web apps are developed, built, and packaged, the devs want to deploy them onto a Kubernetes cluster.
The thing is… how do you deploy smartly without relying on ops?

The first, historical version of the workshop runs on Microsoft Azure, and every command is run from a Docker container that serves as our work environment, so nothing is required but:

  • a browser
  • and a Google Cloud account able to provision resources on GKE,
  • or an Azure account able to provision resources on Azure (especially AKS).

First, we detail how to set up this working environment within a Docker image: how to build it and run it interactively.
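For example, assuming a Dockerfile at the repository root (the image name is just illustrative):

# Build the work-environment image
docker build -t gitops-workshop-env .

# Run it interactively, mounting the repository inside the container
docker run -it --rm -v "$(pwd)":/workspace -w /workspace gitops-workshop-env bash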

Second, we provision a Kubernetes cluster in AKS.
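A minimal sketch with the Azure CLI (resource group, cluster name, and region are placeholders to adapt):

# Provision an AKS cluster and fetch its credentials
az group create --name gitops-workshop-rg --location westeurope
az aks create --resource-group gitops-workshop-rg --name gitops-workshop-aks --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group gitops-workshop-rg --name gitops-workshop-aks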
Then, we have a scenario on 3 tracks:

1st track - dev1

The 1st track is a dev named dev1 who builds a simple Web application and deploys it into his/her own Kubernetes namespace with a simple Kubernetes YAML file.
The whole point is to build the CI/CD automation that performs these several steps…

dev1

  1. I develop a 1st WebApp named dev1-aspicot-app
  2. By using GitHub Actions (👓 Github Action code)…
    • I package it as a Docker image (👓 Dockerfile)…
    • … and publish it into a container registry (👓 Docker Hub)
  3. Then I create the YAML file to deploy it into Kubernetes as a deployment and a service (👓 YAML file)
    • 🔀 I have to ask ops which namespace I should use
  4. 🔀 Finally, I ask ops for help so that the Kubernetes cluster takes my deployment into account

Packaging a Helm chart, then…

  1. publishing it into a chart repository
  2. deploying it in a Kubernetes namespace
  3. testing it
  4. promoting it for Prod deployment
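In practice, that chart lifecycle could look like this (a sketch; chart path, version, and repository URL are assumptions):

# Package the chart and push it to a ChartMuseum-style repository
helm package ./charts/dev1-aspicot-app --version 0.1.0
curl --data-binary "@dev1-aspicot-app-0.1.0.tgz" http://<CHART_REPO>:8080/api/charts

# Deploy and test it in the dev namespace
helm upgrade --install dev1-aspicot-app ./charts/dev1-aspicot-app --namespace dev1-ns
helm test dev1-aspicot-app --namespace dev1-ns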

2nd track - dev2

The 2nd track is another dev who builds another Web application, but this time he/she uses a Helm chart to deploy it into another dedicated Kubernetes namespace.

dev2

  1. I develop another WebApp named dev2-carapuce-app
  2. By using GitHub Actions (👓 Github Action code)…
    • I package it as a Docker image (👓 Dockerfile)…
    • … and publish it into a container registry (👓 Docker Hub)
  3. In another GitHub repository…

3rd track - ops

The 3rd track is a platform ops who operates the Kubernetes cluster:

  • how he manages the Flux configuration and orchestration
  • how he upgrades the database engine (yet to come)
  • how he upgrades the cluster (yet to come)
  • how he manages the monitoring and alerting systems in order to run the Prod smoothly (yet to come)

By doing so, we will be able to show how different teams may work together on the same Kubernetes cluster and the amount of coordination that is needed (or not).

All the automation relies on Azure DevOps and Flux v2.

Why 3 tracks?

What I mean by "3 tracks" is that we'll be able to do the following:

  • demonstrate each track one after the other
  • let attendees choose one track or another and practice on it
  • let attendees team up and synchronize with each other in order to complete all 3 tracks.

Hope you will enjoy this workshop! 🙂

👉 Let-su go!

Here are the steps to perform…

👉 Fork this git repository: https://github.com/one-kubernetes/workshop.
👉 And clone it locally.
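For example (<your-user> stands for your own GitHub account):

git clone https://github.com/<your-user>/workshop.git
cd workshop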

Now you have all the instructions at hand!

✋ Pre-requisites

  1. To play the codelab, you may use an interactive workspace in Gitpod (it's free 💸). Just click the Open in Gitpod button.
  2. This workspace embeds a K3s server you can use to perform the entire codelab.

You can also use Kubernetes clusters deployed elsewhere (e.g. on GKE or AKS, as described above).
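Whichever cluster you use, a quick sanity check that kubectl can reach it:

kubectl get nodes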

🚪 Namespace isolation

💡 First of all, you want to isolate both dev teams in their own namespace.

🧙‍♂️ As the Ops team.
You want to create 2 namespaces, one for your 🙋‍♀️ dev1 team and one for your 🙋‍♂️ dev2 team.

  • The 🙋‍♀️ dev1 team should be able to use its namespace but not the one of the 🙋‍♂️ dev2 team.
  • The 🙋‍♂️ dev2 team should be able to use its namespace but not the one of the 🙋‍♀️ dev1 team.
  • The 🧙‍♂️ ops team should be able to use both, since it is the admin of the Kubernetes cluster.

To do so, you have to run the following commands…

# Create resources (namespaces, service accounts, roles and role bindings)
kubectl create -f access.yml

# how to get secrets
kubectl describe sa dev1 -n dev1-ns
kubectl describe sa dev2 -n dev2-ns

# how to get a service account token (the secret name suffix is random;
# copy yours from the output of the describe commands above)
kubectl get secret dev1-token-5bx7g --namespace=dev1-ns -o "jsonpath={.data.token}" | base64 --decode
kubectl get secret dev2-token-jhvnf --namespace=dev2-ns -o "jsonpath={.data.token}" | base64 --decode

# how to get the service account's cluster CA certificate (base64-encoded)
kubectl get secret dev1-token-5bx7g --namespace=dev1-ns -o "jsonpath={.data['ca\.crt']}"
kubectl get secret dev2-token-jhvnf --namespace=dev2-ns -o "jsonpath={.data['ca\.crt']}"
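
# (sketch) assemble ./mykube-config.yml from the token and CA retrieved above;
# <API_SERVER> is a placeholder for your cluster's API endpoint
kubectl get secret dev1-token-5bx7g --namespace=dev1-ns -o "jsonpath={.data['ca\.crt']}" | base64 --decode > dev1-ca.crt
TOKEN=$(kubectl get secret dev1-token-5bx7g --namespace=dev1-ns -o "jsonpath={.data.token}" | base64 --decode)
kubectl config --kubeconfig=./mykube-config.yml set-cluster workshop --server=https://<API_SERVER> --certificate-authority=dev1-ca.crt --embed-certs=true
kubectl config --kubeconfig=./mykube-config.yml set-credentials dev1 --token="$TOKEN"
kubectl config --kubeconfig=./mykube-config.yml set-context dev1 --cluster=workshop --namespace=dev1-ns --user=dev1
kubectl config --kubeconfig=./mykube-config.yml use-context dev1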

# how to create a pod (with dev1's kubeconfig in use, the dev2-ns command should be denied)
export KUBECONFIG=./mykube-config.yml
kubectl config current-context
kubectl run nginx --image=nginx --restart=Never --namespace=dev1-ns
kubectl run nginx --image=nginx --restart=Never --namespace=dev2-ns

# list what the current identity is allowed to do in a given namespace
kubectl auth can-i --namespace=dev3 --list
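For reference, here is a rough imperative equivalent of what access.yml declares for dev1 (role name and resource list are assumptions; the file in the repository is authoritative, and dev2 is set up the same way):

kubectl create namespace dev1-ns
kubectl create serviceaccount dev1 --namespace=dev1-ns
kubectl create role dev1-role --namespace=dev1-ns --verb='*' --resource=pods,services,deployments
kubectl create rolebinding dev1-rb --namespace=dev1-ns --role=dev1-role --serviceaccount=dev1-ns:dev1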

Dev Team

Now you are the dev team.
You have to build a CI pipeline that will build, package, and ship your application so that it can be deployed onto a Kubernetes cluster.

👉 Fork this git repository: https://github.com/one-kubernetes/dev-team1.
👉 And clone it locally.

Now, follow the live-demo:

  • sign in to Azure DevOps
  • create a project
  • create a pipeline for your frontend app
    • link it to your Github repository
    • configure it from the ci-pipeline.yml file
    • add the 3 variables that are needed
      • registryName
      • registryLogin
      • registryPassword
  • create a pipeline for your backend app from the ci-pipeline.yml file

You can run them manually and watch every step execute.
Now you can have a look at your ACR and check that a Docker image and a Helm chart have been published.

You may upgrade your application and see that the CI is working fine: for every push, the pipeline will

  • build the app,
  • package it into a docker image
  • and publish both the Docker image and the Helm chart
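Roughly, each pipeline run automates something like this (a sketch; $registryName, $registryLogin, and $registryPassword are the pipeline variables above; image and chart names and $BUILD_ID are illustrative):

echo "$registryPassword" | docker login "$registryName" --username "$registryLogin" --password-stdin
docker build -t "$registryName/dev1-aspicot-app:1.0.$BUILD_ID" .
docker push "$registryName/dev1-aspicot-app:1.0.$BUILD_ID"
helm package ./chart --version "1.0.$BUILD_ID"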

Ops Team

🧙‍♂️ Now you are the Ops team.
What you have to do is install and configure Flux so that every time a new chart is published, it is deployed into the right namespace on the right Kubernetes cluster.

Flux

To install and configure Flux to manage deployments onto your Kubernetes cluster as a single tenant, see here.
To configure Flux to manage multiple tenants on your Kubernetes cluster, see here.
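For the single-tenant case, the bootstrap boils down to something like this (a sketch, assuming the flux CLI is installed; owner, repository, and path are placeholders):

# Check cluster prerequisites, then bootstrap Flux v2 against a Git repository
flux check --pre
flux bootstrap github --owner=<github-user> --repository=<fleet-repo> --branch=main --path=./clusters/my-cluster --personal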

⚠️ Flux v2 and Azure ACR

Azure ACR is migrating to the OCI container registry standard.
This standard is only available in experimental mode in Helm.
And Flux v2 is, as of now, not compatible with this standard.
So you have to use a chart repository other than ACR.

You can deploy a ChartMuseum by following these instructions (thanks to Helm!).
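A minimal sketch of such a deployment (release and namespace names are illustrative; setting DISABLE_API=false enables ChartMuseum's push API):

helm repo add chartmuseum https://chartmuseum.github.io/charts
helm install chartmuseum chartmuseum/chartmuseum --namespace chartmuseum --create-namespace --set env.open.DISABLE_API=false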
And you can push your freshly built chart into this repository by performing this command:

curl --data-binary "@$(projectName)-$(helmChartVersion).tgz"  http://20.93.169.137:8080/api/charts
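You can then check that the chart is listed (same demo endpoint as above):

curl http://20.93.169.137:8080/api/charts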


🪃 You can send your feedback here!

roti.express SnowCamp 2022
