This sample code demonstrates how a console or background service can achieve self-healing when it is containerized and hosted in a Kubernetes cluster.
- .NET Core 3.0 is used to develop a cross-platform worker service that runs in the background as a daemon.
- The app reads a message from the Request message queue and writes it to the Response message queue.
- The app is containerized as a Docker Linux image and pushed to a private Azure Container Registry (ACR).
- It is then hosted in an Azure Kubernetes Service (AKS) cluster.
- The app relies on external services: two Azure Service Bus queues.
- The app must not start processing jobs until it has verified that all of its dependencies are reachable and therefore available. This is a startup check. The Kubernetes startup probe is in alpha and is only available from v1.16; upgrade your cluster once that release is out.
- After successful validation, the app should periodically report that it is running and not frozen or crashed. This is a liveness check.
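The liveness signal in this sample is a file that the worker refreshes on a timer, which a probe can then inspect. A language-agnostic sketch of that pattern (the file path and function names here are illustrative, not the sample's actual C# implementation):

```python
import time
from pathlib import Path

# Hypothetical path; the sample's LivenessFilePath setting plays this role.
LIVENESS_FILE = Path("/tmp/liveness.txt")

def signal_liveness() -> None:
    """Touch the liveness file so an exec probe can see the process is alive."""
    LIVENESS_FILE.touch()

def heartbeat(interval_seconds: int, ticks: int) -> None:
    """Refresh the liveness file on a fixed interval.

    A frozen or crashed process stops refreshing, so the file's
    modification time goes stale and the probe starts failing.
    """
    for i in range(ticks):
        signal_liveness()
        if i < ticks - 1:
            time.sleep(interval_seconds)

# A few ticks for demonstration; a real worker loops until shutdown.
heartbeat(interval_seconds=1, ticks=2)
```

Kubernetes then only needs to check the file's age: if the process hangs, the file stops being refreshed and the container is restarted.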
You can read about Kubernetes readiness & liveness probes, then read the article below to understand how this sample is designed and implemented. (Some work is still in progress.)
https://kaizenberglabs.wordpress.com/2019/10/28/kubernetes-essentials-readiness-liveliness-probes/
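Once your cluster supports them, the two checks map onto probes in Probes.yaml roughly as follows. The paths, commands, and timings in this sketch are illustrative assumptions, not the sample's exact values; align the file paths with StartupFilePath and LivenessFilePath:

```yaml
# Illustrative sketch only; adjust paths and timings to your deployment.
startupProbe:            # holds off liveness checks until startup validation wrote the file
  exec:
    command: ["cat", "/tmp/startup.txt"]
  failureThreshold: 30
  periodSeconds: 10
livenessProbe:           # fails if the liveness file has not been refreshed recently
  exec:
    command: ["/bin/sh", "-c", "find /tmp/liveness.txt -mmin -1 | grep ."]
  initialDelaySeconds: 15
  periodSeconds: 30
```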
- Polly (Retry Policy)
- Lamar (IoC)
- NLog (Logging)
- Azure Subscription
- Resource Group
- Azure Container Registry instance
- Azure Kubernetes Service instance (with Kubernetes version 1.15, only readiness & liveness probes can be tested)
- Service Bus Namespace with 2 queues
- SAS keys for both queues
- Service Principal with Reader access on Resource Group
- Clone repository locally
- Install Docker for Windows
- Install Azure CLI
- Open PowerShell
- Login to Azure account from developer desktop
az login
- Install Kubernetes CLI
az aks install-cli
- Login to Azure Container Registry
docker login azurecontainerregistryname -u username -p password
- Navigate to source code
- Build Dockerfile of this project & tag it
docker build -f Dockerfile -t azurecontainerregistryname/k8s-probes-test:1.0.0 .
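If you need to recreate the Dockerfile for your own worker, a typical multi-stage .NET Core 3.0 layout looks like the sketch below. The entry-point assembly name is a placeholder, not necessarily this repo's:

```dockerfile
# Build stage: compile with the full SDK image.
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: slim image for a background worker (no ASP.NET needed).
FROM mcr.microsoft.com/dotnet/core/runtime:3.0
WORKDIR /app
COPY --from=build /app/publish .
# Placeholder entry point; use your project's actual output DLL.
ENTRYPOINT ["dotnet", "WorkerService.dll"]
```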
- Push the image to ACR
docker push azurecontainerregistryname/k8s-probes-test:1.0.0
- Login to Azure Kubernetes Service cluster
az aks get-credentials --resource-group resourcegroupofakscluster --name aksclustername
- Deploy Probes.yaml to AKS cluster
- Edit the Probes.yaml file and set values for the environment variables per your Azure setup:
TenantId - Azure Active Directory Tenant Id
SubscriptionId - Azure Subscription Id
ResourceGroupName - Name of Resource Group that contains AKS Cluster
ClientId - Service Principal Client Id
ClientSecret - Service Principal Client Secret
ServiceBusNamespace - Service Bus Namespace name
ServiceBusNamespaceSASKey - Request/Response Queue SAS key
RequestQueueName - Request Queue name
ResponseQueueName - Response Queue name
LivenessSignalIntervalSeconds - Interval in seconds at which the liveness file is refreshed
(optional) LivenessFilePath - Full path including file name for liveness check
(optional) StartupFilePath - Full path including file name for startup check
- Deploy Probes.yaml on the AKS cluster
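Inside Probes.yaml these settings become entries in the container's env section. The values below are placeholders you must replace with your own Azure identifiers and secrets:

```yaml
# Placeholder values only; substitute your own Azure setup.
env:
- name: TenantId
  value: "<aad-tenant-id>"
- name: SubscriptionId
  value: "<subscription-id>"
- name: ClientId
  value: "<service-principal-client-id>"
- name: ClientSecret
  value: "<service-principal-client-secret>"
- name: ServiceBusNamespace
  value: "<servicebus-namespace>"
- name: RequestQueueName
  value: "requestqueue"
- name: ResponseQueueName
  value: "responsequeue"
- name: LivenessSignalIntervalSeconds
  value: "30"
```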
kubectl apply -f Probes.yaml --record
- View all pods that are created and running
kubectl get pods
- View the output of one of the pods
kubectl logs -f poduniqueuid