This project demonstrates how to set up a complete GitOps workflow for a production-ready Python application. It uses Docker for containerization, Kind for a local Kubernetes cluster, Argo CD for GitOps-based deployment, and Prometheus and Grafana for monitoring.
- Overview
- Features
- Architecture
- Prerequisites
- Quick Start
- Project Structure
- Application Endpoints
- Setup and Deployment
- Monitoring
- CI/CD Pipeline
- Development
- Repository
This project provides a hands-on example of deploying a web application to a Kubernetes cluster using GitOps principles. The core idea is to have a Git repository as the single source of truth for both application code and infrastructure configuration. Argo CD automatically syncs the desired state from the Git repo to the Kubernetes cluster.
- ✅ Health Check & Metrics Endpoints - Production-ready health probes and Prometheus metrics
- ✅ Structured Logging - Comprehensive logging throughout the application
- ✅ Error Handling - Graceful error handling with JSON responses
- ✅ Environment Configuration - Support for .env configuration files
- ✅ Kind Kubernetes Cluster - Local cluster with 1 control plane and 2 worker nodes
- ✅ ArgoCD GitOps - Automated deployment and sync from Git
- ✅ GitHub Actions CI/CD - Automated testing, linting, security scanning, and Docker builds
- ✅ Comprehensive Tests - Full test coverage with pytest
- ✅ Monitoring Stack - Prometheus and Grafana for observability
- ✅ Multi-stage Docker Build - Optimized production and debug images
```mermaid
flowchart TD
    A[Developer] -->|Push Code| B[GitHub Repository]
    B -->|Trigger CI/CD| C[GitHub Actions Pipeline]
    C -->|Lint & Test| D[Test Results]
    C -->|Build & Push Image| E[Docker Hub]
    B -->|ArgoCD Monitors Repo| F[ArgoCD]
    F -->|Sync & Deploy| G[Kind Kubernetes Cluster]
    G -->|Runs| H[Example App]
    H -->|Expose Metrics| I[Prometheus]
    I -->|Visualize| J[Grafana]
    F -->|Status & Rollback| G
```
The diagram above illustrates the automated flow from code commit to deployment using GitOps and CI/CD.
- Docker (20.10+)
- Kind (0.11+)
- kubectl (1.20+)
- Helm (3.0+)
- Git
```bash
# Clone the repository
git clone https://github.com/Ashikuroff/example-app.git
cd example-app

# Create Kind cluster
kind create cluster --config kind-config.yaml

# Install dependencies
pip install -r src/requirements.txt

# Run tests
PYTHONPATH=. pytest test/ -v

# Run locally (development)
export FLASK_DEBUG=True
python src/server.py
```

Access the app at: http://localhost:5000
```
.
├── .github/
│   └── workflows/
│       └── main.yml          # GitHub Actions CI/CD pipeline
├── argo/
│   ├── argo-cd/              # Argo CD application manifests
│   └── example-app/          # Kubernetes manifests for the app
├── src/
│   ├── server.py             # Flask application
│   ├── requirements.txt      # Python dependencies
│   └── __init__.py           # Python package marker
├── test/
│   ├── test_server.py        # Test suite
│   └── __init__.py           # Python package marker
├── grafana/                  # Grafana setup documentation
├── promethues/               # Prometheus setup documentation
├── .env.example              # Environment configuration template
├── .gitignore                # Git ignore rules
├── Dockerfile                # Multi-stage Docker build
├── kind-config.yaml          # Kind cluster configuration
├── requirements-test.txt     # Testing dependencies
└── README.md                 # This file
```
The Flask application exposes the following endpoints:
| Endpoint | Method | Description | Response |
|---|---|---|---|
| `/` | GET | Main endpoint | `{"message": "Hello World!", "version": "1.0.0"}` |
| `/health` | GET | Health check | `{"status": "healthy", "service": "example-app"}` |
| `/metrics` | GET | Prometheus metrics | Prometheus-format metrics |
| `/*` | ANY | 404 handler | `{"error": "Not found", "path": "..."}` |
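The actual implementation lives in `src/server.py` as a Flask application. As a rough, stdlib-only sketch of the same endpoint contract (the routing and metric line here are illustrative assumptions, not the real code), a minimal WSGI app could look like:

```python
import json

# Minimal WSGI sketch of the endpoint table above.
# Illustrative only; the real app in src/server.py is a Flask application.
def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path == "/":
        status, body = "200 OK", {"message": "Hello World!", "version": "1.0.0"}
    elif path == "/health":
        status, body = "200 OK", {"status": "healthy", "service": "example-app"}
    elif path == "/metrics":
        # Prometheus exposition format is plain text, not JSON.
        start_response("200 OK", [("Content-Type", "text/plain; version=0.0.4")])
        return [b"app_requests_total 1\n"]
    else:
        status, body = "404 Not Found", {"error": "Not found", "path": path}
    payload = json.dumps(body).encode("utf-8")
    start_response(status, [("Content-Type", "application/json")])
    return [payload]
```

Any WSGI server (such as the gunicorn invocation shown later in this README) can serve a callable like this.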
This project uses Kind to create a local Kubernetes cluster with one control-plane node and two worker nodes.
```bash
kind create cluster --config kind-config.yaml
```

Check cluster status:

```bash
kubectl get nodes
```

The Dockerfile uses multi-stage builds with two targets: `prod` (production) and `debug` (with ptvsd).
1. Build the production image:

   ```bash
   docker build -t <your-dockerhub-username>/example-app:1.0.0 --target prod .
   ```

2. Push to Docker Hub:

   ```bash
   docker push <your-dockerhub-username>/example-app:1.0.0
   ```

3. Update the deployment manifest: edit `argo/example-app/deployments/deployment.yaml` and update the image:

   ```yaml
   containers:
     - name: example-app
       image: <your-dockerhub-username>/example-app:1.0.0
   ```

4. Commit and push the change:

   ```bash
   git add argo/example-app/deployments/deployment.yaml
   git commit -m "Update app image to version 1.0.0"
   git push
   ```
1. Create the Argo CD namespace:

   ```bash
   kubectl create namespace argocd
   ```

2. Install Argo CD:

   ```bash
   kubectl apply -n argocd -f argo/argo-cd/install.yaml
   ```

3. Access the Argo CD UI:

   ```bash
   kubectl port-forward svc/argocd-server -n argocd 8080:443
   ```

   Navigate to: https://localhost:8080

4. Get the initial password:

   ```bash
   kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
   ```

   Username: `admin` | Password: (from the command above)

5. Apply the Argo CD application:

   ```bash
   kubectl apply -f argo/argo-cd/app.yaml
   ```

6. Monitor sync status: watch the Argo CD UI for the sync process. Once complete, the application will be running.

The `syncPolicy` is set to `automated`, so future changes to manifests in `argo/example-app/` will deploy automatically.
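The repository's `argo/argo-cd/app.yaml` holds the actual definition; as a sketch of what an automated-sync Application typically looks like (the `targetRevision`, `prune`, and `selfHeal` values here are assumptions for illustration, not copied from the repo):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/Ashikuroff/example-app.git
    targetRevision: main        # assumed branch
    path: argo/example-app      # directory Argo CD watches
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:
      prune: true               # assumed: delete resources removed from Git
      selfHeal: true            # assumed: revert manual drift in the cluster
```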
1. Port-forward the service:

   ```bash
   kubectl port-forward svc/example-service -n example-app 8081:80
   ```

2. Access in browser: navigate to http://localhost:8081
Prometheus collects metrics from your application and cluster.
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus
kubectl port-forward svc/prometheus-server 9090:9090
```

Access Prometheus at: http://localhost:9090
Grafana visualizes metrics from Prometheus.
```bash
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install grafana grafana/grafana
kubectl port-forward svc/grafana 3000:3000
```

Access Grafana at: http://localhost:3000
Default credentials:

- Username: `admin`
- Password: retrieve with:

```bash
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
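Kubernetes stores Secret values base64-encoded, which is why the command above pipes through `base64 --decode`. The same decoding step in Python (the encoded string here is a made-up example, not a real password):

```python
import base64

# Secret .data values are base64-encoded; decoding yields the plaintext.
encoded = "cGFzc3dvcmQxMjM="  # hypothetical .data.admin-password value
password = base64.b64decode(encoded).decode("utf-8")
print(password)  # -> password123
```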
This repository includes several convenience scripts and a Makefile to speed local development and validation:
- `scripts/setup.sh` — create a Python virtualenv at `.venv` and install dependencies from `src/requirements.txt` and `requirements-test.txt`.
- `scripts/test.sh` — run the test suite (pytest) with optional `--coverage` and `--lint` flags.
- `scripts/server.sh` — start the Flask application locally (`FLASK_PORT`, `FLASK_DEBUG` controlled via flags/env).
- `scripts/preflight.sh` — wrapper to run the Azure Deployment Preflight diagnostic (`.github/skills/azure-deployment-preflight/preflight.py`). Pass `--execute` to run `bicep`/`az` commands (requires CLIs + auth).
- `Makefile` — shortcuts for common tasks: `make setup`, `make test`, `make server`, `make preflight`, etc.
Example usage:
```bash
# prepare environment
./scripts/setup.sh

# run tests
./scripts/test.sh

# start dev server
./scripts/server.sh --port 5000 --debug

# run preflight (dry-run)
./scripts/preflight.sh --output preflight-report.md
```

We added an Azure Deployment Preflight skill to help validate Bicep deployments before applying changes to Azure. Key points:
- Script: `.github/skills/azure-deployment-preflight/preflight.py`
- README: `.github/skills/azure-deployment-preflight/README.md`
- What it does: detects `azure.yaml` (azd projects), finds `.bicep` files and parameter files, checks for the `az`, `azd`, and `bicep` CLIs, runs `bicep build` (dry-run), and generates a Markdown `preflight-report.md` describing issues and suggested remediation. With `--execute` it will attempt `az ... what-if` commands (requires auth and correct parameters).
Replace placeholder values (resource group, location) in the generated what-if commands before executing in CI or production.
The GitHub Actions workflows were updated to use supported action versions and modern Docker actions. Highlights:
- `actions/checkout@v4`
- `actions/setup-python@v5`
- `actions/upload-artifact@v4`
- `docker/setup-buildx-action@v3` (stable)
- `docker/login-action@v3`
- `docker/build-push-action@v6`
If your org uses pinned action SHAs instead of tags, consider updating the workflows to pin to a specific SHA for each action.
Add Prometheus Data Source:

- Go to Configuration → Data Sources
- Click "Add data source" → select "Prometheus"
- URL: `http://prometheus-server.default.svc.cluster.local`
- Click "Save & Test"
Import Kubernetes Dashboard:

- Go to Dashboards → Import
- Enter dashboard ID: `6417`
- Click "Load" → "Import"
This project uses GitHub Actions for automated CI/CD. The pipeline includes:
- Test - Runs pytest with full test coverage
- Lint - Code quality checks with flake8 and pylint
- Security - Security scanning with bandit
- Build - Docker image build with caching
- Publish - Push to Docker Hub (main/staging branches only)
- Push to: `main`, `staging`, `test`, `develop` branches
- Pull requests to: `main`, `staging`, `develop` branches
Set these secrets in GitHub repository settings:
- `DOCKER_USERNAME` - Your Docker Hub username
- `DOCKER_PASSWORD` - Your Docker Hub token/password
Images are tagged automatically:
- `latest` - from the `main` branch
- `main-{sha}` - commit hash from `main`
- `staging-{sha}` - commit hash from `staging`
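The tagging scheme above can be sketched as a small function (a hypothetical helper for illustration only — in practice the workflow's Docker actions produce these tags):

```python
# Hypothetical illustration of the image-tagging scheme; the CI workflow's
# Docker actions generate the real tags.
def image_tags(branch, sha):
    short = sha[:7]  # short commit hash
    if branch == "main":
        return ["latest", f"main-{short}"]
    if branch == "staging":
        return [f"staging-{short}"]
    return []  # other branches build but are not published
```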
```bash
# Clone repository
git clone https://github.com/Ashikuroff/example-app.git
cd example-app

# Create virtual environment (optional)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r src/requirements.txt
pip install -r requirements-test.txt
```

Create a `.env` file based on `.env.example`:

```bash
cp .env.example .env
```

Edit `.env` to customize:

```
FLASK_ENV=development
FLASK_DEBUG=True
FLASK_PORT=5000
LOG_LEVEL=DEBUG
```

```bash
# Run all tests
PYTHONPATH=. pytest test/ -v

# Run with coverage
PYTHONPATH=. pytest test/ --cov=src --cov-report=html
```

```bash
# Development mode (with auto-reload and debugger)
export FLASK_DEBUG=True
python src/server.py

# Production mode
export FLASK_DEBUG=False
gunicorn -w 4 -b 0.0.0.0:5000 src.server:app
```

```bash
# Run linting
flake8 src/
pylint src/server.py

# Run security checks
bandit -r src/
```

- Developer pushes code to GitHub
- GitHub Actions runs:
- Tests (pytest)
- Linting (flake8, pylint)
- Security scan (bandit)
- Docker build & push (on main/staging)
- Argo CD monitors the repository
- Argo CD syncs deployment manifests to cluster
- Kubernetes deploys the new image
- Application is now running with the latest changes
Any changes committed to the `argo/example-app/` directory are automatically synced to the cluster by Argo CD.
All code and configuration details can be found at: https://github.com/Ashikuroff/example-app
This project is provided as an educational example.