This README explains how to run, build, and deploy the Python Flask app used in the CI/CD pipeline project.
https://github.com/Hyghen/CI-CD-Python
A small Python Flask app that demonstrates a full CI/CD pipeline: local run, Docker image build, GitHub Actions CI to build and push images to Docker Hub, and local deployment on Minikube.
-- app.py — Flask application (listens on port 5000)
-- requirements.txt — Python dependencies
-- Dockerfile — to build the app image
-- docker-compose.yml — for local container running
-- .github/workflows/ci-cd.yml — GitHub Actions workflow (CI + build and push)
-- Git
-- Docker & Docker Hub account
-- GitHub account and the repository fork or push access
-- Minikube & kubectl (for local Kubernetes deployment)
-- Python 3.11 (for local testing)
-- git clone https://github.com/Hyghen/CI-CD-Python.git
-- cd CI-CD-Python
-- python3 -m venv .venv
-- source .venv/bin/activate
-- pip install -r requirements.txt
-- python app.py
-- open http://localhost:5000 (or your host IP, e.g. http://192.168.1.9:5000)
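For reference, app.py might look roughly like the sketch below (the actual routes and response text in this repo may differ; the route and message here are illustrative):

```python
# app.py — a minimal sketch of a Flask app listening on port 5000
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Hypothetical response body; the real app's output may differ
    return "Hello from the CI/CD Python app!"

# To serve locally (binding to 0.0.0.0 makes it reachable from containers):
#   flask --app app run --host 0.0.0.0 --port 5000
# or add: app.run(host="0.0.0.0", port=5000) under a __main__ guard
```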
-- Using Docker Compose (local container)
-- Build & run with docker-compose:
-- docker compose up --build
-- open http://localhost:5000
-- docker compose down
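The docker-compose.yml in this repo is not reproduced here; a plausible minimal sketch (service name and image tag assumed) would be:

```yaml
# docker-compose.yml — illustrative sketch; the repo's actual file may differ
services:
  web:
    build: .
    image: chitransh8824/ci-cd-python:latest
    ports:
      - "5000:5000"   # host:container, matching the Flask port
```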
-- Build & push image manually (to Docker Hub)
-- docker build -t chitransh8824/ci-cd-python:latest .
-- docker push chitransh8824/ci-cd-python:latest
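The Dockerfile behind these build commands might look roughly like this (base image and start command are assumptions, not confirmed from the repo):

```dockerfile
# Dockerfile — minimal sketch for a Flask app on port 5000
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
</imports>
```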

-- Configure GitHub Actions (CI/CD)
Create repository secrets on GitHub: DOCKERHUB_USERNAME, DOCKERHUB_TOKEN, DOCKERHUB_REPO (e.g. yourdockerhub/ci-cd-python).
-- test job: installs dependencies and runs tests
-- build-and-push job: logs into Docker Hub, builds the image, and pushes both the latest tag and a short-commit-SHA tag (${GITHUB_SHA::7})

Once you push to main, Actions will run automatically.
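A workflow along these lines would implement the two jobs described above (a sketch, not the repo's exact file; the pytest step assumes tests exist):

```yaml
# .github/workflows/ci-cd.yml — illustrative sketch
name: CI/CD
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest   # assumes a test suite is present
  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push latest + short-SHA tags
        run: |
          SHORT_SHA=${GITHUB_SHA::7}
          docker build -t ${{ secrets.DOCKERHUB_REPO }}:latest \
                       -t ${{ secrets.DOCKERHUB_REPO }}:$SHORT_SHA .
          docker push ${{ secrets.DOCKERHUB_REPO }}:latest
          docker push ${{ secrets.DOCKERHUB_REPO }}:$SHORT_SHA
```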
-- minikube start
-- kubectl create deployment python-app --image=yourdockerhub/ci-cd-python:latest
-- kubectl expose deployment python-app --type=NodePort --port=5000
-- minikube service python-app --url
-- Alternative: create k8s/deployment.yaml and k8s/service.yaml and apply them with kubectl apply -f k8s/.
-- Check pod status: kubectl get pods
-- Logs: kubectl logs deploy/python-app
-- Access app: use curl or kubectl port-forward deploy/python-app 8080:5000 then open http://localhost:8080.
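The k8s/deployment.yaml and k8s/service.yaml mentioned above might look roughly like this (a sketch equivalent to the kubectl commands; adjust image and replicas as needed):

```yaml
# k8s/deployment.yaml + k8s/service.yaml — illustrative sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-app
  template:
    metadata:
      labels:
        app: python-app
    spec:
      containers:
        - name: python-app
          image: yourdockerhub/ci-cd-python:latest
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: python-app
spec:
  type: NodePort
  selector:
    app: python-app
  ports:
    - port: 5000
      targetPort: 5000
```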
-- ImagePullBackOff: ensure image name and tag match Docker Hub and repo is public or credentials available.
-- CrashLoopBackOff: kubectl logs to inspect errors; ensure PORT and app start command are correct.
-- Service not reachable: try kubectl port-forward or minikube service --url.
Set up a monitoring sandbox environment on RHEL 9 with Prometheus, Grafana, Node Exporter, and Alertmanager. Preconfigure alerts for CPU, disk, and service health so this environment can be reused for DevOps learning and testing.
-- Prometheus (metrics collection)
-- Node Exporter (system metrics)
-- Grafana (visualization)
-- Alertmanager (alerting)
-- sudo dnf install -y wget tar  # systemd is preinstalled on RHEL 9
-- sudo useradd --no-create-home --shell /bin/false prometheus
-- sudo useradd --no-create-home --shell /bin/false node_exporter
-- Download the official tarballs from the Prometheus downloads page.
-- Extract and copy the binaries to /usr/local/bin/.
-- Create data directories under /var/lib/prometheus/.
-- sudo dnf install -y https://dl.grafana.com/oss/release/grafana-10.4.0-1.x86_64.rpm
-- sudo systemctl enable --now grafana-server
-- Download the official tarball.
-- Extract and move the binaries to /usr/local/bin/.
-- Create the working directory at /etc/alertmanager/.
-- Place the Prometheus, Node Exporter, and Alertmanager configs under /etc/.
-- Set proper ownership (chown prometheus:prometheus).
-- Create systemd unit files for Prometheus, Node Exporter, and Alertmanager.
-- Reload systemd and enable the services:
-- sudo systemctl daemon-reload
-- sudo systemctl enable --now prometheus node_exporter alertmanager grafana-server
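One of the unit files mentioned above might look like this sketch for Prometheus (paths and flags are typical defaults, not taken from this setup; adjust to match where you installed the binaries):

```ini
# /etc/systemd/system/prometheus.service — illustrative sketch
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus/

[Install]
WantedBy=multi-user.target
```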
-- sudo firewall-cmd --add-port=9090/tcp --permanent # Prometheus
-- sudo firewall-cmd --add-port=9100/tcp --permanent # Node Exporter
-- sudo firewall-cmd --add-port=9093/tcp --permanent # Alertmanager
-- sudo firewall-cmd --add-port=3000/tcp --permanent # Grafana
-- sudo firewall-cmd --reload
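The ports opened above correspond to scrape targets and the Alertmanager endpoint in prometheus.yml; a minimal config sketch (localhost targets and rule path are assumptions):

```yaml
# /etc/prometheus/prometheus.yml — illustrative sketch
global:
  scrape_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["localhost:9093"]

rule_files:
  - /etc/prometheus/rules/*.yml

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: node
    static_configs:
      - targets: ["localhost:9100"]
```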
- Prometheus: http://192.168.1.9:9090
- Node Exporter: http://192.168.1.9:9100/metrics
- Grafana: http://192.168.1.9:3000 (default login: admin/admin)
- Alertmanager: http://192.168.1.9:9093

Test Alerts:
-- Create a CPU usage alert rule in Prometheus.
-- Stop Node Exporter (sudo systemctl stop node_exporter) and check that the alert fires in Alertmanager.
-- Grafana dashboards should show the dropped metrics.
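Example rules for the CPU, disk, and service-health alerts described above could look like this (thresholds, job names, and the rule-file path are illustrative assumptions):

```yaml
# /etc/prometheus/rules/alerts.yml — illustrative sketch
groups:
  - name: sandbox-alerts
    rules:
      - alert: HighCPUUsage
        # CPU busy % = 100 minus the idle rate, averaged per instance
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"
      - alert: LowDiskSpace
        expr: node_filesystem_avail_bytes{fstype!~"tmpfs"} / node_filesystem_size_bytes{fstype!~"tmpfs"} < 0.10
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "Less than 10% disk space left on {{ $labels.instance }}"
      - alert: NodeExporterDown
        # Fires when the node scrape target stops responding, e.g. after
        # sudo systemctl stop node_exporter
        expr: up{job="node"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Node Exporter is down on {{ $labels.instance }}"
```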
-- Check logs: sudo journalctl -u prometheus -u node_exporter -u grafana-server -u alertmanager -f
-- Ensure firewall ports are open.
-- Verify config files have correct paths and permissions.