A self-hosted homelab focused on Linux operations, reverse proxying, monitoring, secrets management, backups, and automated deployment.
This project is built to demonstrate practical infrastructure skills:
- Docker-based self-hosting
- Reverse proxy configuration with Caddy
- Service health monitoring with Uptime Kuma
- Metrics collection with Prometheus
- Dashboarding with Grafana
- Secrets and password management with Vaultwarden
- Automated deployment with Ansible
- Encrypted backups with Restic
This setup is being developed locally first, then deployed to multiple remote servers as a multi-node homelab.
Services:

- **Caddy**: Reverse proxy for local service routing.
- **Uptime Kuma**: Service uptime and health monitoring.
- **Vaultwarden**: Self-hosted password manager.
- **Prometheus**: Metrics collection and scraping.
- **Grafana**: Metrics dashboards and visualization.
- **node_exporter**: Host-level metrics exporter for Prometheus.
- **Seafile**: Self-hosted file sync and storage service.

Tooling:

- **Ansible**: Automates deployment of the homelab stack.
- **Docker Compose**: Defines the service stack and shared network.
- **Restic**: Encrypted backups with retention policies.
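As a sketch of how the shared network ties the stack together, a unified compose file looks roughly like this (image tags, service names, and ports are illustrative assumptions, not the project's actual `compose.yml.j2`):

```yaml
# Minimal sketch: three services on one shared Docker network.
networks:
  homelab:

services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    networks: [homelab]

  uptime-kuma:
    image: louislam/uptime-kuma:1
    networks: [homelab]

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    networks: [homelab]
```

Because every service joins the same network, only Caddy needs published host ports; everything else is reachable by container name.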
Deployment flow:

```
Windows 11 host
  → WSL2 Ubuntu control environment
  → Ansible deploys runtime files
  → Docker Compose launches containers
  → Caddy routes traffic to internal services
```
**Browser-to-service routing.** Browser requests use Caddy hostnames such as:
- kuma.localhost
- vault.localhost
- seafile.localhost
- grafana.localhost
- prometheus.localhost
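A minimal Caddyfile sketch for this kind of hostname routing (the upstream container names and ports are assumptions, not the project's actual config):

```
# Each *.localhost site block proxies to a container on the Docker network.
kuma.localhost {
    reverse_proxy uptime-kuma:3001
}

vault.localhost {
    reverse_proxy vaultwarden:80
}

grafana.localhost {
    reverse_proxy grafana:3000
}

prometheus.localhost {
    reverse_proxy prometheus:9090
}
```

Caddy serves `*.localhost` names without extra DNS setup, since browsers resolve them to the loopback address.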
**Container-to-container routing.** Containers communicate over the Docker network using service/container names. This separation matters because `localhost` inside a container refers to the container itself, not the host machine.
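For instance, an Uptime Kuma monitor or a Prometheus target would use a container name rather than a `*.localhost` hostname (the names and ports below are assumptions):

```
http://vaultwarden:80               # Uptime Kuma monitor URL
http://node_exporter:9100/metrics   # Prometheus scrape target
http://grafana:3000                 # container-to-container, not grafana.localhost
```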
Metrics pipeline:

```
node_exporter
  → Prometheus scrapes metrics
  → Grafana visualizes metrics
```
Uptime Kuma separately checks service availability over HTTP.
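The scrape side of that pipeline can be sketched in `prometheus.yml` (job names and the exporter's container name are assumptions, not the actual config):

```yaml
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]      # Prometheus scrapes itself

  - job_name: node
    static_configs:
      - targets: ["node_exporter:9100"]  # host metrics over the Docker network
```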
```
homelab/
  README.md
  .gitignore
  .env.example
  docs/
    architecture.md
    service-matrix.md
    runbooks.md
  ansible/
    ansible.cfg
    inventories/
      local/
        hosts.example.ini
        group_vars/
          all/
            main.example.yml
            secrets.example.yml
      remote_c/
        hosts.example.ini
        group_vars/
          all.yml
      remote_i/
        hosts.example.ini
        group_vars/
          all.yml
    playbooks/
      services.yml
  compose/
    caddy/
      Caddyfile
    stack/
      compose.yml.j2
      prometheus.yml
  scripts/
    backup.sh
```
The source repo is separate from the runtime deployment path:

- Source repo: `~/homelab`
- Runtime path: `~/homelab-runtime`
Ansible renders templates into the runtime directory and starts the stack from there.
The homelab stack is deployed with:
```sh
ansible-playbook ansible/playbooks/services.yml -i ansible/inventories/local/hosts.ini
```

This playbook:
- creates runtime directories
- creates persistent data directories
- renders the unified Docker Compose file from a template
- copies the Caddy and Prometheus config
- starts the stack with Docker Compose
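The steps above map onto playbook tasks roughly like this (paths, variables, and module choices are illustrative, not the actual `services.yml`):

```yaml
- name: Deploy homelab stack
  hosts: local
  vars:
    runtime_dir: "{{ ansible_env.HOME }}/homelab-runtime"
  tasks:
    - name: Create runtime and data directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
      loop:
        - "{{ runtime_dir }}/stack"
        - "{{ runtime_dir }}/data"

    - name: Render unified compose file from template
      ansible.builtin.template:
        src: ../../compose/stack/compose.yml.j2
        dest: "{{ runtime_dir }}/stack/compose.yml"

    - name: Copy Caddy and Prometheus configs
      ansible.builtin.copy:
        src: "{{ item.src }}"
        dest: "{{ runtime_dir }}/stack/{{ item.dest }}"
      loop:
        - { src: ../../compose/caddy/Caddyfile, dest: Caddyfile }
        - { src: ../../compose/stack/prometheus.yml, dest: prometheus.yml }

    - name: Start the stack with Docker Compose
      community.docker.docker_compose_v2:
        project_src: "{{ runtime_dir }}/stack"
```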
Prometheus scrapes:
- itself
- node_exporter
Grafana uses Prometheus as a datasource.
A Node Exporter dashboard was imported to visualize host metrics.
Uptime Kuma monitors internal container endpoints over the Docker network.
Restic is used for encrypted backups of the runtime data directory.
Current backup target:
- local restic repository
Retention policy:
- keep 7 daily backups
- keep 4 weekly backups
- keep 6 monthly backups
- prune unused data
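Under these assumptions (the repository path and password file location are illustrative), the backup and retention steps map to restic commands along these lines:

```sh
export RESTIC_REPOSITORY=~/homelab-backups
export RESTIC_PASSWORD_FILE=~/.config/restic/password

# Back up the runtime data directory
restic backup ~/homelab-runtime

# Apply the retention policy and prune unreferenced data
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```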
Completed:

- Local Docker stack
- Reverse proxy with Caddy
- Uptime Kuma
- Vaultwarden
- Prometheus
- Grafana
- node_exporter
- Restic backups
- Ansible-based local deployment
- Seafile file server

Planned:

- Bootstrap remote nodes
- Expand Ansible inventories for remote servers
- Add multi-node monitoring and alerting
This project is intentionally being built in layers instead of adding many services at once. The priority is reproducibility, observability, backup safety, and clean deployment architecture.