A declarative deployment CLI that generates your entire stack from a single `manifest.yaml`.
```
manifest.yaml → swiftdeploy init → nginx.conf + docker-compose.yml → running stack
```
```
[Client]
    │
    ▼ :8080
[Nginx container]   ← reverse proxy, access logs, JSON error pages
    │
    ▼ :3000 (internal only, never exposed)
[API container]     ← Flask app, stable or canary mode
    │
[app-logs volume]   ← shared log mount
```
Both containers share the `swiftdeploy-net` bridge network.
The API port is never exposed publicly — all traffic routes through Nginx.
| Tool | Version |
|---|---|
| Docker + Docker Compose | ≥ 24.x |
| Python | ≥ 3.10 |
| pip + venv | any recent |
```bash
git clone https://github.com/YOUR_USERNAME/swiftdeploy.git
cd swiftdeploy
python3 -m venv venv
source venv/bin/activate
pip install pyyaml jinja2 ruamel.yaml
```

Note: Activate the venv every time you open a new terminal:

```bash
source venv/bin/activate
```

```bash
chmod +x swiftdeploy
docker build -t swift-deploy-1-node:latest .
```

Verify the image size is under 300MB:

```bash
docker images swift-deploy-1-node:latest
```

Reads `manifest.yaml` and renders:

- `nginx.conf` from `templates/nginx.conf.j2`
- `docker-compose.yml` from `templates/docker-compose.yml.j2`

```bash
./swiftdeploy init
```

Expected output:
```
swiftdeploy init
✔ PASS Generated nginx.conf (proxy → port 3000, listen 8080)
✔ PASS Generated docker-compose.yml (mode=stable)
→ Init complete — run './swiftdeploy validate' next
```
Run `init` any time you edit `manifest.yaml` or after `teardown --clean` deletes the generated files.
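Under the hood, `init` is a "parse manifest, render templates" step. Here is a minimal, dependency-free sketch of that data flow — the real CLI renders the Jinja2 templates in `templates/`; the inline template, the `api` upstream name, and the use of `string.Template` below are illustrative stand-ins, not the tool's actual code:

```python
# Sketch of the init render step: values from the parsed manifest are
# substituted into a config template. The real CLI uses Jinja2 templates;
# string.Template keeps this sketch stdlib-only.
from string import Template

# Stand-in for the parsed manifest.yaml (normally loaded with PyYAML).
manifest = {
    "nginx": {"port": 8080, "proxy_timeout": 30},
    "services": {"port": 3000, "mode": "stable"},
}

# Simplified stand-in for templates/nginx.conf.j2; "api" is a hypothetical
# upstream name used only for illustration.
nginx_template = Template(
    "server {\n"
    "    listen ${listen_port};\n"
    "    location / {\n"
    "        proxy_pass http://api:${api_port};\n"
    "        proxy_read_timeout ${timeout}s;\n"
    "    }\n"
    "}\n"
)

nginx_conf = nginx_template.substitute(
    listen_port=manifest["nginx"]["port"],
    api_port=manifest["services"]["port"],
    timeout=manifest["nginx"]["proxy_timeout"],
)
print(nginx_conf)
```

The same pattern applies to `docker-compose.yml.j2`: one manifest dict in, one rendered file out, so the two generated files can never drift apart.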
Runs 5 checks before deployment. Exits non-zero if any fail.
```bash
./swiftdeploy validate
```

Expected output:
```
swiftdeploy validate
✔ PASS manifest.yaml exists and is valid YAML
✔ PASS All required fields present and non-empty
✔ PASS Docker image exists locally (swift-deploy-1-node:latest)
✔ PASS Nginx port not already bound on host (port 8080 is free or owned by swiftdeploy)
✔ PASS Generated nginx.conf is syntactically valid
All checks passed — ready to deploy!
```
| # | Check | How |
|---|---|---|
| 1 | manifest.yaml exists and is valid YAML | File exists + PyYAML parse |
| 2 | All required fields present and non-empty | Checks `services`, `nginx`, `network` keys |
| 3 | Docker image exists locally | `docker images -q` |
| 4 | Nginx port not already bound | Socket connect test |
| 5 | nginx.conf is syntactically valid | `nginx -t` inside a temporary container |
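Check 4 above — the socket connect test — can be sketched in a few lines. This is a plausible implementation of that check, not the CLI's actual code: a successful `connect_ex` (return value 0) means something is already listening on the port, so it is not free:

```python
# Sketch of validate check 4: probe whether the Nginx port is already
# bound on the host. connect_ex returns 0 when a listener accepts the
# connection, so a non-zero result means the port is free.
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        return sock.connect_ex((host, port)) != 0
```

Probing with a client connect (rather than trying to bind) avoids needing elevated privileges and works even when the current owner of the port is another process.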
Runs init, brings up the stack, and blocks until health checks pass or 60s timeout.
```bash
./swiftdeploy deploy
```

Expected output:
```
swiftdeploy deploy
→ Running init…
✔ PASS Generated nginx.conf (proxy → port 3000, listen 8080)
✔ PASS Generated docker-compose.yml (mode=stable)
→ Starting stack with docker compose…
✔ Container swiftdeploy-api Healthy
✔ Container swiftdeploy-nginx Started
→ Waiting for service health on port 8080 (timeout 60s)…
→ Health check passed after 1 attempt(s)
✔ Stack is up and healthy!
  API:    http://localhost:8080/
  Health: http://localhost:8080/healthz
```
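The "waiting for service health" step is a poll-until-healthy loop against `/healthz`. A stdlib sketch of that loop, under the assumption that the CLI treats any HTTP 200 as healthy (the function name and intervals here are illustrative):

```python
# Sketch of deploy's wait loop: poll the health URL until it returns
# HTTP 200 or the timeout expires. The real CLI polls port 8080 with a
# 60s timeout; intervals here are parameters for illustration.
import json
import time
import urllib.error
import urllib.request

def wait_for_health(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    deadline = time.monotonic() + timeout
    attempt = 0
    while time.monotonic() < deadline:
        attempt += 1
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    body = json.load(resp)
                    print(f"Healthy after {attempt} attempt(s): {body.get('status')}")
                    return True
        except (urllib.error.URLError, OSError):
            pass  # container still starting; retry until the deadline
        time.sleep(interval)
    return False
```

Blocking until health passes (instead of returning as soon as `docker compose up` exits) is what lets `deploy` exit non-zero when the stack starts but never becomes reachable.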
Test the running stack:

```bash
# Welcome message
curl http://localhost:8080/

# Health check
curl http://localhost:8080/healthz

# Verify headers
curl -I http://localhost:8080/
```

Switches between stable and canary mode with a rolling restart of only the API container. Nginx stays up throughout — zero downtime.
```bash
# Switch to canary
./swiftdeploy promote canary
```

Expected output:
```
swiftdeploy promote canary
✔ PASS manifest.yaml updated mode: stable → canary
✔ PASS docker-compose.yml regenerated with MODE=canary
→ Restarting API service container only…
✔ PASS API container restarted
→ Confirming new mode via http://localhost:8080/healthz…
✔ PASS Confirmed: service is now running in canary mode
✔ Promote to 'canary' complete!
```
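The manifest update that `promote` performs amounts to flipping the `mode:` value in place. A sketch of that edit — the install step pulls in `ruamel.yaml`, presumably so the real CLI can rewrite the manifest while preserving its comments; this stdlib version edits the line with a regex instead:

```python
# Sketch of promote's manifest edit: flip services.mode between
# stable and canary without disturbing the rest of the file. The real
# CLI likely uses ruamel.yaml for a comment-preserving rewrite.
import re

def set_mode(manifest_text: str, new_mode: str) -> str:
    if new_mode not in ("stable", "canary"):
        raise ValueError(f"unknown mode: {new_mode}")
    # Replace only the value after "mode:", keeping indentation and any
    # trailing comment intact.
    return re.sub(r"(?m)^(\s*mode:\s*)\w+", rf"\g<1>{new_mode}", manifest_text)

manifest = "services:\n  mode: stable   # stable | canary (updated by promote)\n"
print(set_mode(manifest, "canary"))
```

After the edit, `promote` re-renders `docker-compose.yml` and restarts only the API service, which is why Nginx keeps serving throughout.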
Canary mode features:

- Every response includes an `X-Mode: canary` header
- The `POST /chaos` endpoint is activated
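The chaos behaviour toggled by the payloads below can be pictured as a small piece of in-process state that each request consults. This is a sketch of that idea, not the actual `app/main.py` handler — the state keys mirror the JSON fields accepted by `POST /chaos`:

```python
# Sketch of the canary-only chaos machinery: POST /chaos updates shared
# state, and every subsequent request consults it. Handler internals are
# an assumption; only the payload shapes come from the docs.
import random
import time

chaos_state = {"mode": "normal", "duration": 0.0, "rate": 0.0}

def apply_chaos(payload: dict) -> None:
    mode = payload.get("mode", "recover")
    if mode == "slow":
        chaos_state.update(mode="slow", duration=float(payload.get("duration", 1)))
    elif mode == "error":
        chaos_state.update(mode="error", rate=float(payload.get("rate", 0.5)))
    else:  # "recover" resets to normal behaviour
        chaos_state.update(mode="normal", duration=0.0, rate=0.0)

def handle_request() -> int:
    """Return the status code a request would get under the current chaos mode."""
    if chaos_state["mode"] == "slow":
        time.sleep(chaos_state["duration"])
    if chaos_state["mode"] == "error" and random.random() < chaos_state["rate"]:
        return 500
    return 200
```

Because the state lives in the API process, a `promote stable` restart also wipes any lingering chaos configuration.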
```bash
# Verify canary headers
curl -I http://localhost:8080/

# Slow responses by N seconds (canary only)
curl -X POST http://localhost:8080/chaos \
  -H "Content-Type: application/json" \
  -d '{"mode": "slow", "duration": 2}'

# Return 500 on ~50% of requests
curl -X POST http://localhost:8080/chaos \
  -H "Content-Type: application/json" \
  -d '{"mode": "error", "rate": 0.5}'

# Recover to normal
curl -X POST http://localhost:8080/chaos \
  -H "Content-Type: application/json" \
  -d '{"mode": "recover"}'
```

Switch back to stable:
```bash
./swiftdeploy promote stable
```

```bash
# Stop containers, remove networks and volumes
./swiftdeploy teardown

# Also delete generated config files (nginx.conf + docker-compose.yml)
./swiftdeploy teardown --clean
```

Expected output:
```
swiftdeploy teardown
→ Stopping and removing containers, networks, and volumes…
✔ PASS Stack torn down successfully
✔ Teardown complete
```
After `teardown --clean`, run `./swiftdeploy init` to regenerate the configs from `manifest.yaml`.
| Method | Path | Description | Mode |
|---|---|---|---|
| GET | `/` | Welcome message with mode, version, timestamp | Both |
| GET | `/healthz` | Liveness check with process uptime in seconds | Both |
| POST | `/chaos` | Simulate degraded behaviour | Canary only |
`GET /`

```json
{
  "message": "Welcome to SwiftDeploy API — running in stable mode",
  "mode": "stable",
  "version": "1.0.0",
  "timestamp": "2026-05-03T22:28:36Z"
}
```

`GET /healthz`

```json
{
  "status": "ok",
  "mode": "stable",
  "uptime": 48.48
}
```

The generated `nginx.conf`:

- Listens on `nginx.port` from the manifest (default: 8080)
- Proxy timeouts set from `nginx.proxy_timeout`
- JSON error bodies on 502/503/504
- `X-Deployed-By: swiftdeploy` header on every response
- Forwards the `X-Mode` header from upstream in canary mode
- Access log format: `$time_iso8601 | $status | ${request_time}s | $upstream_addr | $request`
View nginx access logs:

```bash
docker logs swiftdeploy-nginx
```

```yaml
services:
  image: swift-deploy-1-node:latest   # Docker image name
  port: 3000                          # Internal service port
  mode: stable                        # stable | canary (updated by promote)
  version: "1.0.0"                    # Injected as APP_VERSION

nginx:
  image: nginx:latest
  port: 8080                          # Public-facing port
  proxy_timeout: 30                   # Seconds for proxy timeouts

network:
  name: swiftdeploy-net
  driver_type: bridge

restart_policy: unless-stopped

volumes:
  logs: app-logs
```
`manifest.yaml` is the only file you edit manually. All other config files are generated from it.
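Because the manifest is the single source of truth, `validate` guards its shape before anything is deployed. A sketch of the required-fields check (check 2) against the sections shown above — the function name and message wording are illustrative:

```python
# Sketch of validate check 2: the manifest's required top-level sections
# must be present and non-empty before any rendering happens.
REQUIRED_SECTIONS = ("services", "nginx", "network")

def check_required_fields(manifest: dict) -> list[str]:
    """Return human-readable problems; an empty list means the check passes."""
    problems = []
    for key in REQUIRED_SECTIONS:
        if not manifest.get(key):
            problems.append(f"missing or empty section: {key}")
    return problems
```

Failing fast here is what keeps a typo in `manifest.yaml` from producing a half-broken `nginx.conf` or `docker-compose.yml`.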
```
swiftdeploy/
├── manifest.yaml             ← Single source of truth (only file you edit)
├── swiftdeploy               ← CLI tool (Python)
├── Dockerfile                ← API service image (Alpine, <300MB)
├── app/
│   ├── main.py               ← Flask API service
│   └── requirements.txt
├── templates/
│   ├── nginx.conf.j2         ← Nginx config template
│   └── docker-compose.yml.j2 ← Compose template
└── README.md
```

Generated files (never edit manually — re-run `init` instead):

```
├── nginx.conf                ← Generated by swiftdeploy init
└── docker-compose.yml        ← Generated by swiftdeploy init
```
- Containers run as a non-root user (`appuser`)
- Linux capabilities dropped (`cap_drop: ALL`)
- `no-new-privileges` security option enabled
- API port never exposed publicly (internal only)
- Images based on lightweight Alpine (~83MB)
```bash
# Nginx access logs
docker logs swiftdeploy-nginx

# API logs
docker logs swiftdeploy-api

# Follow live
docker compose logs -f
```

Nginx access log format:

```
2026-05-03T22:28:36+00:00 | 200 | 0.001s | 172.18.0.2:3000 | GET / HTTP/1.1
```