SwiftBatch is a recruiter-facing portfolio project that demonstrates a production-style Go backend for asynchronous image processing. The current repo includes the foundation for the first delivery milestones from SWIFTBATCH_AGENT_BRIEF.md:
- repository skeleton
- local Docker Compose stack
- Postgres schema and migration flow
- API and worker entrypoint scaffolding
Repository layout:

```
cmd/
  api/
  migrate/
  worker/
deploy/
  docker/
  k8s/
docs/
internal/
  api/
  config/
  db/
  imageproc/
  observability/
  platform/
  queue/
  storage/
  worker/
migrations/
scripts/
```
- Copy `.env.example` to `.env` if you want local overrides.
- Start the full local stack:

  ```
  docker compose -f deploy/docker/docker-compose.yml up --build
  ```

The compose flow starts postgres, redis, minio, the bucket bootstrapper, migrations, the API, and the worker. The API is published on http://localhost:18080, and worker metrics are published on http://localhost:18081/metrics.
If you want to run the Go binaries outside Docker instead, start the infrastructure first and then run:

```
go run ./cmd/migrate up
go run ./cmd/api
go run ./cmd/worker
```

- `api` exposes `POST /v1/uploads/presign`, `POST /v1/jobs`, `GET /v1/jobs/:id`, `GET /v1/jobs/:id/results`, `GET /healthz`, `GET /readyz`, and `GET /metrics`
- `GET /` now serves a minimal browser UI for upload, job submission, status polling, retry, and output downloads
- `worker` now exposes Prometheus metrics on `http://localhost:18081/metrics` plus a simple `GET /healthz`
- new jobs are persisted in Postgres and enqueued into Redis
- uploads can now be pushed directly to MinIO using a presigned `PUT` URL returned by the API
- the worker downloads source objects from MinIO, runs image transforms, uploads generated outputs back to MinIO, and persists `job_outputs`
- the worker creates `job_attempts`, moves jobs through `queued`, `processing`, `completed`, `failed`, and `dead_lettered`, and automatically retries until `SWIFTBATCH_REDIS_MAX_RETRIES` is exhausted
- exhausted jobs are pushed to the Redis DLQ with failure metadata
- `GET /v1/jobs/:id/results` now returns output metadata plus presigned download URLs
- worker metrics now include queue depth, DLQ depth, job completion/failure counts, retry counts, processing duration, and concurrency usage
- structured JSON logs now include `job_id` on create, retry, processing, completion, and DLQ paths
- IP-based rate limiting now protects `POST /v1/uploads/presign`, `POST /v1/jobs`, and `POST /v1/jobs/:id/retry`
- Postgres migration flow is wired through `cmd/migrate`
- Docker Compose includes `api`, `worker`, `redis`, `postgres`, `minio`, and a one-shot `createbuckets` init container
- `deploy/k8s/` now contains single-node `k3s` manifests for the full demo stack, including Traefik ingresses plus Prometheus and Grafana
- the SkyServer deployment now uses Traefik with Let's Encrypt HTTP-01, so the public hosts are served over valid HTTPS
- MinIO now sets explicit browser CORS for the frontend origin so presigned uploads work from the product UI
- the live SkyServer target now uses bundled `k3s` Traefik plus a `HelmChartConfig` override for Let's Encrypt and HTTP-to-HTTPS redirection
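The retry lifecycle described above (jobs moving through `queued`, `processing`, `completed`, `failed`, and `dead_lettered`, with retries bounded by `SWIFTBATCH_REDIS_MAX_RETRIES`) could be sketched roughly as follows. This is a minimal illustration, not the project's actual worker code; `nextState` and the state constants are hypothetical names.

```go
package main

import "fmt"

// Job states as described in the README; the identifiers here are
// illustrative and may differ from the real schema.
const (
	StateQueued     = "queued"
	StateCompleted  = "completed"
	StateDeadLetter = "dead_lettered"
)

// nextState is a hypothetical decision helper: given whether the latest
// attempt succeeded, how many attempts have run, and the retry budget
// (SWIFTBATCH_REDIS_MAX_RETRIES), it decides whether the job completes,
// is re-enqueued for another attempt, or goes to the Redis DLQ.
func nextState(succeeded bool, attempts, maxRetries int) string {
	if succeeded {
		return StateCompleted
	}
	if attempts <= maxRetries {
		return StateQueued // retry: push back onto the Redis queue
	}
	return StateDeadLetter // budget exhausted: push to the DLQ with failure metadata
}

func main() {
	fmt.Println(nextState(true, 1, 3))  // completed
	fmt.Println(nextState(false, 2, 3)) // queued
	fmt.Println(nextState(false, 4, 3)) // dead_lettered
}
```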
Further reading:
Implementation notes:
- the worker image uses Debian slim plus ImageMagick so `jpg`, `png`, `webp`, and `avif` output support is available with a small amount of code
- presigned URLs are generated against `SWIFTBATCH_STORAGE_PUBLIC_BASE_URL`, which defaults to `http://localhost:9000` for local Docker verification
- API rate limiting prefers `X-Forwarded-For`, then `X-Real-IP`, then `RemoteAddr`, so it still works sensibly behind Traefik
- the product is now being treated as an ephemeral-data demo system, so future cleanup of old uploads, outputs, and job history is part of the planned scope
Next implementation steps:
- add ephemeral data cleanup automation
- add demo-oriented docs and sample curl flow
- polish the user-facing frontend and add the engineering page
Deployment note:
- the current live-target plan uses a plain SkyServer Ubuntu VPS with `k3s`
- DNS for `abhinash.dev` is currently managed in the AWS Lightsail DNS zone UI, not Route 53