A lightweight Platform-as-a-Service control plane I built to explore how services like Heroku and Cloud Run work under the hood. This isn't a toy project—it handles real Kubernetes deployments with proper revision tracking, rolling updates, and horizontal scaling.
I've always been curious about what happens when you git push to Heroku. After digging through various PaaS implementations and Kubernetes operators, I decided to build my own simplified version to really understand the internals. The result is a clean REST API that manages the full lifecycle of containerized applications.
- Application Management - Create and organize apps with metadata tracking
- Service Deployment - Deploy any Docker image with configurable replicas, ports, and resource limits
- Revision Control - Every deployment creates an immutable revision, making rollbacks trivial
- Horizontal Scaling - Scale services up or down through the API, changes reflect instantly in the cluster
- Health Monitoring - Real-time status checks showing pod states across the cluster
```
┌─────────────────────────────────────────────────────────────┐
│                       REST API (Chi)                         │
├─────────────────────────────────────────────────────────────┤
│                        Service Layer                         │
│         (business logic, validation, orchestration)          │
├──────────────────────┬──────────────────────────────────────┤
│     SQLite Store     │          Kubernetes Client           │
│   (state tracking)   │         (cluster operations)         │
└──────────────────────┴──────────────────────────────────────┘
                       │
                       ▼
              ┌─────────────────┐
              │   K8s Cluster   │
              │  (kind/EKS/GKE) │
              └─────────────────┘
```
The design follows a pretty standard layered approach. The API layer handles HTTP concerns, the service layer contains all the business logic, and the store/kube clients deal with persistence and cluster operations. I intentionally kept Kubernetes-specific code isolated so the business logic stays testable without spinning up a cluster.
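To make the layering concrete, here's a rough sketch of how a scale request might flow through it. The type and method names are illustrative, not the actual API in the repo:

```go
package service

import (
	"context"
	"fmt"
)

// Store and KubeClient are the only dependencies the service layer sees.
// Both names are illustrative, not the repo's actual types.
type Store interface {
	UpdateReplicas(ctx context.Context, serviceID string, replicas int32) error
}

type KubeClient interface {
	ScaleDeployment(ctx context.Context, deployment string, replicas int32) error
}

type Services struct {
	store Store
	kube  KubeClient
}

// Scale validates the request, records the desired state, then pushes the
// change to the cluster. HTTP concerns never reach this layer.
func (s *Services) Scale(ctx context.Context, serviceID, deployment string, replicas int32) error {
	if replicas < 0 {
		return fmt.Errorf("replicas must be >= 0, got %d", replicas)
	}
	if err := s.store.UpdateReplicas(ctx, serviceID, replicas); err != nil {
		return fmt.Errorf("persist desired state: %w", err)
	}
	return s.kube.ScaleDeployment(ctx, deployment, replicas)
}
```

Nothing above the kube package imports client-go, which is what keeps the business logic testable without a cluster.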
You'll need Go 1.21+, Docker, and kubectl. For local development, I use kind to spin up a cluster:
```bash
# Create a local cluster
kind create cluster --name minipaas

# Clone and run
git clone https://github.com/Demiserular/MINIPASSS.git
cd MINIPASSS
go mod download
go run ./cmd/server
```

The server starts on port 8080. Hit /healthz to verify everything's connected.
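The health response (shown further down) reports the database and the cluster separately, so the handler presumably pings both. Here's a minimal sketch of what that could look like; the wiring is my assumption, not the repo's actual code:

```go
package api

import (
	"database/sql"
	"encoding/json"
	"net/http"

	"k8s.io/client-go/discovery"
)

// Healthz pings both dependencies and reports them separately, so a broken
// database is distinguishable from a broken kubeconfig in the response.
func Healthz(db *sql.DB, disco discovery.DiscoveryInterface) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		resp := map[string]string{"status": "ok", "database": "ok", "cluster": "ok"}
		if err := db.PingContext(r.Context()); err != nil {
			resp["status"], resp["database"] = "degraded", err.Error()
		}
		if _, err := disco.ServerVersion(); err != nil {
			resp["status"], resp["cluster"] = "degraded", err.Error()
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(resp)
	}
}
```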
Create an app:

```bash
curl -X POST http://localhost:8080/api/v1/apps \
  -H "Content-Type: application/json" \
  -d '{"name": "my-app", "description": "Production backend"}'
```

Deploy a service to it:

```bash
curl -X POST http://localhost:8080/api/v1/apps/{app_id}/services \
  -H "Content-Type: application/json" \
  -d '{
    "name": "api",
    "image": "nginx:alpine",
    "replicas": 3,
    "port": 80,
    "cpu_limit": "500m",
    "memory_limit": "256Mi"
  }'
```

Scale it up:

```bash
curl -X PUT http://localhost:8080/api/v1/services/{service_id}/scale \
  -H "Content-Type: application/json" \
  -d '{"replicas": 5}'
```

Check its status:

```bash
curl http://localhost:8080/api/v1/services/{service_id}/status
```

Here's what it looks like deploying and scaling a real service on my local kind cluster:
Cluster Status

```
$ kubectl get pods -n minipaas
NAME                   READY   STATUS    RESTARTS   AGE
web-6d9b7c8f5d-2xkp4   1/1     Running   0          3m
web-6d9b7c8f5d-4nth7   1/1     Running   0          3m
web-6d9b7c8f5d-8qrz2   1/1     Running   0          3m
web-6d9b7c8f5d-k9vm5   1/1     Running   0          2m
web-6d9b7c8f5d-pf6x3   1/1     Running   0          2m
```

Health Check Response

```json
{"status":"ok","database":"ok","cluster":"ok"}
```

The system correctly provisions Kubernetes Deployments and tracks their state. Each revision gets stored so you can see the deployment history and roll back if needed.
| Component | Choice | Why |
|---|---|---|
| Language | Go | Fast compilation, great stdlib, native K8s client |
| Router | chi | Lightweight, middleware-friendly, idiomatic |
| Database | SQLite | Zero config for dev, swap to Postgres for prod |
| K8s Client | client-go | Official library, well documented |
| Logging | zap | Structured logs, great performance |
```
minipaas/
├── cmd/server/   # Entry point
├── api/          # HTTP handlers and routing
├── service/      # Business logic layer
├── store/        # Database operations
├── kube/         # Kubernetes client wrapper
├── models/       # Data structures
├── config/       # Environment configuration
└── deploy/       # K8s manifests for self-hosting
```
Building this taught me a lot about Kubernetes internals—particularly how controllers reconcile desired state vs actual state. The trickiest part was handling the async nature of pod scheduling while keeping the API responsive. I ended up polling deployment status rather than blocking on full rollout, which matches how the big platforms do it.
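The check itself reduces to comparing the Deployment's status counters against the desired replica count. A sketch of that poll using client-go (illustrative, not the repo's exact code):

```go
package kube

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// DeploymentReady reports whether the Deployment controller has caught up
// with the desired state. Callers poll this instead of blocking on rollout.
func DeploymentReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	dep, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	desired := int32(1) // Kubernetes defaults spec.replicas to 1 when unset
	if dep.Spec.Replicas != nil {
		desired = *dep.Spec.Replicas
	}
	// Ready only when every desired replica is updated, available, and ready.
	return dep.Status.UpdatedReplicas == desired &&
		dep.Status.AvailableReplicas == desired &&
		dep.Status.ReadyReplicas == desired, nil
}
```

A handler can return this result immediately and let the client poll, which is what keeps the API responsive during rollouts.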
Also learned to appreciate good abstractions. Keeping the Kubernetes client behind an interface means I can run unit tests without a cluster and swap implementations if needed.
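Roughly, the seam looks something like this; the method set is my guess at a reasonable cut, not the repo's actual interface:

```go
package kube

import "context"

// Cluster is the seam between the service layer and client-go.
type Cluster interface {
	Scale(ctx context.Context, deployment string, replicas int32) error
	ReadyReplicas(ctx context.Context, deployment string) (int32, error)
}

// fakeCluster satisfies Cluster with an in-memory map, so unit tests run
// without an API server. Construct it with a non-nil map:
// &fakeCluster{replicas: map[string]int32{}}.
type fakeCluster struct {
	replicas map[string]int32
}

func (f *fakeCluster) Scale(_ context.Context, d string, n int32) error {
	f.replicas[d] = n
	return nil
}

func (f *fakeCluster) ReadyReplicas(_ context.Context, d string) (int32, error) {
	return f.replicas[d], nil
}
```

client-go also ships a fake clientset (k8s.io/client-go/kubernetes/fake) if you'd rather test against the real clientset API.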
Some things I'd add with more time:
- WebSocket endpoint for real-time deployment logs
- Proper authentication (JWT or API keys)
- Rollback endpoint to revert to previous revisions (sketched after this list)
- Resource quota enforcement at the app level
- Prometheus metrics for observability
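Rollback in particular should be cheap given the revision history: reverting is just redeploying an old spec. A self-contained sketch of the idea, with all names assumed for illustration:

```go
package service

import (
	"context"
	"fmt"
)

// Revision mirrors the stored snapshot from the models sketch earlier.
type Revision struct {
	Image    string
	Replicas int32
}

// RevisionStore and Deployer are assumed seams, named for illustration.
type RevisionStore interface {
	GetRevision(ctx context.Context, serviceID string, number int) (Revision, error)
}

type Deployer interface {
	Deploy(ctx context.Context, serviceID, image string, replicas int32) error
}

// Rollback redeploys the spec captured in an earlier revision. The rollback
// itself is recorded as a new revision, so history stays append-only.
func Rollback(ctx context.Context, store RevisionStore, d Deployer, serviceID string, to int) error {
	rev, err := store.GetRevision(ctx, serviceID, to)
	if err != nil {
		return fmt.Errorf("load revision %d: %w", to, err)
	}
	return d.Deploy(ctx, serviceID, rev.Image, rev.Replicas)
}
```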
The project includes GitHub Actions workflows with a security-first approach:
CI Pipeline (ci.yml):
| Job | Purpose |
|---|---|
| build | Compile, vet, test with race detection |
| lint | Code quality via golangci-lint |
| security | Static analysis with Gosec (OWASP) |
| vulnerability-scan | Dependency CVE check with govulncheck |
| dependency-review | Block PRs with vulnerable deps |
| docker | Build image + Trivy container scan |
Security Scans:
- Gosec - Finds security issues in Go code (SQL injection, hardcoded creds, etc.)
- govulncheck - Official Go vulnerability database check
- Trivy - Container image vulnerability scanner
- Dependency Review - Blocks PRs introducing vulnerable dependencies
Results upload to GitHub Security tab for tracking.
Release Pipeline (release.yml) - Triggered on version tags:
- Cross-platform binary builds (Linux, macOS, Windows)
- Docker image pushed to GitHub Container Registry
- Automatic GitHub release with binaries
```bash
# Create a release
git tag v1.0.0
git push origin v1.0.0
```

MIT. Use it however you want.
Built by Demiserular | Questions? Open an issue.