Self-hosted Nix binary cache using Attic in your lab Kubernetes cluster, with a Cloudflare Workers + R2 read-only edge cache for external CI/CD consumers.
```
        Lab (read + write)                         Edge (read-only)
┌──────────────────────┐                  ┌──────────────────────────┐
│ atticd (Deployment)  │      rclone      │ CF Worker + R2           │
│                      │       sync       │ cache.yourdomain.com     │
│ PostgreSQL           │    ──────────►   │                          │
│                      │    (CronJob)     │ nix-cache bucket         │
│ Lab S3 (MinIO/Ceph)  │                  │                          │
└──────────────────────┘                  └──────────────────────────┘
        ▲       ▲                                      ▲
        │       │                                      │
   nix build   attic push                       GitHub Actions /
   (local)     (lab CI)                         external CI reads
```
```
attic-k8s/                     # Kubernetes manifests (deploy via Fleet)
├── fleet.yaml                 # Fleet bundle config
├── namespace.yaml
├── secrets.yaml               # ⚠ Encrypt before committing (SOPS/SealedSecrets)
├── configmap.yaml             # Attic server.toml
├── deployment.yaml            # atticd
├── service-ingress.yaml       # ClusterIP + Ingress
├── postgres-helmchart.yaml    # Optional: PostgreSQL via Bitnami Helm
├── r2-sync-cronjob.yaml       # rclone sync to Cloudflare R2
└── server.toml                # Reference config (baked into configmap)

cloudflare-worker/             # Cloudflare Worker project
├── wrangler.toml
├── package.json
└── src/
    └── worker.js              # R2 → Nix binary cache proxy
```
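As a rough sketch, the reference `server.toml` for this topology might look like the following. The section and key names follow Attic's example configuration but should be verified against the release you deploy; all endpoints and credentials are placeholders, and the HS256 secret is typically injected via the `ATTIC_SERVER_TOKEN_HS256_SECRET_BASE64` environment variable rather than committed here:

```toml
# Sketch only; hosts, bucket names, and credentials are placeholders.
listen = "[::]:8080"

[database]
url = "postgresql://attic:CHANGE_ME@postgres.attic.svc:5432/attic"

[storage]
type = "s3"
region = "us-east-1"                        # largely arbitrary for MinIO-style endpoints
bucket = "nix-cache"
endpoint = "http://minio.storage.svc:9000"
```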
- S3-compatible storage in your cluster (MinIO, Ceph RGW, etc.)
- A bucket named `nix-cache` in that storage
- A PostgreSQL instance (or use the included HelmChart)
- Edit `secrets.yaml` with your real credentials:
  - PostgreSQL connection string
  - S3 access/secret keys
  - The HS256 secret, generated with:

    ```sh
    openssl rand -hex 32
    ```

- Edit `configmap.yaml` — update the S3 endpoint to match your lab.

- Edit `service-ingress.yaml` — set the hostname and TLS config for your ingress controller.

- Encrypt secrets before committing to Git:

  ```sh
  # SOPS example
  sops --encrypt --in-place attic-k8s/secrets.yaml
  ```

- Deploy via Fleet (push to your fleet repo) or manually:

  ```sh
  kubectl apply -f attic-k8s/
  ```

- Initialize the cache once atticd is running:

  ```sh
  # Port-forward if needed
  kubectl port-forward -n attic svc/attic 8080:80

  # Log in (use the HS256 token to generate an admin token)
  attic login lab http://localhost:8080 --set-default

  # Create the cache
  attic cache create main

  # Note the public key from:
  attic cache info main
  ```
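The credentials referenced above might live in a Secret shaped like this sketch. The key names are placeholders and must line up with whatever `deployment.yaml` actually mounts:

```yaml
# attic-k8s/secrets.yaml before encryption; all values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: attic-secrets
  namespace: attic
type: Opaque
stringData:
  database-url: "postgresql://attic:CHANGE_ME@postgres:5432/attic"
  s3-access-key: "CHANGE_ME"
  s3-secret-key: "CHANGE_ME"
  hs256-secret: "<output of: openssl rand -hex 32>"
```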
```sh
# Build a docker image with nix
nix build .#docker

# Push the closure to the cache
attic push main ./result
```

Or push everything in one shot:

```sh
nix build .#docker --json \
  | jq -r '.[].outputs.out' \
  | xargs -I{} attic push main {}
```

For the edge cache you'll need:

- A Cloudflare account with R2 enabled
- An R2 bucket named `nix-cache`
- The Wrangler CLI installed
- Edit `wrangler.toml` — set your domain route and public signing key.

- Deploy:

  ```sh
  cd cloudflare-worker
  npm install
  npx wrangler deploy
  ```

- Configure custom domain (optional):
  - Add a CNAME record for `cache.yourdomain.com` pointing to your Worker's `*.workers.dev` domain
  - Or use the routes config in `wrangler.toml`
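For orientation, a `wrangler.toml` for this setup might look like the sketch below. The Worker name, route, and binding name are placeholders; the `binding` value is what the code in `src/worker.js` sees on its `env` parameter, so the two must match:

```toml
name = "nix-cache-proxy"
main = "src/worker.js"
compatibility_date = "2024-01-01"

# Serve the Worker on your custom domain (alternative to the CNAME approach)
routes = [
  { pattern = "cache.yourdomain.com", custom_domain = true }
]

# Bind the R2 bucket; exposed to the Worker as env.NIX_CACHE
[[r2_buckets]]
binding = "NIX_CACHE"
bucket_name = "nix-cache"
```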
The `r2-sync-cronjob.yaml` CronJob runs rclone every 15 minutes to sync your lab S3 bucket to Cloudflare R2.
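The sync needs an rclone config covering both remotes. A sketch of what the config secret might contain, where the remote names and endpoints are placeholders and `type`, `provider`, `endpoint`, and the key fields are standard rclone S3-backend options:

```ini
[lab]
type = s3
provider = Minio
endpoint = http://minio.storage.svc:9000
access_key_id = CHANGE_ME
secret_access_key = CHANGE_ME

[r2]
type = s3
provider = Cloudflare
endpoint = https://<account-id>.r2.cloudflarestorage.com
access_key_id = CHANGE_ME
secret_access_key = CHANGE_ME
```

With remotes named this way, the job would run something like `rclone sync lab:nix-cache r2:nix-cache`, which is also what makes deletes propagate to the mirror.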
- Edit `r2-sync-cronjob.yaml` — fill in both lab S3 and R2 credentials in the rclone config secret.

- It deploys alongside the rest of the manifests.

- Verify the sync:

  ```sh
  kubectl logs -n attic -l app.kubernetes.io/component=r2-sync --tail=50
  ```
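Beyond the CronJob logs, you can spot-check that a specific store path is servable from the edge. This sketch (example hash, placeholder domain) derives the `.narinfo` URL that Nix will request:

```shell
# A binary cache addresses .narinfo metadata by the store hash: the part of
# the store path's basename before the first "-". Path below is an example.
path=/nix/store/h5cfxb26aj2z8r0zw48cmsqnqpx5pkgh-example-1.0
hash=$(basename "$path" | cut -d- -f1)
echo "https://cache.yourdomain.com/$hash.narinfo"
# Fetch that URL with `curl -fsI`; an HTTP 200 after the next sync run means
# the edge can serve the path's metadata.
```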
On developer machines and CI runners, add the cache as a substituter:

```conf
# nix.conf or via NIX_CONFIG env var
extra-substituters = https://cache.yourdomain.com
extra-trusted-public-keys = your-cache:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
```

In GitHub Actions:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cachix/install-nix-action@v27
        with:
          extra_nix_config: |
            extra-substituters = https://cache.yourdomain.com
            extra-trusted-public-keys = your-cache:AAAA...=
      - run: nix build .#docker
      - run: |
          skopeo copy \
            docker-archive:result \
            docker://ghcr.io/${{ github.repository }}:${{ github.sha }}
```

- Garbage Collection: Attic handles GC on the lab side. The R2 mirror follows via the rclone sync (deletes propagate).
- Cache Signing: All `.narinfo` files are signed by Attic. The signing key is set when you run `attic cache create`. Consumers only need the public key.
- Monitoring: Add Prometheus scraping to the atticd pod if needed. The Worker has built-in analytics in the Cloudflare dashboard.
- Cost: R2 has zero egress fees, making it ideal for CI reads. You pay only for storage and Class A/B operations.
- Security: The Worker is read-only by design — no write path exists. Your lab Attic instance is the only writer.
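The read-only proxy in `src/worker.js` can stay very small. This is a sketch, not the repo's actual code: it assumes the R2 bucket is bound as `NIX_CACHE` in `wrangler.toml`, and serves `nix-cache-info` inline so Nix clients can discover the cache:

```javascript
const worker = {
  async fetch(request, env) {
    // Read-only by design: reject anything that could write.
    if (request.method !== "GET" && request.method !== "HEAD") {
      return new Response("read-only cache", { status: 405 });
    }

    const key = new URL(request.url).pathname.slice(1);

    // Nix fetches this first to discover the cache; serve it inline.
    if (key === "nix-cache-info") {
      return new Response("StoreDir: /nix/store\nWantMassQuery: 1\nPriority: 40\n");
    }

    // Everything else (.narinfo metadata, nar/* archives) comes from R2.
    const object = await env.NIX_CACHE.get(key);
    if (object === null) {
      return new Response("not found", { status: 404 });
    }
    return new Response(object.body, {
      // Store objects are content-addressed and immutable, so aggressive
      // edge caching is safe.
      headers: { "cache-control": "public, max-age=31536000, immutable" },
    });
  },
};

export default worker;
```

Because the objects never change, the long `cache-control` lifetime lets Cloudflare's edge absorb repeat CI reads without touching R2.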