flrichar/attic-cache
Nix Binary Cache: Attic on Kubernetes + Cloudflare R2 Edge Cache

Self-hosted Nix binary cache using Attic in your lab Kubernetes cluster, with a Cloudflare Workers + R2 read-only edge cache for external CI/CD consumers.

Architecture

Lab (read + write)                     Edge (read-only)
┌───────────────────────┐             ┌──────────────────────────┐
│  atticd (Deployment)  │   rclone    │  CF Worker + R2          │
│         │             │    sync     │  cache.yourdomain.com    │
│  PostgreSQL           │ ─────────►  │                          │
│         │             │  (CronJob)  │  nix-cache bucket        │
│  Lab S3 (MinIO/Ceph)  │             │                          │
└───────────────────────┘             └──────────────────────────┘
     ▲      ▲                                     ▲
     │      │                                     │
  nix build  attic push                  GitHub Actions /
  (local)    (lab CI)                    external CI reads

Directory Layout

attic-k8s/                    # Kubernetes manifests (deploy via Fleet)
├── fleet.yaml                # Fleet bundle config
├── namespace.yaml
├── secrets.yaml              # ⚠ Encrypt before committing (SOPS/SealedSecrets)
├── configmap.yaml            # Attic server.toml
├── deployment.yaml           # atticd
├── service-ingress.yaml      # ClusterIP + Ingress
├── postgres-helmchart.yaml   # Optional: PostgreSQL via Bitnami Helm
├── r2-sync-cronjob.yaml      # rclone sync to Cloudflare R2
└── server.toml               # Reference config (baked into configmap)

cloudflare-worker/            # Cloudflare Worker project
├── wrangler.toml
├── package.json
└── src/
    └── worker.js             # R2 → Nix binary cache proxy
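
The proxy in src/worker.js can be sketched roughly like this. The NIX_CACHE binding name, the cache priority, and serving nix-cache-info from the Worker rather than the bucket are all assumptions to check against the actual implementation:

```javascript
// Hypothetical sketch of the R2 → Nix binary cache proxy.
// Assumes an R2 bucket bound as NIX_CACHE in wrangler.toml.
const worker = {
  async fetch(request, env) {
    // Read-only by design: reject anything that is not a read.
    if (request.method !== "GET" && request.method !== "HEAD") {
      return new Response("read-only cache", { status: 405 });
    }

    const key = decodeURIComponent(new URL(request.url).pathname.slice(1));

    // Nix probes this path to discover the cache's store dir and priority.
    if (key === "nix-cache-info") {
      return new Response("StoreDir: /nix/store\nWantMassQuery: 1\nPriority: 40\n");
    }

    // Everything else (.narinfo metadata, nar/*.nar.* archives) comes from R2.
    const object = await env.NIX_CACHE.get(key);
    if (object === null) {
      return new Response("not found", { status: 404 });
    }
    const contentType = key.endsWith(".narinfo")
      ? "text/x-nix-narinfo"
      : "application/octet-stream";
    return new Response(object.body, { headers: { "content-type": contentType } });
  },
};

export default worker;
```

Because there is no handler for PUT/POST, the edge cache has no write path at all, matching the security note below.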

Setup Guide

1. Lab: Deploy Attic on Kubernetes

Prerequisites

  • S3-compatible storage in your cluster (MinIO, Ceph RGW, etc.)
  • Create a bucket named nix-cache
  • PostgreSQL instance (or use the included HelmChart)

Steps

  1. Edit secrets.yaml with your real credentials:

    • PostgreSQL connection string
    • S3 access/secret keys
    • Generate the HS256 secret: openssl rand -hex 32
  2. Edit configmap.yaml — update the S3 endpoint to match your lab.

  3. Edit service-ingress.yaml — set the hostname and TLS config for your ingress controller.

  4. Encrypt secrets before committing to Git:

    # SOPS example
    sops --encrypt --in-place attic-k8s/secrets.yaml
  5. Deploy via Fleet (push to your fleet repo) or manually:

    kubectl apply -f attic-k8s/
  6. Initialize the cache once atticd is running:

    # Port-forward if needed
    kubectl port-forward -n attic svc/attic 8080:80
    
    # Log in (first mint an admin token, e.g. with `atticadm make-token` inside the atticd pod)
    attic login lab http://localhost:8080 <admin-token> --set-default
    
    # Create the cache
    attic cache create main
    
    # Note the public key from:
    attic cache info main
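
The reference server.toml baked into the configmap might look roughly like the sketch below. Key names follow the Attic documentation; the endpoint, bucket, and database URL are placeholders to adapt to your lab:

```toml
# Reference sketch only; verify key names against the Attic docs.
listen = "[::]:8080"

# The HS256 token secret is typically supplied via the
# ATTIC_SERVER_TOKEN_HS256_SECRET_BASE64 environment variable
# (from the secret created earlier) rather than written here.

[database]
url = "postgresql://attic:password@postgres:5432/attic"

[storage]
type = "s3"
region = "us-east-1"    # MinIO/Ceph generally ignore this, but the field is expected
bucket = "nix-cache"
endpoint = "http://minio.minio.svc.cluster.local:9000"  # your lab S3 endpoint
```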

2. Push to the Cache (Local / Lab CI)

# Build a Docker image with Nix
nix build .#docker

# Push the closure to the cache
attic push main ./result

Or push everything in one shot:

nix build .#docker --json \
  | jq -r '.[].outputs.out' \
  | xargs -I{} attic push main {}

3. Cloudflare: Deploy the Edge Cache

Prerequisites

  • Cloudflare account with R2 enabled
  • Create an R2 bucket named nix-cache
  • Wrangler CLI installed

Steps

  1. Edit wrangler.toml — set your domain route and public signing key.

  2. Deploy:

    cd cloudflare-worker
    npm install
    npx wrangler deploy
  3. Configure custom domain (optional):

    • Add a CNAME record for cache.yourdomain.com pointing to your Worker's *.workers.dev domain
    • Or use the routes config in wrangler.toml
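
A wrangler.toml along these lines would wire the custom-domain route and the R2 bucket to the Worker. The Worker name, binding name, and compatibility date are illustrative:

```toml
name = "nix-cache-proxy"           # illustrative Worker name
main = "src/worker.js"
compatibility_date = "2024-01-01"

# Serve the cache on your own hostname instead of *.workers.dev
routes = [
  { pattern = "cache.yourdomain.com", custom_domain = true }
]

[[r2_buckets]]
binding = "NIX_CACHE"              # exposed to the Worker as env.NIX_CACHE
bucket_name = "nix-cache"
```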

4. Sync Lab → R2

The r2-sync-cronjob.yaml runs rclone every 15 minutes to sync your lab S3 bucket to Cloudflare R2.

  1. Edit r2-sync-cronjob.yaml — fill in both lab S3 and R2 credentials in the rclone config secret.

  2. The CronJob deploys alongside the rest of the manifests; no separate step is needed.

  3. Verify the sync:

    kubectl logs -n attic -l app.kubernetes.io/component=r2-sync --tail=50
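
The rclone config inside the CronJob secret can be sketched as two S3 remotes; the credential values and the in-cluster MinIO hostname below are placeholders:

```ini
[lab]
type = s3
provider = Minio
access_key_id = LAB_ACCESS_KEY
secret_access_key = LAB_SECRET_KEY
endpoint = http://minio.minio.svc.cluster.local:9000

[r2]
type = s3
provider = Cloudflare
access_key_id = R2_ACCESS_KEY
secret_access_key = R2_SECRET_KEY
endpoint = https://<account-id>.r2.cloudflarestorage.com
```

With these remotes the CronJob's command reduces to `rclone sync lab:nix-cache r2:nix-cache`. Note that `sync` (unlike `copy`) also deletes objects removed on the lab side, which is what lets garbage collection propagate to the edge.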

5. Configure Nix Clients

On developer machines and CI runners, add the cache as a substituter:

# nix.conf or via NIX_CONFIG env var
extra-substituters = https://cache.yourdomain.com
extra-trusted-public-keys = your-cache:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
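
If consumers build from a flake, the same settings can also travel with the flake via nixConfig (a sketch; note that non-trusted users must pass --accept-flake-config, or the settings are ignored):

```nix
{
  # Replace the key with the output of `attic cache info main`.
  nixConfig = {
    extra-substituters = [ "https://cache.yourdomain.com" ];
    extra-trusted-public-keys = [ "your-cache:AAAA...=" ];
  };

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = { self, nixpkgs }: { /* ... */ };
}
```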

GitHub Actions Example

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cachix/install-nix-action@v27
        with:
          extra_nix_config: |
            extra-substituters = https://cache.yourdomain.com
            extra-trusted-public-keys = your-cache:AAAA...=
      - run: nix build .#docker
      - run: |
          skopeo copy \
            docker-archive:result \
            docker://ghcr.io/${{ github.repository }}:${{ github.sha }}

Operational Notes

  • Garbage Collection: Attic handles GC on the lab side. The R2 mirror follows via the rclone sync (deletes propagate).
  • Cache Signing: All .narinfo files are signed by Attic. The signing key is set when you attic cache create. Consumers only need the public key.
  • Monitoring: Add Prometheus scraping to the atticd pod if needed. The Worker has built-in analytics in the Cloudflare dashboard.
  • Cost: R2 has zero egress fees, making it ideal for CI reads. You pay only for storage and Class A/B operations.
  • Security: The Worker is read-only by design — no write path exists. Your lab Attic instance is the only writer.

About

Nix Binary Cache Proof of Concept
