
ci: add per-DuckDB-version matrix CD for cmd/duckgres-worker #502

Merged
fuziontech merged 1 commit into main from feature/cd-worker-matrix
May 1, 2026

Conversation

@fuziontech
Member

Summary

Adds .github/workflows/container-image-worker-cd.yml — a new CD pipeline that publishes one duckgres-worker image per (DuckDB version × arch), using Dockerfile.worker from PR #501.

Matrix shape

DuckDB version     Tags
1.5.2 (default)    duckgres-worker:<sha>-duckdb1.5.2-{arm64,amd64} + multi-arch :<sha>-duckdb1.5.2 + :<sha> and :latest
1.5.1              duckgres-worker:<sha>-duckdb1.5.1-{arm64,amd64} + multi-arch :<sha>-duckdb1.5.1
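A minimal sketch of what the matrix block could look like — field names here are illustrative assumptions, not the actual contents of container-image-worker-cd.yml:

```yaml
# Illustrative sketch only — key names are assumptions, not the
# workflow's real schema.
strategy:
  matrix:
    duckdb:
      - version: "1.5.2"
        duckdb_go_version: "v2.10502.0"
        duckdb_bindings_version: "v0.10502.0"
        default: true          # default rows also get :<sha> and :latest
      - version: "1.5.1"
        duckdb_go_version: "v2.10501.0"
        duckdb_bindings_version: "v0.10501.0"
        default: false
    arch: [arm64, amd64]
```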

Adding a DuckDB version is one new row under matrix.duckdb. The DUCKDB_GO_VERSION / DUCKDB_BINDINGS_VERSION pair maps to duckdb-go module versions; the encoding is v0.<major><minor:02d><patch:02d>.0 (see scripts/ducklake_version_matrix.sh for the same mapping in test code), so DuckDB 1.5.1 → v2.10501.0 / v0.10501.0 and 1.5.2 → v2.10502.0 / v0.10502.0.
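The encoding can be sketched in Go (a hypothetical helper for illustration — the repo's real mapping lives in scripts/ducklake_version_matrix.sh):

```go
package main

import "fmt"

// duckdbToModuleVersions applies the encoding described above:
// v<N>.<major><minor:02d><patch:02d>.0, with the v2 prefix for
// DUCKDB_GO_VERSION and v0 for DUCKDB_BINDINGS_VERSION.
// Hypothetical helper, not code from the repo.
func duckdbToModuleVersions(major, minor, patch int) (goVer, bindingsVer string) {
	enc := fmt.Sprintf("%d%02d%02d", major, minor, patch)
	return "v2." + enc + ".0", "v0." + enc + ".0"
}

func main() {
	g, b := duckdbToModuleVersions(1, 5, 1)
	fmt.Println(g, b) // v2.10501.0 v0.10501.0
}
```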

What stays the same

The all-in-one image (.github/workflows/container-image-cd.yml) is left untouched and continues to publish the existing duckgres image unchanged. The new pipeline ships alongside it.

How tenants opt in to a non-default version

Operators flip a tenant's image config-store column to point at a specific suffixed worker tag (e.g. duckgres-worker:<sha>-duckdb1.5.1) to canary that DuckDB version for that org. PR #462 (the original multi-version control plane work) already wires the image-pinning lookup into the worker activation path — so this PR closes the loop on the user's original ask: "build for each DuckDB version so we can pin per tenant."

🤖 Generated with Claude Code

Adds .github/workflows/container-image-worker-cd.yml — a new CD pipeline
that publishes one duckgres-worker image per (DuckDB version × arch),
using Dockerfile.worker (PR #501).

Matrix shape:
  - DuckDB 1.5.2 (default) → duckgres-worker:<sha>-duckdb1.5.2-{arm64,amd64}
                              + multi-arch :<sha>-duckdb1.5.2 manifest
                              + :<sha> and :latest (only on default rows)
  - DuckDB 1.5.1            → duckgres-worker:<sha>-duckdb1.5.1-{arm64,amd64}
                              + multi-arch :<sha>-duckdb1.5.1 manifest

Adding a DuckDB version is one new row under matrix.duckdb. The
DUCKDB_GO_VERSION / DUCKDB_BINDINGS_VERSION pair maps to duckdb-go
module versions; the encoding is `v0.<major><minor:02d><patch:02d>.0`
(see scripts/ducklake_version_matrix.sh for the same mapping in test
code), so DuckDB 1.5.1 → v2.10501.0 / v0.10501.0 and 1.5.2 →
v2.10502.0 / v0.10502.0.

The all-in-one image (.github/workflows/container-image-cd.yml) is
left untouched and continues to publish the existing duckgres image
unchanged. The new pipeline ships alongside it.

Operators flip a tenant's `image` config-store column to point at a
specific suffixed worker tag (e.g. duckgres-worker:<sha>-duckdb1.5.1)
to canary that DuckDB version for that org. PR #462 (the original
multi-version control plane work) wires the image-pinning lookup into
the worker activation path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@fuziontech fuziontech enabled auto-merge (squash) May 1, 2026 18:27
fuziontech added a commit that referenced this pull request May 1, 2026
Adds .github/workflows/container-image-controlplane-cd.yml — publishes
duckgres-controlplane:<sha> + duckgres-controlplane:latest as a multi-
arch manifest (arm64 + amd64) on every push to main.

Single build per sha — the CP is version-agnostic by design (one
image fits all worker fleets), so no DuckDB-version matrix here.
Contrast with container-image-worker-cd.yml (PR #502) which produces
one duckgres-worker image per (DuckDB version × arch).

Together with the existing all-in-one CD (container-image-cd.yml,
unchanged) and the worker matrix CD, the image pipeline now mirrors
the binary set:

  duckgres                container-image-cd.yml             (existing)
  duckgres-worker         container-image-worker-cd.yml      (PR #502)
  duckgres-controlplane   container-image-controlplane-cd.yml (this PR)

Stacked on PR #503 which adds Dockerfile.controlplane.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@fuziontech fuziontech merged commit 70f283e into main May 1, 2026
21 of 22 checks passed
@fuziontech fuziontech deleted the feature/cd-worker-matrix branch May 1, 2026 18:30
fuziontech added a commit that referenced this pull request May 1, 2026
* feat: add Dockerfile.controlplane for the duckdb-free CP image

Builds cmd/duckgres-controlplane (PR #498). The image is the control-
plane Pod's runtime; all SQL execution is routed to remote
duckgres-worker images (Dockerfile.worker), so this image:

  - Does NOT link libduckdb (the controlplane-no-libduckdb CI guard
    from PR #499 enforces it)
  - Does NOT bundle the DuckDB extension downloads — without a DuckDB
    driver they'd be dead weight
  - Is meaningfully smaller than the all-in-one image

CGO is still enabled because the transpiler uses pg_query_go which
links libpg_query. That's a pure Postgres parser, nothing to do with
DuckDB.
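As a rough sketch of the build described above (illustrative only — not the repo's actual Dockerfile.controlplane; base images and paths are assumptions):

```dockerfile
# Illustrative sketch, not the actual Dockerfile.controlplane.
FROM golang:1.23 AS build
WORKDIR /src
COPY . .
# CGO stays on for pg_query_go (links libpg_query, a pure Postgres
# parser) — but nothing in this binary links libduckdb.
RUN CGO_ENABLED=1 go build -o /out/duckgres-controlplane ./cmd/duckgres-controlplane

FROM debian:bookworm-slim
COPY --from=build /out/duckgres-controlplane /usr/local/bin/duckgres-controlplane
ENTRYPOINT ["duckgres-controlplane"]
```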

Together with Dockerfile.worker (per-DuckDB-version, PR #501) and the
existing all-in-one Dockerfile (unchanged), the image set now mirrors
the binary set:

  duckgres                    (existing) — all-in-one, links libduckdb
  duckgres-worker             (new)      — worker-only, per-DuckDB-version
  duckgres-controlplane       (this PR)  — CP-only, no libduckdb

A CD workflow that publishes the controlplane image (single build per
sha, no DuckDB matrix needed since this binary is version-agnostic) is
the next PR.

Verified locally:
  - go build -o /tmp/duckgres-controlplane ./cmd/duckgres-controlplane
    builds clean (~40MB binary)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci: add CD pipeline for cmd/duckgres-controlplane image (#504)

Adds .github/workflows/container-image-controlplane-cd.yml — publishes
duckgres-controlplane:<sha> + duckgres-controlplane:latest as a multi-
arch manifest (arm64 + amd64) on every push to main.

Single build per sha — the CP is version-agnostic by design (one
image fits all worker fleets), so no DuckDB-version matrix here.
Contrast with container-image-worker-cd.yml (PR #502) which produces
one duckgres-worker image per (DuckDB version × arch).

Together with the existing all-in-one CD (container-image-cd.yml,
unchanged) and the worker matrix CD, the image pipeline now mirrors
the binary set:

  duckgres                container-image-cd.yml             (existing)
  duckgres-worker         container-image-worker-cd.yml      (PR #502)
  duckgres-controlplane   container-image-controlplane-cd.yml (this PR)

Stacked on PR #503 which adds Dockerfile.controlplane.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fuziontech added a commit that referenced this pull request May 1, 2026
…ake_version (#506)

Closes the operational loop opened by PR #462 (per-tenant pinning) and PR #502
(per-DuckDB-version matrix CD): operators no longer need to run direct
`UPDATE duckgres_managed_warehouses SET image=..., ducklake_version=...` SQL
against the config store to pin a tenant to a specific worker image / DuckLake
spec version. The new endpoint goes through the same row-locked
MutateManagedWarehouse path the PUT endpoint uses, so concurrent mutators are
serialized.

Adds `ducklake_version` to managedWarehouseUpsertColumns() — it was missing,
so prior PUTs that touched this column were silently no-oping. Also adds
Image and DuckLakeVersion to the strict-decode whitelist on
managedWarehouseRequest so the existing PUT path can carry them too.

Includes a regression-guard test asserting both columns stay in the upsert
list — losing either silently breaks the matrix-build cutover.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>