[docs] Design sketch: content-addressed microdata publishing#311
Conversation
Companion to docs/release-bundles.md. Sketches what we'd build if we started over: content-addressed object storage (build_id = sha256 of inputs), release channels (latest/stable/lts-* pointing at build_ids), a one-command builder CLI, and a pe.py consumer that resolves channel -> manifest -> payload with sha256 verification.

Motivation: today's stack layers PyPI + HF + GitHub + release manifests, and refreshing any link touches six files across three repos. Today's session hit it twice — the us-data HF path was a model repo, not a dataset repo (broke refresh_release_bundle until we fixed it), and answering "is our data out of date?" required spelunking because inputs.model_package.sha256 isn't stored anywhere.

Keeps intact:
- Country data repos own construction (release-bundles.md boundary)
- TRACE TRO sidecars with the same trov:/pe: vocabulary
- UK privacy boundary (bucket can be org-gated)

Explicitly a design sketch, not an accepted plan. The open questions section flags GCS vs S3, parquet vs HDF5, channel-history signing, and single-bucket vs namespaces.
Subagent stress-test surfaced five scenarios the sketch handled clumsily or not at all. Rewritten to:

- State upfront that the motivating pains (#310 HF repo-type bug, "is data stale?") do NOT require this architecture — they're solvable with a one-time HF repo-type fix plus a 50-line CI job. The sketch is a "where do we want to be in a year?" question, not an immediate-fix proposal.
- Replace the single "Open questions" section with an explicit "Unresolved risks" section covering:
  - UK Data Service audit trail (today HF logs downloader identity; the sketch loses this unless we explicitly gate manifests and log resolver hits)
  - Silent-promote attack (channel JSON has no signature; the sketch is strictly weaker than PyPI/HF platform auth until channel-history signing ships)
  - Non-deterministic builds (today's Enhanced CPS pipeline uses torch+pandas imputation; v1 needs If-None-Match conditional writes or explicit first-writer-wins semantics)
  - Licence revocation vs immutability (tombstone build_ids with status=revoked, plus an explicit licence-continuity qualifier on the replicability guarantee)
  - Cross-cloud replication (the mirror story is payload-only; channels require a proxy or consumer multi-mirror config)
- Revised cost estimate: the earlier "3 engineer-weeks" was ~3x optimistic. A realistic range is 8-12 engineer-weeks. Recategorised as v5 scope.

No change to the core three-concepts model (identity / distribution / discovery separation). That part held up.
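The "50-line CI job" staleness check mentioned above could look roughly like this. The manifest shape (`inputs.model_package.sha256`) comes from the release-bundle schema discussed in this thread; the function names and the idea of hashing the installed wheel are illustrative assumptions, not an existing pe.py API:

```python
import hashlib


def model_package_digest(wheel_bytes: bytes) -> str:
    """sha256 of the model wheel actually installed in this environment."""
    return hashlib.sha256(wheel_bytes).hexdigest()


def is_stale(release_manifest: dict, wheel_bytes: bytes) -> bool:
    """Compare the pinned inputs.model_package.sha256 against the wheel
    we are running with; True means the release bundle lags the model."""
    pinned = release_manifest["inputs"]["model_package"]["sha256"]
    return pinned != model_package_digest(wheel_bytes)
```

A CI job would fetch the active channel's manifest, hash the installed wheel, and fail (or open an issue) when `is_stale` returns True — answering "is our data out of date?" without spelunking.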
Stress-test review folded in (commit 10fa058)

Subagent stress-tested the sketch against five scenarios. Four needed explicit design answers; one meta-flag affected framing. The core architecture held up: the identity/distribution/discovery separation and the channels+manifest layer survive the review intact. No change to the shape.

What changed in the doc: the operational realism around the core three-concepts model, which itself is unchanged.
Codex review caught the main architectural overclaim in the earlier
sketch: it slid between recipe-addressed ("sha256 of inputs" — an
identifier derived from declared inputs) and content-addressed
("sha256 of output bytes" — an identifier derived from the bytes
themselves), and framed the whole design as a replacement for
release-bundles.md when release-bundles is load-bearing for the
scientific citation and certification surface.
Rewritten to:
- Scope the sketch explicitly to a storage substrate. release-bundles
remains the authoritative citation + certification surface; the
substrate is infrastructure underneath.
- Switch the primary identifier from `build_id = sha256(inputs)` to
`artifact_sha256 = sha256(output bytes)`. Input digest becomes a
derived queryable field in the manifest, not the primary key.
This is how OCI/Nix actually work in the parts that deliver.
- Drop the `stable` and `lts-{quarter}` channel names. Their
semantics for microdata are ambiguous (four meanings per codex:
"latest official source vintage" vs "methodologically preferred
reconstruction" vs "legally redistributable build" vs
"paper-citation freeze"). Keep only `latest` (operational) and
`next` (staging, feeds certification). Authoritative / stable
stays on the release-bundle side.
- Drop claims of org-independent identity. `data_vintage:
"cps_asec_2024"` is a label, not a raw-bytes hash; `built_at` /
`built_by` break bitwise identity across orgs anyway. The current
release-bundles schema records raw-input hashes, so regressing
on that would be real.
- New section: "The release-bundle boundary (what doesn't change)"
spelling out that certification, staged promotion, compatibility
rules, `*.trace.tro.jsonld` sidecars, and the replicability
guarantee all remain in release-bundles.md.
- Revised "whether to pursue" section leads with the honest
conclusion: keep the storage idea, drop the "replace release
bundles" framing, don't build it to fix #310 or "is our data
stale?" (which have cheap targeted fixes), and revisit if the UK
Data Service relationship gets stricter.
- Honest migration cost table (7-11 engineer-weeks, independent
tracks), explicitly v5 scope.
Both review findings (general-purpose + codex) carried forward
under "Unresolved risks"; that section barely changed, except that
"non-deterministic builds" is now actually *cleaner* under
output-hash identity — two runs produce two different sha256s;
they don't silently collide.
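The identifier switch described above (artifact named by the sha256 of its output bytes, input digest demoted to a derived queryable manifest field) can be sketched as follows. The field names and manifest shape are illustrative assumptions, not the actual release-bundle schema:

```python
import hashlib
import json


def input_digest(inputs: dict) -> str:
    """Recipe digest: sha256 over the canonicalised declared inputs.
    Derived metadata for querying, not the primary key."""
    canonical = json.dumps(inputs, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


def make_manifest(payload: bytes, inputs: dict) -> dict:
    """Content-addressed identity: the artifact is named by the sha256
    of its own bytes (illustrative manifest shape)."""
    return {
        "artifact_sha256": hashlib.sha256(payload).hexdigest(),
        "input_digest": input_digest(inputs),
        "size_bytes": len(payload),
    }
```

Under this scheme, two non-deterministic runs of the same recipe share an `input_digest` but get distinct `artifact_sha256` values — the "they don't silently collide" property noted above.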
Structure now: scope / motivating pains (and what's actually on the
critical path) / what the substrate provides / output-hash identity
/ narrow channel semantics / release-bundle boundary preservation /
consumer resolver changes / unresolved risks / what this fixes
vs. what it doesn't / honest cost / whether to pursue / open
questions.
Codex review folded in — structural rewrite (commit f951f5a)

Codex caught a load-bearing overclaim the first review missed: the sketch slid between recipe-addressed (sha256 of declared inputs) and content-addressed (sha256 of output bytes). Those are meaningfully different, and the earlier draft used "content-addressed" to claim properties only true of the latter. Second, Codex flagged that the earlier framing ("replace PyPI + HF + GitHub + release manifests") was actively regressive: it proposed collapsing a scientific-citation-and-certification surface into a storage primitive. That's the pattern where a separate certification layer reappears on top within two years (codex: 60–85% probability), and we'd end up with two systems.

What changed in the rewrite:
The doc's load-bearing sentence, now explicit: "When a release bundle is certified, it promotes an artifact from […]". Both prior reviews' risks (UK audit, silent-promote, non-determinism, licence revocation, cross-cloud mirroring) carried forward. "Non-deterministic builds" is actually cleaner under output-hash identity — two runs produce two distinct sha256s; they don't silently collide.
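The certification-promotes-artifact step quoted above can be sketched as a single channel-object write, matching the "promoting is one PUT" property from the sketch. The `put` callable, path layout, and field names are illustrative assumptions; channel-history signing (flagged under unresolved risks) is deliberately not modelled:

```python
import datetime
import json


def promote(channel: str, artifact_sha256: str, put) -> None:
    """Point a channel at a certified artifact with a single object
    write. `put(path, data)` abstracts the bucket API (S3/GCS)."""
    body = json.dumps({
        "artifact_sha256": artifact_sha256,
        "promoted_at": datetime.datetime.now(
            datetime.timezone.utc
        ).isoformat(),
    }).encode()
    put(f"channels/{channel}.json", body)
```

Because the channel record is plain unsigned JSON, anyone with bucket write access can repoint it — the silent-promote risk that motivates channel-history signing.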
Design sketch for what we'd build if publishing PolicyEngine microdata from scratch. Explicitly a sketch, not an accepted plan — written to capture the shape while today's session has the context loaded.
Motivation
Today's stack layers PyPI (country models), PyPI and HF (country data), GitHub repos (build code), and `policyengine.py` release manifests (sha256 pins). Refreshing any link touches six files across three repos and mixes identity ("what dataset is this?") with distribution ("where do I get it?") and discovery ("what's current?").

Two pain points from today's session that the sketch resolves:
- The us-data HF path was a model repo, not a dataset repo; it broke `refresh_release_bundle` until fixed.
- `inputs.model_package.sha256` isn't stored anywhere queryable. Under the sketch, CI diffs it against the active channel's manifest.

Shape
- `build_id` = sha256 of inputs (data vintage + model wheel + calibration sha + seed + lockfile). Deterministic; retagging impossible.
- Promoting to `stable` is one S3 PUT.
- `manifest.json` carries schema + entity map, so agents learn column types without downloading 100 MB.
- `pe.py`'s consumer resolves channel → build_id → manifest → payload with sha256 verification.

What stays
- Country data repos own construction (`release-bundles.md` boundary intact)
- TRACE TRO sidecars in `policyengine.py`, same `trov:`/`pe:` vocabulary
- UK privacy boundary (bucket can be org-gated)

Migration cost
~3 engineer-weeks if genuinely pursued.
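The consumer resolution chain under Shape (channel → build_id → manifest → payload, with sha256 verification at the final hop) can be sketched as follows. The path layout and manifest field names are assumptions for illustration, not `pe.py`'s actual API; `fetch(path) -> bytes` abstracts the object store:

```python
import hashlib
import json


def resolve(channel: str, fetch) -> bytes:
    """Resolve channel -> build_id -> manifest -> payload, verifying
    the payload's sha256 against the manifest before returning it."""
    build_id = json.loads(fetch(f"channels/{channel}.json"))["build_id"]
    manifest = json.loads(fetch(f"builds/{build_id}/manifest.json"))
    payload = fetch(f"builds/{build_id}/{manifest['payload']}")
    if hashlib.sha256(payload).hexdigest() != manifest["sha256"]:
        raise ValueError(f"sha256 mismatch resolving {channel!r}")
    return payload
```

Keeping `fetch` injectable is what makes the consumer indifferent to GCS vs S3 (an open question in the doc) and testable against an in-memory store.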
Explicit non-goals
- Bitwise reproducibility (`pe-us 1.653` and `1.700` will still produce different enhanced CPS)

Open questions (in the doc)

GCS vs S3, parquet vs HDF5, channel-history signing, single bucket vs namespaces.
This PR
Adds `docs/data-publishing-design.md` and wires it into the Quarto nav under Reference. No code change. Merge-neutral for v4.x — this is a research doc.