Changes from all commits
20 commits
50ed727
Rebase onto main; reconcile journey figures with new journey structure
claude May 10, 2026
8c88d44
Restore viz preview workflow
adewale May 10, 2026
43b5056
Fix figure scaling and strip duplicate prose from inside SVGs
claude May 10, 2026
b989bfa
Sizing + prose-duplication root-cause rules; ship 12 example figures
claude May 10, 2026
5c30bfd
Smoke test viz preview after deploy
adewale May 10, 2026
10b967c
Major coverage push: 50 examples attached (was 13); Workers figures r…
claude May 10, 2026
8241edf
Third coverage push: 90/109 examples attached (82.6%); 84 figures reg…
claude May 10, 2026
ecce666
Fourth coverage push: 100% (109/109); 103 figures; lessons captured
claude May 10, 2026
63f3c30
Fifth pass: lift 5 figures off the 8.0 reuse floor; tighten workers s3
claude May 11, 2026
571e399
Production layout: cells stay 2-col; figures sit in banner rows between
claude May 11, 2026
e5f680c
Sixth iteration + rubric saturation analysis
claude May 11, 2026
acbbb26
Fix marginalia lint and cache manifest
adewale May 11, 2026
2caa4e1
Example-figure rubric v2: 'earns its place', caption quality, page co…
claude May 11, 2026
405358c
Auto-resolve asset manifest conflicts during merge/rebase
claude May 11, 2026
2d9fe3b
Remove footer worker note; drop unused --subtle CSS variable
claude May 11, 2026
282554e
Tune layout scale per impeccable layout rubric
claude May 11, 2026
07573e1
Unify responsive collapse at 780px
claude May 11, 2026
d9ba53f
Fix 45 figures clipping content outside viewBox
claude May 11, 2026
470a778
TDD red-green-refactor: enforce figure geometry contracts in CI
claude May 11, 2026
8470b42
Extend marginalia contracts: text collisions, registration, grammar
claude May 11, 2026
7 changes: 7 additions & 0 deletions .gitattributes
@@ -0,0 +1,7 @@
# `src/asset_manifest.py` is generated by `scripts/fingerprint_assets.py`.
# On merge/rebase, keep our side of the conflict — the post-merge and
# post-rewrite hooks regenerate the file deterministically afterwards.
# This works once `scripts/install-git-hooks.sh` has been run locally,
# which registers `merge.ours.driver = true` and points `core.hooksPath`
# at `.githooks/`.
src/asset_manifest.py merge=ours
9 changes: 9 additions & 0 deletions .githooks/post-merge
@@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Regenerate the asset manifest after a merge or pull so the digest
# reflects the merged tree, not whichever parent won the conflict.
set -e
cd "$(git rev-parse --show-toplevel)"
uv run python scripts/fingerprint_assets.py >/dev/null
if ! git diff --quiet src/asset_manifest.py public/_headers; then
  echo "post-merge: asset manifest regenerated; stage and amend if needed"
fi
9 changes: 9 additions & 0 deletions .githooks/post-rewrite
@@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Regenerate the asset manifest after rebase/amend so the digest matches
# the rewritten history, not whichever commit happened to win each step.
set -e
cd "$(git rev-parse --show-toplevel)"
uv run python scripts/fingerprint_assets.py >/dev/null
if ! git diff --quiet src/asset_manifest.py public/_headers; then
  echo "post-rewrite: asset manifest regenerated; stage and amend if needed"
fi
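The `merge=ours` mechanism the hooks back up can be exercised end to end in a throwaway repo. A hedged sketch (branch and file names here are illustrative, not the project's):

```shell
#!/usr/bin/env bash
# Demonstrate that a file marked `merge=ours` keeps our side of a
# conflicting merge once `merge.ours.driver = true` is registered.
set -euo pipefail
tmp="$(mktemp -d)" && cd "$tmp"
git init -q -b main .
git config user.email you@example.com
git config user.name you
git config merge.ours.driver true          # the "driver" is just `true`

echo 'gen.py merge=ours' > .gitattributes
echo 'digest = "base"'   > gen.py
git add -A && git commit -qm base

git checkout -qb feature
echo 'digest = "feature"' > gen.py
git commit -qam feature

git checkout -q main
echo 'digest = "main"' > gen.py
git commit -qam main

git merge -q feature -m merge              # driver resolves gen.py as ours
cat gen.py                                 # digest = "main"
```

Without the driver registration the same merge would stop on a conflict; with it, our side wins silently, and the post-merge hook then regenerates the file so the digest matches the merged tree.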
74 changes: 74 additions & 0 deletions .github/workflows/preview-viz.yml
@@ -0,0 +1,74 @@
name: Preview viz

on:
  push:
    branches:
      - claude/tuftean-marginalia-viz-TB0fw
  workflow_dispatch:

permissions:
  contents: read

concurrency:
  group: preview-viz
  cancel-in-progress: true

jobs:
  upload-preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
        with:
          enable-cache: false
      - uses: actions/setup-python@v5
        with:
          python-version: '3.13'
      - uses: actions/setup-node@v4
        with:
          node-version: '22'
      - name: Install dependencies
        run: uv sync --all-groups
      - name: Build generated assets
        run: make build
      - name: Verify Cloudflare auth
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
        run: npx --yes wrangler whoami
      - name: Sync Python Workers vendor
        run: uv run pywrangler sync
      - name: Upload Cloudflare Preview
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
        run: |
          set -x
          uv run pywrangler preview \
            --name viz \
            --message "${{ github.sha }}" \
            --json
      - name: Smoke test deployed Preview
        run: |
          set -euo pipefail
          base="https://viz-pythonbyexample.adewale-883.workers.dev"
          for path in \
            "/" \
            "/examples/values" \
            "/prototyping/journey-figures-gestalt"; do
            url="${base}${path}"
            echo "Checking ${url}"
            curl --fail --show-error --silent --location --output /tmp/preview-smoke.html --write-out "%{http_code} %{url_effective}\n" "${url}"
            if grep -qiE "error code: 1101|PythonError|Traceback" /tmp/preview-smoke.html; then
              echo "Preview rendered an exception for ${url}"
              head -200 /tmp/preview-smoke.html
              exit 1
            fi
          done
      - name: Dump wrangler logs on failure
        if: failure()
        run: |
          find ~ /tmp /root -name "*.log" -path "*wrangler*" 2>/dev/null | while read -r f; do
            echo "=== $f ==="
            tail -300 "$f" || true
          done
6 changes: 6 additions & 0 deletions README.md
@@ -63,6 +63,12 @@ Install dependencies with `uv`, then run:
python3 -m unittest discover -s tests -v
```

After cloning, install the local git hooks once so merges and rebases regenerate `src/asset_manifest.py` instead of producing conflicts:

```bash
./scripts/install-git-hooks.sh
```

Run locally on Workers:

```bash
…
```
158 changes: 158 additions & 0 deletions docs/example-figure-rubric.md
@@ -0,0 +1,158 @@
# Example figure rubric

Parallel to `docs/journey-visualisation-rubric.md`, but for the figures
that attach to **example pages** (literate-program lessons), not journey
sections. The journey rubric scores the figure beside a section heading;
this one scores the figure that sits between prose and code inside a
single cell of an example walkthrough.

The two rubrics share craft criteria (palette, primitives, emphasis
scarcity) and diverge on content criteria, because the audience and
task differ. A journey-section figure depicts the *conceptual shift*
unifying multiple lessons; an example figure depicts the *single move*
the surrounding cell discusses.

Score each example figure on a 10-point scale. This is version 2 of
the rubric, applied 2026-05; see `docs/rubric-saturation.md` for the
reasoning that produced these upgrades. The previous criterion 2
("match the running variables") and criterion 5 ("caption asserts")
have been replaced; a new page-level coherence rubric joins the
per-figure scoring.

## Content (5.5)

1. **Cell fidelity (0-1.5)** — the figure depicts the move the cell's
prose discusses, not the example's title. If the example is
"Mutability" but cell 1 is about immutable strings, a figure on
cell 1 must depict immutability, not aliasing. Wrong cell, wrong
figure.
2. **The figure earns its place (0-1.0)** — the figure surfaces
something the prose cannot show in the same word count: a
relationship, a before/after, a hidden mechanism, an invariant.
A figure that merely restates the prose in diagram form earns
0.5; a figure that adds nothing the prose hasn't already said
earns 0. Generic placeholders (`a`, `b`, `xs`) are fine; what
matters is whether the figure carries pedagogical weight beyond
the prose. (Replaces v1's "match the running variables", which
punished honest reuse of library figures across multiple cells.)
3. **One conceptual move (0-1.0)** — exactly one shift, before-state
to after-state, or one mechanism. Squint test: a reader should
identify the figure's single point in two seconds.
4. **Mechanism over metaphor (0-1.0)** — the figure shows the actual
machinery (the cell, the binding, the dispatch, the iterator),
not a cartoon of it. Knuth's rule.
5. **Caption quality (0-1.0)** — `figcaption` declares what is true,
in the section summary's voice; it does not narrate what the
figure does. "Two names share one mutable list — appending
through one name changes the object visible through both."
earns 1.0. "The figure shows two names pointing at one list."
earns 0 (narration, not assertion). Mixed-voice captions earn
0.5. The SVG itself contains no prose duplicating the caption;
only diagrammatic labels (`stdout`, `iter()`, panel tags, type
signatures). See pipeline invariant 2 in the spec.

## Craft (3.0)

6. **Grammar conformance (0-1.0)** — composed exclusively from
`Canvas` primitives in `src/marginalia_grammar.py`. No bespoke
SVG, no new colours, no stroke weights outside the locked set.
7. **Emphasis scarcity (0-1.0)** — at most one accent mark per
figure. The accent goes on the single element the cell prose
names (the live mutation, the captured cell, the dispatch arrow).
Three accent marks competing for attention is no emphasis at all.
8. **Restraint (0-1.0)** — no decoration that does not carry
information. No drop shadows, gradients, ornamental rules,
non-orthogonal tilts, or marks placed for "balance".

## Context (1.5)

9. **Banner-row fit (0-1.0)** — the figure's intrinsic width sits
comfortably inside `.cell-banner`'s auto-fit grid. Intrinsic widths
beyond ~360 px clamp to the column without growing past it; much
narrower viewBoxes leave whitespace either side of the centred
figure. Aim for an intrinsic viewBox between 200 and 360 px wide.
10. **Pairs with the surrounding cell (0-0.5)** — the banner sits
    *after* the named cell, so the eye reads cell-prose → cell-code →
banner. The figure should summarise the move the surrounding
cell just made, not stand alone as a generic illustration of the
example title.

## Topic gates (cell-shape specific)

- **Binding cells** (assignments, `=`) — show the name-arrow with the
type tag and the resulting value. The canonical Python picture.
- **Mutation cells** — show before-state and after-state with the
same object identity, OR rebinding with a new identity. The
difference is the lesson.
- **Iteration cells** — show the iterator advance: a caret moving,
or `iter()`+`next()` producing values one at a time.
- **Function-definition cells** — show the signature with parameter
separators (`/`, `*`) explicit when relevant, or the
caller→body→return shape.
- **Class cells** — show state and methods bundled, or the
instance→class→type triangle, or MRO chain. Pick one, not all.
- **Exception cells** — show the lanes (try/except/else/finally)
with a single traced path, or the exception-cause arrow (`__cause__`
vs `__context__`).
- **Async cells** — show two parallel lanes (loop · coroutine) with
await handoffs.

## Release gates outside the score

- **One figure per cell, at most.** Two figures on one cell signal
the cell is doing two things; split the cell instead.
- **figcaption present and declarative.** Captions in the form
"Two names share one mutable list — appending through one name
changes the object visible through both." Not "this shows X" or
"see how Y".
- **figcaption agrees with the cell's prose.** The cell's prose
paragraph in the markdown and the figure's figcaption assert the
same thing in different words. If they disagree, one is wrong.
- **Palette discipline.** Only `INK`, `INK_SOFT`, `EMPHASIS`,
`SOFT_FILL`. No literal hex codes, no `rgba(0,0,0,…)` neutrals.
- **Pipeline invariants** (see spec) hold: SVG renders at intrinsic
size; SVG contains no prose duplicating the caption.
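The palette-discipline gate lends itself to a mechanical check. A minimal lint sketch (hypothetical — `palette_violations` is not a function from the project's tooling, just an illustration of the gate):

```python
import re

# Flag literal colour tokens in generated SVG markup, so figures only
# reference the four named palette constants (INK, INK_SOFT, EMPHASIS,
# SOFT_FILL) rather than raw hex codes or rgb()/rgba() neutrals.
FORBIDDEN = re.compile(r"#[0-9a-fA-F]{3,8}\b|rgba?\(", re.ASCII)


def palette_violations(svg: str) -> list[str]:
    """Return every literal colour token found in an SVG string."""
    return FORBIDDEN.findall(svg)
```

A figure whose markup yields an empty list passes the gate; anything returned is grounds for redesign.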

## Page-level coherence (per slug, multi-figure)

A separate 0-1.0 score applied to slugs whose `ATTACHMENTS[slug]`
list contains more than one figure. Multi-figure pages must form a
coherent set, not three angles on the same point.

- **1.0** — figures show distinct aspects of the lesson in a
natural reading order (intro picture, mid-walkthrough mechanism,
summary). Each banner earns its placement.
- **0.5** — figures are individually fine but redundant; one would
do the work of two. The page reads as cluttered.
- **0** — figures contradict each other, or one figure is on the
wrong cell, or the page has three figures where one would teach
better.

For single-figure slugs (today, all 109 of them), page coherence is
trivially 1.0 and does not enter the per-figure score. As
multi-figure attachments grow, this criterion will become the
discriminator that prevents the "more figures is better" failure mode.

## Quality bands

- **9.0-10.0** — depicts the cell's move in two seconds; the figcaption
could only describe this figure; reads pleasantly on return visits.
- **8.0-8.9** — depicts the right move but uses generic placeholders
where specific names would land harder, or the caption hedges, or
one secondary mark steals attention from the primary one.
- **7.0-7.9** — depicts the cell but loses something in scope: shows
the example title rather than the specific cell's move; or topic
gate not satisfied.
- **below 7.0** — wrong cell, wrong shape, multiple primary ideas
competing, or accent marks scattered rather than scarce. Redesign
before promoting.

## Project gate

A cell figure may ship to production once it scores **≥ 8.5**. The
example's figure average should exceed **8.7** so a multi-figure
example reads as a coherent set rather than independently authored
diagrams.

The score is a guide, not a substitute for reading the cell beside
its surrounding prose.
121 changes: 121 additions & 0 deletions docs/journey-visualisation-rubric.md
@@ -0,0 +1,121 @@
# Journey visualisation rubric

This rubric scores the figure beside each journey section heading.
The example rubric (docs/example-quality-rubric.md) covers individual
lesson pages; this one covers the conceptual figures that introduce
each journey section.

A journey section sits *above* individual lessons. It groups three to
five examples under a shared conceptual shift, e.g. "Recognise iteration
as a protocol" or "Bundle behavior with state". The figure beside that
heading should depict the shift the section asks the reader to make.
It is not a recycled lesson figure.

Score each section figure on a 10-point scale.

## Content (5.5)

1. **Section fidelity (0-1.5)** — the figure depicts the conceptual
shift the section title and summary describe. It does not depict
one of the section's examples. A figure for "Make decisions
explicitly" must show *deciding*, not the body of any particular
`match` statement; a figure for "Bundle behavior with state" must
show *bundling*, not one specific class.
2. **Pedagogical scope (0-1.0)** — the figure captures the general
pattern that unifies the section's items. If the figure could be
replaced with the diagram from any single lesson it is too specific.
3. **One conceptual move (0-1.0)** — exactly one shift, before-state
to after-state, or the depiction of a single mechanism. Two ideas
compete for the reader's eye and both lose. Squint test: the
primary structure is identifiable within two seconds.
4. **Mechanism over metaphor (0-1.0)** — the figure shows the actual
machinery the section names — the iterator object, the cell, the
dispatch arrow — not a cartoon of it. Knuth's rule.
5. **Caption alignment (0-1.0)** — the `figcaption` names the
conceptual shift in plain language and matches the section
summary's voice. The caption is part of the figure, not optional.

## Craft (3.0)

6. **Grammar conformance (0-1.0)** — composed exclusively from
`Canvas` primitives in `src/marginalia_grammar.py`. No bespoke
SVG, no new colours, no stroke weights outside the locked set.
7. **Emphasis scarcity (0-1.0)** — at most one accent mark per
figure. The accent goes on the single element the section names
(the live yield, the dispatch arrow, the captured cell). If three
things are orange the figure has no emphasis at all.
8. **Restraint (0-1.0)** — no decoration that does not carry
information. No drop shadows, gradients, ornamental rules,
non-orthogonal tilts, or marks placed for "balance".

## Context (1.5)

9. **Independence from lesson figures (0-1.0)** — distinct framing
from any single lesson's diagram. If the section figure is
identical to the banner figure in one of the section's lessons,
one of them is wrong. Usually the section figure should be the
*more abstract* one.
10. **Layout fit (0-0.5)** — renders comfortably at the journey
page's ~280-320px section-figure column. Text inside the SVG
stays readable at that scale; the figure does not overflow.

## Topic gates

- **Decision sections** — depict the fork explicitly: a value flowing
through a predicate to one of several branches. A single linear
arrow does not satisfy this gate.
- **Loop sections** — show the back-edge that makes a loop a loop.
A linear sequence of cells without a return path is not a loop
picture, it is just a sequence.
- **Iteration sections** — show the `iter()` / `next()` protocol
explicitly: an iterable, an iterator, and one or more values
pulled out by `next()`. The figure must distinguish iterable
from iterator.
- **Type sections** — show annotations as ghost overlays on runtime
values, or show type relationships (union, generic, structural
matching) as containment / flow. Do not let a type figure devolve
into "a function with parameter names".
- **Resource and boundary sections** — show enter and exit as paired
events bracketing a body, with the failure path also routed
through exit. A one-way arrow is not a context manager.
- **Concurrency sections** — show two parallel lanes with handoffs
between them. A single timeline is not a concurrency picture.

## Release gates outside the score

- **Exactly one figure per section.** Section figures are not stacked.
If the section needs two figures the section is doing two things.
- **Caption present.** A figure without a `figcaption` is not allowed.
- **Section summary aligns with caption.** The summary in
`src/app.py`'s `JOURNEYS` list agrees with what the figure caption
asserts. Disagreement means one or the other is wrong.
- **Renders within `.journey-section`'s 2-column grid.** The figure
obeys the column the layout gives it (~280-320px); design at a
viewBox sized for that column, not at lesson-figure dimensions.
- **Uses only the four palette constants.** `INK`, `INK_SOFT`,
`EMPHASIS`, `SOFT_FILL`. Anything else is grounds for redesign.

## Quality bands

- **9.0-10.0** — captures the conceptual shift in two seconds; the
caption could only describe this figure; pleasant to look at on
return visits.
- **8.0-8.9** — depicts the right idea but shares too much framing
with a lesson figure, or the caption hedges instead of asserting,
or one secondary mark steals attention from the primary one.
- **7.0-7.9** — depicts the section but loses something in scope:
uses a specific predicate / iterable / type instead of the
general pattern; or topic gate not satisfied.
- **below 7.0** — recycled lesson figure, missing topic gate,
multiple primary ideas competing, or accent marks scattered
rather than scarce. Redesign before publishing.

## Project gate

Every section figure on a published journey page should score at
least **8.5**. The journey average across its three sections should
exceed **8.8** so the journey reads as a unified set rather than
three independently designed cards.

The score is a guide, not a substitute for reading the page beside
its surrounding lessons.