flight: macro bench composing 5 primitives + audit fixes (B1-B5, D6)#28
Merged
When GALE_USE_SYNTH=ON, the gale-ffi crate is compiled to wasm32 first, then run through the synth AOT compiler (pulseengine/synth), which emits a Cortex-M ET_REL relocatable object. The object is wrapped into the same libgale_ffi.a path the rest of the build expects via ar, so the per-module gale_sem.c / gale_mutex.c / etc. consumers need no changes.

This is the build-system half of the 4th-variant experiment for the cross-language LTO blog post: same engine bench, three existing builds (GCC baseline / GCC + Gale / LLVM + LTO + Gale) plus a 4th data point where verified Rust reaches Cortex-M via Verus → rustc → wasm → synth's Rocq-proved i32 instruction selection.

Requires synth with the --relocatable flag (pulseengine/synth#83). Default behaviour (GALE_USE_SYNTH=OFF) is unchanged: rustc-direct-to-Cortex-M is still the production path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ment Adds the gale-via-synth lane to the engine-bench Renode matrix as a follow-up to the cross-language LTO post. Builds the GCC baseline and the GALE_USE_SYNTH=ON variant (wasm32 -> synth -> Cortex-M ET_REL -> libgale_ffi.a) on the same CI run, then sweeps both through Renode at the long sample count. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds two optional formally-verified-(or-not) wasm optimizers between rustc and synth in the GALE_USE_SYNTH pipeline:

rustc -> [wasm-opt -Oz] -> [loom optimize] -> synth -> ar -> .a

Both are detected via find_program() and only inserted into the pipeline if found. If neither is on the PATH, the pipeline reduces to rustc -> synth (unchanged behaviour).

Effect on engine bench (stm32f4_disco, prj-gale.conf, GCC C kernel):
synth alone: text=22448, total=38533
synth + wasm-opt + loom: text=22420, total=38505 (-28 B)

The wasm-level reduction is dramatic (-34% from wasm-opt -Oz), but the synth-emitted ARM code is dominated by per-function instruction-selection overhead, so the final ELF only moves a few dozen bytes. The verification-chain story is the bigger win: loom proves that each pass it applies preserves semantics; rejected passes are skipped rather than applied unsoundly.

CI workflow installs both: the binaryen apt package (wasm-opt) and loom-cli from pulseengine/loom main.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
workflow_dispatch alone requires the workflow to exist on the default branch before it can be invoked, which we can't do without merging the whole experiment first. Adding a push trigger on experiment/gale-via-synth so the workflow auto-runs whenever the experiment branch advances. Strip this trigger before any merge to main. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…t --path) cargo install --git URL --path PATH is invalid — the two flags are mutually exclusive. When installing a sub-crate from a git workspace, pass the package name as a positional argument:

cargo install --git URL [--branch B] PACKAGE_NAME --force

Fixes the workflow's synth-cli install (was rejected with: 'the argument --git <URL> cannot be used with --path <PATH>') and the identical pattern used for loom-cli.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
… optimize mode) Diagnosed during local debugging: synth's optimized register-allocation path clobbers r0/r1 (input parameter registers) at function entry when the wasm body pushes i32 constants before the first local.get. The function's prologue ends up looking like:

movs r0, #1   ← clobbers param 0 (count)
movs r1, #0   ← clobbers param 1 (limit)
...
cmp r0, r1    ← compares clobbered values, not the actual params

This crashes the engine bench in Renode (HardFault on the first gale call, infinite handler loop, never reaching the test's Zero Drops assertion). The CI run hit a 60-min step timeout without producing a single sample. A minimal repro is saved at /tmp/match_gale.wat (3 i32 params, i64 local, push 3 i32 constants, then local.get 0). Worth filing as a synth issue once the experiment lands.

Workaround: synth --no-optimize. This disables the offending pass and emits a proper AAPCS prologue (push r4..r8/lr, locals on stack, params read from r0/r1/r2 unchanged). Verified locally: the same gale_k_sem_give_decide function now starts with `stmdb sp!, {r4..r8, lr}` and reads r0/r1 correctly.

Cost: ~68 bytes of additional flash (22624 → 22692) and unknown cycles. The --no-optimize path uses stack-based locals, which is wasteful but correct. Stack frame size also goes up — synth reserves ~4 KB per function for locals, which may be excessive; we will need to validate that the engine bench worker thread does not overflow its stack.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…frame fixes The previous CI run booted Zephyr with the synth-built ELF but every RPM step reported count=0 [drain_timeout]. Diagnosed two synth bugs:

1. i64 local storage dropped the upper half (--no-optimize path)
2. Locals area aliased the saved-register spill (also --no-optimize)

Both fixed in pulseengine/synth#85. This commit points the workflow at that branch so the next CI run uses the fixed synth.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pulls in:
- explicit I64Or/And/Xor/ExtendI32U/ExtendI32S/Shl/ShrU/ShrS handlers in synth's select_with_stack (no more wildcard fallthrough to select_default's R0:R1/R2:R3 assumption)
- alloc_consecutive_pair now reserves the implicit pair_hi of every stack entry plus extra_avoid for popped operands

Local build verified: gale_k_sem_give_decide ends with

orr r0, r6, r8
orr r1, r7, ip

matching the wasm i64.or semantics. 22644 B FLASH.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Tracing the −34.5% handoff cycle delta in the synth bench. Found that loom's optimizer hoists `local.set 3 = 0` from the fall-through arm of gale_k_sem_give_decide to BEFORE the dispatcher, dropping the WakeThread/Increment distinction at the wasm level — synth then emits ARM that always returns action=INCREMENT regardless of has_waiter.

The bench passes (samples=7750, drops=0) because the engine_control worker is rarely actually blocked at sem_give time, so the WAKE path is rarely needed for correctness. But the cycle delta then compares a degenerate always-INCREMENT path against rustc's correct WAKE/INCREMENT discrimination — apples to oranges.

This run skips loom in CI so we can A/B against the loom-on result and validate the hypothesis. CMakeLists' find_program(LOOM) fails when loom isn't on PATH, falling through to wasm-opt -> synth without loom.

Filed for follow-up: pulseengine/loom optimizer bug. The hoisting is unsound for this control-flow pattern.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…5% delta
The container ships Renode 1.16.0 (ARG RENODE_VERSION=1.16.0 in
zephyrproject-rtos/docker-image Dockerfile.base, unchanged across
v0.28.x..v0.29.2). 1.16.1 (Feb 2026) touches several Cortex-M paths
that could shift cycle accounting on the gale instruction stream:
fixed ARMv8-M Thumb2 data-processing instructions, fixed Stack
Pointer bits[1:0] handling, fixed wrong exception when FPU is
disabled. None is labelled a cycle-counter fix in the changelog,
but Thumb-2 dispatch changes shift cycle accounting whenever an
instruction takes a different micro-op path.
Adds nightly Renode (builds.renode.io) alongside 1.16.0 and runs
the same two ELFs under both. Yields three controls:
(a) baseline vs gale, both under nightly
— does the gale delta change when the cycle model changes?
(b) baseline_1.16.0 vs baseline_nightly (same ELF)
— control: cycle-model drift on identical instructions.
(c) gale_1.16.0 vs gale_nightly (same ELF)
— does the model shift gale's instructions more than baseline's?
If yes, the 1.16.0 model is mis-scoring gale-specific
instructions and the delta is partly artifactual.
Implementation: PATH override puts /opt/renode-nightly first for the
two new run steps. Robot file unchanged (it reads ELF / BENCH_CSV_OUT
from env). Existing 1.16.0 comparison is undisturbed; nightly outputs
go to events-nightly.csv and a separate report section.
Timeout bumped 120 -> 240 min to cover all four Renode runs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Add the six-thread / two-timer / five-primitive macro benchmark
described in docs/research/macro-bench-design.md. Composes ring_buf +
sem + mutex + msgq + condvar on a 100 Hz fixed-rate flight-control
loop, capturing per-sensor-ISR algo + handoff (engine_control parity)
plus per-controller-period t_lock, t_post, t_round, t_bcast.
Single CSV row per sensor sample with -1 sentinels for the segments
not measured on that row (~9-of-10 cycles have no t_bcast, ~9-of-10
sensor rows have no t_lock/t_post/t_round). 3-axis sweep
(sensor_hz x contention x payload) totalling ~4500 events on the
long sweep, matching engine_control's Renode lane density.
Verified:
- Builds clean for qemu_cortex_m3 (baseline + gale variants).
- QEMU smoke run: 150/150 samples, drops=0, telemetry_emits=11
(priority inheritance keeps the lowest-priority telemetry thread
alive under fusion/actuator contention).
- All four new cycle-delta segments populate as expected.
Two-ring split (sensor_ring -> fusion -> emit_ring -> reader) avoids
the single-sem race where reader_loop and the fusion thread would
otherwise compete for sensor_data_ready and steal samples from each
other. Reader thread runs at priority 10 (below all workers) so its
UART back-pressure never starves the measured chain.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Add analyze.py extending engine_control's per-step + Mann-Whitney
shape with four new metric columns (t_lock, t_post, t_round,
t_bcast). Negative values in the new columns are the "not measured
on this row" sentinel and are filtered out per metric.
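The per-metric sentinel filtering can be sketched in plain Python; the row dicts and values below are illustrative, not the bench's actual CSV rows:

```python
# Hypothetical sketch of the per-metric sentinel filtering described above.
# The column names (t_lock, t_post, t_round, t_bcast) come from the commit;
# the row layout here is illustrative, not the bench's real CSV schema.

NEW_METRICS = ["t_lock", "t_post", "t_round", "t_bcast"]

def collect_metric(rows, metric):
    """Return only the values actually measured for one metric column.

    Negative values are the "not measured on this row" sentinel and are
    dropped per metric, so each metric keeps its own denominator.
    """
    return [r[metric] for r in rows if r[metric] >= 0]

rows = [
    {"t_lock": 120, "t_post": 88, "t_round": -1, "t_bcast": -1},
    {"t_lock": -1, "t_post": -1, "t_round": 950, "t_bcast": -1},
    {"t_lock": 130, "t_post": 91, "t_round": -1, "t_bcast": 412},
]

per_metric = {m: collect_metric(rows, m) for m in NEW_METRICS}
```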
New asserts beyond the engine_control set:
- telemetry_emits > 0 on both variants (design doc Section
"Risks": priority-inheritance must keep the lowest-priority
telemetry thread off the starvation floor)
- gale p99 <= 2 * baseline p99 on each of t_lock, t_post,
t_round, t_bcast (one regression guard per primitive segment)
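A minimal sketch of the 2x p99 guard, assuming a nearest-rank percentile (the analyzer's actual percentile method may differ):

```python
# Sketch of the per-segment p99 regression guard described above. The 2x
# threshold is from the commit; the nearest-rank percentile and the sample
# values are assumptions for illustration.

def p99(xs):
    """Nearest-rank 99th percentile."""
    s = sorted(xs)
    k = max(0, int(round(0.99 * len(s))) - 1)
    return s[k]

def guard(baseline, gale, segment):
    """Assert gale p99 <= 2 * baseline p99 for one primitive segment."""
    b, g = p99(baseline), p99(gale)
    assert g <= 2 * b, f"{segment}: gale p99 {g} > 2x baseline p99 {b}"
    return b, g

# Synthetic cycle counts; a modest gale regression that still passes.
b, g = guard(list(range(100, 200)), list(range(150, 250)), "t_lock")
```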
run_qemu_bench.sh + tag_events.py mirror engine_control's shape
1:1 (same env conventions, same per-run-id tagging). Renode robot
file is engine_stm32f4.robot with the wait line updated to match
the macro bench's startup banner.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Modeled on engine-bench-renode-synth.yml, runs the new macro benchmark on stm32f4_disco under Renode for the long sweep (~4500 events). Same variant matrix (baseline + gale), same artifact upload shape, same MD report rendered into the job summary. Triggered on push to experiment/macro-bench-flight-control and manually via workflow_dispatch. Uses only safe GitHub contexts (github.workspace, github.ref) — no untrusted inputs flow into shell commands. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Run 25135494876 timed out at step 13 of 27 (120-min budget) because the bench emitted one row per sensor ISR (~1 kHz), of which only ~5% carried t_lock/t_post/t_round/t_bcast — the controller runs at 100 Hz, so the matching pair-tag covers 1 in ~10 sensor rows, and the partial CSV shows only 109 of 2012 emitted rows had a real measurement. The other 95% were near-empty rows starving Renode at the UART.

Two changes, applied together:

1. emit_event returns bool. Rows whose slot has no controller-cycle pair-tag (t_lock == 0) are dropped. reader_count++ happens only when a row was actually emitted; reader_skipped tracks the dropped sensor-rate rows for visibility. UART traffic falls ~10x.

2. Long sweep trimmed from 27 cells (sensor_hz x contention x payload) to 9 cells (sensor_hz=1000 only x 3 contention x 3 payload). The sensor_hz=2000 axis was the timeout cause; sensor_hz=500 carries identical primitive signal at a lower rate. Per-cell sensor budget bumped from 150–200 to 1000 so each cell yields ~100 controller-tagged rows (samples * 100 / sensor_hz). TOTAL_SAMPLES recomputed to 900.

The drain loop is rewritten in controller-rate units: it now waits for `expected_ctrl = samples * 100 / sensor_hz` rows to land, with a short 5 s drain timeout because the cell already waited budget_ms for sensor ISRs to retire.

CI timeout bumped 120 -> 180 min. Workflow comment block updated to match.

Per audit P3 #1 #5 (cycle-delta column names) and the partial-CSV diagnostic from run 25135494876. Local build clean for both qemu_cortex_m3 baseline (16,392 B FLASH, 41,488 B RAM) and the gale variant (18,480 B FLASH, 41,488 B RAM).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
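The sweep arithmetic above can be sanity-checked in a few lines; all constants are taken from this commit message, and the cell count is the stated 3 contention x 3 payload at sensor_hz=1000:

```python
# Sanity check of the trimmed long-sweep sizing described in the commit.
# Constants come from the commit message; nothing here is measured.

CTRL_HZ = 100        # controller loop rate
SENSOR_BUDGET = 1000 # per-cell sensor samples after the bump from 150-200
SENSOR_HZ = 1000     # the trimmed sweep keeps only this sensor rate
CELLS = 3 * 3        # 3 contention levels x 3 payload sizes

def expected_ctrl(samples, sensor_hz):
    """Controller-tagged rows a cell should yield before drain gives up."""
    return samples * CTRL_HZ // sensor_hz

per_cell = expected_ctrl(SENSOR_BUDGET, SENSOR_HZ)  # ~100 tagged rows/cell
total = per_cell * CELLS                            # the new TOTAL_SAMPLES
```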
Three correctness fixes from the Mythos audit (10 personas, 1 fresh-session validator). All three are confirmed by the partial CSV from run 25135494876 — none of them is hypothetical.

B2 — actuator_done stale-token drain (P1 #1): ctrl_loop's K_MSEC(2) timeout path on actuator_done leaves the sem token uncollected if the actuator gives later. The next cycle's k_sem_take returns 0 immediately, reads a previous-cycle g_actuator_done_cyc, and computes t_round = old_done_cyc - new_t_post_out, which underflows to ~2^32 cycles. Add a drain loop before each k_msgq_put to flush stale tokens.

B3 — slot collision wrap (P1 #3): Bump SLOT_COUNT 512 → 1024 so the per-cell sensor-ISR budget (1000 in the trimmed long sweep) cannot wrap within a single cell. Cross-cell wrap remains harmless: sweep_driver stops the sensor timer and drains the reader between cells, so any in-flight controller stamps from cell X land in slots the reader has already consumed before cell X+1 starts. RAM cost: 5 arrays × 512 × 4 bytes = +10 KB. RAM use 41,488 B → 51,728 B (78.9 %); FLASH +52 B.

B4 — emit_ring drops counter (P4 #2): ring_buf_put failures into emit_ring were silently dropped. gale's potentially-faster sem_give could mean the reader drains emit_ring better → fewer dropped emits → more rows in gale's CSV than baseline → a biased comparison. Add a g_emit_drops volatile counter, emit it in the === END === footer, and assert == 0 in the analyzer for both variants. Forces both onto the same denominator.

Build clean for qemu_cortex_m3 baseline (16,444 B FLASH, 51,728 B RAM = 78.9 %).

Audit cross-references:
P1 (Cortex-M RTOS engineer) — B2, B3
P4 (counter-attacker) — B4

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
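The B2 underflow mechanism can be illustrated with 32-bit wraparound arithmetic; the cycle values below are made up, only the wrap behaviour matters:

```python
# Why B2's stale token produces a ~2^32-cycle t_round: subtracting a newer
# 32-bit "post" stamp from an older "done" stamp wraps around. The cycle
# values are illustrative, not from the bench.

MASK = 0xFFFFFFFF  # 32-bit cycle counter

def t_round(done_cyc, post_out_cyc):
    return (done_cyc - post_out_cyc) & MASK

# Normal cycle: actuator stamps shortly after the controller's post.
ok = t_round(51_200, 50_000)        # 1200 cycles, plausible

# Stale-token cycle: k_sem_take returns immediately with a stamp from a
# PREVIOUS cycle, which predates this cycle's post -> wraps to ~4.29e9.
stale = t_round(50_000, 1_050_000)  # 4293967296, near 2^32
```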
The column names (t_round, t_bcast) are inherited from the design
doc, but the actual measurement windows are narrower than the names
suggest:
- t_round is named "round-trip" but measures only
controller_post_exit → actuator-0 stamp; it does NOT include the
controller's post-wake sem_take. It also includes actuator 0's
cycles_busy=100 busy-loop (same for both variants).
- t_bcast is named "broadcast" but measures the broadcaster's own
lock+broadcast+unlock window on the fusion thread; the telemetry
wake is never sampled.
Per audit P3 #1 #5: a reader who treats these names as stated will
over-attribute the measurement scope. The cheapest fix that protects
publication credibility is to define the columns precisely where the
reader will look — at the top of the analyzer's markdown report and
in main.c's file header. Numbers stay; honest scope-setting is
appended.
CSV column positions are unchanged so engine_control's analyzer
docstring's "strict superset" claim remains true.
Build clean; analyzer parses.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Per audit P9: a reader downloading flight-bench-renode-long.zip six months from now should be able to identify exactly which Renode, Zephyr fork, rustc, SDK, and gale_sha produced the cycles. Without this, "I ran exactly this configuration" reduces to "trust the gale_sha and hope nothing else moved" — but every input below the gale repo (Zephyr fork at branch tip, container at mutable tag, rustc on stable channel) is a moving target. Adds one new step "Compose build manifest" right before the upload step. The manifest captures: rustc / cargo / west / robotframework / SDK versions, Renode 1.16.0 version, Zephyr fork + modules sha via `west list`, and sha256 + byte-size of every built ELF and emitted CSV. Output goes to /tmp/manifest.txt and is included in the artifact bundle. Both ELFs are also uploaded so binary-level reproducibility can be verified. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
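The sha256-plus-size half of the manifest step might look like the following sketch; the paths and demo file are illustrative, and the real step additionally records toolchain versions by invoking the CLIs and `west list`:

```python
# Sketch of the artifact-fingerprint portion of the build manifest described
# above: sha256 + byte size per built ELF / emitted CSV. File names here are
# placeholders, not the workflow's actual paths.

import hashlib
from pathlib import Path

def fingerprint(path):
    """Return (sha256 hexdigest, byte size) for one artifact."""
    data = Path(path).read_bytes()
    return hashlib.sha256(data).hexdigest(), len(data)

def manifest_lines(paths):
    return [f"{h}  {n:>10}  {p}" for p in paths
            for h, n in [fingerprint(p)]]

# Demo with a throwaway 64-byte file standing in for a built ELF.
p = Path("/tmp/demo.bin")
p.write_bytes(b"\x7fELF" + b"\x00" * 60)
digest, size = fingerprint(p)
```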
…on note Per audit P4 #6 / P2 F1 / P2 F2: the flight bench reports per-step medians with a naive-bootstrap CI, and pooled p99 as a point estimate with no CI. Two related issues:

1. The bench's samples are consecutive 100 Hz controller cycles — autocorrelated. A slow noise burst contaminates 5–10 consecutive samples; the naive bootstrap underestimates the CI by treating dependent samples as independent. Politis-Romano predicts a correction factor of ~sqrt((1+ρ)/(1-ρ)) for first-order autocorrelation.

2. The Mann-Whitney p-values are reported uncorrected across 162 simultaneous tests (27 cells × 6 metrics). At α=0.05 under H0 that yields ~8 false-positive cells by chance; a reader scanning the per-step table for "where did gale win?" will pick those up as signal.

Fixes:
- New helper `block_bootstrap_percentile_ci` (block_size=10, iters=2000), used for pooled p50/p75/p95/p99 in the per-metric pooled tables. Per-step medians keep the naive bootstrap (the median is robust to autocorrelation; the issue is tails, not central tendency).
- A one-paragraph note above the pooled tables explains the bootstrap choice and points readers at Holm-Bonferroni / BH-FDR for the per-step MW-U cells.

Smoke-tested against engine_control's events.csv (different schema, so 0 rows, but the report header + column-semantics block render correctly). Block bootstrap on synthetic xs=range(100) gives a p99 CI of [66, 98] for point=98 — wider than naive, as expected.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
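One plausible shape for `block_bootstrap_percentile_ci`, following the name and parameters in this commit (block_size=10, iters=2000) but not necessarily matching the analyzer's implementation:

```python
# Hedged sketch of a moving-block bootstrap percentile CI. Resampling
# contiguous blocks (rather than single samples) preserves the short-range
# autocorrelation of consecutive 100 Hz controller cycles, widening the CI
# relative to a naive bootstrap on dependent data.

import random

def block_bootstrap_percentile_ci(xs, q, block_size=10, iters=2000,
                                  alpha=0.05, seed=0):
    rng = random.Random(seed)
    n = len(xs)
    n_blocks = (n + block_size - 1) // block_size
    stats = []
    for _ in range(iters):
        sample = []
        for _ in range(n_blocks):
            start = rng.randrange(0, max(1, n - block_size + 1))
            sample.extend(xs[start:start + block_size])
        sample.sort()
        stats.append(sample[int(q * (len(sample) - 1))])
    stats.sort()
    lo = stats[int(alpha / 2 * iters)]
    hi = stats[int((1 - alpha / 2) * iters) - 1]
    return lo, hi

# Synthetic check in the spirit of the commit's: p99 CI on range(100)
# should be wide, reflecting block-level (not sample-level) variability.
lo, hi = block_bootstrap_percentile_ci(list(range(100)), 0.99)
```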
Codecov Report: ✅ All modified and coverable lines are covered by tests.
…h-flight-control # Conflicts: # .github/workflows/engine-bench-renode-synth.yml
Summary
Adds flight_control, a macro benchmark that composes five Zephyr kernel primitives (sem + ring_buf + mutex + msgq + condvar) on a 100 Hz fixed-rate flight-controller-shaped workload — six threads, two timer ISRs. Where engine_control isolates a single ISR-to-thread handoff, flight measures the cross-primitive composition that's the gale project's actual claim. Plus all the audit-driven correctness and methodology fixes that came out of the Mythos pass.
Note: this PR depends on #27 (synth A/B + cross-Renode result) — the macro-bench branch was forked off `experiment/gale-via-synth` and includes its commits as well. Either merge #27 first and rebase this onto main, or merge this and have it bring #27's commits along.
What's in this PR
benches/flight_control/ (~1900 LOC) — main.c (793), control.c/h (148), analyzer (446), CI workflow (181), Renode robot, run_qemu_bench.sh.

Audit fixes (Mythos pass)
All confirmed against the partial CSV from the first cancelled CI run (25135494876):
Bench design choices (deviations from the design doc)
These were called out in the audit and remain by design:
Also: long sweep trimmed from 27 cells to 9 (sensor_hz=1000 only × 3 contention × 3 payload). The 27-cell version blew the 120-min CI budget at step 13 in run 25135494876.
Test plan
🤖 Generated with Claude Code