Conversation

@flyingrobots
Owner

Summary

  • Implements the WSC (Write-Streaming Columnar) snapshot format for deterministic serialization of WARP graph state
  • Zero-copy mmap-based deserialization via WscFile and WarpView
  • 8-byte aligned columnar layout for SIMD-friendly access
  • Upgrades canonical state hash from V1 (u32 counts) to V2 (u64 counts)
  • Refactors generated file convention from generated/*.rs to *.generated.rs
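The V1→V2 hash bump changes every canonical hash because widening the counts from u32 to u64 changes the preimage bytes, not just the domain string. A minimal stand-alone sketch, assuming counts are serialized little-endian after a NUL-terminated domain header (`preimage` is a hypothetical helper, not the crate's actual preimage layout):

```rust
// Illustrative sketch (not the actual warp-core code): how widening the
// count fields from u32 to u64 changes the hash preimage bytes.
fn preimage(domain: &[u8], node_count: u64, edge_count: u64, wide: bool) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(domain);
    buf.push(0); // NUL terminator after the domain header (assumed)
    if wide {
        buf.extend_from_slice(&node_count.to_le_bytes());
        buf.extend_from_slice(&edge_count.to_le_bytes());
    } else {
        buf.extend_from_slice(&(node_count as u32).to_le_bytes());
        buf.extend_from_slice(&(edge_count as u32).to_le_bytes());
    }
    buf
}

fn main() {
    let v1 = preimage(b"DIND_STATE_HASH_V1", 3, 5, false);
    let v2 = preimage(b"DIND_STATE_HASH_V2", 3, 5, true);
    // Same logical counts, different preimage lengths -> different hashes,
    // which is why every DIND golden file had to be regenerated.
    assert_eq!(v1.len(), 18 + 1 + 8);
    assert_eq!(v2.len(), 18 + 1 + 16);
    println!("v1: {} bytes, v2: {} bytes", v1.len(), v2.len());
}
```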

Test plan

  • All 57 warp-core unit tests pass
  • WSC module tests cover write/read/validate/view roundtrips
  • DIND coverage and permutation invariance tests pass
  • Pre-push hooks pass (clippy, rustdoc, banned patterns)

🤖 Generated with Claude Code

flyingrobots and others added 2 commits January 16, 2026 23:29
Implements deterministic serialization and zero-copy deserialization of
WARP graph state as specified in WARP-GRAPH.md.

Key changes:
- Add wsc/ module with types, write, build, read, view, and validate
- Add bytemuck dependency for safe Pod transmutation (no unsafe code)
- Add #[repr(transparent)] to all ID types for zero-copy compatibility
- Upgrade canonical_state_hash from V1 (u32) to V2 (u64 counts)
- Regenerate DIND golden files for V2 hash format

The WSC format provides:
- Byte-exact determinism regardless of insertion order
- Zero-copy mmap access via WscFile/WarpView
- 8-byte aligned columnar layout for SIMD-friendly access
- Comprehensive validation with clear error types
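The "byte-exact determinism regardless of insertion order" property comes down to sorting rows into a canonical order before emitting bytes. A stand-alone sketch of that strategy (`serialize` and the `(id, payload)` row shape are hypothetical, not the real `NodeRow`/`EdgeRow` writer):

```rust
// Minimal sketch of the determinism strategy: sort rows by ID before
// serializing so the output bytes do not depend on insertion order.
fn serialize(mut rows: Vec<(u64, u8)>) -> Vec<u8> {
    rows.sort_by_key(|&(id, _)| id); // canonical order
    let mut out = Vec::new();
    for (id, payload) in rows {
        out.extend_from_slice(&id.to_le_bytes()); // fixed-width, little-endian
        out.push(payload);
    }
    out
}

fn main() {
    let a = serialize(vec![(3, b'c'), (1, b'a'), (2, b'b')]);
    let b = serialize(vec![(2, b'b'), (3, b'c'), (1, b'a')]);
    assert_eq!(a, b); // byte-exact regardless of insertion order
    println!("deterministic: {}", a == b);
}
```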

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai
Contributor

coderabbitai bot commented Jan 17, 2026

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Summary by CodeRabbit

  • New Features

    • WSC snapshot format added with deterministic writer/reader/validator APIs and V2 canonical hashing.
  • Documentation

    • Comprehensive WARP-GRAPH design and docs updated to describe WSC integration and APIs.
  • Chores

    • Enhanced Git hooks: timing/logging, staged-aware checks, markdown lint on commits, stale-artifact sweep, ignore updates, and timing visualization tooling; added a dev dependency for charts.
  • Bug Fixes

    • Fixed task-list matching in hook checks.
  • Tests

    • Added WSC unit tests and hook verification scripts.


Walkthrough

Adds a deterministic, zero-copy WSC snapshot subsystem to warp-core (types, read, view, validate, build, write), bumps canonical state hash to V2 (u64 counts/blobs) and updates testdata, introduces #[repr(transparent)] ID newtypes with as_bytes(), renames generated modules to *.generated.rs, and overhauls git hooks, scripts, and test fixtures.

Changes

Cohort / File(s) Summary
Git hooks & ignore
.githooks/pre-commit, .githooks/pre-push, .githooks/pre-push-parallel, .githooks/pre-push-sequential, .gitignore
Rewrote hooks: pre-commit adds staged-file capture, markdownlint step, and sweep trigger; pre-push replaced with timing/logging driver that selects sequential/parallel variants; added parallel/sequential implementations; updated ignore rules for target-* dirs and .githooks/timing.jsonl.
Sweep script
scripts/sweep-stale-artifacts.sh
New script to remove stale Rust incremental build caches across multiple target dirs/profiles (configurable age).
Pre-push timing & plotting
scripts/plot-prepush-timing.mjs, package.json
New JSONL timing consumer and ASCII chart script; added asciichart devDependency.
Hook test harnesses
tests/hooks/test_hook_issues.sh, scripts/test-hook-issues.sh
New automated tests verifying hook behaviors and previously fixed issues across hook scripts.
Echo DIND tests & harness
crates/echo-dind-tests/src/lib.rs, crates/echo-dind-tests/src/codecs.generated.rs, crates/echo-dind-tests/src/type_ids.generated.rs, crates/echo-dind-tests/src/rules.rs, crates/echo-dind-harness/src/dind.rs
Replace generated module with codecs/type_ids (*.generated.rs), re-export SCHEMA_HASH, adjust imports/signatures, add lint allowances, switch DIND hash domain to DIND_STATE_HASH_V2.
Test fixtures (DIND)
testdata/dind/*.hashes.json
Bulk updates: hash_domain V1→V2 and wholesale replacement of hashes_hex arrays across many fixtures — data changes only.
Bootstrap scripts
scripts/bootstrap_*.mjs (several)
Updated source paths to crates/echo-dind-tests/src/codecs.generated.rs, adjusted SCHEMA_HASH regex/messages, added 64-hex validation; some scripts modify deterministic frame content.
Warp-core dependency
crates/warp-core/Cargo.toml
Added bytemuck = "1.14" with features ["derive","extern_crate_std"].
Warp-core identity & hashing
crates/warp-core/src/ident.rs, crates/warp-core/src/graph.rs
Added #[repr(transparent)] to ID newtypes and as_bytes() accessors; bumped canonical_state_hash header to "DIND_STATE_HASH_V2" and switched node/edge/blob counts from u32→u64; canonical_state_hash now #[must_use].
WSC top-level module
crates/warp-core/src/lib.rs, crates/warp-core/src/wsc/mod.rs
New public wsc module with re-exports for build/read/types/validate/view/write and key types.
WSC build (deterministic conversion)
crates/warp-core/src/wsc/build.rs
New build_one_warp_input producing deterministic OneWarpInput (sorted nodes/edges, canonical edge indices, 8‑byte aligned blobs); extensive tests added.
WSC types (on-disk layout)
crates/warp-core/src/wsc/types.rs
New repr(C) fixed-size row/header types (NodeRow, EdgeRow, Range, OutEdgeRef, AttRow, WscHeader, WarpDirEntry) with LE accessors and Pod/Zeroable derives; size/stability tests.
WSC read primitives & errors
crates/warp-core/src/wsc/read.rs
New zero-copy read helpers and comprehensive ReadError enum; validate_header, read_slice, read_bytes implemented with tests.
WSC view (zero-copy API)
crates/warp-core/src/wsc/view.rs
New WscFile, WarpView, AttachmentRef providing safe zero-copy accessors and boundary-checked reads; open/from_bytes entry points and tests.
WSC validate
crates/warp-core/src/wsc/validate.rs
pub fn validate_wsc(&WscFile) validating index ranges, ordering, attachment tags/reserved bytes, and blob bounds; tests for many error cases.
WSC write
crates/warp-core/src/wsc/write.rs
Added OneWarpInput and write_wsc_one_warp to serialize a single WARP deterministically with 8-byte alignment; helpers and tests.
Graph design & docs
docs/WARP-GRAPH.md, CHANGELOG.md, README.md, docs/meta/docs-index.md
Large design doc/spec for GraphStore→WSC integration, changelog entries for WSC and V2 hash bump, and docs index/readme updates.
Ban-nondeterminism allowlist
.ban-nondeterminism-allowlist
Allowlisted intentional std::fs usage in new wsc modules for native snapshot I/O.
Generated-file naming & build scripts
scripts/*bootstrap*.mjs, scripts/bootstrap_*.mjs
Transition from generated/* to *.generated.rs naming; scripts and schema-extraction regex/messages adjusted and validated.
Misc utilities
scripts/plot-prepush-timing.mjs, scripts/sweep-stale-artifacts.sh
New timing plotting, sweeping, and assorted bootstrap/validation utilities.
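The `#[repr(C)]` fixed-size row pattern described for `wsc/types.rs` (little-endian byte fields with decoding accessors, plus size-stability tests) can be sketched stand-alone; `DemoRow` and its fields are hypothetical stand-ins, not the real `NodeRow`/`EdgeRow`:

```rust
use std::mem::size_of;

// Sketch of the on-disk row pattern: a #[repr(C)] fixed-size struct storing
// little-endian bytes, with accessors that decode on read. Because every
// field is a byte array, the struct has alignment 1 and no hidden padding.
#[repr(C)]
#[derive(Clone, Copy)]
struct DemoRow {
    id_le: [u8; 8],
    type_le: [u8; 4],
    reserved0: [u8; 4], // explicit reserved bytes keep the size stable
}

impl DemoRow {
    fn id(&self) -> u64 { u64::from_le_bytes(self.id_le) }
    fn type_id(&self) -> u32 { u32::from_le_bytes(self.type_le) }
}

fn main() {
    // Size-stability check, analogous to the tests mentioned for wsc/types.rs.
    assert_eq!(size_of::<DemoRow>(), 16);
    let row = DemoRow {
        id_le: 42u64.to_le_bytes(),
        type_le: 7u32.to_le_bytes(),
        reserved0: [0; 4],
    };
    assert_eq!(row.id(), 42);
    assert_eq!(row.type_id(), 7);
    println!("row size: {}", size_of::<DemoRow>());
}
```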

Sequence Diagram(s)

sequenceDiagram
    rect rgba(200,200,255,0.5)
    participant Graph as GraphStore
    participant Build as build_one_warp_input
    participant OneWarp as OneWarpInput
    end
    rect rgba(200,255,200,0.5)
    participant Write as write_wsc_one_warp
    participant Bytes as Vec<u8>
    end
    rect rgba(255,200,200,0.5)
    participant View as WscFile::from_bytes
    participant Validate as validate_wsc
    end

    Graph->>Build: request OneWarpInput(root_node_id)
    Build->>Build: collect nodes/edges, sort, build indices, align blobs
    Build-->>OneWarp: return deterministic OneWarpInput
    OneWarp->>Write: serialize(schema_hash, tick)
    Write-->>Bytes: emit 8-byte aligned WSC bytes
    Bytes->>View: WscFile::from_bytes(bytes)
    View->>Validate: validate_wsc(WscFile)
    Validate-->>View: Result<(), ReadError>
sequenceDiagram
    participant Consumer as caller
    participant WscFile as WscFile::from_bytes
    participant WarpView as WarpView<'a>
    participant Data as mmap/bytes

    Consumer->>WscFile: from_bytes(data)
    WscFile->>WscFile: validate_header(), locate WarpDirEntry
    WscFile->>WarpView: construct zero-copy slices (nodes, edges, atts, blobs)
    Consumer->>WarpView: node_ix(node_id) / blob_for_attachment(att)
    WarpView-->>Consumer: Option<usize> / Option<&[u8]> (zero-copy)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🧩 Align the bytes, let snapshots hum,
Newtypes stand clear—transparent, strict and numb.
V2 widens counts; indices march in line,
Build, write, read — deterministic and fine.
Tests stand watch; the pedant nods: "Design."

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately and concisely summarizes the main addition: the WSC (Write-Streaming Columnar) snapshot format implementation. Clear, specific, and directly reflects the primary changeset.
Description check ✅ Passed The description is well-related to the changeset, covering the key aspects: WSC implementation, zero-copy deserialization, hash upgrade, and file convention refactoring. Provides test coverage evidence and references generated tooling.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.



@coderabbitai coderabbitai bot added the tooling Tooling/CI/CLI label Jan 17, 2026
- Document WSC feature and related changes in CHANGELOG
- Add WSC to warp-core feature list in README
- Add WARP-GRAPH.md reference to docs-index

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d5f9b75191


Comment on lines +291 to +295
pub fn out_edges_for_node(&self, node_ix: usize) -> &[OutEdgeRef] {
    self.out_index.get(node_ix).map_or(&[], |range| {
        let start = range.start() as usize;
        let len = range.len() as usize;
        self.out_edges.get(start..start + len).unwrap_or(&[])
    })
}

P2 Badge Reject out-of-bounds ranges instead of returning empty

These accessors use .get(start..start+len).unwrap_or(&[]), so if out_index (or the attachment index tables) contains a range that extends past the backing table, the view silently returns an empty slice. validate_wsc relies on these accessors, which means a corrupted WSC file can pass validation while dropping edges/attachments at read time, yielding an incomplete graph without any error. Consider surfacing range failures (or adding explicit range checks in validation) so invalid index/data tables are rejected instead of silently masked.
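One way to realize this suggestion is to make the range lookup fallible, so a corrupt index table is reported instead of masked. A stand-alone sketch with plain slices (`out_edges_checked` and `RangeError` are hypothetical names, not the crate's API):

```rust
// Surface out-of-bounds index ranges as errors rather than returning an
// empty slice, so validation can reject corrupted index/data tables.
#[derive(Debug, PartialEq)]
enum RangeError {
    OutOfBounds { start: usize, len: usize, table_len: usize },
}

fn out_edges_checked(table: &[u32], start: usize, len: usize) -> Result<&[u32], RangeError> {
    table
        .get(start..start.saturating_add(len))
        .ok_or(RangeError::OutOfBounds { start, len, table_len: table.len() })
}

fn main() {
    let table = [10u32, 20, 30];
    // In-bounds range: same data the silent accessor would return.
    assert_eq!(out_edges_checked(&table, 1, 2), Ok(&table[1..3]));
    // Corrupt range: now fails loudly instead of yielding an empty slice.
    assert!(out_edges_checked(&table, 2, 5).is_err());
    println!("checked access ok");
}
```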


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 16

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.githooks/pre-commit (1)

80-80: CRITICAL: Regex pattern is broken—will never match the intended files.

In single-quoted strings, bash passes the literal text to grep. The pattern WASM-TASKS\\.md in ERE means:

  • \\ = literal backslash character
  • . = any single character

So this matches strings like WASM-TASKS\Xmd (backslash + any char + "md"), not WASM-TASKS.md.

To match a literal dot, use a single backslash: \.

🐛 Proposed fix
-  if echo "$STAGED" | grep -E '^(WASM-TASKS\\.md|docs/tasks\\.md)$' >/dev/null; then
+  if echo "$STAGED" | grep -E '^(WASM-TASKS\.md|docs/tasks\.md)$' >/dev/null; then
🤖 Fix all issues with AI agents
In `@crates/echo-dind-tests/src/lib.rs`:
- Around line 8-15: Update the six bootstrap scripts to use the new generated
file paths by replacing any references to
"crates/echo-dind-tests/src/generated/codecs.rs" and
"crates/echo-dind-tests/src/generated/type_ids.rs" with the new filenames
"crates/echo-dind-tests/src/codecs.generated.rs" and
"crates/echo-dind-tests/src/type_ids.generated.rs"; look for usages that expect
the modules declared with #[path = "codecs.generated.rs"] pub mod codecs; and
#[path = "type_ids.generated.rs"] pub mod type_ids; (apply the same replacement
across scripts: bootstrap_dense_rewrite.mjs, bootstrap_dind_log.mjs,
bootstrap_randomized_order.mjs, bootstrap_randomized_convergent.mjs,
bootstrap_error_determinism.mjs, bootstrap_math_determinism.mjs) so they
reference the updated generated file locations and won’t crash at runtime.
- Around line 8-15: The generated modules codecs and type_ids must not suppress
missing_docs; either update the generator to emit rustdoc for all public items
(add doc comments for Theme, MotionV2View, MotionV2Builder, the ops module and
public functions/types, and remove any #![allow(missing_docs)] from the
generated files) or make the generated modules private (change pub mod codecs /
pub mod type_ids to mod codecs / mod type_ids) and provide hand-written,
documented public wrappers exposing the intended API; locate symbols codecs,
type_ids, Theme, MotionV2View, MotionV2Builder, and ops in the generated outputs
to ensure docs are added or the modules are made non-public and wrappers
created.

In `@crates/warp-core/src/wsc/build.rs`:
- Around line 91-98: The loop over bucket currently ignores edges when
edge_ix.get(&e.id) returns None; add a defensive assertion to catch this
invariant violation during development: inside the for e in bucket loop (where
OutEdgeRef is constructed), replace the silent if let Some check with a
debug_assert!(edge_ix.contains_key(&e.id), "...") or use expect on the lookup
(e.g. edge_ix.get(&e.id).expect(...)) so missing entries fail loudly; reference
the variables/functions: bucket, edge_ix, e.id, and OutEdgeRef to locate and
update the code.

In `@crates/warp-core/src/wsc/read.rs`:
- Around line 209-236: Add a test to ensure validate_header handles misaligned
input without panicking: create a Vec<u8> buffer, write WscHeader::MAGIC_V1 into
the buffer starting at offset 1, take a slice from that offset (misaligned for
WscHeader), call validate_header(&misaligned_slice) and assert it returns
Err(ReadError::Alignment(_)); this verifies the from_bytes alignment fix and
prevents panics in validate_header.
- Around line 129-130: validate_header currently uses bytemuck::from_bytes which
can panic on misaligned slices; change it to use bytemuck::try_from_bytes (or
try_from_bytes_mut as appropriate) to safely handle misaligned input and
propagate the Result. Locate validate_header and replace the from_bytes call
that creates a &WscHeader from data[..header_size] with
bytemuck::try_from_bytes<&WscHeader>(&data[..header_size]) (or
try_from_bytes_mut if mut needed), handle the returned Result (map or ?)
consistent with read_slice's use of try_cast_slice, and ensure the function
returns an error instead of panicking on alignment/size issues.

In `@crates/warp-core/src/wsc/validate.rs`:
- Around line 104-107: The reserved-byte check currently ignores non-zero
values; update the att.reserved0 check to fail fast by returning a validation
error (e.g., return Err(ValidateError::NonZeroReservedBytes { field:
"reserved0", value: att.reserved0.clone() }) ) or, if you intend leniency, emit
a clear warning via the crate logger (e.g., tracing::warn!) instead of an empty
block; locate the check on att.reserved0 in validate.rs and replace the empty
branch with one of these actions so non-zero reserved bytes are either rejected
or explicitly logged.
- Around line 44-57: The ordering-validation loops in validate.rs currently
detect but ignore node/edge ordering violations (the loops over nodes.windows(2)
and edges.windows(2)), so change them to return an error when window[0].node_id
>= window[1].node_id or window[0].edge_id >= window[1].edge_id; add (or use) a
specific error variant like OrderingViolation (or extend the existing validation
error enum) and return Err(ValidationError::OrderingViolation{kind:
"node"/"edge", previous_id: ..., current_id: ...}) from the function instead of
leaving the comment, ensuring the validate function fails fast on ordering
problems.
- Around line 127-216: Update the tests in the tests mod to add failing
(negative) cases that assert validate_wsc returns an Err for broken inputs:
create OneWarpInput variants using write_wsc_one_warp/WscFile::from_bytes that
(1) have nodes out-of-order (nodes vector with NodeRow entries not sorted by
node_id), (2) have edges out-of-order, (3) include AttRow with invalid tag
values or non-zero reserved0 bytes, (4) reference blob ranges that are
out-of-bounds (blob_off_le/blob_len_le beyond blobs length), and (5) contain
out_edges indices that point outside the out_edges array; call
validate_wsc(&file) and assert it errors for each case (use the same
helpers/types: OneWarpInput, NodeRow, AttRow, Range, write_wsc_one_warp,
validate_wsc) so the tests cover ordering, reserved-byte, attachment-tag, blob
bounds, and out_edge-index violations.

In `@crates/warp-core/src/wsc/view.rs`:
- Around line 96-131: The warp_view implementation must stop doing u64->usize
casts, unchecked usize arithmetic, and bytemuck::from_bytes() which can panic;
instead call the existing read_slice(self.data, offset, count) helper with
self.header().warp_dir_off() and self.header().warp_count() to get a
Result<&[WarpDirEntry], ReadError>, then use .get(index) to retrieve the single
entry and propagate a SectionOutOfBounds/ WarpIndexOutOfBounds as appropriate;
finally pass that entry reference into WarpView::new(&self.data, entry). This
replaces manual offset arithmetic, prevents overflow/truncation, and converts
alignment/misalignment errors into ReadError via bytemuck::try_cast_slice()
inside read_slice().

In `@crates/warp-core/src/wsc/write.rs`:
- Around line 240-245: The function write_padding pushes zeros one byte at a
time which is fine but can be simplified for clarity and efficiency; replace the
loop in write_padding with a single call to resize the buffer to the target
length (e.g., use buf.resize(target, 0)) so padding is applied in one operation
while preserving behavior and correctness.
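The `resize`-based padding suggested above can be sketched stand-alone (`pad_to_align` is a hypothetical helper mirroring what `write_padding` does):

```rust
// Pad a byte buffer up to the next alignment boundary in one resize call
// instead of pushing zeros one at a time. Behavior is identical.
fn pad_to_align(buf: &mut Vec<u8>, align: usize) {
    let rem = buf.len() % align;
    if rem != 0 {
        buf.resize(buf.len() + (align - rem), 0);
    }
}

fn main() {
    let mut buf = vec![1u8, 2, 3];
    pad_to_align(&mut buf, 8);
    assert_eq!(buf.len(), 8);
    assert_eq!(&buf[3..], &[0, 0, 0, 0, 0]); // padding bytes are zero
    pad_to_align(&mut buf, 8); // already aligned: no change
    assert_eq!(buf.len(), 8);
    println!("padded to {}", buf.len());
}
```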

In `@docs/WARP-GRAPH.md`:
- Around line 569-575: Fix the markdown lint issues by replacing hard tabs with
spaces and normalizing ordered list prefixes in the "changes" sections of
WARP-GRAPH.md (e.g., the lists that mention NodeId/EdgeId/TypeId/WarpId, as well
as subsequent change lists); convert all tabs to spaces, ensure ordered lists
use sequential numeric prefixes like "1.", "2.", "3." (not mixed tabbed bullets
or broken prefixes), and re-render the affected list blocks so they comply with
MD010/MD029.
- Around line 1-15: The file currently contains Rust source and HTML comments
causing markdownlint errors; add a top-level H1 heading (e.g., "# WARP-GRAPH")
then move the Rust content (including the SPDX and copyright comments and the
module doc lines like "//! Minimal in-memory graph store...") into a fenced code
block using ```rust so the doc renderer treats it as code; alternatively convert
the leading "///" or "//!" module doc lines into normal prose under the H1 if
you want them as Markdown text. Ensure the Rust items such as the type/derive
lines (#[derive(Debug, Clone)], #[cfg_attr(...)]), and the use/imports (use
std::collections::BTreeMap; use crate::attachment::AttachmentValue; use
crate::ident::{EdgeId, Hash, NodeId, WarpId}; use crate::record::{EdgeRecord,
NodeRecord};) remain inside the fenced block so markdownlint no longer flags
MD041/MD018.

In `@testdata/dind/030_error_determinism.hashes.json`:
- Around line 4-5: Update the mismatched hash domain string: change every
"hash_domain" value from "DIND_STATE_HASH_V1" to "DIND_STATE_HASH_V2" in the
eight fixture JSONs and in the harness code so it matches
canonical_state_hash()'s domain header (b"DIND_STATE_HASH_V2\0"); specifically
replace the literal "DIND_STATE_HASH_V1" occurrences used by the harness (the
constants/strings in the dind.rs module) and the JSON fixtures so consumers
reading the hash_domain see V2 and computed hashes remain consistent with
canonical_state_hash().

In `@testdata/dind/050_randomized_order_small_seed0003.hashes.json`:
- Around line 6-207: The fixture file
testdata/dind/050_randomized_order_small_seed0003.hashes.json contains the same
hash ordering as seed0001; regenerate the seed0003 fixture by re-running the
fixture generator with seed "0003" (or the intended numeric seed) so the RNG is
actually seeded differently, ensure the generator accepts and applies the seed
parameter (wire the seed through the generator invocation you use to produce
050_randomized_order_small_* fixtures), and verify the new file's "hashes_hex"
order differs from seed0001 before committing.

In `@testdata/dind/060_math_determinism.hashes.json`:
- Around line 32-58: The determinism test JSON contains 27 identical entries of
the hash "1916f6730c757d02983ef3d1d3be6a28543c9bf2eb8323c59aacf9bf2bce6dc4" due
to a generation failure; fix by re-running or repairing the testdata generator
to produce the intended varied hashes, replace the repeated "1916f673..."
entries with the properly generated unique hashes, and add a small validation
(in the generator or a test) that asserts the output contains the expected
number of unique hashes and no long runs of identical values to prevent
regressions.

Comment on lines 8 to 15
/// Auto-generated codec definitions.
#[path = "codecs.generated.rs"]
pub mod codecs;
/// DIND test rules and state management.
pub mod rules;
/// Auto-generated type ID constants.
#[path = "type_ids.generated.rs"]
pub mod type_ids;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find all potential bootstrap scripts
find . -type f \( -name "*bootstrap*" -o -name "*build*" -o -name "*gen*" \) | head -20

Repository: flyingrobots/echo

Length of output: 1102


🏁 Script executed:

# Search for references to the old generated path patterns mentioned in the review
rg "crates/echo-dind-tests/src/generated/(codecs|type_ids)" -t js -t ts -t py --no-heading

Repository: flyingrobots/echo

Length of output: 43


🏁 Script executed:

# Also check if the old directory exists
ls -la crates/echo-dind-tests/src/generated/ 2>&1 || echo "Directory does not exist"

Repository: flyingrobots/echo

Length of output: 170


🏁 Script executed:

# Check the current structure
ls -la crates/echo-dind-tests/src/ | grep -E "(codecs|type_ids)"

Repository: flyingrobots/echo

Length of output: 202


🏁 Script executed:

# Check the bootstrap scripts for file path references
cat -n ./scripts/bootstrap_dense_rewrite.mjs

Repository: flyingrobots/echo

Length of output: 4605


🏁 Script executed:

cat -n ./scripts/bootstrap_dind_log.mjs

Repository: flyingrobots/echo

Length of output: 3299


🏁 Script executed:

# Double-check for any references to old pattern with broader search
rg "generated/(codecs|type_ids)" --no-heading

Repository: flyingrobots/echo

Length of output: 929


🏁 Script executed:

# Search for any codecs.rs or type_ids.rs references broadly
rg "codecs\.rs|type_ids\.rs" --no-heading

Repository: flyingrobots/echo

Length of output: 1376


Update bootstrap scripts to use new generated file paths.
Six bootstrap scripts still read crates/echo-dind-tests/src/generated/codecs.rs, which no longer exists. The generated files have moved to crates/echo-dind-tests/src/codecs.generated.rs and crates/echo-dind-tests/src/type_ids.generated.rs. These scripts will crash at runtime.

Affected scripts:

  • scripts/bootstrap_dense_rewrite.mjs
  • scripts/bootstrap_dind_log.mjs
  • scripts/bootstrap_randomized_order.mjs
  • scripts/bootstrap_randomized_convergent.mjs
  • scripts/bootstrap_error_determinism.mjs
  • scripts/bootstrap_math_determinism.mjs
Path fix (apply to all bootstrap scripts)
-const codecs = fs.readFileSync("crates/echo-dind-tests/src/generated/codecs.rs", "utf8");
+const codecs = fs.readFileSync("crates/echo-dind-tests/src/codecs.generated.rs", "utf8");

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, locate and inspect the file in question
find . -name "lib.rs" -path "*/echo-dind-tests/src/*" -type f

Repository: flyingrobots/echo

Length of output: 96


🏁 Script executed:

# Once we find it, check lines 8-15 to see the module definitions
cat -n crates/echo-dind-tests/src/lib.rs | head -20

Repository: flyingrobots/echo

Length of output: 834


🏁 Script executed:

# Check if the generated files exist at the specified paths
ls -la crates/echo-dind-tests/src/codecs.generated.rs 2>/dev/null || echo "Not found at codecs.generated.rs"
ls -la crates/echo-dind-tests/src/type_ids.generated.rs 2>/dev/null || echo "Not found at type_ids.generated.rs"

Repository: flyingrobots/echo

Length of output: 254


🏁 Script executed:

# Search for missing_docs suppression in generated files
if [ -f crates/echo-dind-tests/src/codecs.generated.rs ]; then
  head -50 crates/echo-dind-tests/src/codecs.generated.rs | grep -n "missing_docs"
fi
if [ -f crates/echo-dind-tests/src/type_ids.generated.rs ]; then
  head -50 crates/echo-dind-tests/src/type_ids.generated.rs | grep -n "missing_docs"
fi

Repository: flyingrobots/echo

Length of output: 106


🏁 Script executed:

# Check if these are truly public modules and inspect the actual generated files
rg -n "pub (struct|enum|fn|const|type|mod)" crates/echo-dind-tests/src/ | head -30

Repository: flyingrobots/echo

Length of output: 2929


🏁 Script executed:

# View the header and structure of codecs.generated.rs
head -30 crates/echo-dind-tests/src/codecs.generated.rs

Repository: flyingrobots/echo

Length of output: 814


🏁 Script executed:

# Count public items without docs in generated files
echo "=== codecs.generated.rs public items ==="
rg "pub (struct|enum|fn|const|type)" crates/echo-dind-tests/src/codecs.generated.rs | wc -l
echo "=== type_ids.generated.rs public items ==="
rg "pub (const|struct|enum|fn|type)" crates/echo-dind-tests/src/type_ids.generated.rs | wc -l

Repository: flyingrobots/echo

Length of output: 150


🏁 Script executed:

# Check the SPDX and copyright headers
head -5 crates/echo-dind-tests/src/codecs.generated.rs
head -5 crates/echo-dind-tests/src/type_ids.generated.rs

Repository: flyingrobots/echo

Length of output: 470


🏁 Script executed:

# Verify rules.rs has proper documentation for its public API
rg -B1 "pub (const|fn|struct)" crates/echo-dind-tests/src/rules.rs | head -50

Repository: flyingrobots/echo

Length of output: 1787


🏁 Script executed:

# Check if codecs and type_ids modules are re-exported or used outside lib.rs
rg "echo_dind_tests::(codecs|type_ids)" --type rs

Repository: flyingrobots/echo

Length of output: 87


🏁 Script executed:

# Check crate-level documentation and see if the public modules should really be public
grep -n "pub use" crates/echo-dind-tests/src/lib.rs

Repository: flyingrobots/echo

Length of output: 43


🏁 Script executed:

# Verify the actual violation by running clippy as per the guidelines
cd crates/echo-dind-tests && cargo clippy --all-targets 2>&1 | grep -A 5 "missing_docs\|warning" | head -40

Repository: flyingrobots/echo

Length of output: 43


🏁 Script executed:

# Try correct rg syntax without type flag
rg "echo_dind_tests::(codecs|type_ids)"

Repository: flyingrobots/echo

Length of output: 139


🏁 Script executed:

# Check the full lib.rs to understand module visibility
cat crates/echo-dind-tests/src/lib.rs

Repository: flyingrobots/echo

Length of output: 4160


🏁 Script executed:

# See what's actually exported from codecs.generated.rs
head -100 crates/echo-dind-tests/src/codecs.generated.rs | tail -70

Repository: flyingrobots/echo

Length of output: 1884


🏁 Script executed:

# Check if there's a parent crate that re-exports echo-dind-tests public API
find crates -name "Cargo.toml" -exec grep -l "echo.dind.tests" {} \;

Repository: flyingrobots/echo

Length of output: 130


🏁 Script executed:

# Look at the full type_ids.generated.rs to understand its size and public surface
cat crates/echo-dind-tests/src/type_ids.generated.rs

Repository: flyingrobots/echo

Length of output: 3228


Generated modules must not use #![allow(missing_docs)]; public items require rustdoc regardless of auto-generation.

Lines 8–15 expose codecs and type_ids modules publicly. Both generated files suppress missing_docs with #![allow(...)], violating the crate-wide guideline that public APIs must carry rustdoc and missing docs should fail CI.

type_ids.generated.rs actually documents all public constants (e.g., /// make_type_id("payload/motion/v2")), so the suppression is unnecessary there.

codecs.generated.rs genuinely lacks docs on public items: Theme enum, MotionV2View, MotionV2Builder, and the entire ops module have no rustdoc.

Fix the generator to emit rustdoc for all public items, or make these modules private with documented wrapper APIs in a hand-written module.


Comment on lines 129 to 130
// Use bytemuck for safe transmutation
let header: &WscHeader = bytemuck::from_bytes(&data[..header_size]);
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

CRITICAL: bytemuck::from_bytes panics on misaligned data.

Line 130 uses bytemuck::from_bytes, which panics if the slice is not aligned to WscHeader's alignment requirement. For mmap'd files or arbitrary byte slices, alignment is not guaranteed. The read_slice function correctly uses try_cast_slice (line 178), but validate_header does not.

This is inconsistent and can cause panics on valid-but-misaligned input.

🐛 Proposed fix: Use try_from_bytes
     // Use bytemuck for safe transmutation
-    let header: &WscHeader = bytemuck::from_bytes(&data[..header_size]);
+    let header: &WscHeader = bytemuck::try_from_bytes(&data[..header_size])
+        .map_err(ReadError::Alignment)?;
🤖 Prompt for AI Agents
In `@crates/warp-core/src/wsc/read.rs` around lines 129 - 130, validate_header
currently uses bytemuck::from_bytes which can panic on misaligned slices; change
it to use bytemuck::try_from_bytes (or try_from_bytes_mut as appropriate) to
safely handle misaligned input and propagate the Result. Locate validate_header
and replace the from_bytes call that creates a &WscHeader from
data[..header_size] with
bytemuck::try_from_bytes<&WscHeader>(&data[..header_size]) (or
try_from_bytes_mut if mut needed), handle the returned Result (map or ?)
consistent with read_slice's use of try_cast_slice, and ensure the function
returns an error instead of panicking on alignment/size issues.
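The fallible pattern the review asks for can be sketched with std alone (the real fix would use bytemuck::try_from_bytes). The alignment check below mirrors bytemuck's zero-copy contract; since this sketch copies the fields out with from_le_bytes, the check is illustrative rather than strictly required here. The Header layout is a stand-in, not the real WscHeader.

```rust
// Minimal sketch: validate length and alignment up front and return an
// error instead of panicking, the behavior bytemuck::try_from_bytes gives.
#[repr(C)]
#[derive(Debug, PartialEq)]
struct Header {
    magic: u32,
    version: u32,
}

// Force a known alignment so the demonstration below is deterministic.
#[repr(align(4))]
struct Aligned([u8; 12]);

fn try_header(data: &[u8]) -> Result<Header, &'static str> {
    if data.len() < std::mem::size_of::<Header>() {
        return Err("truncated header");
    }
    if (data.as_ptr() as usize) % std::mem::align_of::<Header>() != 0 {
        return Err("misaligned header");
    }
    Ok(Header {
        magic: u32::from_le_bytes(data[0..4].try_into().unwrap()),
        version: u32::from_le_bytes(data[4..8].try_into().unwrap()),
    })
}

fn main() {
    let buf = Aligned(*b"WSC\0\x02\0\0\0\0\0\0\0");
    // Aligned, full-size slice: parses.
    assert!(try_header(&buf.0[..8]).is_ok());
    // Same bytes viewed at an odd offset: rejected, not a panic.
    assert_eq!(try_header(&buf.0[1..9]), Err("misaligned header"));
    println!("ok");
}
```

This matters for mmap'd input because nothing guarantees a caller-supplied slice starts at a 4- or 8-byte boundary once offsets are involved.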

Comment on lines 4 to 5
"hash_domain": "DIND_STATE_HASH_V1",
"hash_alg": "BLAKE3",

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Locate DIND hash domain usage and canonical hash versioning.
rg -n "DIND_STATE_HASH_V1|DIND_STATE_HASH_V2" -S
rg -n "canonical_state_hash" -S

Repository: flyingrobots/echo

Length of output: 2457


🏁 Script executed:

#!/bin/bash
# Check how canonical_state_hash() is used in validation/testing
rg -n "canonical_state_hash" -B2 -A2 crates/echo-dind-harness/src/dind.rs | head -60
rg -n "hash_domain" -B2 -A2 crates/echo-dind-harness/src/dind.rs | head -80

Repository: flyingrobots/echo

Length of output: 1346


🏁 Script executed:

#!/bin/bash
# Check if there's any other hash computation path or if V1 is legacy
rg -n "DIND_STATE_HASH_V1" -B3 -A3 crates/warp-core/src/graph.rs
# Also check git history for canonical_state_hash changes
git log --oneline -n 20 -- crates/warp-core/src/graph.rs | head -20

Repository: flyingrobots/echo

Length of output: 122


🏁 Script executed:

#!/bin/bash
# Read the canonical_state_hash implementation
sed -n '420,435p' crates/warp-core/src/graph.rs
# Also check docs to understand the version change rationale
grep -n "DIND_STATE_HASH_V1\|DIND_STATE_HASH_V2" docs/WARP-GRAPH.md | head -20

Repository: flyingrobots/echo

Length of output: 1037


Update hash_domain to "DIND_STATE_HASH_V2" in all fixtures and harness code.

The canonical_state_hash() function in crates/warp-core/src/graph.rs computes hashes with the b"DIND_STATE_HASH_V2\0" domain header, but all fixture files and the harness hardcode "DIND_STATE_HASH_V1". This metadata-computation mismatch corrupts fixture integrity: any downstream consumer reading hash_domain from the JSON will expect V1 hashing but receive V2 hashes.

Update:

  • All 8 fixture files (testdata/dind/*.hashes.json) at line 4
  • crates/echo-dind-harness/src/dind.rs at lines 152, 274, and 301
🤖 Prompt for AI Agents
In `@testdata/dind/030_error_determinism.hashes.json` around lines 4 - 5, Update
the mismatched hash domain string: change every "hash_domain" value from
"DIND_STATE_HASH_V1" to "DIND_STATE_HASH_V2" in the eight fixture JSONs and in
the harness code so it matches canonical_state_hash()'s domain header
(b"DIND_STATE_HASH_V2\0"); specifically replace the literal "DIND_STATE_HASH_V1"
occurrences used by the harness (the constants/strings in the dind.rs module)
and the JSON fixtures so consumers reading the hash_domain see V2 and computed
hashes remain consistent with canonical_state_hash().
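The mechanical part of the fix is a single string substitution across the fixtures and the harness. A sketch, demonstrated on a throwaway copy; in the repo you would point the sed at testdata/dind/*.hashes.json and crates/echo-dind-harness/src/dind.rs instead:

```shell
# Replace the stale domain string in fixture JSONs (shown on a temp copy).
tmp=$(mktemp -d)
printf '"hash_domain": "DIND_STATE_HASH_V1"\n' > "$tmp/fixture.hashes.json"
# -i.bak keeps a backup and works with both GNU and BSD sed.
sed -i.bak 's/DIND_STATE_HASH_V1/DIND_STATE_HASH_V2/g' "$tmp"/*.hashes.json
grep -c 'DIND_STATE_HASH_V2' "$tmp/fixture.hashes.json"
```

After the substitution, regenerating the DIND goldens (rather than hand-editing) is the safer path if the harness also embeds the domain in computed output.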

Comment on lines 6 to +207
"hashes_hex": [
"e0c2acac7f7eca867f8575a1861460425976de2fb399146b694de06507a7f890",
"b11d14bf2f2f4549961e294b0180ede0120a273fbd56859152c97aa685915899",
"0ffb6879a98bb711a3cb5d8680651f0433f7d2fee22f3251c596d0f6ed4902cf",
"d74615efa3e0373c6b4d0eb5822e05573f49940ff63322e9507127b0725d337a",
"89119cd0409ae3d3126e0feead2a1832b3eb4521647b98475d23291b8eb99e36",
"fd1b3dac7aede525f224cbf6a579e22401eaf83934a931c2af688700b08dc218",
"ba6081d469bd57aa2898df9d9ec3a596233ceaf146e2bb0f9e8b510c7655a6de",
"3c119213749accb5ef33d4daad035a9930d8b5639961f3975be69df4cd0d0d88",
"2aebab7243f634193302d94fed55226dc668c5de55824f3a760d7cb79f64363c",
"dd8339e9526dda6d8f55e91a5605c871fa73301f122646c0e89888f210a7e1ca",
"5a2157245f81196cc84023e12e4ec52cbb1239061e66d8605c22bb96bd35ea27",
"89ff0d7e67d1ef4e1b15cf82fc629cd21d07e865161f9f0505b9658baaadc2d6",
"a4326f6160c429c3b5d0953fb15123a17b3f7b6b9230b937774c6d8191dead44",
"70383a3cdc20e958d306e6fcc42ac3dce0ca5f2eb96fd07baa0a3458e428f85f",
"eed0c84de6fecdcbad5c6141c3cae76f3399fa92563c0f01edf03036bc5862c4",
"8881f53d4f80e1e4e8de704219a462353a78face7dc89633f66f241dc3b29d22",
"f717e588f5a6204fcb136848a017a73aab71556f52ec9f84b050e0b426822fc2",
"4dcc8623833aa14ec1bbdd9ee9db0e2b1fcfe480eb790bdd19e51ec3f12cdd73",
"fc1d350b1726942ba899878eb2bc8a190302faa957080e917947d78c968bc94b",
"b1125aa963c7293399cc930b8c909be54b2259be740c985e50ef04afa744603c",
"79f1033de2af77236af48cf570bd9c04706611592afbb26a37e677493314fcc9",
"ad4c40951621e685718b6605d68c0a58d7f443c19a4d3d3ab4b92c80ad2ac57f",
"2b6cde2bb7094967ed537630475ef2d99d2507673992d926d264a28f4b295ac2",
"6a5b986963396ea668afb6752f6c62979be8d643b8a85307b7dd605c07ce5527",
"4212ed4dab302320c086af49f87ad77b8f227583106401d863677583ccac95f4",
"e76eff9b798052a9e0ca28a1d2d5de8ed3f046bf4f8f1499019efa279d0ad9f1",
"d70a717858ac668e2988117cbcfdd352166688299e32e4ec92c2844c0085cf17",
"1b9447457c336b027930aff9574fc772a5b2172de64382d48320ee94e733d01d",
"af1651b25e941b181ebb286dc28f95c311a6699185416575371b0989ed3f2a7f",
"709334251e51dbd326d67ed66ac9855b95e388e971ace8c71d9f390926ffd236",
"fa6d02c698dc7f3bda25c994270dcd9d2e9cdb40cc674da75e54ff8759be5033",
"022192de36f5d1a0d63b57868121894fc93a583dc9fe8bd3ca2917109cdb95db",
"ea68f98b601d0c3e671166f42ae24a70e19380f0fd35180b370df974a2f3101b",
"f353466acac89a2835d5dfda81104e72c9737ac5be6cdfd246357c1f8586b727",
"1f2d07863b8a29064f5b6e6ab6e9ba13878a88c621b729afc2db5928791fa8ba",
"dd5f9abc86be126b958c1a9fb46557ceddf7b0aa89416ed7439d30aa8ba5064f",
"aef856fdd7665c759c9bcfd50219d0610f46847cff064bab91cf04c388996c28",
"dd94cfd5af8dfbf33f80676e6e9608f5c6ec5322be23905953bffe4f17d20c90",
"0a0138bb770d7cdd9cb60001978997c973f9e4ae481ab5305388dfe8b066956c",
"7705e0bbf9dcc428ead323e054c43c6674841678082ebb8beae268552f3a2fe4",
"120c95791aca2cc7fd3bf827023ac022d1df8d174a99c3f50a700807b0cc7d57",
"073fe2b2e894eb5d94e1a8c3404f202768ecbc97f7ad70437dd3698202284930",
"c61e2208fbaea514718ff5e82a8c9a91f5ed97af4caa69dbb292d8377fa5dbab",
"a90e7e6725556ba32b8834f432589a0d7e4d0a37cd592300444b54b144860e53",
"fd83493b37b5ffc7ac596a099ef84f138f7c4f65e0bcc8d569b45b387b174f4e",
"624b11fd6844a3bd77e93d78027a788fb08132756cb4d3703083daebc4dd9504",
"fc217c41d47472d81256188b1a495aa19daaa473ed97084bdcc1a3a728237d9a",
"b0d95126b91e3553652366c0be8feffabe4e6d5cf5691d6e91c0388d5b24df89",
"1ccd1506bfc034310e6ac694b98eb39cdb624c37c67835463c96cf2ee1d223cc",
"23514f6442e61e4d1a04d6e67d7141dd4b73972462b4013673881e43b82bf753",
"a7d34d39cc297c528b36cce364c8d56180584f5f5ae1c9a9390c635aa1cd4455",
"5ee610fdbebc8b7f5f658a114aa5a93a9d02ef8b52451e9c9c3affd19782d6d7",
"a216c580771ada1585df18f97fd03dacb5d9e4da055bbf935520d55bfdc1c5cd",
"8e35c8db77c33eb6df9e266ffb926a8d6f8db48f97b7503301466484462f94e7",
"71d4af3d3c6b110729f28a121777a00a926e446b6e7724a86dadc3fa86935c99",
"9d904608c82edf082ad922c17ff85f1ded62800cadecbcb4c5ab4ecd09e709eb",
"b91fe21f4894e8bbd017181eab8a19c60c5bb5739343216d82de9ff36b52d4a2",
"f8013a937ab6a7f64665f2afcef3e3f29b19c92891c2a8778801fe96e65aa1a3",
"0144d88d949edf7950bc49946154989f4ba55e1164a3f0016520b2f8fc5e97e7",
"4e9518dcd1444db2d6d73607c4fef3754d14863ce06c44318dbe69dbbb06aec4",
"d1fa321d3c6102f6d30553c7ab13efd19b8e479e4e04f25cdb1487651ee6f217",
"85f8efa6f1d8173c8b97d91405b96efdfe626e36fb00fdd3992ae9d1e5ad1bcc",
"81f7ea589c7671987050bd383064926fc6ff761e9cc71d0640e9d9d2fdbab0c1",
"74abf3c9e5986310b2b32e30476b5f8e9d2eac0ca8ce3b7883bef9af5a2e2a65",
"8da2dfe1968174d2fa89d5514fc73b64c903034824467324f21a7d24389c6ec6",
"c96940f5e45e71eb234d8509f274a0dd0e9d0d11215a280612e4a6b58a1043d2",
"5752e648c7202099bf75479dae265c4d8814a41651055e8e9feb0ebf7f69f3f0",
"9788a43d65aea0fef04adaf0b3652ff3670437002eb7eac81be8b06b64bb6b11",
"5810a783c2880c3e0a752e03df6ba20deae1ab04bccb8d9ca041b8f7530858f2",
"193f04eb885a2734efae6d753dcca46a3f566d9e12400ae80d4e20cc5c0d8fa8",
"c98fe0ee4d4823c2d979567fdc8d916b251e2b78df7104b912ae8f1260300009",
"9b0d454c565ab7faa81a8bf64bde15ac9e54228be5eaa26b4eab7f4fb4de920d",
"e4a7fe8181de953353a5288afa9443c0f84c475efd1f7adf12df7146f7fb84f1",
"4974976dbd99dd1f9536c00ff8297ebe5729d83f870fe41d42ac215aeb7c8f57",
"312f52922222b754de69afb567e6da876b59256e465bac845b2b5f452fd17af5",
"d78c95b85c0d02700587405604b239775606f9c4c8c7e25ddf2a83b36d0d3a85",
"dc415fcb62dec91680dbe7979fdb663b572d9b6321586ba00ac9489cddcc2bb4",
"268505f6f922bed99012056051d892566d97e4500e82c4e2f900e99f8986dcce",
"2cb04c806aa8093f54cb4496bed4d511bc25a38113765af21af127f44390f5ea",
"b92bdb293d892ba9eda7c9fb8badc88bc39e69a79aaae6ce53fd582cf3a8d24b",
"caf6fa58f8672e8401642585be7df4c539d45ce6f4e14a1bf71f62036bc58e42",
"d6beae8f274e7e57f3b98ce04feb1833696db9e8a69a35dcea91e8ef5cbc771d",
"c9d9317a4d35d9860e64a971987a803957eadf6f239147a91888590a9038b1bc",
"095614f9d489fd6b28049eb360a450e57355ff40d9e6cfdefb2c47680f3d409c",
"bf2d2713e557afae108d6d92e1e9610d9a690998628b387456f84c45e4fafbce",
"29019a65247f2198a01fc9d41fda2cdc46add8869e172c2826364882b216f8ab",
"bec3628eedeea8a8ef6ebba5cd88639196b497877b0280fc46928f59ca82202d",
"a71cffcb3bda73047f04861de6d09beaf227d312157f47298f0f5e26fa8d80ef",
"8ba391aa91c66b7001bed072ba8af9d84b43a74b7a9fea0d82d3de4851036ec6",
"8bf20ca4b378fba7d22f96b96f500c51c10aac0119cb68a36f5bd0c3683ded75",
"a3e70b3a58f97a19622a179d402a305de13feae313c49f9167a3fc0fb94e9c6e",
"830800cd3cd3615f8c5aec1479169ce344c1db82bd6732f12de6df32811d567f",
"9daf9e85e9d49ea4362e14daf009a8b893d774b928be8c6a1efd771387450180",
"a3a5f02d1857c0ca1d3c82708e757b4454bab3a0ce8bb348605aca654cbf3f03",
"15e8d686f6f0a43d643dc6719efd52f2cdc8071ad0a3946bf936a0994fb725df",
"aeb24bab5ddfdfa3f133092612cf456034257bdd7824cb3e9b84540eb631024f",
"751afeb338eff260467f6f198476251d4b3d559755ad761b26b002c18ea0b9b2",
"a204bd2e196cd15be2aec095deac31b4783ea416506ea670ab3387e68c3dd855",
"b214f20923e79d476dc57dca306d5b0558bdf094d1908a643f8ba7c326a21a47",
"3bd82b97ce0188f0789ae455f8a943e3f63d23ab6d0c83158a36f8cfed7eeb00",
"1b5e9cccf85e1f5010e8cee51107977688ded75a2c5a397d9f41597b97bceb13",
"e60984003a761c37608db979e068f8645e1b8d0b55316a064710794cb4b16bee",
"f4ed1688093506f3029f5229b7ceb9f8769221947450b1209144c5aca1763cdb",
"e014b084b34509bc3c8b72b5142e69ea0a364d4b252520b8478a092b74e8a2f4",
"2893cab3bf3c556482289ecbf0c2ca9237bd9f8f325ffc726438286893d3f051",
"9cb0ef9510246913531ff478ba4540f457a52b775271c2ada25883be7a72e132",
"3e49ac0f3a23b95915349c1a6653abb57278e9d8bccea11671d5bab1b234a355",
"4df90913429103135d7e5ff2e2a507b1583a5351da90d298b63aaad01178a519",
"657978730668529681621ca71ac29b0ab8571fa68be542120c1e49ef4d42342e",
"87881d215f95a82b22a4b61f6b1bd1820dc4385bfe306a8fbe36ee9116e27f9d",
"aa108877ecf5880343943f8f61140a7074e7c6b76a448e2fd255141f38ac1f87",
"922031282cf91f74e940b02c687a7f8c2bbb153baaed2eab878cc1c3921d255a",
"462b027adc0b6284b2ac5733f9e8d3fb6b25525d8cdb60ce81302bf309b11f96",
"2338828b841c7d12bda6980f1ab4f0568dd1d687d02d778653f5165f48d153c6",
"1fb23bf20ccfb25376282f4f8309f8ff411fbcc86b4fa9e0386028684bb9d538",
"290d4b66f61ba8b8d5e15111a05e40e5fa3917b99be86a6b3ce6fe2add0a3afa",
"05c59712f7b6b58cd8c97c5c3746d47ec30d7ac7a2d1e12000b52b863661221f",
"66851d9a78b9a59a691b4485fbee60a27b9d2d0e43f314a9a79c7ac1b3e812a2",
"5fcbc2cc89edaab735d98beb528e487102a553857f216256c7754e6bb1b71edd",
"a532ac04ed3bd7b7e375ddcabc35985959229a85f6f461e43e45b1a237acea13",
"0a1bd0e6aa47a42af570c2eb5cbf2b117ad89fe1bd31381139d1ff526310ac6c",
"ac45f5486a6f0d09f404aece174c8319ea5f9998f40fc271ae00306133e10686",
"50764c078ee3c303013deca56e209cbcf05d2a7041819ac172bd92c0ee7ac057",
"8302c88e669b603b68d4e5dcf2e068f0baaf5e009fa07c8176c22b2e1db119fb",
"8f6df7374760ed7ac91b3fb22f22243ef37021e62133e9f1f982a066b9eaaa82",
"1a02d4df9b9ab4f3e0a508be8bcedc147257ba06efa855038ed19c7b4ad28505",
"3fb9563135aaa93e6d5a790a41dd07b862a34f7d60d40917288d706865eccad6",
"72dab0bc3e087d90858f75012201bc7140ab2836da9a8b9e944c8af07c57f3f2",
"5317ffcdd2b37c1453dce640b3b89f1f30e9464203d9b7541598eb2a88c762ef",
"ad0119687b9947cf78ed26c8a4bf873b58a4a66a8e6d8a0469273dba4cc4a942",
"c29400f61302e7fcede5fc06e2aeddac3a5bb17fa8965c8eef47fa3e54be3509",
"e628a00bc14bed339c68da1065711c32f975bc67be7ae6781c6fe50612485b98",
"5ab1438e674bfa9f4c5ad82b4118976b78c72c6c09b4423519f2563d3db23053",
"70b0a2ea7791ac8e1bea20570a0fab4f22a02f8e2c3429deae8182b8af65137f",
"ad9626f027eb711bf5646aaf2d813078b6488cf51002be3700557fd5d22d63e1",
"838a5f71b0ce8d3037dde3b72795ce7dec470525209b319431956bba237ac768",
"99b13fc6d8d4147fec005a0bd93cb72b8dceb33e65a645b65d14bcc8dfaa942d",
"d13385f81cf344984d073913a2045aec42a46a380a5fa664ae02b34d9d176594",
"1ecb7c5ae0f117fc7bfda24ecd7d3b87d6974bdd848e980a1886517286d36851",
"d9a7c6a9b1e4781c1034760940297b1df74b2f8df23a2ffbd3ea0325a3047827",
"55e49f66e1fae3a4c39cbe206785d28f9b165324a4351817f9618ed39d55caec",
"f88a71b3f2acc117ad025271bfc4a58305875f2f2ed0a5b958e80e01ed260f25",
"f83587b4f4aa0b1e4683ef10b36b9ecc43c2345a094f56530118ba49e7b8537d",
"288678686ef5bab520f3271a24066fcd06a37ae299b084750ae10d7b316c03be",
"00e8562dab48a730fabd71c9e5f725ac6ef7eb68c2977efdb658fa3ab3c960f7",
"c107a5c75523e34998eb4ba5a6460d644207728fdbaad33e09dbf9844e646a3e",
"9790ee47aef5ffac300d55a9c5d41777d2fdfb6635f0a89a22854715d140d713",
"e81c0c430d5cbb6d6da6bd2aa8095564d10e93d9efa2e109cffba440777efb8b",
"f9ea7e79356da9e82f9c56065985317229153738fd233175d6f1ac4b384cfee4",
"1b3a03878578f9efba965bc0d1454958e833940060b930aa80fe7ae7d9b70548",
"5bb850fb538cb7c344bcff57661183e0fb794bcc18fd41526e2e53165e1e7ef1",
"72f966c5a6a59650cf2c6f9777b4718c924dec43cf2c7e95f8a56bcdc519952c",
"28664177be00dfb016728a5183f3b5d25a5ce75e27057f57b2c30117619c437b",
"c36949ae29b10991b34b12acb347ebd806f6905342f6b724ddd38edd424081ea",
"25c79bb6fd9302e8d5ddaa9dca68fdbaa57307dc4473d1c36a3bd6d57c339e7d",
"d74cb0417d0961be1e11c162512845a25b9770e42a60ef822466fb00dda0c9ae",
"6d8dd363cb64a8cd9e983ae789e21047cdeac1ef0544246f3923d766c4df734a",
"69edca14a337881bcab82950363f197a230851c5e0035afae40ad39b7e616214",
"ed6153a27cba893083171013e03099cdcc4d81f1a7455839691b9b242b907d7d",
"73f89f0ec95e47e7dc2e00482094b9868a49b5e1e794fa4c7ddf91ebb6524973",
"e2dad1d6ed19bf9c38d3e3ca3289670c12aa483cddf9d45e791d15605910467e",
"51ed783d355d1aaf29df0fa98a7871aa2e24b58e7146b0c58873a572dac8c8e7",
"474dd02309d5500ecd1048a59dc079648da5cb7fab04790bf41fbc1dfa90d4b4",
"f5907faad2ae3bcc04a13e459ea68aa7f55b92b85456820d0c5e204e27093ed2",
"eebfc66ee05f06108d843b1181ace3d187dd0720eda8c91eb33f4699f0e37f76",
"e09c48a2b7e0a9cd18ec1fa1674e288413193c2c561e393ed4d6d73c51137331",
"46f796068fdb21ddea8588bc0794d42a14f52c47275f6102287118d2bfc2be08",
"09259220d2f3c4fa877c9ec3333e4b033cf461c33301e135c7219b064995fb4e",
"ff73c0f479f3ca2e6d596805cf4c72a0888e48215cf3084f0856745e560aca3f",
"307ebcfff65524c0a2689b71b448dca2e82d14b7e5921608365c1f28d46d8aa0",
"5af7c0d9a22328b111d472b3bd6d9cc0cae624e703d76d51fe3f526bf510aa94",
"0b0b5ebb29c566e03fa6ad524fb53fdc56ea72cde39b22bd38f491f6999976f7",
"cc3d363ddadf58b4ed1b6d1cff9b51d2a1110f7cd5a5f6963ec96b57e297ff2f",
"2cfc1804c4567a1378f0d1b7aadec978b71987505eb04646e029a8d15d84a37c",
"478733c017c0895a8a6d44aa3150bfab30d815d4ad1082eec5baf7e19b7593d0",
"549e0e235a4c7137baf2336e0a4175b3e966ba85b29893887e1df44d39e991d7",
"a9cc4dd21c2a3416666a87c06bb18d1ad9db47bdfad3c4a00a49f981e445c9b1",
"4c38349311446f6eb1103cbe64e79203e578be684738b72b67911c474b4b8b98",
"b6df5622a491908dd64386e5ce49d66ad60d1e5b818c428a9992ed9d06d4577f",
"cc0844aaee174943d079ed021f969cef2e59217e51f9b0a9b0fc380155fd2fc9",
"fcdf8e1f224240f549ff3916e6e6f4668c627ff47539ee174e2f5f5e0af8449f",
"9911f9f2b59f7028ea0aeaf15f25887bf9f893ca86f703c6653d71894da61df3",
"10b362ff66e26e102986a4700601e1b5f8ca0e1d54a6685fc341c609e1dc9e38",
"c18265949d77012099d0261021b3b21a2c0271f381c6f27f7d3617038a6b6705",
"5e7346a6e9d7e1b69d60b21c8f9f59e74bb1112a60db8c902028056358f9eb04",
"0de6b1870dcfa463912334f38805fbc1fec9f6ab5d8e95cf7c9052c7b19b7d54",
"91e9e7c5fd03e088fc20c4039f53bd99fb53c4a327ad755f961fad9b08375419",
"d8b0605cb561d275bd3ed5fa4e2c6d65d4740b66fa982c3b4561da4dde899643",
"d16eeaceb40ab53a94c2e028fce7f106519ad7ce05a5d636d3d7164ec9870043",
"85bf3a76c49bab28ca1f12f4c05f961dad919cb95dbcace53c1242b32461119b",
"ba879730884ead905747ae6eaa5219f9368d142953d47a5aa5dea6f02d97b7e7",
"0373bf9cd6d8cbec872d1d861e9829db87100c3c16d755187b914a774e2a0c86",
"0a691ae98dc8b38274aad40d5fedefb8e02d27fb8606d2ff0346461a0ccdc457",
"4387e7d7de5e777a701aefe7178727c12877d717be55b9893c38cf0c9637baf0",
"1b9ecb2e3c389529e54028e15d3888dfb9957c3c586ca7ad0fcc64b87112c9a0",
"8ee67ee03f806ddaf9fba77f3db58e1a9c5ed2e147f7958b331c7e2f23a713f7",
"4548bbed55d8662ce0a921fbd60dd844f1ebe850c2f5bf5c39fe6b83911350cb",
"4afc9f2b23bbfdc23feb1659da89117c8883d172c7efbde0ce1cee193988840a",
"e98b0c20e30ae6f532ab4bc043a0665576f3c0cb86136524f78115911acab764",
"1da65d34e71718d15375b976c953e7e2d93c093df018a3536eccd2af8d31cad1",
"0d545523954d772e0ddb00efe0cb39fe14053aea0689ee1df6123292dbded85c"
"735cc97c3644839430223c0f4df36d4a9fd9f3efb6d7c9558bc96745c1c181b8",
"afa07f48db14996ff65eca07a7378848823c8304bfba2e7935d7eb828a258335",
"d36cec523958a17d32b14939cc973aa83187790095c48754cd646dbaf62fd039",
"1b1797ad5b22f86bdf7aa4500a9d1b21d2b607e1df34d049ee5d2bca2eebb958",
"d8863d98465918db01aeebfe207c5cce5eef2debacaffa810d625b34cb071175",
"1128d10c9e5e43f6666c4ba4fd7e115be873896e7f96208138c36f87145c8030",
"938941e13e30446344bbfa386a68da56a2929886dcadc115ef63cfab8e966c14",
"dd8c31baff5f20b4b5db9f08c1dbf7b3e8749ec63c5e60ac6ea7fbc7e4e22fe8",
"487579823368e84a2990aba964de403561caf1498208b6b507bfa956272aaed9",
"2151443eed46a3b7f8715d679c3aec0a6a74efc226bd3352c59d6ec7f08def0c",
"011bc94fcd06567ee57feded90b2562b88f5be9b8b61b8c7ffdef7c923cfb6f3",
"9b524d0bbc49c0660c4325b1c43f7cdf87db5a589cd9862dfe69ca5ebe6a1ba5",
"a4e9e58b3dd8bee310a8a011a4ab99ae52e87e2c6b838ef9c95a7ea63dcb994e",
"ba63d852eb924212edf97d0464bd8782ebe3548f2588263a4c6add4715406084",
"792dd9c0515bbfb9fad4533918f2d08cb361b26305254525c01d56d06361e2b1",
"5e90c643c40db1c018f63c3f978deb5d70f29ed9eb01a1f7ac34a6721e6475b2",
"780b9f81687a581bcbb432d8ba03641120ae37f090dcd858499863d7faa714cc",
"05eca912226fa485e15c09921f98928960c4b9210cb2bf88ad1afb6f118d0c85",
"ade182bf161f2922287180d9c70c7b802daf30a6e382a16de6a68aff82c8403e",
"d8a53cc2cbfc976de42fc7ad5730283a13da857e9ca441410e030d5a720070a0",
"36d4b7d93e5ca3f9e5d22067d16c5dc610dcd9b265ce871479392c51ad313fc1",
"04471b395cceb40eec40296b7751bbbf948318a1a779bed0f3f807c3903a83d6",
"056b906dd9a14f060c47e099c6a59dc1a677a24e6ef9969397f4da546178f8e2",
"ed5053a44a88ccd164c65ca1e386a1564d3cf023b3718c9b70dab18c131d5e91",
"5b6b262563aec410edf4d714f334f4379a36c2875624774ec26a3d05573dfba7",
"c7d905769a23a2e5786613052ab034d3305f81b73c8eadefe9593ded108f36ca",
"966e200e65b5d23eff8c2c2b41894afcc8ad85e3d4341e2fd3ea9128b576631e",
"74b23b60271c734addc406925ac0e22a82425c06d13ff059e800af010699fee3",
"dd7e802f7506230bd7a8a2087d56ec1807b8a89b661195d73d5bba28fe29c732",
"3e0980aa62b660146db5039b1e9f32f63376e43a26db155a36832a48749efcc6",
"f25d72e771f40b08909a6d5db3d58451b1bdddcc8135f51908ebb3b3ce0a89ec",
"961bda48937dc7e5317a52e68e4c346d0f76314cdf011dcab0594bfdcf57fc97",
"b7169403b4ed5586ce9757a528cebcbe8a3058eaf6ce96c02b20c4a01439f6be",
"ef251e0316893643dbd00d7130a77fab98720c8f5180bdcc22eca4b501022560",
"c7bcff62cc70cf365db47819f91706502894466bf76bd014dc1bf6e0a2bad45a",
"69b85916dd29ed5ee871df9d5f95dafea4138773fb8feab0531508340ad6385d",
"4e4cc8ae9fc02da566e0af4e93e48b034df8b2c9937cdbe632a4a4643f19e599",
"2f424d5e22a137c4ac7bbba18ca06172e9e7175af9e09777b2365e3bfa5f4309",
"daf50496bca35f273a621cb119169e410b21b7b7fca8de0bb40af17108f8e4aa",
"75328b967e1f2be57215fd9ee34953efbadb5ecd46de126c070f11bd0096a73e",
"dba9d7d1143bb2550909101b7a9e087b6cee9fc09132e0d7e4c60fe1eb1be0ef",
"f3b7b3a1b9344f56ad395e3279eae28b8b80b2d1da0af0875b550e57dec212b6",
"dd0f96f6e9170180f24f05f77460d9348f640867b846a58d6d7bb3fef0be7703",
"b00fb6c2ad3987ad4b35ce08efe9663010a4511861d9a62b82949dd66084d418",
"3250a1fcf2750a1cb75aa2079641b71281fb0e34135f72ddbcd4d15a95b7ba2e",
"9a092f8f328cab493a094f76feb9270de1de6947744163eafa06a80a279c1b43",
"9051bdca3ad31655a0f3ed00a9491aed5bf845a5bc537cb63f15f4cb2b86af15",
"ffb9530c3072fa6b1d72f28b60b65e2bd759b23c077aba527aa1f003a0d08662",
"83576f2e87cae8a5e264cf6feee6e4e458d830b0c058f4a64a5deb8ec4e2e3e6",
"d14866546215796d0354307caa1554b3ea78ac9311f91cbe7d637dabc99259c2",
"a6bf49d50637291a5042efcc61a81cec746949ebd2f0876b6802595d9fa60d0d",
"fd7b257dfba53c694618f7a239bdec7e13799eb0df059a95aa300ae38c4109c6",
"5b2b8c05246ddbb0a86d2bddc422f373f2e457d88c45e29e0e82c3645ac2465b",
"314b517493d65856294bd6c5a2e791c4130eb9c9d908d8a4264cda84a18a1fc5",
"e911457467635af5afed2b1892c2bb35082bec445b267ca85df3a087be573e33",
"fc501cdf3e7ecbc307f7f503f89d4f9a61aa644c2e0e2a2a3a862b810a5dc46f",
"0302722d165e238abc2cb99c9d76d6edbc8ad9bd379d43ad2be7f15888a84cb5",
"b1d9b802b5493a9a062bfc4e4f094387bcc37cbbac2a64cae10dc31eba383b9d",
"4eafb8621b82176f9e90db344c93c04df6bb34664356e00660176db2faa54c99",
"89cda668d6b1c49484b4d7bc18a57d5eb688dce5a9bcdc30d1fd970f0480ac03",
"b5befe9519135cde4ac6115cafd85c769db1e333c99d37dd5452a4306d90fca1",
"aeb543a9351ac5138efaa16448f84b25a805f8cf716fe9b819814bec16083fc6",
"0e46f46eb43629062dcabcfc4de32fb54c7c333a245972cd6041e3d143c3d3c6",
"0b7dd72d4481c0e58b29f97f6d95bed251b570e80c110eec5d9fe514f613be8f",
"601c8bc4735585abefd90ab7d68ea09486b51a3b5d7fefb9f4fb6978f794d7c7",
"6e8ce6320b441d64d36ab1ac85edddb4ec5a28a57c3effbfd7cdf74dc1b30458",
"73378468668aff15ef7797823847645eb7d9ba6b338abb2aa1d1cebf3609d392",
"ce401e402f24b15d217973f9b74fdcda77230fae867d751b241ad4aae9de865a",
"26a2c0a492f4724bb024240f81289c816d2bf9d5ade0abc7f2c9e76dacc360f7",
"6f8377d4c03e3a380f583c0ec4875a1cd68fb1288fecbc7a3107be1719a329eb",
"c9af019f93138fbdb3edec3a6969df64d20388ffa763cf8d4af70405b2ba634e",
"a57e9bf6c5c4d80979bb38a55c7bf6e70037a405bdf55640fd8218ccab3acfce",
"d56e1da4e326c754b9e1ff15a0dd32c6e0ad818d5430cf7b22cf896a7f05c26e",
"2d3167c96c286f4b694c00a8ecdd7ccc73b2ee2a6966792f639f7082c314f2da",
"0e308ffdb466f54768c14e041489113298b79c9461c327e20a8402667980694c",
"fb54092c94420abd7783d1d2e00070ace4ac54f5682cdb246e8e6e159fd789e2",
"41fc385808c6a40748eb232a16fe0661e6c879da895866d7cfc57677a4f34dc9",
"3bea56083ed913dc5538d9ce1fa776015e826a9b7b417926351feb950e8a5c7e",
"41b7b4a3c8bf3d3044176d943a30427ae47d977be37dcf7bcb9018c0c7d4f79f",
"36fcbe68037fca1f6d613cb54be47f7e3a33bb791b2cf8fb132e9f4cfa2053a8",
"0b3976e2d1ca95d3ee9c88782264a92f5935be9dc37109e61be0023dc023c46d",
"d31a59d042bff288da14b71fef33a54d55b212b81141189557bedf904d6189be",
"ceb493f4213bad1adbbffd4af7df7789ce5c2cb7612eefa1692463fa8b968354",
"3395b62859645b5410e134c0330a010ff3a0d478109279b48b42b689a8555a5e",
"e7a57206d68be50c802b2560452b25449a94bc1e2940eacb31fb5d5752277d17",
"f1c9e5ef599b084bf7d8100997c996c7baefc8b2abd3e93e14cf8fd7a4620a94",
"859ac6a730e0724cc5813155b1f233ce3732894931ec576f5ae1138b693194e4",
"f3682d926ae8b7e88a9cc91bf7192d131e9b3a41c1a25a4baab1ff761b4bac67",
"bd23d5c48a45c8263cbbd657ffd684e0aa62c81c86ce0a77996c0c2ff385d119",
"ce4e046cebf1291d038cdc5d958929981b4bb7bde160fcd3c10c9efa2b8f107d",
"395d31bbab29521b80890dec6b4cd15b84557f2f8c72301334083412e742df9b",
"42351b54c3782f043c81c9d5de19c757e34204a1960bd4a1e9c89aeb515e6bd9",
"3778a27a20e4c99a868e27b2e11c435f25181a913ed9544b1fe3f63d1e016272",
"1ff7ff7b4978efef7c7505dc19222a910d990b45c1347d954b63c1b3be27b9b7",
"87f2742d34001ae611fe5d176d06641a4d3ca5ab634f0f96f1844cc58a00c96d",
"899f303887eb48af9df292ceaee203c820dae32143001eb4adc7114c4bd2d9af",
"3408c7342387c5e30892763e2c3a82bfe62f9b5d07c2477cfe196f4b7cf16ac1",
"7f6de3afc1f3360c9d3579fc1c37cb16cc7dd7f756fa3647f99cc9a629a18dfe",
"c21151e444a61d76f2d0ca6f2cb6ffd80cef25f4d86c22d7344eb231188fafed",
"8f5b2e47c4284fbfaae37394e869316c294b561072d17aa9ddad6977c2abec1e",
"c1c3dcc3884ad486fe11d2ab8e3b86d7aa56da37bcf039f94b7fdaad6ecb6308",
"b59a7779f1a345f8009832a82ca2d411529237f8b100843d0bc62074a750e09d",
"5537a80215de4e3c9b9a8fc96784fb1ac839e5ef0db1c04e5d40588d25e1c359",
"c78c16b5f94f7fdefe38b0a20017920b332764aab1479c2b43b0f47928260aa2",
"9057fe4fe34207b9b6736769665081984c9bf6ae4bbceff8db8d788b7be9e127",
"469f647cbd8f9179b3c6c7aa19611c639bb48dbd73a413fdb681f038ae18c5ff",
"63e6f083e954de3004d90441b9d4b363dd78f48c4750e3056718044989aa70a2",
"e78d543f55b9780d8ddb0dc8416fc73d73f1b0119aad0fb45fb4ff0d889ef639",
"0b6f60a0b3b674a70ae26f7d8e8e7101d6ad673452e9f1e3ccd968a521c29e88",
"b73c6925ea3afc16a95d698773ac0527b787cc1c20fe19f9e6eb954bc7a0626f",
"f5454c7ed8088f3f668b0d909868416f577976666b227d9901ee9eaf1a3c377d",
"9a5cb0cefe1f8d0c8ea6f50d5f840deb2bf5326c882aeaff1869f4322c8d8bff",
"973406be7a9222adeda514765dfcbd9516725cbef6a885ac21c983ffae88a999",
"68445e37767f0451b1361250d4dc7ac2c2b62a997d2f325a20c7d2c8208d54c0",
"eda5c03e446a0584bdc815a18ac7b621b3fdeb025c9817cfa5c5414339f46032",
"91f269563b7e0b99cc53186e95677b09d546da8f7669861d148194d5d6007912",
"a6416337c02533011bb9ce858b25c6d060c76b384849d46f622c19a5999baa6b",
"35673974dbf30b175acd00599720057d14028b4d308ad3ee67f71e747f7571bc",
"d489c710eed4fb60d5e240b13d476dd92a0e7deadd9a492d05f6b8e1821cd715",
"eb1460fa2ad9e205ab9d0c420da34424b6a3038f1447638ebe2cd8991154be0c",
"5f67fe80431ca93afee9106b74b54187e690e896f9a82a245bf6bc63acae0483",
"3ca970440319aaff1dc12cb6a773eed8ce9e4e99327fb3302110358574526576",
"7bbc444a36bdc401b8813139c13199945245bb7525e244e6f602a51e1d431f4e",
"31771b069a18e8f1ded97b17020885514f1a483ade43857fceaadcbe683fbc7c",
"df5989e22086aa2ba31aae1e0a65cafa06712b467417c657fb7f70a6de9338a8",
"0ba8635fad2d720e153d645d3e60fa300ba8cb1235b6dde6c2901807d2a615d1",
"35adaeb8f49854071be21338abb8489e109a4e51b73d092c960db7464cd22ed5",
"58c440d49997d16fc93a80f24115b9034b18f71ca3829afb3ae4af88e1b4bea8",
"448e8f7d72a267ea8f237c19b8cc7478f43a0233ee50aeb8e8f90d791f213106",
"99d4d5f1b062dbd5d4f09dcf38b7386fdda4f544af4873f0ac8d713faeefc374",
"fc1a652dd0751df584ea1050148d1fb8251d5720346354e40838de78faf45931",
"d1819f073fdc68e2d581a8273e2dfe074509423552bd91890a6a7244792f6847",
"cad1a77feefeacc105a413229e28698d098e421ec2c8040a414ab97e9eb068b9",
"72daac0e17c534664109c18b7c711836728a8bbccc7e0b25af93fbbb92c4213a",
"e58a27bea63b19c355cf28fb0bec86222bd51d076e1aacfa391e44741face1ba",
"1b0ce0e3e1eb181e9a9cbe90348bd232029926d01062f563d0ef2228e71bd204",
"d1f13eb3c0d058487725badd959821e352030b11627359552d7cfb5ecab285ca",
"eb8af2968e52f6825506987c18aeb0b71d4e78fc8d733808e9d4c37ed1c096ad",
"372fc4c2e08f57a5effc7c8ce4f030d54df25ef0a320e94f1c2f076751656a1f",
"c4e4fc56b8ec3b6fb98be36ebe65541b5bd120e524a8bedacfb434372f2c576f",
"a0854078d297badb66bb0112d1db0c0d0bede55ecb405bdc98588414c6a5e446",
"f814dfc5262306d64e4a88b1d5c8e31c7f76fe95d1664c9d54d1aa7c6c8974bb",
"bfc15925208b303ca926bb91170f0344c1673a7c469a70ec5c081d6917d678b7",
"ecce0745e0ecf573b1e68bd3a18c4d24c337fd4fbec5df0535a5b0e031b1ec13",
"5475eb4794324e270b5d05c1b58afa518dd6c274fabb4777fd6d578c2ecdcea3",
"11f47ba2ee8f659e6b4e9c796a4aadacece7f73715eb665ca2adb89f5204a91e",
"8c1e64d1266e8ce1b52eb558cc076b873470df5cd78aa7eed9614616fbf82629",
"77d2ee0a9cca314648e7379c8f48793e03533365b6d7fd40546fc70fd85cf611",
"8195605fa9ceba81ef75ca9acdba570c1b86792243fb87c051cedb00c3319b57",
"270b4b2314af5d04990872ab223b8c02d5eac9273dcc15ed2f20255dd6c943fd",
"ce3f49c8dc50211e9da395597f3bcfad361ad977a0519a4b474799ef31f4d132",
"158f6fb7b6a0ec579e6790c21b4f0f906b60d24f2dd65153d3db58f85fb7c031",
"8869e690485ae0a3d424c1c1304fa848154081f88f18b9738d7e276c9390328a",
"6e5df52a73a6e052f47eedf534a7f048e53229042225f43a95d4a103364448fe",
"598f4e7203118da61dfc67a346f0be90f657d39d6aff95119561fde390976f0e",
"c801a5ddda5896ce4a32f58a19257b97f1ccd24b644c29208cf1ea157a618779",
"51654297ec0993518cf823381970fa93c8165ffd1854b35d47615060a7f55438",
"42951808dcea717f7b09dc90b6b9af6421df47e90ab7e350aeb332999a5bb13c",
"c7db6d0305cdf41490dd72d4a18d4a9cb942e425752882eca2ae1772644fb7f1",
"57173091401675b46e20132dc13ce3d36f37f692261532486f50521fc0be2441",
"9e38f53bb074c450f05fe6204521e8cb0813eb134a49487090e2bbcd5eeaa93e",
"ed9e1d6c72a34f5d323155448da983c4c2e8e55f6e6d07efdbe906093fa4c3f7",
"75546d33806bd7ae16ad2eadb4d3cb7f1af1a96647575b44b41cd01bb1827e9e",
"ed55193ba5f3747361bcab35e321c6db4f618a037f67611b16322d6f41b5d0c6",
"d8695c3bf71181ee199c6d684bf3624350c32ff5fc2515fb8535174882881743",
"054a83fbd9df3eb095a6df764b9b17d3834365d646d1975e77ea146067ee5905",
"a9fb8fb08d481448e9fe1439baa88f00bc77b4643a65b3534e59740654496705",
"6713b1d7cd37ac3eb61b0607a4111f56c9caa6a8d39052af4667157385b2fb7a",
"56a633e1cd7a489b03175a1fc5658ada31cc1fd4e3fa555841935c9837a6a144",
"e072424d5c85c381d8de50a9bda4d99d2c5a87f5485da0c756b60d1e32d0a521",
"d5dfe98aa43730181aca0fd06935fb4ed4cd10f0c5da65afe8ed127b1fcd6d9d",
"15a2710739618145ddc340ebc454d63296d1859b15b8c41c58938df8b1d4e803",
"f9e7c369fd585ca6ebee94683948e5d2de46da53f5a086912a8359b17d5ffef2",
"16cbdb1d3d6d6a17beda1d0b88eb1b35265825c468810b117bf064f10970eca2",
"c5d5caab59802343331665cca262725ae3bfec3b2dda91e5ebf9b253ee97034c",
"4df00181988210e056ac9a9490a38baade9db19ee476b9faa1584dccf1fe958e",
"52b425b73122e7bf496dd25a782e6fe9ee964efed9c9b1b27992c3cf17e6fcf1",
"aa85ca6179c6503ef8b22669023966fa75e628c6a23d75db2ca492145bbcfe71",
"7a9a3879cd72b5a556103340573102b5e4a1053f05b66060fbc6ae03e19a2fc9",
"2ea801ae44ab24f8b71ef697c2ab776c1c218b282ca84b9ad26b8c2499899de3",
"1c8745e8d79ed7af554c906fe4e0bf6eb4a188f2b47b463f035f62ddbbb194dc",
"dfc06bd9713e7700980b883bd20fc9d50496245a8a4d1839c27c6ba97838a251",
"adda320af7f4f47a7c0881fbbc9971cd250735e318a53d826d07ec0ae44e7113",
"17365da9048c00d154d4cc3fe58833e0630caadef125c98ecd0fe2a3190d8cb7",
"87c95d5bb2238f4f4b5b34b6027531e41e926991ff4a4f2047f0332733e700b9",
"cec3050b969a97db0f67393b701f29062025eaff3498b9eee8027414a81adaf0",
"5290c32e06e0d30c94f94cc12e56f8a4fc23a4c4125fb535e82a2b5c7d657bd7",
"821f640a048f3d101abc660b616857a2da1e12ad0f399b7f9969c60f10a2afb0",
"53c905a1245462a018b92869658b632528c40a9d585f369fd2c9de356ef317db",
"bb9b9564045e1b42871fb990f2cf8231b5270c112e43a2ba850529f3b784133e",
"3b8e0b253ee7978afdfc2891904ce339af6d213c7bcf2aa9c4d30f682ee50825",
"ca07d1ae8b66d005ba946540e4b971cb926917dfc0d3c1f95f1eb822adceee1a",
"8bc21ff4ee1bee4508d0a54271f5a014e6960bfb790061095fa13e13d2e61205",
"f1a604204d089601607b8bbdbdcbb6a520d68c19468ecfa541213f7a17e04eca",
"e1abe1eb437a6de04eb8825fdf576f1bbbbeee5768d5093082b13267d03957b3",
"10189cf0918d177a64d9ba5a70adcb8fa8a26d03e250dce3d07263476f2dbbec",
"3a01650da970ea59871616782972f5e15428657a7dc42119a2ca06138f7f2983",
"e89d843645b503c5a5352e06d6dddba208fd6ab7f63cc6ba3168f3e91c2bd136",
"5a10dcce898edc69750fd3edbfa6e048858dd5b63894aa70609b7229934a7001",
"fdcf4460011b5f4e9d7f53d67c4c9f51d436a2a572375ffa10accf529b741de7",
"8c336f268ee0a5ac137589dcfb723df6eea6999796d97fdc7f315c983c3931e6"

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Compare the two seed files
diff -u testdata/dind/050_randomized_order_small_seed0001.hashes.json \
        testdata/dind/050_randomized_order_small_seed0003.hashes.json

Repository: flyingrobots/echo

Length of output: 43


Regenerate seed0003 fixture with actual different randomization.
The hashes are identical to seed0001, gutting the entire point of seed-parameterized randomized-order tests. Either the fixture generation consumed the wrong seed, or the RNG is broken. Regenerate with seed0003 actually wired through to produce demonstrably different iteration orders.

🤖 Prompt for AI Agents
In `@testdata/dind/050_randomized_order_small_seed0003.hashes.json` around lines 6-207, The fixture file
testdata/dind/050_randomized_order_small_seed0003.hashes.json contains the same
hash ordering as seed0001; regenerate the seed0003 fixture by re-running the
fixture generator with seed "0003" (or the intended numeric seed) so the RNG is
actually seeded differently, ensure the generator accepts and applies the seed
parameter (wire the seed through the generator invocation you use to produce
050_randomized_order_small_* fixtures), and verify the new file's "hashes_hex"
order differs from seed0001 before committing.
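A quick pre-commit guard along these lines could catch the regression; this is a standalone sketch with illustrative file contents, not the repository's actual fixture paths or tooling:

```shell
# Hypothetical guard: fail if two seed fixtures have identical contents.
# The temp files stand in for the real testdata/dind/*.hashes.json files.
a=$(mktemp); b=$(mktemp)
printf '{"hashes_hex":["aa","bb"]}\n' > "$a"
printf '{"hashes_hex":["bb","aa"]}\n' > "$b"
if diff -q "$a" "$b" >/dev/null; then result=identical; else result=different; fi
rm -f "$a" "$b"
echo "seed fixtures are $result"
```

With genuinely different seeds wired through the generator, the real fixtures should likewise report `different`.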

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@README.md`:
- Line 67: Update the README entry for "WSC (Write-Streaming Columnar) snapshot
format (`wsc::*`)" to mention both deterministic serialization (the
write/streaming side) and zero-copy mmap deserialization (the read side);
specifically, change the short description to state that WSC implements
deterministic, write-streaming serialization for reproducible snapshots and
zero-copy mmap deserialization for fast reads so readers understand both write
and read capabilities.

- deterministic scheduler (radix drain ordering + footprint independence checks)
- snapshot hashing (`state_root`) + commit hashing (`commit_id`)
- deterministic math + PRNG (`math::{Vec3, Mat4, Quat, Prng}`)
- WSC (Write-Streaming Columnar) snapshot format (`wsc::*`) for zero-copy mmap access

⚠️ Potential issue | 🟠 Major

Incomplete feature description—you're selling WSC short.

The line mentions "zero-copy mmap access" (read side) but completely omits the write side. Per your own PR objectives, WSC implements deterministic serialization (write) and zero-copy deserialization (read). The acronym literally starts with "Write-Streaming" yet your description says nothing about writing.

User-facing documentation demands completeness. Either mention both capabilities or you're misleading readers about what WSC actually does.

📝 Proposed fix
-  - WSC (Write-Streaming Columnar) snapshot format (`wsc::*`) for zero-copy mmap access
+  - WSC snapshot format (`wsc::*`) for deterministic serialization and zero-copy mmap deserialization

Or if you want to keep the expansion:

-  - WSC (Write-Streaming Columnar) snapshot format (`wsc::*`) for zero-copy mmap access
+  - deterministic WSC (Write-Streaming Columnar) snapshot format (`wsc::*`) with zero-copy mmap deserialization
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- WSC (Write-Streaming Columnar) snapshot format (`wsc::*`) for zero-copy mmap access
- WSC snapshot format (`wsc::*`) for deterministic serialization and zero-copy mmap deserialization
🤖 Prompt for AI Agents
In `@README.md` at line 67, Update the README entry for "WSC (Write-Streaming
Columnar) snapshot format (`wsc::*`)" to mention both deterministic
serialization (the write/streaming side) and zero-copy mmap deserialization (the
read side); specifically, change the short description to state that WSC
implements deterministic, write-streaming serialization for reproducible
snapshots and zero-copy mmap deserialization for fast reads so readers
understand both write and read capabilities.

flyingrobots and others added 5 commits January 17, 2026 00:57
Previously, accessors like `out_edges_for_node()` used `.get().unwrap_or(&[])`
which silently returned empty slices for corrupted index ranges. This meant
`validate_wsc()` would pass validation while silently dropping data.

Changes:
- Add `IndexRangeOutOfBounds` error variant to `ReadError`
- Add `WarpView::validate_index_ranges()` for explicit range checking
- Call this validation early in `validate_warp_view()`
- Add tests for detecting corrupted out_index and node_atts_index ranges

Fixes the P2 issue where corrupted WSC files could pass validation while
yielding incomplete graphs at read time.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
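The shape of the range check this commit describes can be sketched in isolation; the function name and error type here are standalone stand-ins, not the crate's actual `validate_index_ranges` signature:

```rust
// Standalone sketch: reject an index range that falls outside the backing
// column instead of silently clamping to an empty slice, which is the
// failure mode the commit above fixes.
fn check_range(start: usize, end: usize, len: usize) -> Result<(), String> {
    if start > end || end > len {
        return Err(format!("index range {start}..{end} out of bounds (len {len})"));
    }
    Ok(())
}
```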
The double backslash in single-quoted grep pattern matched literal
backslash + any char, not a literal dot. Use single backslash for ERE.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
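The difference is easy to confirm directly: inside single quotes the shell passes backslashes through untouched, so `\.` reaches grep as the ERE escape for a literal dot, while `\\.` reaches it as a literal backslash followed by any character. Filenames below are illustrative:

```shell
# '\.generated\.rs$' matches literal dots; the doubled-backslash form would not.
printf 'mod.generated.rs\n'  | grep -qE '\.generated\.rs$' && dot=match || dot=nomatch
printf 'modXgeneratedXrs\n' | grep -qE '\.generated\.rs$' && any=match || any=nomatch
echo "literal-dot file: $dot, X file: $any"
```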
Replace manual offset arithmetic and bytemuck::from_bytes with
read_slice helper which handles alignment errors safely via
try_cast_slice.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The generated files moved from src/generated/*.rs to src/*.generated.rs.
Update all six bootstrap scripts to reference the new locations.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add proper H1 heading at top of document
- Move Rust code into fenced code blocks
- Replace hard tabs with spaces
- Fix ordered list prefixes to be sequential

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@crates/warp-core/src/wsc/validate.rs`:
- Around line 79-90: The bounds check currently casts out_edge.edge_ix() to
usize before comparing to edges.len(), which can truncate large u64 indices on
32-bit targets; modify validate logic in the loop using
view.out_edges_for_node(node_ix) so you first compare out_edge.edge_ix() (u64)
against edges.len() as u64, and only cast to usize after the check succeeds;
update the ReadError::SectionOutOfBounds offset/length values to use the
original u64 index when returning the error.
♻️ Duplicate comments (7)
crates/warp-core/src/wsc/validate.rs (4)

48-61: Ordering checks are no-ops — validation is lying.

You detect ordering violations and then silently do nothing. That defeats the entire purpose of validation. Return an error (preferably a dedicated variant like ReadError::OrderingViolation).

🐛 Proposed fix (add a real error on ordering violation)
     // Validate node ordering (must be sorted by node_id for binary search)
     for window in nodes.windows(2) {
         if window[0].node_id >= window[1].node_id {
-            // Note: We could add a specific error type for ordering violations
-            // For now, we just document this as a validation requirement
+            return Err(ReadError::OrderingViolation {
+                kind: "node",
+                previous: window[0].node_id,
+                current: window[1].node_id,
+            });
         }
     }

     // Validate edge ordering (must be sorted by edge_id for binary search)
     for window in edges.windows(2) {
         if window[0].edge_id >= window[1].edge_id {
-            // Note: Could add specific error for ordering violations
+            return Err(ReadError::OrderingViolation {
+                kind: "edge",
+                previous: window[0].edge_id,
+                current: window[1].edge_id,
+            });
         }
     }

63-76: Magic-number attachment index will collide.

warp_index * 1000 + … silently collides on large graphs and produces garbage diagnostics. Use a monotonically increasing counter or include node/edge indices.

💡 Safer index computation
-    // Validate node attachments
+    let mut att_index = 0usize;
+    // Validate node attachments
     for (node_ix, _node) in nodes.iter().enumerate() {
         let atts = view.node_attachments(node_ix);
         for (att_ix, att) in atts.iter().enumerate() {
-            validate_attachment(att, view.blobs().len(), warp_index * 1000 + att_ix)?;
+            validate_attachment(att, view.blobs().len(), att_index)?;
+            att_index += 1;
         }
     }

     // Validate edge attachments
     for (edge_ix, _edge) in edges.iter().enumerate() {
         let atts = view.edge_attachments(edge_ix);
         for (att_ix, att) in atts.iter().enumerate() {
-            validate_attachment(att, view.blobs().len(), warp_index * 1000 + 500 + att_ix)?;
+            validate_attachment(att, view.blobs().len(), att_index)?;
+            att_index += 1;
         }
     }

108-111: Reserved-byte check is a no-op.

If non-zero reserved bytes matter enough to check, they matter enough to reject or warn. Silent acceptance defeats forward-compat checks.

🐛 Proposed strict validation
     // Validate reserved bytes are zero
     if att.reserved0 != [0u8; 7] {
-        // Could add a specific error, but for now we're lenient on reserved fields
+        return Err(ReadError::NonZeroReservedBytes {
+            index,
+            value: att.reserved0,
+        });
     }

131-300: Negative validation cases still missing.

Coverage still skips ordering violations, reserved-byte violations, invalid attachment tags, and out_edges out-of-bounds. Those are exactly the cases that protect against silent corruption.

crates/warp-core/src/wsc/read.rs (2)

132-159: bytemuck::from_bytes can panic on misalignment.

You already have ReadError::Alignment; use try_from_bytes so malformed/misaligned input returns a proper error instead of panicking.

🐛 Proposed fix (alignment-safe header read)
-    // Use bytemuck for safe transmutation
-    let header: &WscHeader = bytemuck::from_bytes(&data[..header_size]);
+    // Use bytemuck for safe transmutation
+    let header: &WscHeader = bytemuck::try_from_bytes(&data[..header_size])?;

228-255: Add a misalignment regression test.

Once the alignment fix lands, add a test that feeds a valid header at an odd offset and asserts ReadError::Alignment instead of panicking.
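The underlying check is small; a standalone sketch of alignment-aware validation using plain `std` (not the crate's bytemuck-based code) looks like:

```rust
// Standalone sketch: verify a byte slice is aligned and large enough for a
// type T before any cast, returning an error instead of panicking.
fn check_aligned<T>(bytes: &[u8]) -> Result<(), &'static str> {
    if (bytes.as_ptr() as usize) % std::mem::align_of::<T>() != 0 {
        return Err("misaligned");
    }
    if bytes.len() < std::mem::size_of::<T>() {
        return Err("too short");
    }
    Ok(())
}
```

A regression test would feed a slice at an odd offset into a function like this and assert the error path, never a panic.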

crates/warp-core/src/wsc/view.rs (1)

96-132: warp_view still does unsafe manual parsing and misaligned casts.

You’re reimplementing slice bounds, doing unchecked usize arithmetic, and calling bytemuck::from_bytes() which can panic on misalignment. The existing read_slice() already handles alignment and bounds safely. Use it.

🔧 Proposed fix (reuse read_slice)
-    #[allow(clippy::cast_possible_truncation)] // We bounds-check before casting
     pub fn warp_view(&self, index: usize) -> Result<WarpView<'_>, ReadError> {
-        let count = self.warp_count();
-        if index >= count {
-            return Err(ReadError::WarpIndexOutOfBounds { index, count });
-        }
-
-        // Read directory entry
-        let dir_off = self.header().warp_dir_off() as usize;
-        let entry_size = std::mem::size_of::<WarpDirEntry>();
-        let entry_off = dir_off + index * entry_size;
-
-        if entry_off + entry_size > self.data.len() {
-            return Err(ReadError::SectionOutOfBounds {
-                name: "warp_directory",
-                offset: entry_off as u64,
-                length: entry_size as u64,
-                file_size: self.data.len(),
-            });
-        }
-
-        // Use bytemuck for safe transmutation
-        let entry: &WarpDirEntry =
-            bytemuck::from_bytes(&self.data[entry_off..entry_off + entry_size]);
-
-        WarpView::new(&self.data, entry)
+        let entries = read_slice::<WarpDirEntry>(
+            &self.data,
+            self.header().warp_dir_off(),
+            self.header().warp_count(),
+            "warp_directory",
+        )?;
+
+        let entry = entries.get(index).ok_or(ReadError::WarpIndexOutOfBounds {
+            index,
+            count: entries.len(),
+        })?;
+
+        WarpView::new(&self.data, entry)
     }

flyingrobots and others added 10 commits January 17, 2026 01:12
Replace byte-by-byte loop with buf.resize() for clarity and efficiency.
Behavior is preserved: pads buffer to next multiple of alignment.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
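As a sketch, the padding helper (name taken from the commit message; the real implementation lives in `wsc/build.rs`) reduces to one `resize` call:

```rust
// Pad a buffer with zero bytes up to the next multiple of 8.
fn align8_vec(v: &mut Vec<u8>) {
    let target = (v.len() + 7) / 8 * 8;
    v.resize(target, 0);
}
```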
Replace manual offset arithmetic and bytemuck::from_bytes with
read_slice helper which handles alignment errors safely via
try_cast_slice.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The canonical_state_hash() uses DIND_STATE_HASH_V2 domain header.
Update all fixture JSONs to match.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The ordering validation loops detected but ignored violations.
Add OrderingViolation error and return it when IDs are not sorted.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The fixture contained 27 identical hash entries due to a generation bug.
Fix generator and regenerate with properly varied hashes.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The ordering validation loops detected but ignored violations.
Add OrderingViolation error and return it when IDs are not sorted.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Also adds dead_code and clippy::upper_case_acronyms allows to generated
files since private modules trigger dead_code warnings for unused items.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Tests for:
- Unordered nodes (OrderingViolation)
- Invalid attachment tag (InvalidAttachmentTag)
- Non-zero reserved bytes (NonZeroReservedBytes)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The codecs module was made private to avoid missing_docs lint,
but SCHEMA_HASH is needed by echo-dind-harness. Re-export it
at the crate root.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 14

🤖 Fix all issues with AI agents
In @.githooks/pre-commit:
- Around line 5-10: The hook defines STAGED="$(git diff --cached --name-only)"
but the PRNG coupling guard still calls git diff directly; change the guard that
currently does git diff --cached --name-only | grep -qx "$PRNG_FILE" to use the
STAGED variable instead (e.g., echo "$STAGED" | grep -qx "$PRNG_FILE"), and
audit other checks that should reuse STAGED so the pre-commit uses the cached
list consistently (referencing STAGED and PRNG_FILE to locate the code to
update).

In `@crates/echo-dind-tests/src/lib.rs`:
- Line 12: The public re-export pub use codecs::SCHEMA_HASH is missing rustdoc
which triggers the missing_docs CI check; add a short /// doc comment
immediately above this re-export explaining what SCHEMA_HASH represents (e.g.,
its purpose, format, and where it comes from in codecs), and include any usage
notes or references to the original definition if helpful so the public API is
documented.

In `@crates/warp-core/src/wsc/build.rs`:
- Around line 198-203: The align8_vec function currently pads by pushing zeros
in a while loop; replace that with a single resize call: compute the target
length as the next multiple of 8 based on v.len() (e.g., ((v.len() + 7) / 8) *
8) and call v.resize(target_len, 0) to append zeros efficiently and
idiomatically in the align8_vec helper.
- Around line 91-103: The loop over `bucket` silently drops edges in release
builds because `debug_assert!` does not run and the `if let Some(&ix) =
edge_ix.get(&e.id)` branch simply skips missing entries; replace this with a
loud failure by calling `edge_ix.get(&e.id).expect(...)` (or match and return a
Result) when constructing `OutEdgeRef` so missing `e.id` panics with a clear
message; update the loop to use `let &ix =
edge_ix.get(&e.id).expect(&format!("missing edge_ix entry for edge id {:?}",
e.id)); out_edges.push(OutEdgeRef { edge_ix_le: ix.to_le(), edge_id: e.id.0 });`
(or alternatively propagate an error from this function) so invariants are
enforced in release builds.

In `@crates/warp-core/src/wsc/write.rs`:
- Line 229: The debug-only check using debug_assert_eq!(buf.len(), total_size)
can let size mismatches silently produce corrupt files in release builds; change
this to a runtime check: replace the debug_assert_eq! with a hard assertion or,
better, return an error when buf.len() != total_size so callers can handle it.
Locate the debug_assert_eq!(buf.len(), total_size) in the write logic (the code
that computes total_size and writes buf) and either use assert_eq!(buf.len(),
total_size) or propagate a Result::Err (create and return a descriptive error)
from the surrounding function when the lengths differ.

In `@docs/WARP-GRAPH.md`:
- Line 633: The markdown file uses 4-space indented code blocks (examples
include the block containing "let node_rows: Vec<NodeRow> = nodes") which
triggers MD046; replace each indented block with a fenced code block using
triple backticks and an appropriate language tag (e.g., ```rust) to wrap the
same code content. Locate occurrences like the "let node_rows: Vec<NodeRow> =
nodes" snippet (and other similar indented snippets at lines called out in the
review) and convert them to fenced code blocks, ensuring you keep the exact code
text and add the language identifier for proper highlighting.
- Line 820: The ordered list prefix "0)" is invalid Markdown and causes
inconsistent numbering; update the list starting with the token "0)" (the "0)
Minimal ident.rs upgrades (recommended)" item) and all similarly formatted items
("2)", "3)", etc.) to use a consistent ordered-list style—either numeric "1.",
"2.", "3." sequence or the auto-numbering style where every item is prefixed
"1."—so the document conforms to markdownlint MD029 and renders consistently;
scan the document for all occurrences of "0)" and other numeric-with-parenthesis
prefixes and normalize them to the chosen style.
- Around line 1-6: Remove the duplicated SPDX and copyright HTML comment block
that appears a second time in the document (the repeated "<!--
SPDX-License-Identifier: Apache-2.0 OR MIND-UCAL-1.0 -->" and copyright line).
Keep the first occurrence at the top and delete the later duplicate so the
header appears only once; verify the top block (first HTML comment pair) remains
unchanged.

In `@scripts/bootstrap_dense_rewrite.mjs`:
- Around line 105-107: The thrown Error message is referencing the old filename
"codecs.rs" while the script reads "codecs.generated.rs"; update the error text
in the block that reads into the codecs variable and checks match (the const
codecs = fs.readFileSync(...) and the if (!match) throw new Error(...) lines) to
mention "codecs.generated.rs" (or a generic "codecs.generated.rs or expected
filename") so the error accurately reflects the source being read.

In `@scripts/bootstrap_dind_log.mjs`:
- Around line 66-68: The regex used to extract SCHEMA_HASH from the codecs file
is inconsistent with other bootstrap scripts; update the pattern used in this
file so it matches the sibling scripts' regex (/pub const SCHEMA_HASH: &str =
"([0-9a-fA-F]+)";/) by replacing the current
/SCHEMA_HASH:\s*&str\s*=\s*"([0-9a-fA-F]+)"/ with that exact pattern where you
construct match from codecs (the code referencing codecs.match and the
subsequent match check should remain unchanged).

In `@scripts/bootstrap_error_determinism.mjs`:
- Around line 89-91: The error message thrown when the regex fails references
the wrong filename; update the thrown Error in the block that reads into the
codecs variable and checks match (the const codecs = fs.readFileSync(...) and if
(!match) throw new Error(...)) so the message names "codecs.generated.rs"
instead of "codecs.rs" to match the actual file being read and stay consistent
with other bootstrap scripts.

In `@scripts/bootstrap_randomized_convergent.mjs`:
- Around line 106-108: The error message thrown when SCHEMA_HASH isn't found
references the stale filename "codecs.rs" and there's duplicated extraction
logic; create a shared helper module (e.g., exports from
scripts/lib/schema_hash.mjs) that exposes a function like
getSchemaHash(filePath) which: reads the given file (used by
bootstrap_randomized_convergent.mjs), applies the regex /pub const SCHEMA_HASH:
&str = "([0-9a-fA-F]+)";/ to extract and validate a 64-hex hash, and throws a
clear error that includes the actual file name (e.g., "Could not find
SCHEMA_HASH in <filePath>"); then replace the inline code that sets codecs,
match, and the throw in bootstrap_randomized_convergent.mjs with a call to
getSchemaHash("crates/echo-dind-tests/src/codecs.generated.rs").

In `@scripts/bootstrap_randomized_order.mjs`:
- Around line 98-100: The error message thrown when the regex match fails still
refers to the old filename "codecs.rs"; update the thrown Error at the match
check (the new Error created when `match` is falsy) to reference
"codecs.generated.rs" so the message matches the actual file read by the
`codecs` variable and aids debugging.

In `@testdata/dind/060_math_determinism.hashes.json`:
- Line 4: The file's metadata field hash_domain is incorrectly set to
"DIND_STATE_HASH_V1" despite the hashes in hashes_hex being regenerated with V2;
update the hash_domain value to "DIND_STATE_HASH_V2" so the metadata matches the
regenerated hashes (verify the hashes_hex array remains unchanged) — look for
the literal "hash_domain" and the "hashes_hex" array in this JSON and change
only the domain string to DIND_STATE_HASH_V2.
♻️ Duplicate comments (3)
testdata/dind/030_error_determinism.hashes.json (1)

4-13: LGTM. V2 migration complete.

The hash_domain is now correctly set to DIND_STATE_HASH_V2, and the hashes_hex array has been regenerated accordingly. This addresses the prior review concern about the V1/V2 mismatch.

crates/warp-core/src/wsc/validate.rs (1)

84-98: Truncation before bounds validation remains a 32-bit time bomb.

Line 88 casts out_edge.edge_ix() (a u64) to usize BEFORE comparing against edges.len(). On 32-bit platforms, a malicious file with edge_ix = 0x0000_0001_0000_0000 truncates to 0 and passes validation, pointing to the wrong edge.

A past review flagged this exact issue. The fix was proposed but not applied.

🐛 Proposed fix: Compare as u64 first
         for out_edge in out_edges {
-            let edge_ix = out_edge.edge_ix() as usize;
-            if edge_ix >= edges.len() {
+            let edge_ix = out_edge.edge_ix();
+            if edge_ix >= edges.len() as u64 {
                 return Err(ReadError::SectionOutOfBounds {
                     name: "out_edge reference",
-                    offset: edge_ix as u64,
+                    offset: edge_ix,
                     length: 1,
                     file_size: edges.len(),
                 });
             }
         }
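The order-of-operations bug generalizes beyond this file; a standalone sketch of a truncation-safe index check (stand-in names, not the crate's types):

```rust
// Compare the u64 index against the length widened to u64 first; only cast
// to usize once the bound is known to hold, so a 32-bit target cannot
// truncate a large index into a small, spuriously "valid" one.
fn checked_index(ix: u64, len: usize) -> Option<usize> {
    if ix < len as u64 {
        Some(ix as usize) // safe: ix < len <= usize::MAX
    } else {
        None
    }
}
```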
docs/WARP-GRAPH.md (1)

586-601: Unfenced Rust code triggers a cascade of false-positive markdown lint errors.

Lines 588-601 contain raw Rust code (starting with #[repr(transparent)]) that isn't wrapped in a fenced code block. Markdownlint interprets #[ as malformed ATX headings (MD018). This pattern repeats throughout the document (lines 620+, 740+, etc.).

Wrap ALL Rust code examples in triple-backtick fences with the rust language identifier. The static analysis is screaming at you because you didn't.

flyingrobots and others added 2 commits January 17, 2026 03:19
On 32-bit targets, casting u64 to usize truncates large values,
which could cause invalid edge indices to pass bounds checks.

- Compare edge_ix (u64) against edges.len() as u64 first
- Use original u64 value in error messages
- Add regression test with index > u32::MAX

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace warp_index * 1000 + att_ix formula with a simple monotonically
increasing counter. The old scheme could produce colliding indices on
large graphs and garbage diagnostics.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@crates/warp-core/src/wsc/validate.rs`:
- Around line 313-344: Add a new unit test validate_rejects_unordered_edges that
mirrors validate_rejects_unordered_nodes but constructs OneWarpInput with two
EdgeRow entries out of order; call write_wsc_one_warp, parse with
WscFile::from_bytes, run validate_wsc and assert the returned error matches
ReadError::OrderingViolation { kind: "edge", .. } to cover the edge-ordering
validation path (use the same helpers/types: OneWarpInput, EdgeRow,
write_wsc_one_warp, WscFile::from_bytes, validate_wsc).
- Around line 197-231: Add a negative unit test named
validate_rejects_blob_out_of_bounds in the same test module that constructs a
OneWarpInput (use NodeRow, Range and AttRow) where node_atts contains an AttRow
with blob_off_le set to 0 and blob_len_le set to a value larger than the
provided blobs vec, then call write_wsc_one_warp, parse with WscFile::from_bytes
and assert that validate_wsc(&file) returns an Err matching
ReadError::BlobOutOfBounds; this exercises the existing blob bounds check in
validate_wsc.
- Around line 36-39: The parameter _warp_index in function validate_warp_view is
unused; remove the dead parameter from the function signature (change
validate_warp_view(view: &super::view::WarpView<'_>) -> Result<(), ReadError>)
and update every call site that currently passes a warp_index to call the new
single-argument form; if you intentionally want to reserve it, instead add a clear
comment above validate_warp_view explaining it's reserved for future use and
keep the underscore prefix to avoid warnings, but prefer removing the parameter
to eliminate dead code.

flyingrobots and others added 2 commits January 17, 2026 03:41
- Add pre-push-parallel: runs fmt/clippy/tests/rustdoc concurrently
  using separate target dirs (11s warm vs ~60s sequential)
- Add sweep-stale-artifacts.sh: rm -rf directories >14 days old
- Hook sweep into pre-commit on odd hours to prevent target/ bloat
- Update .gitignore for parallel target dirs

Usage:
  ./scripts/sweep-stale-artifacts.sh [days]  # manual sweep
  ./.githooks/pre-push-parallel              # parallel checks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
External dependencies rarely change and are expensive to recompile.
Only sweep target/*/incremental/ which grows from our code changes.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🤖 Fix all issues with AI agents
In @.githooks/pre-push-parallel:
- Around line 22-32: The pre-push hook is inconsistent: run_rustdoc uses cargo
+"$PINNED" but run_fmt, run_clippy, and run_tests call cargo directly; update
run_fmt, run_clippy, and run_tests to invoke cargo with the pinned toolchain
(add +"$PINNED" before cargo) so all stages use the same toolchain as
run_rustdoc (or alternatively add a short comment explaining why rustdoc is
special if you intentionally want different behavior).
- Around line 18-19: The current trap only removes LOGDIR but doesn't stop the
background cargo processes; update the trap for LOGDIR to also handle interrupts
(SIGINT/SIGTERM) and kill any child/background jobs before cleanup so Ctrl+C
stops the five cargo processes. Locate the LOGDIR and trap usage, and change the
trap command used with LOGDIR to first kill child processes (e.g., via jobs -p |
xargs -r kill or pkill -P $$) and then remove "$LOGDIR", ensuring the trap is
registered for EXIT, INT and TERM to cover interrupts.
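The suggested pattern, sketched standalone (GNU-style `xargs -r` assumed; the log directory and the background `sleep` are illustrative stand-ins for the hook's cargo stages):

```shell
LOGDIR=$(mktemp -d)
cleanup() {
  # Kill any still-running background jobs, then drop the log dir.
  jobs -p | xargs -r kill 2>/dev/null
  rm -rf "$LOGDIR"
}
trap cleanup EXIT INT TERM
sleep 30 &   # stand-in for a background cargo stage
cleanup      # invoked directly here for demonstration; normally the trap fires it
```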
- Around line 9-14: The script currently verifies commands but not that the
pinned Rust toolchain exists, so before invoking cargo +"$PINNED" (used by
run_rustdoc) add a check that the $PINNED toolchain is installed (e.g. run
`rustup toolchain list` and grep for the exact "$PINNED" entry); if it's not
found, print a clear error like "[pre-push] missing toolchain: $PINNED" to
stderr and exit 1 to avoid rustup auto-downloads or confusing runtime errors.
- Around line 34-43: run_rustdoc currently collects command output into a string
and always exits with 0, so cargo doc failures are ignored; modify run_rustdoc
(the function) to capture each cargo invocation's exit status (e.g., check $?
immediately after the cargo +"$PINNED" doc call), accumulate both the per-crate
output and the highest-precedence nonzero exit code (store in a local RUSTDOC_RC
variable), and at the end echo the collected output and return that exit code
(use `return` or `exit` with RUSTDOC_RC) so callers see failures; if you still
need the "[rustdoc] ${krate}" prefixes, append them to the collected output per
crate before updating RUSTDOC_RC.
- Line 4: Add failing-fast behavior to the hook by enabling "set -e" (so the
script exits on any command error) or, if you prefer manual error handling,
validate the mktemp call that sets LOGDIR: after the mktemp invocation check
that LOGDIR is non-empty and mkdir/writeable before using redirects like >
"$LOGDIR/fmt.log"; update the top-level shell flags near set -uo pipefail to
include -e or add an explicit guard that exits with a clear error if LOGDIR is
empty or mktemp fails.
- Around line 45-59: The run_patterns function currently suppresses ripgrep
output by redirecting rg to /dev/null, so callers only see generic "FAIL"
messages; update the two rg invocations in run_patterns (the one checking
"#![allow(...missing_docs...)]" and the one checking
"\#\[unsafe\(no_mangle\)\]") to preserve ripgrep's matched file/line output —
either remove the ">/dev/null 2>&1" redirection or capture rg's stdout/stderr
and print it when rg exits non-zero — so the failing file paths and line numbers
are visible alongside the existing failure messages.

In `@scripts/sweep-stale-artifacts.sh`:
- Around line 20-22: The current SIZE assignment can fail under set -euo
pipefail if du or cut errors; make size-reporting best-effort so the sweep
continues. Replace the SIZE assignment with a safe command substitution that
falls back on a default (e.g. "unknown" or "-") on error, e.g. use SIZE=$( { du
-sh "$subdir" 2>/dev/null | cut -f1; } || echo "unknown") so failures from
du/cut won't abort the script, then continue to echo "rm -rf $subdir ($SIZE)"
and run rm -rf "$subdir".
♻️ Duplicate comments (1)
.githooks/pre-commit (1)

12-18: STAGED still ignored in PRNG guard.
You introduced STAGED, but Line 17 still shells out again. Use the cached list for consistency.

♻️ Minimal fix
-if git diff --cached --name-only | grep -qx "$PRNG_FILE"; then
+if echo "$STAGED" | grep -qx "$PRNG_FILE"; then

flyingrobots and others added 5 commits January 17, 2026 03:57
- pre-push now alternates between sequential and parallel variants
- Logs timing to .githooks/timing.jsonl (gitignored)
- Add plot-prepush-timing.mjs for ASCII chart visualization

Usage:
  # Just push normally, it auto-alternates and logs
  git push

  # View timing comparison
  node scripts/plot-prepush-timing.mjs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Median is more robust to outliers in timing data.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Timing data is meaningless if no Rust compilation occurred.
Now checks for .rs/.toml changes before logging to JSONL.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Check for "Compiling" in output instead of file changes.
This correctly handles cached vs uncached builds.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- pre-push-parallel: use pinned toolchain for all stages (fmt, clippy, tests)
- pre-push-parallel: trap kills background jobs on EXIT/INT/TERM
- pre-push-parallel: verify toolchain exists before using cargo +"$PINNED"
- pre-push-parallel: run_rustdoc now propagates cargo doc exit codes
- pre-push-parallel: enable set -e for fail-fast on mktemp failure
- pre-push-parallel: run_patterns shows rg match output, not just "FAIL"
- pre-push-parallel: add run_determinism stage (ban-nondeterminism.sh)
- sweep-stale-artifacts: SIZE assignment uses fallback on du failure
- pre-commit: use cached $STAGED variable in PRNG guard
- allowlist: add wsc/mod.rs and wsc/view.rs for std::fs usage
- tests: add tests/hooks/test_hook_issues.sh regression tests

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
scripts/bootstrap_math_determinism.mjs (1)

23-39: File descriptor leak on write failure.

If any fs.writeSync call throws (disk full, permissions revoked mid-write, cosmic ray flipping a bit in your NVMe controller), the file descriptor is never closed. Yes, the OS will clean up when the process exits, but that's the "my room is clean because I'll die eventually" school of resource management.

🛠️ Suggested fix with try/finally
 function writeLog(schemaHashHex, frames) {
     const fd = fs.openSync(OUT_PATH, "w");
-    fs.writeSync(fd, MAGIC);
-    fs.writeSync(fd, VERSION);
-    fs.writeSync(fd, FLAGS);
-    fs.writeSync(fd, Buffer.from(schemaHashHex, "hex"));
-    fs.writeSync(fd, RESERVED);
-
-    for (const frame of frames) {
-        const len = Buffer.alloc(4);
-        len.writeUInt32LE(frame.byteLength);
-        fs.writeSync(fd, len);
-        fs.writeSync(fd, frame);
+    try {
+        fs.writeSync(fd, MAGIC);
+        fs.writeSync(fd, VERSION);
+        fs.writeSync(fd, FLAGS);
+        fs.writeSync(fd, Buffer.from(schemaHashHex, "hex"));
+        fs.writeSync(fd, RESERVED);
+
+        for (const frame of frames) {
+            const len = Buffer.alloc(4);
+            len.writeUInt32LE(frame.byteLength);
+            fs.writeSync(fd, len);
+            fs.writeSync(fd, frame);
+        }
+    } finally {
+        fs.closeSync(fd);
     }
-    fs.closeSync(fd);
     console.log(`Wrote ${OUT_PATH}`);
 }
🤖 Fix all issues with AI agents
In @.githooks/pre-push:
- Around line 31-35: The hook currently runs "$SCRIPT" in a pipeline under set
-euo pipefail so a failure causes the shell to exit before RC=${PIPESTATUS[0]}
runs; wrap the pipeline that sets START, runs "$SCRIPT" | tee "$OUTFILE", and
captures RC so the errexit behavior is disabled only for that section (e.g., use
"set +e" before START/PIPE, run "$SCRIPT" 2>&1 | tee "$OUTFILE" and then set
RC=${PIPESTATUS[0]}, then restore "set -e" or "set -euo pipefail"); ensure you
still record START and END timestamps and preserve variables START, RC, OUTFILE,
PIPESTATUS and END as in the original block.
- Around line 32-41: The timing uses GNU-only date (%s.%N) for START and END
which breaks on macOS; replace the START=$(date +%s.%N) and END=$(date +%s.%N)
assignments with a portable timer call (e.g. use python3 to print epoch seconds
with fractional part) so START and END are set via python3 (or another portable
fallback) while leaving the rest of the flow (running "$SCRIPT", capturing RC
from PIPESTATUS, checking OUTFILE for "Compiling", computing DURATION with bc,
and writing to LOGFILE with VARIANT and RC) unchanged; ensure the new START/END
values are plain numeric seconds (float) so the existing DURATION calculation
and JSON log appending continue to work.

In @.githooks/pre-push-parallel:
- Around line 112-117: The current waits (e.g., wait $PID_FMT, wait $PID_CLIPPY,
wait $PID_TESTS, wait $PID_RUSTDOC, wait $PID_PATTERNS, wait $PID_DETERMINISM)
will abort the script early if set -e is in effect; wrap the block of waits with
set +e before they start and restore set -e afterwards, then capture each wait
exit status into FMT_RC, CLIPPY_RC, TESTS_RC, RUSTDOC_RC, PATTERNS_RC,
DETERMINISM_RC as already written and update FAILED when any RC is non‑zero so
you still collect all results and print the summary without prematurely exiting.

In @.githooks/pre-push-sequential:
- Around line 18-25: The pre-push hooks run cargo fmt/clippy/test against the
active toolchain instead of the pinned one; update the three commands (cargo fmt
--all -- --check, cargo clippy --all-targets -- -D warnings -D missing_docs,
cargo test --workspace) to explicitly use the pinned toolchain by prefixing each
cargo invocation with +$PINNED (e.g., cargo +$PINNED fmt ...), matching how
rustdoc uses +"$PINNED" and ensuring consistency with the toolchain extracted
from rust-toolchain.toml.

In @.gitignore:
- Line 48: The .claude/ entry is incorrectly listed under the "# Playwright
artifacts" section in .gitignore; move the ".claude/" line out of that
Playwright block and place it under the "# Editor cruft" section instead (remove
the duplicate from Playwright), so references to the Claude Code artifacts are
correctly grouped with other editor cruft entries.
- Around line 13-16: The .gitignore has inconsistent blank-line grouping between
the "target-*" block and the subsequent entries; make the separation consistent
by adding or removing a blank line so that the group containing
".githooks/timing.jsonl" and "docs/.vitepress/cache" follows the same blank-line
convention as the "target-*" entries—i.e., either insert a blank line between
".githooks/timing.jsonl" and "docs/.vitepress/cache" or remove the blank line
after the "target-*" group to match the other grouping.
- Line 14: Remove the redundant comment line "Pre-push benchmark timing data"
from .gitignore; leave the existing `.githooks/timing.jsonl` entry intact since
the filename is self-documenting. Locate the comment just above
`.githooks/timing.jsonl` and delete that single comment line so the file only
contains the ignore pattern.
- Around line 9-12: Remove any proposed target-* wildcard replacement and keep
the explicit target-fmt, target-clippy, target-test, and target-doc entries
(they are intentionally used by CARGO_TARGET_DIR); also relocate the .claude/
entry out from under the "Playwright artifacts" comment into the "Editor cruft"
section next to .vscode, .idea, and .obsidian (or give it its own category) so
the intent is clear. Ensure comments above each group reflect their purpose
(Playwright artifacts vs Editor cruft) and do not collapse the explicitly named
target directories into a wildcard.

In `@crates/warp-core/src/wsc/build.rs`:
- Around line 27-171: Add a precondition check in build_one_warp_input to assert
the provided root_node_id exists in the GraphStore (e.g., call
store.node(root_node_id) or an equivalent contains/lookup method and
panic/assert if missing) before constructing OneWarpInput, and update the
function rustdoc to state that root_node_id must refer to an existing node in
the store; reference build_one_warp_input and OneWarpInput (and store.node /
GraphStore lookup) so the check is placed early in the function to fail fast on
invalid root IDs.

In `@crates/warp-core/src/wsc/validate.rs`:
- Around line 122-135: Non-ATOM attachments currently ignore blob_off/blob_len
and allow silent corruption; update the validation in validate.rs to check that
for att.is_atom() == false both att.blob_off() and att.blob_len() are zero and
return a new ReadError variant (e.g., BlobFieldsOnNonAtom or
NonAtomHasBlobFields) when they are non‑zero; add the new variant to the
ReadError enum and use it in the existing validation block (alongside the
existing BlobOutOfBounds error) so non‑ATOM DESCEND rows fail explicitly instead
of silently passing.
- Around line 41-63: validate_warp_view currently never checks that
view.root_node_id exists in the nodes array; add an explicit existence check
after the node ordering validation by using
nodes.binary_search_by_key(&view.root_node_id, |n| n.node_id) (or equivalent)
and return a descriptive ReadError (e.g., a new ReadError::MissingRoot { root:
view.root_node_id } or an existing appropriate variant) if the search fails;
update error type definitions if needed so callers can distinguish dangling root
errors.

In `@docs/WARP-GRAPH.md`:
- Around line 853-862: The fenced code block listing crate files in WARP-GRAPH
is missing a language tag; update the opening fence (the triple backticks before
the block containing mod.rs, types.rs, read.rs, view.rs, validate.rs, write.rs,
build.rs) to include a language identifier such as rust or text (e.g., change
``` to ```rust) so the block satisfies MD040 and improves syntax highlighting
and readability.

In `@scripts/bootstrap_math_determinism.mjs`:
- Around line 41-43: The OP_DROP_BALL and OP_ROUTE_PUSH constants are hardcoded
while SCHEMA_HASH is extracted dynamically; update
scripts/bootstrap_math_determinism.mjs to derive OP_DROP_BALL and OP_ROUTE_PUSH
from the same codecs.generated.rs parsing logic that produces SCHEMA_HASH (look
for the SCHEMA_HASH extraction code and mirror its approach to locate the
OP_DROP_BALL and OP_ROUTE_PUSH symbols), or if these opIds are truly stable, add
a clear comment next to OP_DROP_BALL and OP_ROUTE_PUSH stating they are
intentionally fixed and will not be regenerated so future maintainers understand
the exception.

In `@scripts/plot-prepush-timing.mjs`:
- Around line 11-21: The current code maps lines to JSON.parse without guarding
against empty or malformed JSONL (variables: lines, logfile, data), so replace
the single data = lines.map(...) with logic that first filters out empty strings
after trim().split('\n') and then iterates over each line, wrapping
JSON.parse(line) in a try/catch that on error prints a clear message including
logfile and the offending line and exits non-zero; build up data by pushing
parsed objects instead of using map so parse failures are handled per-line.

In `@scripts/sweep-stale-artifacts.sh`:
- Around line 8-10: The DAYS parameter in sweep-stale-artifacts.sh can be
non-numeric which breaks the later find -mtime call; validate DAYS after
assignment (DAYS="${1:-14}") using an integer check (e.g. a regex like ^[0-9]+$
or POSIX-safe test) and either default it to 14 or exit with a clear error if
invalid, ensuring the script won't call find -mtime with non-integer input;
reference the DAYS variable, the find -mtime usage, and the script's set -e
behavior when making the change.
♻️ Duplicate comments (2)
docs/WARP-GRAPH.md (2)

632-640: Indented code blocks still present — switch to fenced.

These trigger MD046 and render inconsistently. Convert to fenced blocks with a language tag (e.g., ```rust).

Also applies to: 747-754


849-851: Ordered list prefixes are inconsistent (MD029).

Normalize to a single style (1., 2., 3. or all 1.) across these sections.

Also applies to: 880-882, 911-913, 1082-1084, 1118-1120, 1212-1214

.gitignore Outdated
Comment on lines 13 to 16

# Pre-push benchmark timing data
.githooks/timing.jsonl
docs/.vitepress/cache

🧹 Nitpick | 🔵 Trivial

Inconsistent blank line usage.

Line 13 adds a blank separator after the target-* entries, but there's no corresponding blank line after .githooks/timing.jsonl (line 15) before docs/.vitepress/cache (line 16). Either both logical groups should be separated, or neither.

♻️ Proposed fix for consistency
 target-*

-# Pre-push benchmark timing data
 .githooks/timing.jsonl
+
 docs/.vitepress/cache

(Assuming the comment is removed per the previous suggestion.)


Comment on lines 11 to 21
let lines;
try {
lines = readFileSync(logfile, 'utf-8').trim().split('\n');
} catch (e) {
console.error(`No timing data yet. Run some pushes first!`);
console.error(`Expected: ${logfile}`);
process.exit(1);
}

const data = lines.map(l => JSON.parse(l));


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n scripts/plot-prepush-timing.mjs



Wrap JSON.parse() in an error handler: empty files and malformed JSONL lines will crash the script.

Line 20 executes JSON.parse() on every line with zero guards. An empty file produces [''] after trim().split('\n'), which throws SyntaxError on parse. Any malformed JSONL line crashes identically. The try-catch at lines 12–18 only protects file I/O, not parsing errors.

Replace the map with a loop that catches parse failures:

const data = [];
for (const line of lines) {
  try {
    data.push(JSON.parse(line));
  } catch (e) {
    console.error(`Malformed JSONL in ${logfile}:`, line);
    process.exit(1);
  }
}

Also filter empty lines after split to avoid parsing blank strings.


Comment on lines +8 to +10
DAYS="${1:-14}"

echo "🧹 Sweeping stale incremental cache (>${DAYS} days old)..."

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n scripts/sweep-stale-artifacts.sh



🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Test 1: find with non-numeric -mtime value"
find . -maxdepth 0 -mtime +"not_a_number" -print0 2>/dev/null
echo "Exit code: $?"



🏁 Script executed:

#!/bin/bash
echo "Test 2: find with non-numeric -mtime (showing stderr)"
find . -maxdepth 0 -mtime +"not_a_number" -print0
echo "Exit code: $?"



Add integer validation for DAYS parameter to prevent find failure.

Line 8 accepts arbitrary input. Non-numeric DAYS causes find -mtime to fail with exit code 1, aborting the script under set -e (line 6) despite the 2>/dev/null redirection on line 24. A simple integer guard prevents this.

🔧 Proposed fix
 DAYS="${1:-14}"
+if ! [[ "$DAYS" =~ ^[0-9]+$ ]]; then
+  echo "sweep-stale-artifacts: DAYS must be an integer number of days" >&2
+  exit 1
+fi

- Add USE_NEXTEST flag when cargo-nextest is available
- Fall back to cargo test with hint to install nextest
- Update both pre-push-sequential and pre-push-parallel

Benchmark: 15.4s → 6.2s for workspace tests

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @.githooks/pre-push-parallel:
- Around line 36-40: In cleanup(), avoid the GNU-only xargs -r by capturing the
background PIDs first and only invoking kill when non-empty; e.g. store the
output of jobs -p in a variable (or an array), check [ -n "$pids" ] (or length)
and then echo "$pids" | xargs kill 2>/dev/null || true, keeping the existing rm
-rf "$LOGDIR" intact and referencing the cleanup() function and the jobs -p
pipeline so the change is portable to macOS/BSD.

In @.githooks/pre-push-sequential:
- Around line 6-13: After the dependency loop, verify the PINNED toolchain is
installed before using cargo +"$PINNED"; check rustup for the presence of the
$PINNED entry (e.g. via rustup toolchain list | grep -F -- "$PINNED" or rustup
toolchain list and pattern match) and if not found print a clear error like
"[pre-push] missing toolchain: $PINNED. Install it with 'rustup toolchain
install $PINNED'." and exit non-zero; reference the PINNED variable and the
rustup/cargo usage so the check runs prior to running cargo +"$PINNED" doc.
- Around line 74-94: The script currently runs rg twice for each pattern (once
redirecting to /dev/null to test, then again to print matches); change each
check to run rg only once and capture its output (e.g., with a variable or
command substitution) and then test if the captured output is non-empty; for the
'#!\[allow\([^]]*missing_docs[^]]*\)\]' check and the
"\\#\[unsafe\(no_mangle\)\]" check, replace the two rg invocations with a single
rg invocation that stores results, and if non-empty print the stored output to
stderr and exit 1.
♻️ Duplicate comments (2)
.githooks/pre-push-sequential (1)

27-38: Toolchain pinning inconsistency — fmt/clippy/test use active toolchain while rustdoc uses $PINNED.

The script extracts $PINNED on lines 5-6 but then ignores it for the three most critical checks. This defeats the entire point of pinning. A developer with a different default toolchain will pass local checks but fail CI, or worse, introduce formatting/lint drift that only surfaces later.

The echo messages even say "(stable)" — this isn't stable, it's whatever the developer happens to have active.

-echo "[pre-push] fmt (stable)"
-cargo fmt --all -- --check
+echo "[pre-push] fmt (pinned @ $PINNED)"
+cargo +"$PINNED" fmt --all -- --check

-echo "[pre-push] clippy (workspace, stable)"
-cargo clippy --all-targets -- -D warnings -D missing_docs
+echo "[pre-push] clippy (workspace, pinned @ $PINNED)"
+cargo +"$PINNED" clippy --all-targets -- -D warnings -D missing_docs

-echo "[pre-push] tests (workspace, stable)"
+echo "[pre-push] tests (workspace, pinned @ $PINNED)"
 if [[ $USE_NEXTEST -eq 1 ]]; then
-  cargo nextest run --workspace
+  cargo +"$PINNED" nextest run --workspace
 else
-  cargo test --workspace
+  cargo +"$PINNED" test --workspace
 fi
.githooks/pre-push-parallel (1)

122-130: set -e still active during wait — first failure aborts before collecting remaining results.

With set -euo pipefail from line 4, the first wait returning non-zero immediately terminates the script. If fmt fails, you never wait for clippy, tests, rustdoc, patterns, or determinism. Their background processes become orphans (the trap does kill them, but you lose any diagnostic value), and the failure summary on lines 138-143 never runs.

Wrap the wait block with set +e / set -e:

 # Wait and collect results
 FAILED=0
 
+set +e  # Disable errexit so we can collect all exit codes
 wait $PID_FMT;         FMT_RC=$?;         [ $FMT_RC -ne 0 ]         && FAILED=1 && echo "[FAIL] fmt (see log)"
 wait $PID_CLIPPY;      CLIPPY_RC=$?;      [ $CLIPPY_RC -ne 0 ]      && FAILED=1 && echo "[FAIL] clippy (see log)"
 wait $PID_TESTS;       TESTS_RC=$?;       [ $TESTS_RC -ne 0 ]       && FAILED=1 && echo "[FAIL] tests (see log)"
 wait $PID_RUSTDOC;     RUSTDOC_RC=$?;     [ $RUSTDOC_RC -ne 0 ]     && FAILED=1 && echo "[FAIL] rustdoc (see log)"
 wait $PID_PATTERNS;    PATTERNS_RC=$?;    [ $PATTERNS_RC -ne 0 ]    && FAILED=1 && echo "[FAIL] patterns (see log)"
 wait $PID_DETERMINISM; DETERMINISM_RC=$?; [ $DETERMINISM_RC -ne 0 ] && FAILED=1 && echo "[FAIL] determinism (see log)"
+set -e  # Re-enable errexit

- pre-push-parallel: replace GNU-only xargs -r with portable pids check
- pre-push-parallel: wrap wait block with set +e/set -e to collect all exit codes
- pre-push-sequential: run rg once per pattern with output capture
- pre-push-sequential: add missing toolchain check, use +$PINNED consistently
- Add test script for hook issue regression testing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@flyingrobots flyingrobots merged commit 8a7e949 into main Jan 17, 2026
17 of 18 checks passed
@flyingrobots flyingrobots deleted the warp-graph branch January 17, 2026 13:26
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

🤖 Fix all issues with AI agents
In @.githooks/pre-commit:
- Around line 126-135: The MD_FILES splitting is unsafe for filenames with
spaces; replace the current grep-into-variable approach (MD_FILES=$(echo
"$STAGED" | grep -E '\.md$' || true)) and the SC2086 suppression with a
null-delimited safe iteration: produce null-separated markdown file names from
STAGED and iterate with a while read -r -d '' loop (or pass to xargs -0) to call
npx markdownlint-cli2 per-batch; update the conditional that checks for
non-empty results to use the safer iteration so npx markdownlint-cli2 receives
correctly quoted filenames and no word-splitting occurs when running the lint
step.

In @.githooks/pre-push:
- Around line 10-22: The get_timestamp function currently treats any output from
date +%s.%N as valid because grep -q '.' always matches; change the check to
capture the date output into a variable (e.g., ts=$(date +%s.%N 2>/dev/null))
and test that the fractional part is numeric (use a regex like '\.[0-9]\+$' or
grep -qE '\.[0-9]+$') before echoing it; if the regex fails, fall back to
python3/perl/date branches as currently implemented. Ensure the check and
variable names reference get_timestamp so the behavior only uses date's output
when %N actually produced digits.
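A sketch of the digit check this prompt asks for: the regex accepts date's output only when real digits follow the dot, and otherwise falls through to the same fallbacks the hook already has. The python3 branch is assumed present only as a fallback.

```shell
get_timestamp() {
  local ts
  ts=$(date +%s.%N 2>/dev/null || true)
  # Accept date's output only when %N actually expanded to digits (GNU
  # date); BSD/macOS date emits a literal "N" and fails this check.
  if printf '%s' "$ts" | grep -qE '^[0-9]+\.[0-9]+$'; then
    printf '%s\n' "$ts"
  elif command -v python3 >/dev/null 2>&1; then
    python3 -c 'import time; print(f"{time.time():.6f}")'
  else
    date +%s  # integer-second fallback
  fi
}

TS=$(get_timestamp)
echo "timestamp: $TS"
```

Either branch yields a plain numeric value, so the existing `bc` duration arithmetic keeps working on both platforms.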

In @.githooks/pre-push-parallel:
- Around line 115-138: The timing uses integer seconds by setting START=$(date
+%s) and END=$(date +%s); replace those with the project's higher-resolution
timestamp helper (call get_timestamp() for START and END) so the hook records
sub-second timestamps consistently with pre-push; update references to START and
END variables (used in the final echo "[parallel] completed in $((END -
START))s") to compute and display the elapsed time using the same timestamp
format as get_timestamp().

In @.githooks/pre-push-sequential:
- Around line 4-6: The current PINNED_FROM_FILE extraction misses indented
"channel" keys and retains trailing CRLF; update the awk extraction used to set
PINNED_FROM_FILE so the pattern allows optional leading whitespace before
"channel" and trim any trailing carriage return from the captured field, then
keep the existing PINNED fallback logic that uses PINNED_FROM_FILE or defaults
to 1.90.0; adjust the command that defines PINNED_FROM_FILE and leave the PINNED
assignment (`PINNED="${PINNED:-${PINNED_FROM_FILE:-1.90.0}}"`) unchanged.
- Around line 78-97: The current use of if match_output=$(rg …) treats rg exit
code 2 (errors) like “no matches”; update the two rg invocations (the one
searching for '#!\[allow([^]]*missing_docs[^]]*)\]' and the one for
"\\#\[unsafe\(no_mangle\)\]") to capture both the output and the exit status
(e.g., save $? after running rg into a variable), then branch on that status: if
rc>1 print a clear error message and the stderr/rg output and exit non‑zero
(fail fast on malformed globs/missing paths), if rc==1 print the match_output
and exit 1 (matches found), and if rc==0 continue (no matches). Ensure you still
preserve redirection of stderr for rg so error details are available in the log;
use the existing match_output variable names to locate the code to change.
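The three-way branching on rg's exit status described above can be sketched with grep, whose 0/1/>=2 convention matches rg's (0 = matches, 1 = none, >=2 = error). The temp files here are throwaway fixtures for the sketch, not the hook's real paths or patterns.

```shell
classify() {
  # Run the search once; branch on the tool's three-way exit status.
  local out rc
  out=$(grep -rnF -- "$1" "$2" 2>&1) && rc=0 || rc=$?
  case $rc in
    0) echo "matches found" ;;        # hook would print $out and exit 1
    1) echo "no matches" ;;           # clean: continue
    *) echo "search error rc=$rc" ;;  # hook would fail fast, printing $out
  esac
}

tmp=$(mktemp -d)
echo 'hit' > "$tmp/f.txt"
r_match=$(classify hit "$tmp")
r_clean=$(classify nope "$tmp")
r_error=$(classify hit "$tmp/missing")
printf '%s\n%s\n%s\n' "$r_match" "$r_clean" "$r_error"
rm -rf "$tmp"
```

Because stderr is captured into `out`, the error branch can show grep/rg's own diagnostics (malformed glob, missing path) in the log instead of silently treating the failed search as "no matches".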

In @.gitignore:
- Around line 9-12: The four Rust/Cargo build directories (target-clippy,
target-doc, target-fmt, target-test) in .gitignore are out of alphabetical order
and are miscategorized under the "Node modules & build artifacts" header;
reorder them alphabetically as target-clippy, target-doc, target-fmt,
target-test and either rename the section header to a language-agnostic label
like "Build artifacts & dependencies" or move these four entries into a new
"Rust / Cargo artifacts" section so they are correctly grouped and sorted.
- Line 13: Move the `.githooks/timing.jsonl` entry out of the "Node modules &
build artifacts" block in .gitignore and place it under a new "Git hook
artifacts" (or "Generated runtime data") heading so the file is categorized
correctly; specifically locate the `.githooks/timing.jsonl` line and cut/paste
it into the new section with a brief comment header to indicate it's a git hook
timing log.

In @.markdownlint.json:
- Around line 3-5: The MD033 rule configuration contains a non-HTML token "u8"
in the allowed_elements array which is dead/invalid config; open the MD033 entry
(the allowed_elements array) and either remove the "u8" string or replace it
with the intended HTML tag (e.g., "u" for underline or the actual custom element
name) so only valid HTML tag names remain in allowed_elements; ensure the
resulting JSON is valid after the edit.

In `@crates/warp-core/src/wsc/read.rs`:
- Around line 170-197: The rustdoc for validate_header omits the Alignment error
path: when bytemuck::try_from_bytes fails it can return ReadError::Alignment in
addition to FileTooSmall and InvalidMagic, so update the function doc comments
to list all possible errors (FileTooSmall, Alignment, InvalidMagic) and briefly
describe when each occurs (FileTooSmall if data.len() < size_of::<WscHeader>(),
Alignment if the slice isn’t properly aligned for WscHeader when calling
bytemuck::try_from_bytes, and InvalidMagic if header.magic !=
WscHeader::MAGIC_V1) so callers see every outcome.

In `@scripts/bootstrap_math_determinism.mjs`:
- Around line 83-86: The loop uses a magic literal 50 when generating route-push
frames; replace that with a named constant (e.g., PHYSICS_SIMULATION_STEPS) and
use it in the for loop, and add a short comment above the constant explaining
why that number was chosen (e.g., "number of simulation steps to ensure the ball
settles / coverage required for determinism"); update references in the loop
that call encodeRoutePushArgs and makeFrame with OP_ROUTE_PUSH to use the new
constant so future readers won't cargo-cult or change the value without
understanding the rationale.
- Around line 89-91: The regex used to extract SCHEMA_HASH is too strict about
spacing; update the pattern to allow flexible whitespace (e.g., use something
like /pub\s+const\s+SCHEMA_HASH\s*:\s*&str\s*=\s*"([0-9a-fA-F]+)"/) and use that
to capture the hex group from the codecs string variable, and when no match is
found throw an Error that includes the filename string passed to fs.readFileSync
and maybe the first few characters of the codecs content to aid debugging so the
thrown message is descriptive rather than the generic "Could not find
SCHEMA_HASH".

In `@scripts/plot-prepush-timing.mjs`:
- Around line 66-77: The code recomputes medians when calculating speedup;
capture the medians once and reuse them instead of calling median(...) again.
Compute or hoist const seqMed and parMed so they are available when both
seqValid and parValid are non-empty (e.g., declare seqMed and parMed in the
outer scope or compute them before the final if), then use seqMed and parMed in
the speedup calculation rather than recomputing from seqValid and parValid.
- Around line 42-45: The padding logic that fills shorter arrays sequential and
parallel with undefined will break asciichart.plot (and its format callback
x.toFixed(1)). Change the pad to use the last valid value from the respective
array (or 0 if the array is empty) instead of pushing undefined: find the last
numeric entry in sequential/parallel, default to 0 when none, and push that
value until both arrays reach maxLen so asciichart.plot and the format callback
always receive numbers.
♻️ Duplicate comments (1)
scripts/bootstrap_math_determinism.mjs (1)

44-48: Previous concern addressed.

The comment now explicitly documents that these OpIDs are stable FNV-1a hashes derived from operation names. This was the exact clarification requested in the prior review. Good.

Comment on lines +126 to +135
# 9) Markdown linting for staged .md files
MD_FILES=$(echo "$STAGED" | grep -E '\.md$' || true)
if [[ -n "$MD_FILES" ]] && command -v npx >/dev/null 2>&1; then
echo "pre-commit: linting markdown files"
# shellcheck disable=SC2086
if ! npx markdownlint-cli2 $MD_FILES; then
echo "pre-commit: markdown lint failed. Run: npx markdownlint-cli2 --fix <files>" >&2
exit 1
fi
fi

🧹 Nitpick | 🔵 Trivial

Word splitting on $MD_FILES breaks on filenames with spaces.

The shellcheck disable=SC2086 is a band-aid. If anyone ever creates docs/my file.md, this will explode spectacularly. Consider using a while read loop or null-delimited output:

♻️ More robust alternative
-MD_FILES=$(echo "$STAGED" | grep -E '\.md$' || true)
-if [[ -n "$MD_FILES" ]] && command -v npx >/dev/null 2>&1; then
-  echo "pre-commit: linting markdown files"
-  # shellcheck disable=SC2086
-  if ! npx markdownlint-cli2 $MD_FILES; then
+if command -v npx >/dev/null 2>&1; then
+  MD_FILES=()
+  while IFS= read -r f; do
+    [[ "$f" == *.md ]] && MD_FILES+=("$f")
+  done <<< "$STAGED"
+  if [[ ${#MD_FILES[@]} -gt 0 ]]; then
+    echo "pre-commit: linting markdown files"
+    if ! npx markdownlint-cli2 "${MD_FILES[@]}"; then

That said, if your repo convention forbids spaces in filenames, this is tolerable. Just don't be surprised when it bites someone.

🤖 Prompt for AI Agents
In @.githooks/pre-commit around lines 126 - 135, The MD_FILES splitting is
unsafe for filenames with spaces; replace the current grep-into-variable
approach (MD_FILES=$(echo "$STAGED" | grep -E '\.md$' || true)) and the SC2086
suppression with a null-delimited safe iteration: produce null-separated
markdown file names from STAGED and iterate with a while read -r -d '' loop (or
pass to xargs -0) to call npx markdownlint-cli2 per-batch; update the
conditional that checks for non-empty results to use the safer iteration so npx
markdownlint-cli2 receives correctly quoted filenames and no word-splitting
occurs when running the lint step.

Comment on lines +10 to +22
# Portable high-resolution timestamp (macOS date doesn't support %N)
get_timestamp() {
if date +%s.%N 2>/dev/null | grep -q '\.'; then
date +%s.%N
elif command -v python3 >/dev/null 2>&1; then
python3 -c 'import time; print(f"{time.time():.6f}")'
elif command -v perl >/dev/null 2>&1; then
perl -MTime::HiRes=time -e 'printf "%.6f\n", time'
else
echo "[pre-push] skipping ${krate}: missing crates/${krate}/Cargo.toml"
# Fallback to integer seconds
date +%s
fi
done
}

⚠️ Potential issue | 🔴 Critical

Bug: macOS detection is broken — grep -q '\.' always matches.

On macOS, date +%s.%N outputs the literal 1234567890.N (the %N doesn't expand). That output still contains a literal dot, so grep -q '\.' matches and the check always passes. macOS users then get 1234567890.N fed to bc, which promptly chokes.

You need to verify that the fractional part is actually numeric:

🐛 Proposed fix
 get_timestamp() {
-  if date +%s.%N 2>/dev/null | grep -q '\.'; then
-    date +%s.%N
+  # Check if date supports nanoseconds (GNU) by verifying fractional part is numeric
+  local ts
+  ts=$(date +%s.%N 2>/dev/null) || ts=""
+  if [[ "$ts" =~ ^[0-9]+\.[0-9]+$ ]]; then
+    echo "$ts"
   elif command -v python3 >/dev/null 2>&1; then
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Portable high-resolution timestamp (macOS date doesn't support %N)
get_timestamp() {
if date +%s.%N 2>/dev/null | grep -q '\.'; then
date +%s.%N
elif command -v python3 >/dev/null 2>&1; then
python3 -c 'import time; print(f"{time.time():.6f}")'
elif command -v perl >/dev/null 2>&1; then
perl -MTime::HiRes=time -e 'printf "%.6f\n", time'
else
echo "[pre-push] skipping ${krate}: missing crates/${krate}/Cargo.toml"
# Fallback to integer seconds
date +%s
fi
done
}
# Portable high-resolution timestamp (macOS date doesn't support %N)
get_timestamp() {
# Check if date supports nanoseconds (GNU) by verifying fractional part is numeric
local ts
ts=$(date +%s.%N 2>/dev/null) || ts=""
if [[ "$ts" =~ ^[0-9]+\.[0-9]+$ ]]; then
echo "$ts"
elif command -v python3 >/dev/null 2>&1; then
python3 -c 'import time; print(f"{time.time():.6f}")'
elif command -v perl >/dev/null 2>&1; then
perl -MTime::HiRes=time -e 'printf "%.6f\n", time'
else
# Fallback to integer seconds
date +%s
fi
}
🤖 Prompt for AI Agents
In @.githooks/pre-push around lines 10 - 22, The get_timestamp function
currently treats any output from date +%s.%N as valid because grep -q '.' always
matches; change the check to capture the date output into a variable (e.g.,
ts=$(date +%s.%N 2>/dev/null)) and test that the fractional part is numeric (use
a regex like '\.[0-9]\+$' or grep -qE '\.[0-9]+$') before echoing it; if the
regex fails, fall back to python3/perl/date branches as currently implemented.
Ensure the check and variable names reference get_timestamp so the behavior only
uses date's output when %N actually produced digits.
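The numeric check from the proposed fix can be exercised against both output shapes — GNU date (digits after the dot) and BSD/macOS date (a literal "N"). The timestamps below are fabricated for the demo:

```shell
# Accept only a strictly numeric seconds.fraction timestamp.
is_fractional() {
  [[ "$1" =~ ^[0-9]+\.[0-9]+$ ]]
}

gnu_ts="1700000000.123456789"  # what GNU date +%s.%N produces
bsd_ts="1700000000.N"          # what BSD/macOS date +%s.%N produces

is_fractional "$gnu_ts" && echo "gnu: accepted"
is_fractional "$bsd_ts" || echo "bsd: rejected"
```

The old `grep -q '\.'` test passes for both strings; the regex rejects the BSD one, which is the whole point of the fix.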

Comment on lines +115 to +138
START=$(date +%s)

run_fmt > "$LOGDIR/fmt.log" 2>&1 & PID_FMT=$!
run_clippy > "$LOGDIR/clippy.log" 2>&1 & PID_CLIPPY=$!
run_tests > "$LOGDIR/tests.log" 2>&1 & PID_TESTS=$!
run_rustdoc > "$LOGDIR/rustdoc.log" 2>&1 & PID_RUSTDOC=$!
run_patterns > "$LOGDIR/patterns.log" 2>&1 & PID_PATTERNS=$!
run_determinism > "$LOGDIR/determinism.log" 2>&1 & PID_DETERMINISM=$!

# Wait and collect results
FAILED=0

# Disable errexit so we can collect all exit codes (otherwise first failure aborts)
set +e
wait $PID_FMT; FMT_RC=$?; [ $FMT_RC -ne 0 ] && FAILED=1 && echo "[FAIL] fmt (see log)"
wait $PID_CLIPPY; CLIPPY_RC=$?; [ $CLIPPY_RC -ne 0 ] && FAILED=1 && echo "[FAIL] clippy (see log)"
wait $PID_TESTS; TESTS_RC=$?; [ $TESTS_RC -ne 0 ] && FAILED=1 && echo "[FAIL] tests (see log)"
wait $PID_RUSTDOC; RUSTDOC_RC=$?; [ $RUSTDOC_RC -ne 0 ] && FAILED=1 && echo "[FAIL] rustdoc (see log)"
wait $PID_PATTERNS; PATTERNS_RC=$?; [ $PATTERNS_RC -ne 0 ] && FAILED=1 && echo "[FAIL] patterns (see log)"
wait $PID_DETERMINISM; DETERMINISM_RC=$?; [ $DETERMINISM_RC -ne 0 ] && FAILED=1 && echo "[FAIL] determinism (see log)"
set -e # Re-enable errexit

END=$(date +%s)
echo "[parallel] completed in $((END - START))s"

🧹 Nitpick | 🔵 Trivial

Minor inconsistency: integer-second timing vs. fractional in pre-push.

This hook uses date +%s (integer seconds), while pre-push uses get_timestamp() with nanosecond precision. The parallel hook's timing won't contribute to the JSONL comparison since it doesn't log there, but if you ever want sub-second resolution here, consider reusing get_timestamp().

Not a bug — just noting the divergence.

🤖 Prompt for AI Agents
In @.githooks/pre-push-parallel around lines 115 - 138, The timing uses integer
seconds by setting START=$(date +%s) and END=$(date +%s); replace those with the
project's higher-resolution timestamp helper (call get_timestamp() for START and
END) so the hook records sub-second timestamps consistently with pre-push;
update references to START and END variables (used in the final echo "[parallel]
completed in $((END - START))s") to compute and display the elapsed time using
the same timestamp format as get_timestamp().

Comment on lines +4 to +6
# Resolve the pinned toolchain from rust-toolchain.toml, fallback to explicit env or a sane default
PINNED_FROM_FILE=$(awk -F '"' '/^channel/ {print $2}' rust-toolchain.toml 2>/dev/null || echo "")
PINNED="${PINNED:-${PINNED_FROM_FILE:-1.90.0}}"

⚠️ Potential issue | 🟡 Minor

Harden toolchain parsing against whitespace/CRLF.

The pattern /^channel/ misses indented keys, and CRLF can leave a trailing \r, causing false "missing toolchain" failures. Trim the output and allow leading whitespace.

🔧 Proposed fix
-PINNED_FROM_FILE=$(awk -F '"' '/^channel/ {print $2}' rust-toolchain.toml 2>/dev/null || echo "")
+PINNED_FROM_FILE=$(awk -F '"' '/^[[:space:]]*channel[[:space:]]*=/ {print $2}' rust-toolchain.toml 2>/dev/null | tr -d '\r' || echo "")
🤖 Prompt for AI Agents
In @.githooks/pre-push-sequential around lines 4 - 6, The current
PINNED_FROM_FILE extraction misses indented "channel" keys and retains trailing
CRLF; update the awk extraction used to set PINNED_FROM_FILE so the pattern
allows optional leading whitespace before "channel" and trim any trailing
carriage return from the captured field, then keep the existing PINNED fallback
logic that uses PINNED_FROM_FILE or defaults to 1.90.0; adjust the command that
defines PINNED_FROM_FILE and leave the PINNED assignment
(`PINNED="${PINNED:-${PINNED_FROM_FILE:-1.90.0}}"`) unchanged.
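The hardened extraction from the fix can be verified against a scratch file containing exactly the problem inputs (indentation plus a CRLF line ending); the file path here is a temp file, not the real rust-toolchain.toml:

```shell
# Write a worst-case channel line: indented, extra spaces, CRLF-terminated.
toml=$(mktemp)
printf '  channel = "1.90.0"\r\n' > "$toml"

# Hardened parse: allow leading whitespace, strip any trailing carriage return.
PINNED_FROM_FILE=$(awk -F '"' '/^[[:space:]]*channel[[:space:]]*=/ {print $2}' "$toml" | tr -d '\r')
echo "pinned=${PINNED_FROM_FILE}"

rm -f "$toml"
```

The original `/^channel/` pattern would print nothing for this input, silently falling through to the 1.90.0 default even when the file pins a different version.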

Comment on lines +78 to +97
# Banned patterns
echo "[pre-push] scanning banned patterns"
# Forbid crate-level allow(missing_docs) in library source files, but allow in tests and build scripts
# Run rg once and capture output to avoid duplicate invocations
if match_output=$(rg -n '#!\[allow\([^]]*missing_docs[^]]*\)\]' \
crates \
--glob 'crates/**/src/**/*.rs' \
--glob '!**/telemetry.rs' \
--glob '!**/tests/**' \
--glob '!**/build.rs' \
--glob '!**/*.generated.rs' 2>&1); then
echo "pre-push: crate-level allow(missing_docs) is forbidden (except telemetry.rs and *.generated.rs)." >&2
echo "$match_output" >&2
exit 1
fi
if match_output=$(rg -n "\\#\[unsafe\(no_mangle\)\]" crates 2>&1); then
echo "pre-push: #[unsafe(no_mangle)] is invalid; use #[no_mangle]." >&2
echo "$match_output" >&2
exit 1
fi

⚠️ Potential issue | 🟡 Minor

Don’t silently ignore rg failures.
if match_output=$(rg …) treats every nonzero exit, including 2 (an error), as "no matches," so a malformed glob or missing path can slip by silently. Fail fast when rg exits with anything other than 0 or 1.

🔧 Proposed fix
-if match_output=$(rg -n '#!\[allow\([^]]*missing_docs[^]]*\)\]' \
+if match_output=$(rg -n '#!\[allow\([^]]*missing_docs[^]]*\)\]' \
   crates \
   --glob 'crates/**/src/**/*.rs' \
   --glob '!**/telemetry.rs' \
   --glob '!**/tests/**' \
   --glob '!**/build.rs' \
   --glob '!**/*.generated.rs' 2>&1); then
   echo "pre-push: crate-level allow(missing_docs) is forbidden (except telemetry.rs and *.generated.rs)." >&2
   echo "$match_output" >&2
   exit 1
+elif [ $? -ne 1 ]; then
+  echo "pre-push: rg failed while scanning allow(missing_docs)." >&2
+  echo "$match_output" >&2
+  exit 1
 fi
-if match_output=$(rg -n "\\#\[unsafe\(no_mangle\)\]" crates 2>&1); then
+if match_output=$(rg -n "\\#\[unsafe\(no_mangle\)\]" crates 2>&1); then
   echo "pre-push: #[unsafe(no_mangle)] is invalid; use #[no_mangle]." >&2
   echo "$match_output" >&2
   exit 1
+elif [ $? -ne 1 ]; then
+  echo "pre-push: rg failed while scanning unsafe(no_mangle)." >&2
+  echo "$match_output" >&2
+  exit 1
 fi
🤖 Prompt for AI Agents
In @.githooks/pre-push-sequential around lines 78 - 97, The current use of if
match_output=$(rg …) treats rg exit code 2 (errors) like “no matches”; update
the two rg invocations (the one searching for
'#!\[allow([^]]*missing_docs[^]]*)\]' and the one for
"\\#\[unsafe\(no_mangle\)\]") to capture both the output and the exit status
(e.g., save $? after running rg into a variable), then branch on that status: if
rc>1 print a clear error message and the stderr/rg output and exit non‑zero
(fail fast on malformed globs/missing paths), if rc==1 print the match_output
and exit 1 (matches found), and if rc==0 continue (no matches). Ensure you still
preserve redirection of stderr for rg so error details are available in the log;
use the existing match_output variable names to locate the code to change.
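The three-way exit-status convention the fix relies on can be demonstrated with grep, which follows the same contract as rg (0 = match found, 1 = no match, 2 = usage/IO error); the temp file and search strings are made up for the demo:

```shell
tmp=$(mktemp)
printf 'hello\n' > "$tmp"

grep -q hello "$tmp";                       rc_match=$?    # 0: pattern found
grep -q nope "$tmp";                        rc_nomatch=$?  # 1: pattern absent
grep -q hello "$tmp.does-not-exist" 2>/dev/null; rc_error=$?  # 2: missing file

echo "match=$rc_match nomatch=$rc_nomatch error=$rc_error"
rm -f "$tmp"
```

This is why branching only on "zero vs nonzero" conflates "nothing to report" with "the scan itself failed".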

Comment on lines +170 to +197
/// Validates that a byte slice contains a valid WSC header.
///
/// # Errors
///
/// Returns [`ReadError::FileTooSmall`] if the data is shorter than the header size.
/// Returns [`ReadError::InvalidMagic`] if the magic bytes don't match.
pub fn validate_header(data: &[u8]) -> Result<&WscHeader, ReadError> {
let header_size = std::mem::size_of::<WscHeader>();

if data.len() < header_size {
return Err(ReadError::FileTooSmall {
size: data.len(),
minimum: header_size,
});
}

// Use bytemuck for safe transmutation
let header: &WscHeader = bytemuck::try_from_bytes(&data[..header_size])?;

if header.magic != WscHeader::MAGIC_V1 {
return Err(ReadError::InvalidMagic {
expected: WscHeader::MAGIC_V1,
actual: header.magic,
});
}

Ok(header)
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, find and examine the target file
fd -t f "read.rs" crates/warp-core/src/wsc/

Repository: flyingrobots/echo

Length of output: 93


🏁 Script executed:

# Check the file size and content around the specified lines
cat -n crates/warp-core/src/wsc/read.rs | sed -n '160,210p'

Repository: flyingrobots/echo

Length of output: 1850


🏁 Script executed:

# Search for ReadError enum definition to see all variants
fd -t f "*.rs" crates/warp-core/src/ | xargs grep -l "enum ReadError"

Repository: flyingrobots/echo

Length of output: 346


🏁 Script executed:

# Fix the fd command syntax
rg "enum ReadError" crates/warp-core/src/

Repository: flyingrobots/echo

Length of output: 114


🏁 Script executed:

# Also search for where ReadError is defined and all its variants
rg "ReadError::" crates/warp-core/src/wsc/read.rs | head -30

Repository: flyingrobots/echo

Length of output: 925


🏁 Script executed:

# Check the full ReadError enum definition
cat -n crates/warp-core/src/wsc/read.rs | sed -n '1,170p' | tail -80

Repository: flyingrobots/echo

Length of output: 3116


🏁 Script executed:

# Verify the file starts with proper SPDX header and copyright
head -5 crates/warp-core/src/wsc/read.rs

Repository: flyingrobots/echo

Length of output: 276


🏁 Script executed:

# Check if bytemuck::try_from_bytes documentation or the ? operator behavior on PodCastError
rg "try_from_bytes" crates/warp-core/src/wsc/read.rs -B 2 -A 2

Repository: flyingrobots/echo

Length of output: 228


🏁 Script executed:

# Look for any tests that verify the Alignment error path
rg "Alignment" crates/warp-core/src/wsc/read.rs -B 3 -A 3

Repository: flyingrobots/echo

Length of output: 1072


Document the alignment error path.

validate_header can return ReadError::Alignment via bytemuck::try_from_bytes, but the rustdoc only documents FileTooSmall and InvalidMagic. Update the docs so callers know all error outcomes.

📝 Suggested doc update
 /// # Errors
 ///
 /// Returns [`ReadError::FileTooSmall`] if the data is shorter than the header size.
 /// Returns [`ReadError::InvalidMagic`] if the magic bytes don't match.
+/// Returns [`ReadError::Alignment`] if the header is misaligned for `WscHeader`.

Per coding guidelines, all possible error outcomes must be documented in rustdoc.

🤖 Prompt for AI Agents
In `@crates/warp-core/src/wsc/read.rs` around lines 170 - 197, The rustdoc for
validate_header omits the Alignment error path: when bytemuck::try_from_bytes
fails it can return ReadError::Alignment in addition to FileTooSmall and
InvalidMagic, so update the function doc comments to list all possible errors
(FileTooSmall, Alignment, InvalidMagic) and briefly describe when each occurs
(FileTooSmall if data.len() < size_of::<WscHeader>(), Alignment if the slice
isn’t properly aligned for WscHeader when calling bytemuck::try_from_bytes, and
InvalidMagic if header.magic != WscHeader::MAGIC_V1) so callers see every
outcome.

Comment on lines 83 to 86
for (let i = 0; i < 50; i++) {
frames.push(makeFrame(OP_TOGGLE_NAV));
const args = encodeRoutePushArgs(`/physics/step/${i}`);
frames.push(makeFrame(OP_ROUTE_PUSH, args));
}

🧹 Nitpick | 🔵 Trivial

Magic number 50 deserves a name.

Why 50? Is it "enough steps for the ball to settle"? "Minimum required for coverage"? "Arbitrary but sufficient"? A named constant like PHYSICS_SIMULATION_STEPS with a comment explaining the rationale would prevent the next developer from cargo-culting or "optimizing" this to 10.

♻️ Suggested refactor
+// Number of physics simulation steps required for the ball to drop from
+// height 400 and settle at y=0 with gravity and initial velocity -5.
+const PHYSICS_SIMULATION_STEPS = 50;
+
 const frames = [];
 
 // 1. Drop Ball (Initiates Motion/Physics)
 frames.push(makeFrame(OP_DROP_BALL));
 
 // 2. Padding steps to allow physics to simulate
-for (let i = 0; i < 50; i++) {
+for (let i = 0; i < PHYSICS_SIMULATION_STEPS; i++) {
     const args = encodeRoutePushArgs(`/physics/step/${i}`);
     frames.push(makeFrame(OP_ROUTE_PUSH, args));
 }
🤖 Prompt for AI Agents
In `@scripts/bootstrap_math_determinism.mjs` around lines 83 - 86, The loop uses a
magic literal 50 when generating route-push frames; replace that with a named
constant (e.g., PHYSICS_SIMULATION_STEPS) and use it in the for loop, and add a
short comment above the constant explaining why that number was chosen (e.g.,
"number of simulation steps to ensure the ball settles / coverage required for
determinism"); update references in the loop that call encodeRoutePushArgs and
makeFrame with OP_ROUTE_PUSH to use the new constant so future readers won't
cargo-cult or change the value without understanding the rationale.

Comment on lines +89 to +91
const codecs = fs.readFileSync("crates/echo-dind-tests/src/codecs.generated.rs", "utf8");
const match = codecs.match(/pub const SCHEMA_HASH: &str = "([0-9a-fA-F]+)";/);
if (!match) throw new Error("Could not find SCHEMA_HASH in codecs.generated.rs");

🧹 Nitpick | 🔵 Trivial

Regex is brittle to whitespace variations.

The regex /pub const SCHEMA_HASH: &str = "([0-9a-fA-F]+)";/ assumes exact spacing. If someone runs rustfmt with different settings or the generator emits pub const SCHEMA_HASH : &str="...", this silently fails with a generic "Could not find SCHEMA_HASH" error.

Consider a more permissive pattern or at minimum improve the error message to include the path that was searched.

♻️ Suggested fix
-const match = codecs.match(/pub const SCHEMA_HASH: &str = "([0-9a-fA-F]+)";/);
-if (!match) throw new Error("Could not find SCHEMA_HASH in codecs.generated.rs");
+const CODECS_PATH = "crates/echo-dind-tests/src/codecs.generated.rs";
+const codecs = fs.readFileSync(CODECS_PATH, "utf8");
+const match = codecs.match(/pub\s+const\s+SCHEMA_HASH\s*:\s*&str\s*=\s*"([0-9a-fA-F]+)"\s*;/);
+if (!match) throw new Error(`Could not find SCHEMA_HASH in ${CODECS_PATH}`);
🤖 Prompt for AI Agents
In `@scripts/bootstrap_math_determinism.mjs` around lines 89 - 91, The regex used
to extract SCHEMA_HASH is too strict about spacing; update the pattern to allow
flexible whitespace (e.g., use something like
/pub\s+const\s+SCHEMA_HASH\s*:\s*&str\s*=\s*"([0-9a-fA-F]+)"/) and use that to
capture the hex group from the codecs string variable, and when no match is
found throw an Error that includes the filename string passed to fs.readFileSync
and maybe the first few characters of the codecs content to aid debugging so the
thrown message is descriptive rather than the generic "Could not find
SCHEMA_HASH".

Comment on lines +42 to +45
// Pad shorter array to match lengths for chart alignment
const maxLen = Math.max(sequential.length, parallel.length);
while (sequential.length < maxLen) sequential.push(undefined);
while (parallel.length < maxLen) parallel.push(undefined);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat scripts/plot-prepush-timing.mjs | head -60

Repository: flyingrobots/echo

Length of output: 1979


🌐 Web query:

asciichart npm documentation undefined values NaN handling

💡 Result:

Short answer: asciichart (npm) does not special-case undefined/NaN values — they propagate through the math and will break the autoscaling/plot. The library computes min/max with Math.min/Math.max and later multiplies/rounds every series value, so any NaN or undefined will produce NaN range/coordinates and corrupt the output. See the implementation (raw asciichart.js) where values are passed directly into Math.min/Math.max and arithmetic without checks. [1][2]

Sources:

  • asciichart.js source (shows Math.min/Math.max and direct arithmetic on series values). [1]
  • asciichart README / npm page (usage and no mention of NaN-handling). [2]

undefined values will break asciichart.plot() and crash at format callback.

asciichart does not guard against undefined/NaN—it passes series values directly into Math.min/Math.max and arithmetic operations. The chart's autoscaling will compute NaN, corrupting the plot. Worse, the format callback x.toFixed(1) on line 51 will throw TypeError: Cannot read property 'toFixed' of undefined.

Pad with the last valid value (or 0 if empty) instead:

🔧 Fix
 // Pad shorter array to match lengths for chart alignment
 const maxLen = Math.max(sequential.length, parallel.length);
-while (sequential.length < maxLen) sequential.push(undefined);
-while (parallel.length < maxLen) parallel.push(undefined);
+const seqLast = sequential[sequential.length - 1] ?? 0;
+const parLast = parallel[parallel.length - 1] ?? 0;
+while (sequential.length < maxLen) sequential.push(seqLast);
+while (parallel.length < maxLen) parallel.push(parLast);
🤖 Prompt for AI Agents
In `@scripts/plot-prepush-timing.mjs` around lines 42 - 45, The padding logic that
fills shorter arrays sequential and parallel with undefined will break
asciichart.plot (and its format callback x.toFixed(1)). Change the pad to use
the last valid value from the respective array (or 0 if the array is empty)
instead of pushing undefined: find the last numeric entry in
sequential/parallel, default to 0 when none, and push that value until both
arrays reach maxLen so asciichart.plot and the format callback always receive
numbers.

Comment on lines +66 to +77
if (seqValid.length > 0) {
const seqMed = median(seqValid.map(d => d.duration));
console.log(` Sequential: ${seqValid.length} runs, median ${seqMed.toFixed(1)}s`);
}
if (parValid.length > 0) {
const parMed = median(parValid.map(d => d.duration));
console.log(` Parallel: ${parValid.length} runs, median ${parMed.toFixed(1)}s`);
}

if (seqValid.length > 0 && parValid.length > 0) {
const speedup = median(seqValid.map(d => d.duration)) / median(parValid.map(d => d.duration));
console.log(`\n Speedup: ${speedup.toFixed(1)}x`);

🧹 Nitpick | 🔵 Trivial

Minor redundancy: median calculated twice for speedup.

Lines 67 and 71 compute medians that are already available, then line 76 recomputes them. Not a bug, but wasteful for large datasets:

♻️ Optional: cache medians
+let seqMed, parMed;
 if (seqValid.length > 0) {
-  const seqMed = median(seqValid.map(d => d.duration));
+  seqMed = median(seqValid.map(d => d.duration));
   console.log(`  Sequential: ${seqValid.length} runs, median ${seqMed.toFixed(1)}s`);
 }
 if (parValid.length > 0) {
-  const parMed = median(parValid.map(d => d.duration));
+  parMed = median(parValid.map(d => d.duration));
   console.log(`  Parallel:   ${parValid.length} runs, median ${parMed.toFixed(1)}s`);
 }

-if (seqValid.length > 0 && parValid.length > 0) {
-  const speedup = median(seqValid.map(d => d.duration)) / median(parValid.map(d => d.duration));
+if (seqMed !== undefined && parMed !== undefined) {
+  const speedup = seqMed / parMed;
   console.log(`\n  Speedup: ${speedup.toFixed(1)}x`);
 }
🤖 Prompt for AI Agents
In `@scripts/plot-prepush-timing.mjs` around lines 66 - 77, The code recomputes
medians when calculating speedup; capture the medians once and reuse them
instead of calling median(...) again. Compute or hoist const seqMed and parMed
so they are available when both seqValid and parValid are non-empty (e.g.,
declare seqMed and parMed in the outer scope or compute them before the final
if), then use seqMed and parMed in the speedup calculation rather than
recomputing from seqValid and parValid.

Labels

tooling Tooling/CI/CLI
