M14 HYGIENE: test hardening, DRY extraction, SOLID quick-wins #61

flyingrobots merged 18 commits into main from
Conversation
- … property tests (B132)
- … global mutation (B133)
- …ClockAdapter (B140)
- … lazy init (B141)
📝 Walkthrough

Consolidates duplicated utilities (trailer validation, checksum, HTTP stream helpers), removes deprecated clock re-exports, makes BitmapNeighborProvider lazy-validate on use, and hardens tests for determinism and behavioral assertions. No public API signatures were added.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed
Release Preflight
If you tag this commit as
Actionable comments posted: 2
🧹 Nitpick comments (6)
test/unit/domain/properties/Join.property.test.js (2)
308-314: Use RNG helper's built-in shuffle to remove duplicate Fisher-Yates code.

Lines 308–314 manually re-implement shuffling even though `createRng(...).shuffle()` already exists in `test/helpers/seededRng.js`.

Suggested refactor
```diff
- const shuffleRng = createRng(42);
- const shuffled = [...patches];
- for (let i = shuffled.length - 1; i > 0; i--) {
-   const j = Math.floor(shuffleRng.next() * (i + 1));
-   [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
- }
+ const shuffled = createRng(42).shuffle(patches);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/properties/Join.property.test.js` around lines 308 - 314, The test duplicates a Fisher–Yates shuffle instead of using the RNG helper's built-in shuffle: replace the manual loop that creates shuffleRng via createRng(42) and computes shuffled = [...patches] then swaps, with a call to createRng(42).shuffle(patches) (or use shuffleRng.shuffle(patches)) to produce the shuffled array; update references to shuffled to use the result of shuffle(...) and remove the manual loop and temporary variables (shuffleRng/shuffled).
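The helper the review points to can be sketched like this — a hypothetical reconstruction, assuming `createRng` in `test/helpers/seededRng.js` wraps a Mulberry32 PRNG (mentioned in the PR summary) and exposes a non-mutating Fisher-Yates `shuffle`:

```javascript
// Hypothetical sketch of a seeded RNG helper like test/helpers/seededRng.js.
// Shapes are assumed from the review text, not copied from the repo.
function createRng(seed) {
  let state = seed >>> 0;
  return {
    // Mulberry32 PRNG: returns a float in [0, 1)
    next() {
      state = (state + 0x6D2B79F5) >>> 0;
      let t = Math.imul(state ^ (state >>> 15), state | 1);
      t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
      return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
    },
    // Non-mutating Fisher-Yates shuffle driven by this RNG's sequence
    shuffle(items) {
      const out = [...items];
      for (let i = out.length - 1; i > 0; i--) {
        const j = Math.floor(this.next() * (i + 1));
        [out[i], out[j]] = [out[j], out[i]];
      }
      return out;
    },
  };
}

// Same seed → same permutation, which is the point of the refactor above
const a = createRng(42).shuffle([1, 2, 3, 4, 5]);
const b = createRng(42).shuffle([1, 2, 3, 4, 5]);
console.log(JSON.stringify(a) === JSON.stringify(b)); // true
```

Centralizing the shuffle in the helper means a test only states its seed, and the Fisher-Yates mechanics live in one place.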
199-199: Extract shared property-test seed constant for DRY and easier override.

The repeated `seed: 42` values are consistent, but centralizing them makes future updates (or env-based overrides) trivial.

Suggested refactor

```diff
+const PROPERTY_TEST_SEED = 42;

- { numRuns: 100, seed: 42 }
+ { numRuns: 100, seed: PROPERTY_TEST_SEED }
```

(Apply to all `fc.assert` options in this file.)

Also applies to: 210-210, 220-220, 231-231, 244-244, 257-257, 323-323, 345-345, 394-394
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/properties/Join.property.test.js` at line 199, Extract a single shared seed constant (e.g., const PROPERTY_TEST_SEED = Number(process.env.PROPERTY_TEST_SEED) || 42) at the top of the test file and replace all inline `seed: 42` occurrences used in `fc.assert` options with `seed: PROPERTY_TEST_SEED`; update every `fc.assert(..., { numRuns: X, seed: 42 })` instance in this test (and the other listed occurrences) so the seed is centralized and can be overridden via env.

test/benchmark/ReducerV5.benchmark.js (1)
39-39: Scope RNG per generation to keep fixtures reproducible per test case.

Right now, line 39 creates shared mutable RNG state for all calls. `generateV5Patches` and the shuffle test then consume a single sequence, so outcomes vary with prior benchmark execution order.

Suggested refactor

```diff
-const rng = createRng(0xDEADBEEF);
+const BENCHMARK_SEED = 0xDEADBEEF;

 function generateV5Patches(patchCount, options = {}) {
   const {
     writerCount = 5,
     opsPerPatch = 3,
     includeRemoves = true,
+    seed = BENCHMARK_SEED,
   } = options;
+  const rng = createRng(seed);
   ...
 }
 ...
-const shuffled = rng.shuffle(patches);
+const shuffled = createRng(BENCHMARK_SEED).shuffle(patches);
```

Also applies to: 68-173, 322-323
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/benchmark/ReducerV5.benchmark.js` at line 39, The shared RNG created by createRng(0xDEADBEEF) at top-level causes non-deterministic fixtures because generateV5Patches and the shuffle benchmark consume the same mutable RNG across tests; fix by scoping RNG creation to each fixture generation and test (e.g., call createRng with the fixed seed inside each generateV5Patches invocation and inside the shuffle test setup or pass a fresh rng instance into those functions) so each call gets its own independent RNG instance; update calls to generateV5Patches and the shuffle test to use the locally created rng instead of the shared rng.

test/benchmark/benchmarkUtils.js (1)
12-12: Avoid cross-benchmark coupling from module-global RNG state.

Lines 12 and 88 make `randomHex` depend on prior calls in the same process. Keep deterministic seeding, but allow RNG injection so each benchmark can isolate its sequence.

Suggested refactor

```diff
-const rng = createRng(0xDEADBEEF);
+const BENCHMARK_RNG_SEED = 0xDEADBEEF;
+const rng = createRng(BENCHMARK_RNG_SEED);

-export function randomHex(length = 8) {
+export function randomHex(length = 8, random = rng) {
   let result = '';
   const chars = '0123456789abcdef';
   for (let i = 0; i < length; i++) {
-    result += chars[Math.floor(rng.next() * 16)];
+    result += chars[Math.floor(random.next() * 16)];
   }
   return result;
 }
```

Also applies to: 84-89
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/benchmark/benchmarkUtils.js` at line 12, The module currently creates a module-global RNG via const rng = createRng(0xDEADBEEF) and uses it inside randomHex, causing cross-benchmark coupling; change randomHex to accept an optional RNG parameter (e.g., function randomHex(len, rngArg) { const r = rngArg ?? createRng(0xDEADBEEF); ... }) or provide a factory like createRandomHex(rng) that returns a randomHex closure so each benchmark can construct its own seeded RNG via createRng(seed) and pass it in; update callers in benchmarks to create and pass a per-benchmark RNG instead of relying on the module-global rng to preserve determinism while isolating sequences.

src/domain/services/CheckpointMessageCodec.js (1)
98-103: Same validation gap as PatchMessageCodec: decoder lacks semantic validation.

The encoder validates `graph` with `validateGraphName`, `stateHash` with `validateSha256`, and `frontierOid`/`indexOid` with `validateOid`. The decoder only checks presence via `requireTrailer`.

🛡️ Proposed fix to add semantic validation

```diff
 validateKindDiscriminator(trailers, 'checkpoint');
 const graph = requireTrailer(trailers, 'graph', 'checkpoint');
+validateGraphName(graph);
 const stateHash = requireTrailer(trailers, 'stateHash', 'checkpoint');
+validateSha256(stateHash, 'stateHash');
 const frontierOid = requireTrailer(trailers, 'frontierOid', 'checkpoint');
+validateOid(frontierOid, 'frontierOid');
 const indexOid = requireTrailer(trailers, 'indexOid', 'checkpoint');
+validateOid(indexOid, 'indexOid');
 const schema = parsePositiveIntTrailer(trailers, 'schema', 'checkpoint');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/services/CheckpointMessageCodec.js` around lines 98 - 103, Decoder currently only checks presence of trailers (using requireTrailer/parsePositiveIntTrailer) but misses semantic validation present in the encoder; after extracting values in CheckpointMessageCodec (the variables graph, stateHash, frontierOid, indexOid), call the corresponding validators—validateGraphName(graph), validateSha256(stateHash), validateOid(frontierOid) and validateOid(indexOid)—and ensure any validation errors propagate (throw or return) the same way other codec validations do; keep validateKindDiscriminator(trailers, 'checkpoint') in place.

src/domain/services/PatchMessageCodec.js (1)
86-91: Decoder lacks semantic validation applied by encoder.

The encoder validates `graph` with `validateGraphName`, `writer` with `validateWriterId`, and `patchOid` with `validateOid`, but the decoder only uses `requireTrailer`, which checks for presence, not validity. In contrast, `AuditMessageCodec.js` (lines 84-94) applies secondary validation (`validateGraphName`, `validateWriterId`, `validateOid`, `validateSha256`) after extracting trailers.

Consider adding the same semantic validations for consistency and defense-in-depth:

🛡️ Proposed fix to add semantic validation

```diff
 validateKindDiscriminator(trailers, 'patch');
 const graph = requireTrailer(trailers, 'graph', 'patch');
+validateGraphName(graph);
 const writer = requireTrailer(trailers, 'writer', 'patch');
+validateWriterId(writer);
 const lamport = parsePositiveIntTrailer(trailers, 'lamport', 'patch');
 const patchOid = requireTrailer(trailers, 'patchOid', 'patch');
+validateOid(patchOid, 'patchOid');
 const schema = parsePositiveIntTrailer(trailers, 'schema', 'patch');
```
Verify each finding against the current code and only fix it if needed. In `@src/domain/services/PatchMessageCodec.js` around lines 86 - 91, The decoder currently extracts trailers using requireTrailer/parsePositiveIntTrailer in PatchMessageCodec.js but does not perform the semantic checks the encoder applies; after extracting graph, writer, and patchOid (and before using them) call validateGraphName(graph), validateWriterId(writer), and validateOid(patchOid) respectively (similar to AuditMessageCodec.js) to enforce the same validation rules and keep defense-in-depth; keep existing validateKindDiscriminator(trailers, 'patch') and existing lamport/schema parsing as-is.
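The presence-then-semantics pattern these two findings describe can be illustrated with a minimal sketch; `requireTrailer`, `validateOid`, and `decodePatchTrailers` below are simplified stand-ins for the project's real helpers in `TrailerValidation.js` and the codec, with assumed shapes:

```javascript
// Stand-in for TrailerValidation.js requireTrailer: presence check only.
function requireTrailer(trailers, key, kind) {
  const value = trailers[key];
  if (value === undefined || value === '') {
    throw new Error(`${kind}: missing required trailer "${key}"`);
  }
  return value;
}

// Stand-in semantic validator: assumes Git OIDs are 40 lowercase hex chars
// (SHA-1 form); the real validateOid may accept other formats.
function validateOid(value, field) {
  if (!/^[0-9a-f]{40}$/.test(value)) {
    throw new Error(`invalid ${field}: expected 40-char hex OID`);
  }
  return value;
}

// Hypothetical decoder fragment showing the layered checks the review asks
// for: presence first, then field semantics.
function decodePatchTrailers(trailers) {
  const patchOid = requireTrailer(trailers, 'patchOid', 'patch'); // presence
  validateOid(patchOid, 'patchOid');                              // semantics
  return { patchOid };
}
```

The value of the second layer is that a present-but-malformed trailer (e.g. a truncated OID) is rejected at decode time instead of propagating into later lookups.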
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/domain/services/TrailerValidation.js`:
- Around line 42-44: The validation uses parseInt(str, 10) which accepts
partial-numeric inputs like "2abc"; update the trailer numeric validation in
src/domain/services/TrailerValidation.js to first assert the string is a strict
unsigned integer (e.g. test str against a /^\d+$/ or equivalent) before
converting, then parse (or Number) and keep the existing check (Number.isInteger
and >0) and error throw; look for the parseInt(str, 10) call and the surrounding
variables kind, key and TRAILER_KEYS to modify.
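A minimal sketch of the strict guard described above, assuming the signature `parsePositiveIntTrailer(trailers, key, kind)` used elsewhere in this review; the real implementation in `TrailerValidation.js` may differ in shape:

```javascript
// Hypothetical hardened parser: a digits-only regex runs before any numeric
// conversion, so partial-numeric strings never reach parseInt/Number.
function parsePositiveIntTrailer(trailers, key, kind) {
  const str = trailers[key];
  if (typeof str !== 'string' || !/^\d+$/.test(str)) {
    // Rejects "2abc", "1.5", "-3", and "" outright
    throw new Error(`${kind}: trailer "${key}" must be an unsigned integer string`);
  }
  const value = Number(str);
  if (!Number.isInteger(value) || value <= 0) {
    throw new Error(`${kind}: trailer "${key}" must be a positive integer`);
  }
  return value;
}
```

The ordering matters: `parseInt('2abc', 10)` silently returns `2`, so the existing `Number.isInteger` check alone cannot catch partial-numeric inputs — only the regex gate can.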
In `@test/unit/domain/services/GitGraphAdapter.stress.test.js`:
- Around line 15-16: The comment above the simulated latency is stale: it says
"0-10ms" but the implementation uses await new Promise(r => setTimeout(r, (id %
3) * 2)); which yields deterministic delays of 0, 2, or 4 ms; update the comment
to match the actual behavior (e.g., "Simulate deterministic latency: 0, 2, or 4
ms based on id") next to the setTimeout line (referencing the id variable and
the setTimeout call), or alternatively change the expression if you intended a
0–10ms range.
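The deterministic-delay expression can be shown in isolation; `simulatedDelayMs` is a hypothetical extraction of the inline `(id % 3) * 2` from the stress test:

```javascript
// Deterministic latency: cycles through 0, 2, 4 ms as id increases —
// never the random 0-10ms range the stale comment claimed.
function simulatedDelayMs(id) {
  return (id % 3) * 2; // id 0→0ms, 1→2ms, 2→4ms, 3→0ms, ...
}

// Usage mirroring the test's setTimeout-based simulation
async function withSimulatedLatency(id, work) {
  await new Promise((resolve) => setTimeout(resolve, simulatedDelayMs(id)));
  return work();
}
```

Keying the delay off `id` keeps the stress test reproducible: the same set of ids always yields the same interleaving pressure, unlike a random delay.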
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (32)
- CHANGELOG.md
- ROADMAP.md
- contracts/type-surface.m8.json
- index.d.ts
- index.js
- src/domain/services/AnchorMessageCodec.js
- src/domain/services/AuditMessageCodec.js
- src/domain/services/BitmapIndexBuilder.js
- src/domain/services/BitmapNeighborProvider.js
- src/domain/services/CheckpointMessageCodec.js
- src/domain/services/PatchMessageCodec.js
- src/domain/services/StreamingBitmapIndexBuilder.js
- src/domain/services/TrailerValidation.js
- src/domain/utils/checksumUtils.js
- src/infrastructure/adapters/BunHttpAdapter.js
- src/infrastructure/adapters/DenoHttpAdapter.js
- src/infrastructure/adapters/GlobalClockAdapter.js
- src/infrastructure/adapters/NodeHttpAdapter.js
- src/infrastructure/adapters/PerformanceClockAdapter.js
- src/infrastructure/adapters/httpAdapterUtils.js
- test/benchmark/ReducerV5.benchmark.js
- test/benchmark/benchmarkUtils.js
- test/unit/domain/WarpGraph.timing.test.js
- test/unit/domain/WarpGraph.watch.test.js
- test/unit/domain/index.exports.test.js
- test/unit/domain/properties/Join.property.test.js
- test/unit/domain/services/BitmapIndexReader.test.js
- test/unit/domain/services/BitmapNeighborProvider.test.js
- test/unit/domain/services/GitGraphAdapter.stress.test.js
- test/unit/domain/services/PatchBuilderV2.snapshot.test.js
- test/unit/domain/services/SyncAuthService.test.js
- test/unit/domain/services/noBufferGlobal.test.js
💤 Files with no reviewable changes (6)
- contracts/type-surface.m8.json
- index.js
- src/infrastructure/adapters/GlobalClockAdapter.js
- test/unit/domain/index.exports.test.js
- src/infrastructure/adapters/PerformanceClockAdapter.js
- index.d.ts
…ordering and whitespace nits

- Replace inline regex+Number schema parsing in AuditMessageCodec with shared parsePositiveIntTrailer() from TrailerValidation.js
- Harden parsePositiveIntTrailer() with digits-only regex guard so non-integer strings like "1.5" are rejected before parseInt
- Reorder CHANGELOG [Unreleased] sections: Changed → Removed → Fixed (Keep a Changelog canonical order)
- Remove double blank lines in index.d.ts and index.exports.test.js
Defense-in-depth: PatchMessageCodec and CheckpointMessageCodec decoders now validate field semantics (validateGraphName, validateWriterId, validateOid, validateSha256) after extracting trailers, matching the pattern already established in AuditMessageCodec. Added 7 tests covering invalid graph names, writer IDs, OIDs, and SHA-256 hashes in both decoders.
Comment said "0-10ms" but the deterministic expression (id % 3) * 2 produces 0, 2, or 4ms. Updated to match actual behavior.
…le helper

- Replaced 9 inline `seed: 42` values with `PROPERTY_TEST_SEED` constant
- Replaced manual Fisher-Yates shuffle with `createRng().shuffle()`
- benchmarkUtils.js: randomHex() accepts optional RNG parameter
- ReducerV5.benchmark.js: generateV5Patches() creates its own seeded RNG instead of consuming shared module-level state; shuffle test uses a fresh RNG instance
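The injectable-RNG change can be sketched end-to-end; `createRng` here is a stand-in Mulberry32 implementation, assumed to match the helper's `{ next() }` shape, and the default seed mirrors the module's existing `0xDEADBEEF`:

```javascript
// Stand-in seeded PRNG (Mulberry32), assumed shape: { next() } → [0, 1)
function createRng(seed) {
  let state = seed >>> 0;
  return {
    next() {
      state = (state + 0x6D2B79F5) >>> 0;
      let t = Math.imul(state ^ (state >>> 15), state | 1);
      t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
      return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
    },
  };
}

// Module-level default preserves old behavior for callers that pass nothing
const defaultRng = createRng(0xDEADBEEF);

// Optional `random` parameter: a benchmark passes its own seeded RNG to
// isolate its sequence from every other benchmark in the same process.
function randomHex(length = 8, random = defaultRng) {
  const chars = '0123456789abcdef';
  let result = '';
  for (let i = 0; i < length; i++) {
    result += chars[Math.floor(random.next() * 16)];
  }
  return result;
}

// Two fresh RNGs with the same seed yield identical hex strings
console.log(randomHex(8, createRng(1)) === randomHex(8, createRng(1))); // true
```

Defaulting the parameter keeps every existing call site compiling while letting new benchmarks opt into per-run isolation.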
Review Feedback — All Items Addressed

Thanks @coderabbitai for the thorough review! All items have been addressed:

Actionable (2)
Nitpicks (6)
All 4569 tests passing (7 new tests added for decoder validation). Ready for re-review.
@coderabbitai review please 🥕
✅ I'll conduct a fresh review of the PR now. 🥕

✅ Actions performed

Review triggered.
✅ Actionable items fixed:

✅ Nitpicks addressed:

The addition of 7 new validation tests is great for coverage. I'll trigger a fresh review now to verify all the changes!

✅ Actions performed

Review triggered.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CHANGELOG.md`:
- Line 13: Update the CHANGELOG entry to remove the "no behavioral change" claim
and state that adding TrailerValidation.js (requireTrailer,
parsePositiveIntTrailer, validateKindDiscriminator) introduces stricter semantic
validation: Patch and Checkpoint decoders now perform the same graph name,
writer ID, OID, and SHA-256 checks as the Audit decoder and will reject certain
previously-accepted malformed messages; edit the sentence describing the change
to reflect this behavioral impact.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- CHANGELOG.md
- src/domain/services/CheckpointMessageCodec.js
- src/domain/services/PatchMessageCodec.js
- test/benchmark/ReducerV5.benchmark.js
- test/benchmark/benchmarkUtils.js
- test/unit/domain/properties/Join.property.test.js
- test/unit/domain/services/GitGraphAdapter.stress.test.js
- test/unit/domain/services/WarpMessageCodec.test.js
🚧 Files skipped from review as they are similar to previous changes (4)
- test/benchmark/benchmarkUtils.js
- test/benchmark/ReducerV5.benchmark.js
- test/unit/domain/properties/Join.property.test.js
- test/unit/domain/services/GitGraphAdapter.stress.test.js
The codec B134/B138 entry claimed "no behavioral change" but Patch and Checkpoint decoders now reject previously-accepted malformed inputs. Updated to "stricter rejection of malformed messages".
@coderabbitai review please 🥕

✅ Actions performed

Review triggered.
Summary
Executes M14 HYGIENE milestone — the top-priority work from the HEX_AUDIT covering test quality, DRY extraction, and SOLID quick-wins (T1–T6, B130–B141).
- Replaced assertions on private internals (`_idToShaCache`, `_snapshotState`, `_cachedState`) with behavioral assertions; moved `vi.useFakeTimers()` from `beforeAll` to `beforeEach`/`afterEach`
- Replaced `Math.random()` in benchmarks with Mulberry32 RNG, added `seed: 42` to all fast-check property tests, replaced random delays with deterministic values, documented intentional global mutations
- Extracted `TrailerValidation.js` with `requireTrailer()`, `parsePositiveIntTrailer()`, `validateKindDiscriminator()`; all 4 codec decoders refactored
- Extracted `httpAdapterUtils.js` with `MAX_BODY_BYTES`, `readStreamBody()`, `noopLogger`; eliminates duplication across Node/Bun/Deno adapters
- Moved `computeChecksum()` to shared `checksumUtils.js`
- Removed deprecated `PerformanceClockAdapter`/`GlobalClockAdapter` shims (BREAKING); moved `BitmapNeighborProvider` validation to method level for lazy init

Deferred
New files
- src/domain/services/TrailerValidation.js
- src/infrastructure/adapters/httpAdapterUtils.js
- src/domain/utils/checksumUtils.js

Deleted files

- src/infrastructure/adapters/PerformanceClockAdapter.js
- src/infrastructure/adapters/GlobalClockAdapter.js

Test plan
- `npm run lint` — clean (changed files)
- `npx tsc --noEmit` — clean
- `npx tsc --noEmit -p test/type-check/tsconfig.json` — clean
- `npm run test:local` — 13,686 tests pass
- `node scripts/check-dts-surface.js` — pass (clock shims removed from manifest)

Summary by CodeRabbit
Breaking Changes
Bug Fixes / Validation
Tests
Chores