test: harden V10 test coverage — assertions, publisher queue, memory layers, sub-graph gossip #118
Conversation
…y layers, and sub-graph gossip

Add 4 new e2e test files covering critical V10 features that had insufficient test coverage:

- e2e-assertion-lifecycle: create/write/query/promote/discard through DKGAgent API, entity-selective promote, sub-graph assertions, and two-node promote-via-gossip (7 tests)
- e2e-publisher-queue: async lift publisher pipeline including FIFO ordering, pause/resume, cancel, stats, wallet contention, chain recovery, and retryable failure retry flow (10 tests)
- e2e-memory-layers: full WM→SWM→VM pipeline on single and two-node setups, memory layer isolation, includeSharedMemory query view, and clearSharedMemoryAfter cleanup (7 tests)
- e2e-sub-graph-gossip: sub-graph SWM write gossip, promote gossip, publish finalization across nodes, cross-sub-graph isolation, and concurrent multi-sub-graph writes on 3 nodes (8 tests)

Also enhance devnet-test.sh (sections 14-16) and devnet-deep-test.sh (tests 9-10) with assertion lifecycle, publisher queue, and sub-graph assertion tests against the running devnet.

Made-with: Cursor
\"contextGraphId\":\"$ASSERT_CG\",
\"name\":\"devnet-draft\"
}")
ASSERT_URI=$(json_get "$ASSERT_CREATE" uri)
🔴 Bug: /api/assertion/create now returns assertionUri, not uri. This makes section 14a report a failure even when the assertion was created successfully. Parse assertionUri here instead. The same response-shape mismatch is duplicated in scripts/devnet-deep-test.sh.
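The fix this comment asks for can be sketched in TypeScript. The response shape (top-level `assertionUri`) comes from this review; the fallback to the legacy `uri` field is an assumption added for older daemons, not something the review requires:

```typescript
// Response shape per this review: /api/assertion/create returns
// { assertionUri: ... }. The legacy `uri` fallback is an assumption.
interface AssertionCreateResponse {
  assertionUri?: string;
  uri?: string;
}

function parseAssertionUri(raw: string): string {
  let parsed: AssertionCreateResponse;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return '';
  }
  return parsed.assertionUri ?? parsed.uri ?? '';
}

console.log(parseAssertionUri('{"assertionUri":"dkg://assertions/abc"}')); // → dkg://assertions/abc
```

In the shell script itself the equivalent change is to read `assertionUri` instead of `uri` in the `json_get` call.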
echo "$PUB_JOBS" | python3 -c 'import sys,json;d=json.load(sys.stdin);print(len(d) if isinstance(d,list) else len(d.get("jobs",[])))' 2>/dev/null && ok "Publisher jobs endpoint works" || warn "Publisher jobs: $PUB_JOBS"

echo "--- 15c: Enqueue a publish job ---"
PUB_ENQUEUE=$(c -X POST "http://127.0.0.1:9201/api/publisher/enqueue" -d "{
🔴 Bug: This payload does not match the daemon contract for /api/publisher/enqueue (it requires a full LiftRequest with fields like roots, shareOperationId, namespace, scope, and authority proof). As written, the new queue section only gets a 400/warning and never exercises the publisher queue. Build the enqueue request from the preceding SWM operation metadata, and when polling status read job.status from /api/publisher/job's wrapped response.
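A sketch of the enqueue body built from the preceding SWM operation metadata. The field names (`roots`, `shareOperationId`, `namespace`, `scope`, authority proof) come from this review comment; the exact types, the metadata shape, and the placeholder values are assumptions to be checked against the daemon's LiftRequest definition:

```typescript
// Field names per this review; exact types are assumptions.
interface LiftRequestSketch {
  contextGraphId: string;
  shareOperationId: string; // the ID of the earlier SWM write, not a fresh one
  roots: string[];          // plain root IRIs
  namespace: string;
  scope: string;
  authorityProofRef: string;
}

function buildEnqueueBody(
  cgId: string,
  swmOp: { operationId: string; roots: string[] }, // hypothetical metadata shape
): string {
  const req: LiftRequestSketch = {
    contextGraphId: cgId,
    shareOperationId: swmOp.operationId,
    roots: swmOp.roots,
    namespace: 'devnet',      // placeholder
    scope: 'shared',          // placeholder
    authorityProofRef: 'ref', // placeholder
  };
  return JSON.stringify(req);
}
```

When polling afterwards, read `job.status` from the wrapped `/api/publisher/job` response rather than a top-level `status`.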
try { await nodeB?.stop(); } catch {}
});

it('bootstraps two agents and connects', async () => {
🟡 Issue: The rest of this suite depends on this test mutating nodeA/nodeB and creating the shared context graph. That makes later cases fail if someone runs a single test or if execution order changes. Move the shared bootstrap into beforeAll so each it only verifies behavior, not suite setup.
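The restructuring this comment asks for looks roughly like the sketch below. The agent type, `startAgent`, and the check are minimal stand-ins so the sketch runs on its own; they are not the suite's real helpers:

```typescript
// Sketch: shared bootstrap lives in one place that always runs before any
// test body, so a single `it` can run in isolation. Agent/startAgent are
// stand-ins, not the suite's real API.
type Agent = { id: string; stop: () => Promise<void> };
const startAgent = async (id: string): Promise<Agent> => ({
  id,
  stop: async () => {},
});

let nodeA: Agent | undefined;
let nodeB: Agent | undefined;

// In Jest/Vitest this body goes in beforeAll(async () => { ... }).
async function bootstrap(): Promise<void> {
  nodeA = await startAgent('A');
  nodeB = await startAgent('B');
}

// Each `it` then only verifies behavior; it never creates shared state.
function verifyConnected(): string {
  if (!nodeA || !nodeB) throw new Error('beforeAll bootstrap did not run');
  return `${nodeA.id}<->${nodeB.id}`;
}

bootstrap().then(() => console.log(verifyConnected())); // → A<->B
```

The matching teardown (`nodeA?.stop()` / `nodeB?.stop()`) moves into `afterAll`, mirroring the existing cleanup in the diff above.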
]);

await nodeA.assertion.promote(CG_ID, 'gossip-draft');
await sleep(3000);
🟡 Issue: This fixed 3s delay makes the gossip assertion timing-dependent and likely flaky under CI load. Poll nodeB.query(...) until the triple appears (or timeout) instead of assuming replication always finishes within a hard-coded sleep.
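The suggested poll loop can be factored into a small helper. The helper itself is generic; the commented usage assumes the suite's `nodeB.query` API, whose exact signature is not shown in this diff:

```typescript
// Bounded poll loop to replace `await sleep(3000)`: retry the check until
// it passes or the deadline expires, instead of assuming replication
// always finishes within a fixed delay.
async function pollUntil(
  check: () => Promise<boolean>,
  timeoutMs = 15_000,
  intervalMs = 250,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return false;
}

// Usage sketch (query signature assumed):
// const replicated = await pollUntil(async () =>
//   (await nodeB.query(CG_ID)).some((q) => q.subject === EXPECTED_SUBJECT));
// expect(replicated).toBe(true);
```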
- devnet-test.sh: parse `assertionUri` (not `uri`) from /api/assertion/create
- devnet-test.sh: send correct LiftRequest payload to /api/publisher/enqueue with roots, shareOperationId, namespace, scope, and authorityProofRef
- devnet-test.sh: parse job status from wrapped /api/publisher/job response
- e2e-sub-graph-gossip: move shared bootstrap into beforeAll so individual tests don't depend on execution order
- e2e-assertion-lifecycle: replace fixed 3s sleep with poll loop for gossip replication to avoid CI flakiness

Made-with: Cursor
\"contextGraphId\": \"$CG\",
\"name\": \"deep-draft\"
}")
ASSERT_URI=$(echo "$ASSERT_CREATE" | python3 -c 'import sys,json;print(json.load(sys.stdin).get("uri",""))' 2>/dev/null)
🔴 Bug: /api/assertion/create returns assertionUri, not uri (see the daemon route). This lookup stays empty even when creation succeeds, so the new assertion smoke test fails immediately. Read assertionUri here.
echo "$PUB_JOBS" | python3 -c 'import sys,json;json.load(sys.stdin)' 2>/dev/null && ok "Publisher jobs valid JSON" || warn "Publisher jobs: $PUB_JOBS"

echo "--- 10c: Enqueue via API ---"
PUB_ENQ=$(post 9201 /api/publisher/enqueue -H "Content-Type: application/json" -d "{
🔴 Bug: /api/publisher/enqueue is the async-lift control-plane endpoint, not /api/shared-memory/publish. This payload only sends contextGraphId + selection, so the daemon returns Missing required enqueue fields and this section never exercises the queue. Build a real lift request (shareOperationId, roots, namespace, scope, authorityProofRef, etc.) or call /api/shared-memory/publish instead.
echo "--- 10d: Check job status ---"
sleep 8
JOB_CHECK=$(get 9201 "/api/publisher/job?id=$JOB_ID")
JOB_ST=$(echo "$JOB_CHECK" | python3 -c 'import sys,json;print(json.load(sys.stdin).get("status","?"))' 2>/dev/null)
🟡 Issue: GET /api/publisher/job responds with { job: {...} }, so reading top-level status here always yields ?. Because line 564 still reports success unconditionally, this check won't catch regressions. Unwrap job.status and fail when it's missing.
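Unwrapping the wrapped response and failing on a missing status could look like the sketch below; the `{ job: { status, ... } }` shape is taken from this comment:

```typescript
// GET /api/publisher/job wraps its payload as { job: { status, ... } }
// (per this review). Unwrap job.status and treat a missing value as a
// hard failure rather than reporting success unconditionally.
function readJobStatus(raw: string): string {
  const parsed = JSON.parse(raw);
  const status = parsed?.job?.status;
  if (typeof status !== 'string') {
    throw new Error(`publisher job response missing job.status: ${raw}`);
  }
  return status;
}

console.log(readJobStatus('{"job":{"status":"completed"}}')); // → completed
```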
ENQUEUE_OP_ID="devnet-enqueue-test-$(date +%s)"
PUB_ENQUEUE=$(c -X POST "http://127.0.0.1:9201/api/publisher/enqueue" -d "{
\"contextGraphId\":\"$CONTEXT_GRAPH\",
\"shareOperationId\":\"$ENQUEUE_OP_ID\",
🔴 Bug: the queue resolves SWM state by the original shareOperationId, so generating a fresh timestamp-based ID here can never match the assertion-promote write from section 14. The next line also sends roots as objects, but the lift request expects string[]. Reuse the real SWM operation ID and send plain root IRIs, otherwise queued jobs will fail during workspace resolution.
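A small guard capturing both constraints this comment names — reuse the original SWM operation ID, and send roots as plain strings — might look like:

```typescript
// Guard sketch: the queue resolves SWM state by the ORIGINAL
// shareOperationId, and the lift request expects roots as string[].
function validateLiftRequest(
  req: { shareOperationId: string; roots: unknown[] },
  originalSwmOpId: string,
): void {
  if (req.shareOperationId !== originalSwmOpId) {
    throw new Error(
      'shareOperationId must be the SWM write from section 14, not a fresh ID',
    );
  }
  if (!req.roots.every((r) => typeof r === 'string')) {
    throw new Error('roots must be plain root IRI strings, not objects');
  }
}
```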
const quads = [
  { subject: `${ENTITY_BASE}:pub`, predicate: 'http://schema.org/name', object: '"Published"', graph: '' },
];
await agent.publish(CG_ID, quads);
🟡 Issue: this test name says published data is absent from SWM, but it only asserts the canonical data graph. A regression that leaves the publish in _shared_memory would still pass. Add the negative SWM query/assertion here so the test covers the behavior it documents.
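The missing negative check can be factored into a helper. The two query callbacks are injected because the suite's exact SWM-view query options are not shown in this diff:

```typescript
// Helper sketch: assert a subject landed in the canonical data graph AND
// is gone from shared working memory. Query functions are injected since
// the suite's exact SWM-view options are not shown here.
async function assertPublishedNotInSwm(
  queryData: (subject: string) => Promise<unknown[]>,
  querySwm: (subject: string) => Promise<unknown[]>,
  subject: string,
): Promise<void> {
  const inData = await queryData(subject);
  if (inData.length === 0) {
    throw new Error(`expected ${subject} in the canonical data graph`);
  }
  const inSwm = await querySwm(subject);
  if (inSwm.length > 0) {
    throw new Error(`expected ${subject} to be absent from _shared_memory`);
  }
}
```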
await nodeA.createContextGraph({ id: CG_ID, name: 'Gossip Promote' });
nodeB.subscribeToContextGraph(CG_ID);
await sleep(500);
🟡 Issue: the first promote happens only 500ms after subscribeToContextGraph. Other GossipSub E2Es in this repo wait longer or poll readiness, because the initial publish can race mesh formation on CI. Consider using the same settle window here to avoid a flaky test.
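Rather than another fixed settle window, the mesh can be polled for readiness before the first promote. The subscriber-count getter is injected because the pubsub handle's API is not shown in this diff:

```typescript
// Poll mesh readiness instead of a fixed 500ms settle before the first
// promote. The getter is injected so this works with any pubsub handle.
async function waitForSubscribers(
  getCount: () => number,
  min = 1,
  timeoutMs = 10_000,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (getCount() < min) {
    if (Date.now() > deadline) {
      throw new Error('pubsub mesh never formed before timeout');
    }
    await new Promise((r) => setTimeout(r, 100));
  }
}

// Usage sketch (hypothetical accessor; check the node's real pubsub API):
// await waitForSubscribers(() => nodeA.pubsub.getSubscribers(topic).length);
// await nodeA.assertion.promote(CG_ID, 'gossip-draft');
```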
Summary
devnet-test.sh and devnet-deep-test.sh

New test files
- packages/agent/test/e2e-assertion-lifecycle.test.ts
- packages/publisher/test/e2e-publisher-queue.test.ts
- packages/agent/test/e2e-memory-layers.test.ts
- packages/agent/test/e2e-sub-graph-gossip.test.ts

Devnet script enhancements
- devnet-test.sh: Added sections 14 (Assertion Lifecycle), 15 (Publisher Queue), 16 (Sub-graph Assertions), plus SKILL.md assertion/sub-graph checks
- devnet-deep-test.sh: Added tests 9 (Assertion Lifecycle with 5-node gossip + SWM→VM publish), 10 (Publisher Queue API)

Rationale (TORNADO/BURA/KOSAVA)
These tests target the TORNADO tier — core V10 features (assertions, publisher queue, memory layers, sub-graphs) that had unit/integration coverage but lacked end-to-end tests through the DKGAgent API and multi-node gossip paths.
Test plan
- devnet-test.sh + devnet-deep-test.sh with new sections

Made with Cursor