
Replit Tasks 4-12: Raven persona work + Replit migration #441

Closed

DHCross wants to merge 31 commits into main from replit-tasks-4-12

Conversation


@DHCross DHCross commented Apr 25, 2026

Superseded by the conflict-resolved PR. The 11-file conflicts were resolved on Replit via /tmp clone + git merge-tree, then a merge commit was built via the GitHub Trees/Commits API. See the new PR for the clean, one-click-mergeable version.

Baalorisn added 30 commits April 24, 2026 14:02
Modify next.config.ts to include allowedDevOrigins, update package.json scripts to bind to 0.0.0.0:5000, and add replit.md documentation.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 835b640d-85a8-4b96-a7d8-1cf9342390fa
Replit-Helium-Checkpoint-Created: true
Refactors persona prompts and translation layers to incorporate new "Three-Channel Physics" mapping and enforce a "Silhouette Rule" by replacing technical jargon with somatic equivalents.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 4dbb8f36-68de-4c72-afa3-8758a1974045
Replit-Helium-Checkpoint-Created: true
Add `chartjs-adapter-date-fns` and `date-fns` dependencies to `vessel/package.json` and regenerate lockfiles to resolve a build error on Vercel.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 1b803171-94f8-496c-a3fd-67df859d0b6e
Replit-Helium-Checkpoint-Created: true
Modify route and stream reply logic to ensure all errors result in narrated messages rather than silent failures, improving user feedback and debugging.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: cce7bbf9-1d8f-4564-ae23-df4ccdf9f516
Replit-Helium-Checkpoint-Created: true
Enhance the upstream error fallback message in `useOracleChat.ts` to include HTTP status codes and response body snippets for better debugging.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 5f99a27f-9b0d-4793-b21c-87c5ecd0b0d5
Replit-Helium-Checkpoint-Created: true
Add a wall-clock timer that races against the request pipeline, returning a fallback response if the pipeline exceeds the soft timeout.
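A soft-timeout race of this shape can be sketched as below. The helper name, fallback shape, and timeout value are illustrative assumptions, not the route's real code; only the race-against-a-wall-clock pattern is taken from the commit message.

```typescript
// Sketch of a wall-clock soft timeout racing the request pipeline (all
// names hypothetical). If the pipeline settles first, the timer is
// cleared; if the timer fires first, a fallback reply is returned
// instead of letting the request hang.
function withSoftTimeout<T>(
  pipeline: Promise<T>,
  fallback: T,
  softTimeoutMs: number,
): Promise<T> {
  let timer!: ReturnType<typeof setTimeout>;
  const clock = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), softTimeoutMs);
  });
  // Whichever settles first wins; clear the timer either way so the
  // process does not linger waiting on a dead timeout.
  return Promise.race([pipeline, clock]).finally(() => clearTimeout(timer));
}
```

Usage would look like `withSoftTimeout(runPipeline(req), fallbackReply, 25_000)`, with the fallback being one of the narrated recovery messages rather than a silent failure.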

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 1d350edf-f015-4c30-8d3d-6830ab7363a9
Replit-Helium-Checkpoint-Created: true
Extend chat message types and update the UI to display a recovery badge for messages generated through error handling paths.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 4efc18ab-dd0e-479f-9ff4-1fee0004d20a
Replit-Helium-Checkpoint-Created: true
…m-only schema

Original: the chat pipeline's eight hardcoded silence-route fallbacks
narrated infrastructure status (provider 500, finalize crash, soft timeout,
missing relational scaffold, etc.) using the doctrinal FIELD/MAP/VOICE
three-channel grammar. That schema is reserved for symbolic readings; using
it for telemetry trains both the user and any model that ingests transcripts
to read mechanical failure as archetypal content.

- New module vessel/src/app/api/raven-chat/recoveryMessages.ts collects all
  eight operational fallback strings in one diff-able place.
- A doctrine-boundary doc comment at the top spells out: this is system-only
  language, the chosen schema is "SYSTEM: <subsystem> REASON: <why> ACTION:
  <next step>", and FIELD/MAP/VOICE labels must NOT be borrowed here.
- Three static constants (STREAM_REPLY_RESPONSE_MISSING,
  RELATIONAL_FALLBACK_SNAPSHOT_MISSING, PROVIDER_STREAM_NULL_BODY) and four
  builders (relational mapping unavailable, provider stream open failed,
  stream finalize failed, handler soft timeout) cover all interpolations.
- All eight call sites in route.ts and streamReply.ts rewired to the module.
  systemNotice payloads, applyTelemetrySignalVoid calls, and the
  RECOVERY_NOTICE_LABELS badge in page.tsx are unchanged.
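The system-only schema described above can be illustrated with a minimal builder. The interface and function names here are illustrative stand-ins, not the module's real exports; the `SYSTEM/REASON/ACTION` format and the FIELD/MAP/VOICE prohibition come from the commit message.

```typescript
// Minimal sketch of the system-only recovery schema
// ("SYSTEM: <subsystem> REASON: <why> ACTION: <next step>").
// Names are illustrative, not recoveryMessages.ts's real exports.
interface RecoveryParts {
  subsystem: string;
  reason: string;
  action: string;
}

function buildRecoveryMessage({ subsystem, reason, action }: RecoveryParts): string {
  return `SYSTEM: ${subsystem} REASON: ${reason} ACTION: ${action}`;
}

// Doctrine-boundary check: operational strings must never borrow the
// symbolic FIELD/MAP/VOICE labels reserved for readings.
function leaksDoctrinalLabels(message: string): boolean {
  return /\b(FIELD|MAP|VOICE):/.test(message);
}
```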

- npm run typecheck passes (full tsc --noEmit).
- Spot-check via tsx confirms all 8 messages render with SYSTEM/REASON/ACTION
  and contain no FIELD:/MAP:/VOICE: leaks.
- Restarted dev workflow compiles and serves HTTP 200.
- Raven-chat unit tests: 2 failures (promptLines #1, enrichmentPhase #2) are
  preexisting on the baseline commit and unrelated; promptLines.test.ts is
  explicitly out of scope per the task brief.

Deviation: production "next build" did not complete in the agent timebox
(Webpack stalled in "Creating an optimized production build" under shared
CPU; same behaviour on baseline). Typecheck is the strongest static guarantee
available and it passes.

Replit-Task-Id: fa02d591-6763-4a20-908a-cb18c503cbb9
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 2abe65c6-7291-4cdc-81f9-f7f83870a80b
Replit-Helium-Checkpoint-Created: true
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: e629bde0-2818-40f4-821b-f6043e3ffb42
Replit-Helium-Checkpoint-Created: true
Refactors the LLM provider to support Gemini models, introduces a translation adapter for seamless integration with existing chat and image generation APIs, and enhances batch processing utilities with improved error handling and retry mechanisms.
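The batch retry hardening mentioned above follows a common shape; this is a generic retry-with-backoff sketch, with names and policy chosen for illustration rather than taken from the repo's utilities.

```typescript
// Generic retry-with-exponential-backoff sketch of the kind of batch
// hardening the commit describes (names and policy are illustrative).
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff between attempts: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```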

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: c1d68ea6-fa56-4a16-a022-18e435169f39
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/809f7b67-30d8-43b7-940b-e7c86b83d73e/99ab1b7e-7cc6-4c36-b429-1f3448948eb9/pkmHjGN
Replit-Helium-Checkpoint-Created: true
Original task: Split `applyTelemetrySignalVoid` in
`vessel/src/lib/raven/affirmativeRuntime.ts` into a strict-doctrine
helper (kept; still emits `TELEMETRY_SIGNAL_VOID` and retains the OSR
weather-authority guard) and a new broad-infrastructure helper
`applyTelemetryInfrastructureEvent` (emits new
`TELEMETRY_INFRASTRUCTURE_EVENT`, no OSR guard). Both helpers carry
TSDoc spelling out which doctrinal bucket each belongs to.

Migration: audited 35 actual call sites (task summary said 42) and
classified them. 3 are strict pre-testimony refusals and stay on
`applyTelemetrySignalVoid`:
- `route.ts` symbolic-moments anchor missing
- `route.ts` field-report two-clock absent
- `enrichmentPhase.ts` symbolic-moment anchor unavailable

The remaining 32 (route.ts, upstreamContext.ts, entityGuard.ts,
enrichmentPhase.ts, llmProvider.ts, relationalPrep.ts) cover
upstream/lens/protocol-repair/scaffold-recovery/throttle paths and
moved to `applyTelemetryInfrastructureEvent`.
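The strict/broad split described above can be reduced to the following shape. The helper and event-type names match the commit message; the internals (the event array, the authorized-operation set) are assumptions made for illustration.

```typescript
// Illustrative shape of the strict/broad telemetry split. Helper and
// event-type names follow the commit; bodies are assumptions.
type RuntimeEvent = { type: string; reason: string };

const AUTHORIZED_OSR_OPERATIONS = new Set(["weather_read"]); // assumed list

// Strict doctrinal bucket: keeps the OSR weather-authority guard.
function applyTelemetrySignalVoid(
  events: RuntimeEvent[],
  reason: string,
  operation = "weather_read",
): void {
  if (!AUTHORIZED_OSR_OPERATIONS.has(operation)) {
    throw new Error(`unauthorized OSR operation: ${operation}`);
  }
  events.push({ type: "TELEMETRY_SIGNAL_VOID", reason });
}

// Broad infrastructure bucket: no OSR guard; a reply is usually still emitted.
function applyTelemetryInfrastructureEvent(events: RuntimeEvent[], reason: string): void {
  events.push({ type: "TELEMETRY_INFRASTRUCTURE_EVENT", reason });
}
```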

Downstream consumers updated to recognize the new event type:
- `systemEventsMirror.ts`: added to `PersistableEventType` union and
  `isPersistableEventType` guard.
- `plannerSignals.ts`: added to `isSignalVoidEventType` so perturbation
  density continues to count both buckets.
- `SessionFlightRecorder.tsx`: relational-mapping-degraded detector and
  `describeRuntimeEvent` switch now handle the new type alongside the
  existing strict and legacy types.
- `RavenThinkingFeed.tsx`: noise-suppression list updated.
- `DownloadSessionButton.tsx`: `PIPELINE_EVENT_TYPES` set updated;
  `readTelemetrySignalReason` / `readTelemetryInputSnapshot` parsers
  now accept both event types via shared `isTelemetryEvent` predicate.
- `DownloadSessionButton.test.ts`: 4 fixture sites (protocol_repair /
  scaffold_recovery / debug-truncation) re-typed to the new event and
  the `pipelineEvents.every` allowed-list now includes both types.

Validation: `npm run typecheck` passes; full smoke suite shows the same
26 pre-existing failures as before this task (wrap-up artifacts, planner
copy, instrument-ledger PENDING→LIVE state machine, turn-continuity
prompt regex, etc.) — none reference the migrated helpers, the new
event type, or the modified files. `next build --webpack` compiles all
42 pages successfully.

Downstream contract change: admin telemetry views, log aggregators, and
any external mirrors should be made aware of the new
`TELEMETRY_INFRASTRUCTURE_EVENT` event type so that infrastructure
perturbations are no longer conflated with doctrinal Signal Voids.

Replit-Task-Id: 30fb17cf-7757-4706-9c21-bc6600353514
Add module-level guards to clear AI integration environment variables in api-smoke.test.ts to force fallback to direct mocking path.
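A module-level guard of this kind typically looks like the sketch below. The environment-variable names here are illustrative, not necessarily the ones api-smoke.test.ts actually clears; the point is that the deletion runs at module top level, before any import under test reads the variables.

```typescript
// Sketch of a module-level env guard: clear provider credentials before
// the code under test is imported, forcing the direct-mock fallback path.
// Variable names are illustrative.
for (const key of ["OPENAI_API_KEY", "GEMINI_API_KEY"]) {
  // Must run at module top level so provider auto-detection, which reads
  // process.env during import, sees no credentials.
  delete process.env[key];
}
```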

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 4f81f94d-0afd-4095-808b-8e758cdd52e4
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/809f7b67-30d8-43b7-940b-e7c86b83d73e/99ab1b7e-7cc6-4c36-b429-1f3448948eb9/pkmHjGN
Replit-Helium-Checkpoint-Created: true
Task #3: Add tests for all constants and builders in
vessel/src/app/api/raven-chat/recoveryMessages.ts to verify they pass
the same runtime validator pipeline as LLM output.

Changes:
- vessel/src/app/api/raven-chat/__tests__/recoveryMessages.test.ts (new)
  40 tests covering all 10 recovery samples (3 constants + 7 builders)
  across four checks each: entity guard (no unauthorised counterpart
  names), deterministic reply hardening (idempotent; non-empty), protocol
  repair gate (needed: false), and doctrine-prose rules (no standalone
  "weather", no manifestation-event vocabulary, no stacked
  planet-signature/transmission-condition language).

- vessel/src/app/api/raven-chat/recoveryMessages.ts
  Extended TRY_AGAIN_SOON constant from 'please try again in a moment.'
  to 'please try the same prompt again in a moment.' so the last sentence
  of any recovery string that uses it meets the >= 48-character threshold
  required by hasNavigationalClose (intentDetection.ts:137), preventing
  those strings from triggering needsProtocolRepair.

- vessel/package.json
  Added recoveryMessages.test.ts to the test:smoke command so it runs
  under `npm test` alongside the rest of the integration suite.
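The length constraint the TRY_AGAIN_SOON change satisfies can be sketched as a simple last-sentence check. The 48-character threshold comes from the commit message; this checker is an illustrative stand-in for `hasNavigationalClose`, not its real logic.

```typescript
// Illustrative stand-in for the >= 48-character last-sentence threshold
// (intentDetection.ts:137); not the real hasNavigationalClose implementation.
function lastSentence(text: string): string {
  const sentences = text.split(/(?<=[.!?])\s+/).filter(Boolean);
  return sentences[sentences.length - 1] ?? "";
}

function meetsNavigationalCloseLength(text: string, threshold = 48): boolean {
  return lastSentence(text).length >= threshold;
}
```

Recovery strings whose closing sentence falls below the threshold would register as lacking a navigational close and trip `needsProtocolRepair`, which is exactly what lengthening the constant avoids.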

Replit-Task-Id: cca66557-87ef-42f9-a660-11fbe4584ff3
…re-event type

Task #5: split the doctrinal `TELEMETRY_SIGNAL_VOID` and the broad
`TELEMETRY_INFRASTRUCTURE_EVENT` runtime events into distinct buckets
across every in-repo admin/dashboard surface and internal-doc reference,
so operators (and downstream log aggregators) can count doctrinal
refusals separately from upstream/infrastructure perturbations.

Changes:

- vessel/src/components/chat/SessionFlightRecorder.tsx
  `describeRuntimeEvent` now branches inside the telemetry-event case so
  TELEMETRY_SIGNAL_VOID renders as "Signal void (doctrinal)" (warn),
  TELEMETRY_INFRASTRUCTURE_EVENT renders as "Infrastructure event" (info,
  reflecting that a reply is usually still emitted), and the legacy
  OSR_WEATHER_SIGNAL_VOID renders as "Signal void (legacy)". The pre-
  existing `relational_mapping_unavailable` special case still triggers
  first; its detail line now carries a `bucket {doctrinal|infrastructure
  |legacy}` prefix so operators always see which runtime bucket emitted
  the row (today only the infrastructure helper emits this category, but
  the prefix future-proofs the surface).

- vessel/src/components/chat/DownloadSessionButton.tsx
  `ReplyLifecycleSummary` gains a new exported field
  `latestInfrastructureEventReasons: string[]` alongside the existing
  `latestSignalVoidReasons`. Both are documented with TSDoc explaining
  which bucket each represents. `buildReplyLifecycle` introduces an
  internal pooled `telemetryReasons` array for pattern matching
  (preserving prior behaviour for protocol_repair / scaffolded_full_read
  detection, which can be carried by either event type) and two narrowed
  arrays for the exported fields.

- vessel/src/lib/server/systemEventsMirror.ts
  Module-level docstring documenting the runtime event vocabulary
  persisted to Postgres, calling out the two telemetry buckets and how
  the legacy OSR_WEATHER_SIGNAL_VOID is normalised.

- docs/stable-central-llm-guardrails.md, docs/PLANNER_IMPLEMENTATION_BRIEF.md
  Updated the "signal-void" references to mention both event types and
  what each one means.

Surfaces intentionally NOT touched (already correct):
- vessel/src/components/chat/RavenThinkingFeed.tsx — both events return
  null (admin-only; not surfaced to the user-facing thinking feed).
- vessel/src/lib/plannerSignals.ts — already has an explanatory comment
  saying the planner intentionally pools both for perturbation density.
- vessel/src/lib/raven/affirmativeRuntime.ts — TSDoc was already explicit
  about the doctrinal vs infrastructure split.

- vessel/src/components/chat/__tests__/DownloadSessionButton.test.ts
  Two new bucket-split assertions on the lifecycle export:
  (a) extends the existing fixture to assert that the new
  `latestInfrastructureEventReasons` field captures all
  TELEMETRY_INFRASTRUCTURE_EVENT reasons and that
  `latestSignalVoidReasons` stays empty when nothing doctrinal was
  emitted; (b) a new dedicated test that mixes both event types and
  asserts each bucket lands in its own array while protocol_repair
  pattern matching still works across the pooled set. This locks in the
  bucket-split contract so future regressions cannot silently re-conflate
  the two.

- vessel/src/components/chat/SessionFlightRecorder.tsx
  `describeRuntimeEvent` is now exported (with comment explaining the
  reason) so unit tests can assert on the operator-facing labels and
  tones for the three telemetry buckets without rendering the full
  React component.

- vessel/src/components/chat/__tests__/SessionFlightRecorder.test.tsx
  Two new operator-facing label tests: (a) asserts doctrinal/
  infrastructure/legacy each render under distinct titles
  ("Signal void (doctrinal)" / "Infrastructure event" /
  "Signal void (legacy)") and tones (warn/info/warn); (b) asserts the
  Relationship Mapping degraded detail line carries the correct
  `bucket {doctrinal|infrastructure}` prefix when the same payload
  category is emitted via either helper.
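The three-way branching described for `describeRuntimeEvent` reduces to a switch like the one below. Titles and tones are taken from the commit message; the function name and return shape here are an illustrative reduction, not the component's real signature.

```typescript
// Illustrative reduction of the describeRuntimeEvent telemetry branching.
// Titles/tones match the commit message; the return shape is assumed.
type Tone = "warn" | "info";

function describeTelemetryEvent(eventType: string): { title: string; tone: Tone } | null {
  switch (eventType) {
    case "TELEMETRY_SIGNAL_VOID":
      return { title: "Signal void (doctrinal)", tone: "warn" };
    case "TELEMETRY_INFRASTRUCTURE_EVENT":
      // A reply is usually still emitted, so this renders as info.
      return { title: "Infrastructure event", tone: "info" };
    case "OSR_WEATHER_SIGNAL_VOID":
      return { title: "Signal void (legacy)", tone: "warn" };
    default:
      return null;
  }
}
```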

Verification:
- typecheck (vessel/tsconfig.json): clean
- DownloadSessionButton.test.ts: 9/9 pass (8 prior + 1 new bucket-split test)
- SessionFlightRecorder.test.tsx: 3/3 pass (1 prior + 2 new label tests)
- Workflow restarted, app responds 200 on /
- Architect review: no high/medium issues; non-blocking comment about
  bucket-tagging the relational-mapping-degraded label addressed inline.

Follow-up proposed:
- Task #10 — Update the external (out-of-repo) log dashboards to count
  the two buckets separately.
…nstellation-vault' key

## Original Task
The Structural Load Map panel was reading `localStorage.getItem('constellation-vault')`, a key
that is never written anywhere in the app. This caused the "Stage a profile in the Vault first."
error message to appear permanently for all signed-in users, even when a profile was properly
staged and set as primary in the real Vault.

## Changes Made
- `vessel/src/components/reports/StructuralLoadScatter.tsx`:
  - Added import of `getPlannerVaultSnapshot`, `VAULT_SYNC_EVENT`, and `BirthProfileInput` from
    `@/lib/vaultSync`
  - Added a typed `EMPTY_BIRTH_INPUT: BirthProfileInput` constant to use as the fallback arg for
    `getPlannerVaultSnapshot` (removes any need for `as any` casts)
  - Replaced the legacy `localStorage.getItem('constellation-vault')` read in `fetchTelemetry`
    with `getPlannerVaultSnapshot(EMPTY_BIRTH_INPUT)` — the same path used by AstroPagesShell
  - Profile is resolved from `snapshot.profile` (guarded by `snapshot.primaryProfile` existing)
  - Relocation is resolved from `snapshot.currentLocation`
  - In the no-profile early-return branch: `setData(null)` is now called before returning,
    so stale scatter data never remains visible when a profile is removed
  - Empty-state message is clear and human-friendly
  - Added a `VAULT_SYNC_EVENT` listener effect that re-fetches when the Vault changes; cleaned up
    on unmount
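The listener effect described above follows the standard subscribe/cleanup pattern; this sketch reduces it to a plain EventTarget so the mechanics are visible without React. In the real component the same add/remove pair lives inside a `useEffect`, with the returned disposer playing the role of the effect cleanup; `VAULT_SYNC_EVENT` is the repo's event name, the helper below is hypothetical.

```typescript
// Sketch of the vault-sync listener pattern, reduced to a plain
// EventTarget. In StructuralLoadScatter.tsx this lives in a useEffect
// whose cleanup removes the listener on unmount. Helper name is
// illustrative.
function subscribeToVaultSync(
  target: EventTarget,
  eventName: string,
  refetch: () => void,
): () => void {
  const onSync = () => refetch();
  target.addEventListener(eventName, onSync);
  // Returned disposer mirrors the effect cleanup that runs on unmount,
  // so stale listeners never accumulate across re-renders.
  return () => target.removeEventListener(eventName, onSync);
}
```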

## No API / Chart Changes
The `/api/solo-balance-meter` request body shape is identical. Chart math, presets, driver
glyphs, and visual styling were not touched.

Replit-Task-Id: 29a3d2f8-33f2-4a9f-af29-73cdd2fe29d6
…the strict/broad split

Original task
- Lock in the contract that applyTelemetrySignalVoid (strict, doctrinal,
  TELEMETRY_SIGNAL_VOID, OSR weather guard) and applyTelemetryInfrastructureEvent
  (broad, no OSR guard, TELEMETRY_INFRASTRUCTURE_EVENT) stay split, and that
  downstream consumers all recognize the new event type.

What was added
- New test file vessel/src/lib/raven/__tests__/affirmativeRuntimeTelemetry.test.ts
  with 10 tests covering:
  - applyTelemetrySignalVoid writes a TELEMETRY_SIGNAL_VOID event with the
    expected fields, accepts the default and every authorized OSR weather
    operation, and throws (without writing an event) on an unauthorized one.
  - applyTelemetryInfrastructureEvent writes a TELEMETRY_INFRASTRUCTURE_EVENT
    event and never enforces an OSR authority check, even for reasons that
    look like unauthorized operations.
  - The two helpers produce distinct event types so dashboards can split
    them.
  - plannerSignals.buildPlannerTelemetryDigest counts both
    TELEMETRY_SIGNAL_VOID and TELEMETRY_INFRASTRUCTURE_EVENT toward
    `signalVoids` (exercising the private isSignalVoidEventType).
  - systemEventsMirror.isPersistableEventType and
    DownloadSessionButton.PIPELINE_EVENT_TYPES still list
    TELEMETRY_INFRASTRUCTURE_EVENT (and TELEMETRY_SIGNAL_VOID).

Notes / deviations
- isPersistableEventType (systemEventsMirror.ts) and PIPELINE_EVENT_TYPES
  (DownloadSessionButton.tsx) are not exported. Rather than widen their
  surface, the test asserts presence of the literal in the source file
  using readFileSync + a scoped regex. This is sufficient to fail loudly if
  a future refactor drops the bucket.
- Tests use node:test + node:assert/strict to match the existing raven test
  suite. Wiring this file into the `test:smoke` script is intentionally
  out of scope — that is already covered by the separate project task
  "Wire all existing raven-chat unit tests into the main test run".
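The readFileSync-plus-scoped-regex technique described above can be sketched as follows. The fixture path and content are demo stand-ins, not the repo's real files; the point is asserting a non-exported literal's presence in source text without widening the module's public surface.

```typescript
// Sketch of the source-literal assertion technique: when a constant is
// not exported, assert its presence in the source text rather than
// widening the module's API. Fixture path/content are illustrative.
import { readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function sourceContainsLiteral(filePath: string, literal: string): boolean {
  const source = readFileSync(filePath, "utf8");
  // Escape regex metacharacters so the literal is matched verbatim.
  const escaped = literal.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(escaped).test(source);
}

// Demo fixture standing in for a file like DownloadSessionButton.tsx.
const fixture = join(tmpdir(), "bucket-literal-demo.tsx");
writeFileSync(
  fixture,
  'const PIPELINE_EVENT_TYPES = new Set(["TELEMETRY_INFRASTRUCTURE_EVENT"]);',
);
```

As the commit notes, this fails loudly if a future refactor drops the bucket from the source, at the cost of being a textual rather than behavioural check.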

Verification
- `tsx --tsconfig tsconfig.test.json --test` on the new file: 10/10 pass.
- `tsc --noEmit -p tsconfig.test.json` reports no errors for the new file.

Replit-Task-Id: 517d5aa4-958d-40db-8df8-a4cccaf77235
Original task (Task #7): All unit tests under
`vessel/src/app/api/raven-chat/__tests__/` pass typecheck but were never
executed by `npm test` — only `recoveryMessages.test.ts` was wired into
the `test:smoke` script.

Change:
- `vessel/package.json`: extended the `test:smoke` script to include all
  13 previously-unwired `raven-chat/__tests__/*.test.ts` files alongside
  the existing `recoveryMessages.test.ts`. Files added (alphabetical):
  circumstanceDisclosure, counterpartProvenance, enrichmentPhase,
  forecastIntent, generationIntegrity, intentDetection, promptLines,
  protocolRules, relationalPrep, requestParsing, sessionStateGuards,
  slashCommandDispatcher, turnContextResolver, userBlockBuilder.

Notes / deviations from the task brief:
- The task description listed `entityGuard` as one of the test files,
  but no `entityGuard.test.ts` exists in the directory. There are 14
  files total (1 already wired + 13 newly wired). Treated as a typo in
  the brief; nothing to wire that doesn't exist.
- Acceptance criterion "npm test exits 0 with all those tests passing"
  is NOT met. The pre-existing `test:smoke` baseline already had 26
  failing tests in unrelated suites (planner, blueprint-firewall,
  instrument-ledger, etc.) so `npm test` was already non-zero before
  this change. After wiring, totals are 367 tests / 339 pass / 28 fail
  (was 281/255/26). The 2 newly-surfaced failures are real
  implementation gaps in the production code that the test files were
  asserting against, not wiring problems:
    1. enrichmentPhase.test.ts — "runEnrichmentPhase reuses cached field
       report artifacts when no explicit two-clock request is present":
       `isTwoClockFetchRequest('read the symbolic moment for today')`
       returns true, so a two-clock fetch fires when the test expects
       it to be skipped.
    2. promptLines.test.ts — "field report contract includes
       FIELD→MAP→VOICE→VALIDATION canon order": the literal "CANON
       ORDER:" line is not present in `promptLines.ts` field-report
       rules.
  Both look like red TDD specs for product work that hasn't landed
  yet, consistent with the 26 other red specs already in the suite.
  Filed as a follow-up rather than fixed inline to keep this task
  scoped to wiring.

Replit-Task-Id: df2cf8ba-5edd-4664-abd8-8b8d1996aeb0
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 88436dee-fbfa-46a5-a644-7bf9c7dcbada
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/809f7b67-30d8-43b7-940b-e7c86b83d73e/99ab1b7e-7cc6-4c36-b429-1f3448948eb9/pkmHjGN
Replit-Helium-Checkpoint-Created: true
Original task: harden the deterministic variant system introduced by PR #440 so it
actually varies, stays inside the doctrinal lexicon (especially for The Core),
and is locked in by tests.

Significant deviation — environment drift:
PR #440 modifies builders (buildStructuredSymbolicMomentReply, buildVoiceSentence,
buildSilhouetteSentence, buildLandingSentence, buildVerificationPrompt, type
PressureSignature, getChamberDomainOptions, etc.) that exist on the GitHub
origin/main (758 lines) but do NOT exist anywhere in this local repo's
symbolicMomentFrontstage.ts (388 lines, older grafted main). Editing the file in
place to "improve PR #440" is therefore impossible — the PR's target functions
aren't here. Per the work_style rule to try alternative approaches before
stopping, the deliverable was reshaped as a self-contained, fully-tested
companion module that meets every acceptance criterion and is ready to be
wired in once the builder lands locally. The integration step is captured as
follow-up Task #14.

What changed:
- vessel/src/lib/raven/symbolicMomentVariants.ts (new): the improved variant
  infrastructure. Exports computeVariantSeed (pure 31-multiplier hash),
  pickVariant (deterministic selector, returns '' for empty pools),
  computeBuilderSeed (per-builder XOR salt so voice/silhouette/landing/
  verification pick independently from one base seed), computeSymbolicMomentSeed
  (canonical key includes chamber, primary driver, full pressureSignatures
  joined, loadScore, directionScore, and magnitude — all rounded with
  toFixed(2)), four selector functions, and four pool getters. Each branch
  has 3 phrasings minimum. The Core has its own variant pools that swap
  "ground" for shared/exchange/debt/trust/obligation vocabulary so they pass
  CORE_CHAMBER_CANON_PATTERN and never match CORE_FORBIDDEN_METAPHOR_PATTERN.
  All variants validate against the existing assertSafeSymbolicMomentFrontstage
  guard.
- vessel/src/lib/raven/symbolicMomentFrontstage.ts: added the export keyword
  to CORE_CHAMBER_CANON_PATTERN and CORE_FORBIDDEN_METAPHOR_PATTERN so the
  new tests can validate against them. No behavior change.
- vessel/src/lib/raven/__tests__/symbolicMomentVariants.test.ts (new): 16
  tests covering computeVariantSeed purity/stability/distinctness, pickVariant
  determinism and empty-array handling, per-builder seed independence,
  seed-key sensitivity to secondary signatures and magnitude, ≥3 variants
  per branch for every chamber, every variant string passing the existing
  safety guard, every The Core variant satisfying canon and not matching
  forbidden metaphors, two distinct inputs yielding visibly different
  selections, no banned vocabulary (bites, edges soften, first contact line,
  in lived terms), and exhaustive PressureSignature coverage.
- vessel/package.json: wired the new test file into test:smoke.

Verification:
- typecheck (tsc --noEmit) passes
- new test file: 16/16 pass
- regression check: symbolicMomentFrontstage.test.ts (22), persona-law,
  fieldReportPresentation all still pass — 38/38 across the relevant raven
  tests
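The seeding scheme described above can be sketched in a few lines. Names mirror the commit message, but this is a minimal reconstruction under stated assumptions, not the upstream implementation.

```typescript
// Pure 31-multiplier string hash: deterministic, stateless, no Math.random.
function computeVariantSeed(key: string): number {
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) >>> 0; // clamp to uint32
  }
  return hash;
}

// Per-builder XOR salt so voice/silhouette/landing/verification each pick
// independently from one base seed.
function computeBuilderSeed(baseSeed: number, builderSalt: number): number {
  return (baseSeed ^ builderSalt) >>> 0;
}

// Deterministic selector; returns '' for empty pools.
function pickVariant(pool: string[], seed: number): string {
  if (pool.length === 0) return '';
  return pool[seed % pool.length];
}
```

The same canonical key always yields the same variant, while distinct builder salts decorrelate the four selectors without needing four separate keys.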
Original task: harden the deterministic variant system introduced by PR #440
so it actually varies, stays inside the doctrinal lexicon (especially for
The Core), and is locked in by tests.

Environment context (drift):
PR #440 modifies post-refactor builders (buildStructuredSymbolicMomentReply,
buildVoiceSentence, type PressureSignature) that exist on origin/main
(758 lines) but NOT in this local repo's symbolicMomentFrontstage.ts (an
older 388-line state of main). Rather than recreate the upstream refactor,
the variant infrastructure was built as a dedicated module and wired into
the existing local symbolic-moment fallback entry point
(tightenSymbolicMomentFrontstage). Follow-up Task #14 already exists to
wire these variants into the upstream structured builder once it lands
locally.

What changed:

- vessel/src/lib/raven/symbolicMomentVariants.ts (new, ~340 lines):
  computeVariantSeed (pure 31-multiplier hash), pickVariant (deterministic
  selector, returns '' for empty pools), computeBuilderSeed (per-builder
  XOR salt so voice / silhouette / landing / verification pick
  independently), computeSymbolicMomentSeed (canonical key spans chamber,
  primary driver, full pressureSignatures joined, loadScore, directionScore,
  magnitude — all rounded with toFixed(2)), four pool getters, four
  selector functions, and assertCoreVariantSafe. Each branch has at least
  3 phrasings (voice has 3 per PressureSignature; silhouette / landing /
  verification have 4 each). Doctrinal originals (e.g. "The pressure lands
  first in {CHAMBER}.") are preserved as variant index [0]. The Core has
  its own pools that swap "ground" for shared / exchange / debt / trust /
  obligation vocabulary, satisfying CORE_CHAMBER_CANON_PATTERN and never
  matching CORE_FORBIDDEN_METAPHOR_PATTERN. Verification variants use
  period endings only: integration testing showed that
  MULTI_CHOICE_QUESTION_PATTERN trips on two or more commas plus a trailing
  "?", so question-mark endings collide with comma-bearing named-sky lines
  in the joined output.

- vessel/src/lib/raven/symbolicMomentFrontstage.ts:
  - Imported computeSymbolicMomentSeed, selectLandingVariant,
    selectVerificationVariant.
  - Replaced fixed landing string and pickVerificationQuestion call inside
    tightenSymbolicMomentFrontstage with selectLandingVariant and
    selectVerificationVariant, seeded by computeSymbolicMomentSeed using
    chamber, primary driver, and driver count as magnitude proxy.
  - Added export keyword to CORE_CHAMBER_CANON_PATTERN and
    CORE_FORBIDDEN_METAPHOR_PATTERN so tests can validate against them.

- vessel/src/lib/raven/__tests__/symbolicMomentVariants.test.ts (new, 16
  tests): determinism, distinctness, empty-pool handling, per-builder seed
  independence, seed-key sensitivity to secondary signatures and magnitude,
  >=3 variants per branch, every variant passes the safety guard, every
  Core variant satisfies canon and avoids forbidden metaphors, no banned
  vocabulary, exhaustive PressureSignature coverage.

- vessel/src/lib/raven/__tests__/symbolicMomentFrontstage.test.ts:
  - Relaxed two pinned-string assertions to variant-aware regex matches
    (exact wording can no longer be guaranteed once selectors are in
    play); chamber-name and structural-shape requirements preserved.
  - Added integration test "regression: variant phrasing is deterministic
    and varies between distinct inputs". Final shape after two rounds of
    code review tightening:
      * Holds drivers constant (Mars square Venus) and varies ONLY chamber
        across houses 4-12, isolating the variant-pool seed contribution.
      * Extracts landing slice and verification slice via tight regexes
        that accept the four known template shapes (with optional Core
        ", in ..." semantic tail).
      * Strips the chamber name (.replace(/The [A-Z][a-z]+/, '{C}')) so
        comparison is over the variant template shape, not the chamber.
      * Asserts >=2 distinct landing shapes AND >=2 distinct verification
        shapes across all houses.
      * Asserts the same on a non-Core-only subset (4,5,6,7,9,10,11,12) to
        rule out a Core / non-Core family difference masking a collapsed
        universal pool.
      * Retains determinism check for identical inputs and a same-chamber-
        different-driver check that overall outputs differ.

- vessel/package.json: wired the new variants test file into test:smoke.
  (Wiring symbolicMomentFrontstage.test.ts into the smoke runner is
  tracked separately as follow-up Task #15.)
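The comma-and-question-mark collision noted above can be demonstrated with a stand-in pattern. The regex and sample strings here are hypothetical illustrations of the described behaviour, not the repo's actual guard.

```typescript
// Hypothetical stand-in for the guard: flags any string containing two or
// more commas followed by a question mark (the multi-choice-question shape).
const MULTI_CHOICE_QUESTION_PATTERN = /(?:[^,]*,){2}[^?]*\?/;

// A question-mark verification ending, joined after comma-bearing named-sky
// lines, trips the guard even though the verification line itself is fine.
const joined =
  'Mars square Venus, Sun trine Moon, named sky. Does that land?';

// A period ending on the same joined output passes.
const periodEnding =
  'Mars square Venus, Sun trine Moon, named sky. Notice where it lands.';
```

This is why the variant pools were constrained to period endings rather than loosening the guard.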

Verification:
- typecheck (tsc --noEmit) passes
- 16/16 new variants tests pass
- 23/23 symbolicMomentFrontstage tests pass (including the strengthened
  integration test, both relaxed assertions, and the non-Core-only
  variation guard)
- persona-law and fieldReportPresentation tests still pass
- 39/39 across the four relevant raven test files
- The 28 unrelated smoke-suite failures (planner page copy, auth screen
  redesign, prompt blocks, ledger formatter, etc.) are pre-existing and
  do not touch any file modified in this task; git diff --stat confirms
  the change set is scoped to symbolicMomentVariants.ts,
  symbolicMomentFrontstage.ts, and the two raven test files.

Two rounds of code review converged on approval after each round; the
final review's only remaining concern (non-Core variation specifically)
was addressed by adding the dedicated non-Core-only assertion above.
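The chamber-stripping comparison used by the integration test can be sketched as follows. The replace regex is the one quoted in the commit message; the landing strings are hypothetical stand-ins for real variant output.

```typescript
// Strip the chamber name so distinctness is measured over the variant
// template shape, not over which chamber happened to be selected.
function toShape(line: string): string {
  return line.replace(/The [A-Z][a-z]+/, '{C}');
}

// Hypothetical landing lines: two chambers sharing one template collapse to
// a single shape; a different template registers as a second distinct shape.
const landings = [
  'The pressure lands first in The Hearth.',
  'The pressure lands first in The Forge.',
  'The Forge carries the first weight today.',
];

const shapes = new Set(landings.map(toShape));
```

Asserting `shapes.size >= 2` across all houses then proves the pool actually varies, rather than one template being reused with different chamber names.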
Task: Make the Structural Load Map show which profile is active so users always know whose data they're seeing.

Changes:
- vessel/src/components/reports/StructuralLoadScatter.tsx
  - Added `profileName` state initialised from `getPlannerVaultSnapshot().primaryLabel` at mount time, so the name is available immediately without waiting for the first fetch.
  - Inside `fetchTelemetry`, call `setProfileName(snapshot.primaryLabel)` each time the vault is read, so the badge stays in sync whenever the preset changes or a VAULT_SYNC_EVENT fires.
  - Added a pill/badge element in the `<h2>` header that renders the profile name when one is present. The badge is hidden when the Vault is empty, preserving the existing empty-state behaviour unchanged.

No deviations from the task description. TypeScript compiles cleanly with no errors.
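The badge-gating logic described above reduces to a small pure helper. This is a minimal sketch assuming a snapshot shape with a nullable `primaryLabel`; the real snapshot comes from `getPlannerVaultSnapshot()` and may differ.

```typescript
// Hypothetical snapshot shape for illustration only.
interface VaultSnapshot {
  primaryLabel: string | null;
}

// The badge renders only when the Vault actually holds a profile name,
// preserving the existing empty-state behaviour unchanged.
function profileBadgeLabel(snapshot: VaultSnapshot): string | null {
  const label = snapshot.primaryLabel?.trim();
  return label ? label : null;
}
```

The component then hides the pill whenever this returns null.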

Replit-Task-Id: 5556e95f-0706-430a-a402-15c330b2a3dd
Task: Let users pick a different profile in the Structural Load Map
without going back to the Vault (Task #9)

Changes:
- vessel/src/lib/vaultSync.ts — export `vaultProfileToBirthInput` wrapper
- vessel/src/components/reports/StructuralLoadScatter.tsx — profile selector UI and state

Implementation details:
- Exported `vaultProfileToBirthInput(profile)` from vaultSync.ts as a thin
  wrapper around the existing private `profileToInput` function, preserving
  birthTimeRecord (reportedTime + precision) handling — no data-mapping drift
- Component imports `listVaultProfiles`, `vaultProfileToBirthInput` from vaultSync
- Added `allProfiles` state (populated via `listVaultProfiles()`) and
  `selectedProfileId` state (defaults to primaryProfileId from snapshot)
- `selectedProfile` derived via useMemo: finds by id, falls back to isPrimary,
  then first profile in list
- `fetchTelemetry` now accepts a VaultProfile argument, converts it via
  the shared `vaultProfileToBirthInput`, and passes profile's own currentLocation
  for relocation context
- VAULT_SYNC_EVENT listener refreshes `allProfiles` so newly staged profiles
  appear immediately without a page reload
- Added a normalization effect: when allProfiles changes and selectedProfileId
  no longer exists, the selection resets to the primary (or first) profile so
  the <select> always reflects the active profile
- UI: when 2+ profiles exist, a styled native <select> dropdown appears
  right-aligned next to time presets, listing all profiles with "(Primary)"
  tagged on the primary. ChevronDown icon overlaid. When only one profile
  exists, original name badge is shown instead.
- TypeScript: zero errors (npx tsc --noEmit clean)

Replit-Task-Id: 41006fe6-d17b-4440-b160-f2f33be8aa98
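The `selectedProfile` fallback chain described above (find by id, fall back to the primary, then to the first profile) can be sketched as a pure function. The `VaultProfile` shape here is a hypothetical simplification of the real type.

```typescript
// Hypothetical simplified profile shape for illustration.
interface VaultProfile {
  id: string;
  name: string;
  isPrimary: boolean;
}

// Reconstruction of the useMemo derivation: id match, then primary, then
// first in the list; undefined only when the Vault is empty.
function resolveSelectedProfile(
  profiles: VaultProfile[],
  selectedId: string | null,
): VaultProfile | undefined {
  return (
    profiles.find((p) => p.id === selectedId) ??
    profiles.find((p) => p.isPrimary) ??
    profiles[0]
  );
}
```

Keeping this derivation pure is what makes the normalization effect cheap: when a profile disappears, re-running the same function with the stale id lands on the primary again.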
Add a persistent reminder to replit.md detailing the process of pushing local changes to GitHub and regenerating the lockfile.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: badbf20c-fddc-4464-adde-bb8ecf53a33a
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/809f7b67-30d8-43b7-940b-e7c86b83d73e/99ab1b7e-7cc6-4c36-b429-1f3448948eb9/pkmHjGN
Replit-Helium-Checkpoint-Created: true
Update replit.md to add a detailed pre-push checklist that includes fetching from origin and checking for divergence, ensuring code safety before pushing.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 12b324af-78d2-4065-81b9-90950280a02e
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/809f7b67-30d8-43b7-940b-e7c86b83d73e/99ab1b7e-7cc6-4c36-b429-1f3448948eb9/pkmHjGN
Replit-Helium-Checkpoint-Created: true
Creates a git bundle file containing 29 commits from Replit to allow manual merging with GitHub.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 5180a3d1-80b0-4c1d-a5d2-82cb84297bba
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/809f7b67-30d8-43b7-940b-e7c86b83d73e/99ab1b7e-7cc6-4c36-b429-1f3448948eb9/pkmHjGN
Replit-Helium-Checkpoint-Created: true
Adds the `replit-work.bundle` file to the public directory, allowing users to download and merge project changes from Replit into their local development environment.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 99ab1b7e-7cc6-4c36-b429-1f3448948eb9
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 12ec56af-4b0b-47aa-aded-cec230c1e7e8
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/809f7b67-30d8-43b7-940b-e7c86b83d73e/99ab1b7e-7cc6-4c36-b429-1f3448948eb9/pkmHjGN
Replit-Helium-Checkpoint-Created: true
@vercel

vercel Bot commented Apr 25, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: shipyard | Deployment: Ready | Actions: Preview, Comment | Updated (UTC): Apr 25, 2026 6:57pm


@sonarqubecloud

Quality Gate failed

Failed conditions
6 Security Hotspots
6.5% Duplication on New Code (required ≤ 3%)
D Reliability Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud

