
feat(brainbar): restore dashboard flow UI on Phase B main#251

Merged
EtanHey merged 8 commits into main from fix/brainbar-dashboard-flow
May 2, 2026

Conversation

@EtanHey (Owner) commented Apr 18, 2026

Summary

Verification

  • git diff --check origin/main...HEAD
  • pre-push hook on fix/brainbar-dashboard-flow: 1811 passed, 9 skipped, 75 deselected, 1 xfailed; MCP registration 3 passed; isolated eval/hook routing 32 passed; bun 1 pass; test_fts5_determinism.sh PASS

Notes

  • First push attempt exposed live DB lock contention from com.brainlayer.enrichment/com.brainlayer.watch; the exact tests that had failed passed after pausing those write-heavy launch agents (3 passed in 21.60s), and the full pre-push hook then passed.
  • W2 will add the Swift BrainBar FTS parity tests/implementation on top of this branch before final merge.

Summary by CodeRabbit

  • New Features

    • Redesigned dashboard: scrollable cards, dual-lane flow summary, detachable dashboard panel, optional queue rail; injection feed snapshotting with burst grouping
    • Agent activity monitoring showing running AI CLI presence
    • Deterministic graph atlas and revamped knowledge-graph sidebar
    • Improved search fallbacks including trigram-based matching
  • UI / UX

    • Menu-bar popover with quick-action buttons; conversation overlay, unified card styling, command-bar readiness; time-based status badges and dark/light-aware graph visuals
  • Tests

    • Extensive new unit tests across dashboard, search/FTS, injections, graph, panel controller, and agent monitoring
  • Documentation

    • New implementation plans for dashboard flow and hybrid UI

Note

Restore BrainBar dashboard flow UI with knowledge graph atlas, injection feed, and agent presence

  • Replaces the menu bar extra window with a floating NSPanel managed by BrainBarDashboardPanelController, which remembers size/position, appears on all spaces, and hosts the root dashboard view
  • Rebuilds BrainBarDashboardView into a multi-card layout with separate write/enrichment pulse lanes, a queue rail, agent presence strip, sparkline cards, and a collapsible diagnostics section driven by a new DashboardFlowSummary model
  • Overhauls InjectionFeedView into a dashboard with burst grouping, activity ribbon, side-rail session/token metrics, and an overlay conversation viewer via the new InjectionPresentation snapshot pipeline
  • Adds trigram FTS5 support to BrainDatabase with schema migration, trigger-based sync, conditional startup rebuild/backfill, KG alias query expansion, and exact chunk-id short-circuit lookup
  • Expands T1_T2_SOURCES in enrichment_tiers.py to include codex_cli, cursor_cli, and gemini_cli, updating session discovery SQL and tier source filters accordingly
  • Introduces AgentActivityMonitor to sample live agent CLI processes and publish per-family presence counts to the dashboard via StatsCollector
  • Rebuilds the knowledge graph canvas with importance-based altitude filtering, entity-type region backdrops, seeded initial positions via KGAtlasLayout, and a redesigned sidebar; dark/light mode support added throughout
  • Risk: trigram FTS backfill is skipped at startup when chunk count exceeds the threshold, leaving the trigram index empty until a manual rebuild

Macroscope summarized b8588f4.


Note

Medium Risk
Moderate risk because it changes BrainBar’s primary windowing behavior and adds new SQLite schema/migration + FTS search paths (including conditional trigram index rebuild/backfill) that could affect startup time and query correctness on existing databases.

Overview
BrainBar’s menu-bar surface is reworked: the MenuBarExtra switches from a window-style surface to a menu with explicit actions (open dashboard/search/capture), and the main UI now opens in a floating NSPanel via BrainBarDashboardPanelController (with autosaved sizing/position and escape-to-dismiss).

Dashboard & tabs are redesigned for “flow” visibility: BrainBarWindowRootView refactors tab rendering/lifecycle (lazy activation + shared command bar VM), and BrainBarDashboardView becomes a scrollable, card-based layout with separate write/enrichment lane charts, queue/diagnostics sections, and a new agent-presence strip fed by AgentActivityMonitor.

Search and schema/migrations expand: BrainDatabase adds a trigram FTS table (chunks_fts_trigram) with triggers, analysis, and startup repair logic that may skip synchronous backfill on large desynced DBs; search now short-circuits exact chunk-id matches, expands queries via lexical/KG alias replacements (new kg_entity_aliases table + kg_entities.importance column), and falls back to trigram matches with deduping.

Graph and injection UIs are overhauled: KGCanvasView gains an atlas-style presentation (importance filtering, region backdrops, deterministic seeding via KGAtlasLayout, dark/light rendering tweaks, and updated sidebar styling), and InjectionFeedView becomes a richer, snapshot-driven feed (burst grouping, activity ribbon, side-rail metrics, and overlay-based conversation drilldown).

Reviewed by Cursor Bugbot for commit b8588f4. Bugbot is set up for automated code reviews on this repo.

Stability addendum

  • Added commit 59b4025d30bc59ff9917973b4537a2f365df67f1: fix(brainbar): skip synchronous trigram backfill on large desynced DBs.
  • Root cause: BrainBar startup could synchronously rebuild chunks_fts_trigram on the live ~360K-chunk database before /tmp/brainbar.sock opened, cascading lock pressure into enrichment and MCP clients.
  • Runtime guard: synchronousTrigramBackfillChunkLimit = 10_000; large desynced trigram tables now keep schema/triggers current and defer heavy backfill instead of blocking startup.
  • Verification: RED→GREEN testLargeTrigramDesyncDoesNotForceSynchronousStartupRebuild; swift test --filter DatabaseTests; bash scripts/run_tests.sh; swift test.
  • Follow-ups tracked in docs.local/plans/2026-05-02-brainlayer-reliability-hardening/: PR B explicit trigram maintenance command, PR D boot health contract.
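The startup guard described above can be sketched as a simple decision function. The real logic is Swift inside BrainDatabase; this Python version is illustrative only, and the function name and return labels here are hypothetical — only the 10,000-chunk limit comes from the PR.

```python
# Sketch of the trigram startup repair decision (illustrative; the production
# code is Swift in BrainDatabase). Function and label names are hypothetical.

SYNCHRONOUS_TRIGRAM_BACKFILL_CHUNK_LIMIT = 10_000  # value from the PR

def trigram_startup_action(chunk_count: int, trigram_row_count: int) -> str:
    """Decide what to do with the trigram FTS table at startup.

    Returns one of:
      - "in_sync":        row counts match, nothing to do
      - "backfill_now":   small desync, rebuild synchronously before serving
      - "defer_backfill": large desync, keep schema/triggers current but skip
                          the heavy backfill so startup is not blocked
    """
    if trigram_row_count == chunk_count:
        return "in_sync"
    if chunk_count <= SYNCHRONOUS_TRIGRAM_BACKFILL_CHUNK_LIMIT:
        return "backfill_now"
    return "defer_backfill"
```

On the live ~360K-chunk database from the incident above, this heuristic defers the backfill instead of blocking /tmp/brainbar.sock from opening.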

Command-bar readiness fix for dashboard NSPanel surface

  • Added commit b8588f4f2f31f2650a6a663a25c50789c25f0717: fix(brainbar): restore command bar readiness on already-set runtime database.
  • Root cause: runtime.database could be installed while the NSPanel/root view was hidden, leaving the command bar local view-model state nil and rendering Warming memory… until a later activation refreshed SwiftUI.
  • Fix shape: command bar readiness now derives through a stable provider from the current runtime.database, covering already-set values and later readiness changes without relying only on a missed subscription edge.
  • Verification: RED→GREEN testCommandBarBecomesReadyWhenDatabaseWasInstalledWhilePanelWasHidden; swift test --filter BrainBarDashboardPanelControllerTests; full swift test (316/316); bash scripts/run_tests.sh pre-push gate PASS.
  • Scope: first-letter search latency and graph tab stalls are deferred to polish-pass Phase 1 / reliability-hardening Phase 2, not folded into this lifecycle fix.




coderabbitai Bot commented Apr 18, 2026

📝 Walkthrough

Adds a dashboard NSPanel and menu popover; refactors root view/tab rendering and dashboard into scrollable cards; adds trigram FTS with alias expansion and exact-id fast path; extends dashboard stats with epoch timestamps and time-window semantics; introduces deterministic KG atlas/presentation; expands enrichment CLI sources; adds agent-activity sampling and many tests.

Changes

App & Windowing

Layer / File(s) Summary
Core API / Types
brain-bar/Sources/BrainBar/BrainBarWindowState.swift
BrainBarLivePresentation.derive now accepts now: Date and uses last-event timestamps for live/badge text.
Controller / Panel
brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift
New BrainBarDashboardPanel and BrainBarDashboardPanelController (NSPanel-backed) with toggle, show, dismiss, autosave, default/min sizes.
App Wiring
brain-bar/Sources/BrainBar/BrainBarApp.swift
AppDelegate gains dashboardPanel property and routes menu-bar behavior to dashboard panel; menu UI now shows a .menu popover that can open dashboard/search/quick-capture.
Root View
brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
init(..., managesWindowFrame: Bool = true) added; tab rendering switched to layered ZStack with activation flags; opacity/window-attachment gated by managesWindowFrame; quick-action handling adjusted and command bar VM lazily cached.
Tests
brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift, BrainBarWindowStateTests.swift
Panel controller tests added; window-state/live-presentation tests updated to inject deterministic now.

Dashboard UI & Metrics

Layer / File(s) Summary
Data Shape / Formatting
brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift
Adds rateString, lastEventString, windowLabel/shortWindowLabel; lastCompletionString now accepts optional lastEventAt and now.
Flow Summary Model
brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift
PipelineIndicators.derive/PipelineState.derive gain now; new DashboardFlowSummary, lane and queue models encapsulate derived texts, rates, and statuses.
Collector / State
brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift, StatusPopoverView.swift
StatsCollector adds agentActivity and default window/bucket constants; StatusPopover uses DashboardFlowSummary (sparkline source -> summary.enrichment.values).
UI Composition
brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift, brain-bar/Sources/BrainBar/Views/Components/ChunkConversationSheet.swift
Dashboard rebuilt into scrollable, componentized cards (overview, flow lanes, sparklines, agent strip, optional queue rail, diagnostics); ChunkConversationSheet gains onClose and ChunkConversationOverlay.
Visuals / Sparkline
brain-bar/Sources/BrainBar/.../BrainBarHeroSparkline.swift, .../ring
Sparkline now renders its own NSImage from values and accentColor; ring/endpoint drawing bridges NSColor → Color; live pulse split into independent write vs enrichment revisions.

Agent Activity

Layer / File(s) Summary
New Monitor
brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift
New AgentActivityMonitor that snapshots /bin/ps, parses processes, filters noise, detects AgentFamily matches, and returns AgentActivitySnapshot/AgentPresence.
Integration
brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift
StatsCollector now accepts AgentActivityMonitor and publishes agentActivity, sampling it each refresh.
Tests
brain-bar/Tests/BrainBarTests/AgentActivityMonitorTests.swift
Unit tests validating parsing, counts, summary text, and snapshot-run behavior.

Injection Feed & Presentation

Layer / File(s) Summary
Presentation Helper
brain-bar/Sources/BrainBar/InjectionPresentation.swift
New InjectionPresentation builds Snapshot from InjectionEvents: timestamp parsing, windowing, ribbon buckets, burst grouping, session summaries, and aggregated counts.
View Refactor
brain-bar/Sources/BrainBar/InjectionFeedView.swift
Feed refactored to snapshot-backed, burst-grouped UI with header, time-bucket ribbon, burst cards, expandable hits, right-side rail, and ChunkConversationOverlay overlay presentation.
Store Behavior
brain-bar/Sources/BrainBar/InjectionStore.swift
start(sessionID:limit:) avoids forcing refresh when parameters unchanged.
Tests
brain-bar/Tests/BrainBarTests/InjectionPresentationTests.swift
Tests for snapshot aggregation, bursts, ribbon buckets, and filtering behavior.

Knowledge Graph — Layout & Presentation

Layer / File(s) Summary
Layout Seeding
brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift
New KGAtlasLayout.seededNodes deterministically seeds node positions grouped by entityType, sorted by importance/name, placing nodes in ring layouts around group centers.
Presentation Snapshot
brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift
New KGAtlasPresentation.snapshot filters by minimumImportance/selection, groups regions in deterministic order, computes visibleNodes/visibleEdges, and resolves selectedRegion.
Canvas Integration
brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift, KGViewModel.swift
Canvas driven by KGAtlasPresentation.Snapshot; seeded nodes computed once and reseeded on size changes; hit-testing limited to visible nodes; viewModel uses seeded nodes during load.
Tests
brain-bar/Tests/BrainBarTests/KGAtlasLayoutTests.swift, KGAtlasPresentationTests.swift
Deterministic layout and snapshot tests added.

Knowledge Graph — Views & Theming

Layer / File(s) Summary
Rendering Theme
brain-bar/Sources/BrainBar/KnowledgeGraph/KGNodeView.swift, KGEdgeView.swift
Node and edge renderers read environment.colorScheme to select colors for dark/light modes and adjust non-selected styling.
Sidebar UI
brain-bar/Sources/BrainBar/KnowledgeGraph/KGSidebarView.swift
Sidebar redesigned into card/section layout with chips, synopsis, chunk cards, consistent card styling, and empty-state view.
Tests
brain-bar/Tests/BrainBarTests/KnowledgeGraphTests.swift
DB-backed KG importance sourcing test added.

Database & Search

Layer / File(s) Summary
Schema / Migrations
brain-bar/Sources/BrainBar/BrainDatabase.swift
Adds kg_entities.importance (DEFAULT 0.5), new kg_entity_aliases table + indexes, trigram FTS table chunks_fts_trigram, and associated triggers; migration wiring to create/rebuild trigram table.
Stats / Dashboard
brain-bar/Sources/BrainBar/BrainDatabase.swift
dashboardStats extended with activityWindowMinutes, bucketCount, liveWindowMinutes, lastWriteAt, lastEnrichedAt; dashboardLastEvents() added; recent bucket computations use epoch-normalized SQL.
Search Flow
brain-bar/Sources/BrainBar/BrainDatabase.swift
Search now trims/expands queries (lexical replacements + KG alias expansion), short-circuits exact chunk_id lookups, runs chunks_fts then falls back to chunks_fts_trigram if needed, centralizes FTS via runFTSSearch(...), deduplicates by chunk_id, and tracks maxRowID for unread watermark advancement.
FTS Maintenance
brain-bar/Sources/BrainBar/BrainDatabase.swift
Trigram FTS maintenance: rebuildTrigramFTSTableIfNeeded, backfill logic, and startup repair-decision heuristics.
Tests
brain-bar/Tests/BrainBarTests/DatabaseTests.swift
New tests for trigram startup decision and multiple FTS/search behaviors (blank queries, exact-id short-circuit, project scoping, trigram substring, alias expansion, dot-normalization).

Backend Enrichment & Session Discovery (brainlayer)

Layer / File(s) Summary
Constants
src/brainlayer/pipeline/enrichment_tiers.py
T1_T2_SOURCES expanded from {"claude_code"} to include {"claude_code","codex_cli","cursor_cli","gemini_cli"}.
Session Discovery SQL
src/brainlayer/pipeline/session_enrichment.py
list_sessions_for_enrichment SQL now filters chunks.source using a placeholder-driven IN (...) list based on T1_T2_SOURCES.
Tests
tests/test_enrichment_tiers_agent_sources.py, tests/test_session_enrichment.py
Tests added to validate tier classification and session discovery for the new CLI sources.

Tests, Docs & Miscellaneous

Layer / File(s) Summary
Tests
brain-bar/Tests/BrainBarTests/*, tests/*
Broad suite of new/updated tests: AgentActivityMonitor, InjectionPresentation, KGAtlasLayout/Presentation, Dashboard/Flow/Pipeline tests, Database search/FTS tests, enrichment tier/session tests, panel controller tests, and window-state tests; many tests inject deterministic now.
Docs / Plans
docs/plans/2026-04-18-*.md
Adds three planning docs: dashboard flow plan, injections+graph hybrid plan, and collaboration plan/spec.
Small Behavior
brain-bar/Sources/BrainBar/InjectionStore.swift
start(sessionID:limit:) no longer forces refresh when params unchanged.

Sequence Diagram(s)

(Skipped — changes span multiple independent flows and features rather than a single concentrated multi-component sequence that benefits from one diagram.)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

🐰 I counted agents, seeded nodes by hand,

Panels pop, cards scroll across the land,
Trigrams hum, sparklines pulse in tune,
Bursts and ribbons hop beneath the moon,
Tests all pass — the dashboard feels just grand!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 4.44% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Title check ✅ Passed The PR title 'feat(brainbar): restore dashboard flow UI on Phase B main' directly and clearly summarizes the main change—restoring the BrainBar dashboard flow UI to the main branch, which is the primary objective of this substantial pull request.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c23bd8b9d7

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".


func refresh(force: Bool = false) {
let nextDaemon = daemonMonitor.sample()
agentActivity = agentActivityMonitor.sample()
P2 Badge Move agent snapshot sampling off the main actor

StatsCollector is @MainActor and refresh() runs every second, but this line invokes AgentActivityMonitor.sample(), which synchronously shells out to /bin/ps and waits for completion in runSnapshotCommand. On systems with large process tables, this blocks the UI thread on each poll and can cause visible stutter or delayed interaction; sampling should happen off-main and publish back to the actor.
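The fix this comment asks for — sample off the main actor, publish back on it — can be sketched with Python asyncio in place of Swift concurrency. This is an illustrative analogue, not the project's code: `sample_processes` stands in for `AgentActivityMonitor.sample()`, and the event loop plays the role of the main actor.

```python
# Illustrative analogue of moving blocking process sampling off the UI actor.
# Names are hypothetical; the production code is Swift (StatsCollector).
import asyncio
import subprocess

def sample_processes() -> str:
    # Blocking call, analogous to shelling out to /bin/ps and waiting.
    return subprocess.run(
        ["echo", "ps-snapshot"], capture_output=True, text=True
    ).stdout.strip()

class StatsCollector:
    def __init__(self) -> None:
        self.agent_activity: str | None = None

    async def refresh(self) -> None:
        # to_thread keeps the event loop (the "main actor") responsive while
        # the subprocess call blocks a worker thread; only the cheap state
        # assignment runs back on the loop.
        snapshot = await asyncio.to_thread(sample_processes)
        self.agent_activity = snapshot

collector = StatsCollector()
asyncio.run(collector.refresh())
```

The Swift shape would be the same: do the `/bin/ps` snapshot in a detached task and hop back to the `@MainActor` only to assign the published property.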


Comment thread brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
Comment thread brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift
Comment on lines +36 to +38
defer { self.simulationTask = nil }

while self.timerActive && !Task.isCancelled {

🟢 Low KnowledgeGraph/KGSimulationController.swift:36

defer { self.simulationTask = nil } at line 36 nils simulationTask unconditionally when the task completes. If stop() is called (cancelling task A and niling the property) followed immediately by start() (creating task B), task A's defer block still executes and sets simulationTask = nil, discarding the reference to task B. Subsequent stop() calls then fail to cancel task B. Consider removing the defer block and letting stop() exclusively manage the property lifecycle, or capture task identity in the defer to avoid overwriting a newer task.

-        simulationTask = Task { @MainActor [weak self] in
-            guard let self else { return }
-            defer { self.simulationTask = nil }
Evidence trail:
brain-bar/Sources/BrainBar/KnowledgeGraph/KGSimulationController.swift (REVIEWED_COMMIT):
- Line 34-45: `start()` creates Task and assigns to `simulationTask`
- Line 36: `defer { self.simulationTask = nil }` inside the Task closure
- Line 38: `await self.sleep(...)` - suspension point where Task A yields control
- Lines 48-51: `stop()` cancels task and nils `simulationTask`
- The defer block does not capture or check task identity, so it will nil any value in `simulationTask` including a newer Task B created after Task A was cancelled.
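The identity-checked cleanup this comment suggests ("capture task identity in the defer") can be sketched in Python asyncio; the structure mirrors the Swift Task/defer pattern, but all names here are illustrative, not the project's.

```python
# Illustrative analogue of the suggested fix: a finished task only clears the
# controller's reference if that reference still points at *itself*, so a
# cancelled task A cannot discard a newer task B. Names are hypothetical.
import asyncio

class SimulationController:
    def __init__(self) -> None:
        self.simulation_task: asyncio.Task | None = None

    def start(self) -> None:
        task: asyncio.Task | None = None

        async def run() -> None:
            try:
                while True:
                    await asyncio.sleep(0.01)
            finally:
                # Identity check: the analogue of capturing task identity in
                # the Swift defer block.
                if self.simulation_task is task:
                    self.simulation_task = None

        task = asyncio.get_running_loop().create_task(run())
        self.simulation_task = task

    def stop(self) -> None:
        if self.simulation_task is not None:
            self.simulation_task.cancel()
            self.simulation_task = None

async def main() -> bool:
    c = SimulationController()
    c.start()
    c.stop()                   # cancels task A and nils the property
    c.start()                  # immediately creates task B
    await asyncio.sleep(0.05)  # let task A's cleanup run
    return c.simulation_task is not None  # B survives A's cleanup

result = asyncio.run(main())
```

Without the identity check, task A's cleanup would nil the property and orphan task B, exactly the lifecycle bug described above.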

Comment thread brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift
EtanHey added 3 commits May 1, 2026 17:35
…ments dominant

- Move agent presence strip below chart cards so Writes/Enrichments sit
  directly under the hero card as the visually dominant section.
- Demote the queue rail into a compact single-line header (icon + status
  pill) with only Backlog and Coverage metrics; drop the redundant
  Direction metric and the verbose secondary description.
- Preserve the disclosure-collapsed runtime/details panel so the dashboard
  stays calm by default and scales gracefully on narrow windows.

Runs swift test --filter BrainBarUXLogicTests (16 tests, all passing).
@EtanHey force-pushed the fix/brainbar-dashboard-flow branch from c23bd8b to 2f28c06 on May 1, 2026 14:55


@EtanHey changed the title from "fix(brainbar): dashboard flow + post-merge deploy check" to "feat(brainbar): restore dashboard flow UI on Phase B main" on May 1, 2026
accentColor: livePresentation.accentColor
)
private var flowSummary: DashboardFlowSummary {
DashboardFlowSummary.derive(daemon: collector.daemon, stats: collector.stats)

Computed flowSummary recalculated many times per render

Medium Severity

The flowSummary computed property calls DashboardFlowSummary.derive(daemon:stats:) on every access, and it's accessed dozens of times during a single body evaluation — from overviewCard, overviewNarrative, overviewBadgeRow, overviewCaption, writesCard, enrichmentsCard, queueRail, and diagnostics. Each access triggers a full re-derive including status computation and string formatting. Storing the result in a local let at the top of body would eliminate the redundant work.


Reviewed by Cursor Bugbot for commit 2f28c06.
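The cost pattern flagged above — a computed property that re-derives on every access during one render — is language-agnostic, so here is a minimal Python sketch of the suggested fix (derive once per render pass into a local). All names are illustrative; the real code is SwiftUI and `DashboardFlowSummary.derive`.

```python
# Illustrative sketch: derive the flow summary once per render pass instead of
# on every property access. Names are hypothetical, not the project's API.
derive_calls = 0

def derive_flow_summary(stats: dict) -> dict:
    """Stands in for DashboardFlowSummary.derive(daemon:stats:)."""
    global derive_calls
    derive_calls += 1
    return {"backlog": stats["pending"], "status": "ok"}

def render_dashboard(stats: dict) -> list[str]:
    # One local derivation shared by every card, instead of a computed
    # property that re-derives for overviewCard, writesCard, queueRail, etc.
    summary = derive_flow_summary(stats)
    return [
        f"overview: {summary['status']}",
        f"writes: {summary['status']}",
        f"queue backlog: {summary['backlog']}",
    ]

cards = render_dashboard({"pending": 3})
```

The SwiftUI equivalent is the reviewer's suggestion: bind `let summary = flowSummary` once at the top of `body` and pass it down.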

let indexInRing = offset - (ring * ring)
let countInRing = max(ring * 6, 1)
let angle = (Double(indexInRing) / Double(countInRing)) * Double.pi * 2
let radius = CGFloat(42 + ring * 30)

Atlas ring layout allows more nodes than positions

Low Severity

The ring formula assigns offsets 1–8 to ring 1 (since ceil((sqrt(9)-1)/2) still equals 1), but countInRing for ring 1 is 1 * 6 = 6. Nodes at offsets 7 and 8 get indexInRing values of 6 and 7, producing angles ≥ 2π that wrap around and overlap with earlier nodes in the same ring, causing visual collisions in the knowledge atlas.


Reviewed by Cursor Bugbot for commit 2f28c06.
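The corrected ring math (rings of capacity 6·r around one center cell, so cumulative capacity 1 + 3·r·(r+1)) can be checked numerically. This is an illustrative Python sketch of the fix, not the Swift `KGAtlasLayout` code; function names are hypothetical.

```python
# Illustrative check of the corrected hex-style ring assignment: ring r holds
# max(6*r, 1) nodes, so the first overflow ring starts at offset 7 and no
# angle ever reaches 2*pi. Names are hypothetical sketches of KGAtlasLayout.
import math

def total_cells_up_to_ring(r: int) -> int:
    """Cells in rings 0..r: 1 center cell plus 6*k per ring k."""
    return 1 + 3 * r * (r + 1)

def ring_position(offset: int) -> tuple[int, int, int, float]:
    """Return (ring, index_in_ring, count_in_ring, angle_radians)."""
    ring = 0
    while offset >= total_cells_up_to_ring(ring):
        ring += 1
    prev_total = total_cells_up_to_ring(ring - 1) if ring > 0 else 0
    index_in_ring = offset - prev_total
    count_in_ring = max(6 * ring, 1)
    angle = (index_in_ring / count_in_ring) * 2 * math.pi
    return ring, index_in_ring, count_in_ring, angle
```

With this capacity function, offsets 1–6 fill ring 1 exactly and offset 7 opens ring 2, so the wrap-around collision described above cannot occur.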


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 2f28c06161


WHEN \(column) IS NULL OR TRIM(\(column)) = '' THEN NULL
WHEN substr(\(column), -1) = 'Z' THEN unixepoch(\(column))
WHEN substr(\(column), -6, 1) IN ('+', '-') THEN unixepoch(\(column))
WHEN instr(\(column), 'T') > 0 THEN unixepoch(\(column), 'utc')

P1 Badge Parse timezone-naive timestamps without applying utc

The new instr(...,'T') branch converts naive ISO timestamps with unixepoch(column, 'utc'), but in SQLite the utc modifier assumes the input is local time and shifts it by the machine timezone. On non-UTC hosts, legacy/naive values like 2026-04-18T09:58:00 are moved by several hours, which mis-buckets dashboard activity and can make lastWriteAt/lastEnrichedAt appear in the future (causing false “live now” status via eventIsLive).



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift (1)

99-115: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don’t silently keep the last healthy stats after a database read failure.

When refresh(force: false) throws, the collector reuses the previous stats and still derives state from them. That can leave the dashboard showing a healthy pipeline even though every current DB read is failing. Surface the failure explicitly or force a degraded/unavailable state instead of masking it with stale data. Based on learnings: do not allow BrainBar database failures to degrade into silent defaults because that hides real startup/runtime issues.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@brain-bar/Sources/BrainBar/BrainBarWindowState.swift`:
- Around line 57-74: The status copy in derive(stats:) hardcodes "60s" instead
of using the new live-window metadata; update the BrainBarLivePresentation
initializers to interpolate stats.liveWindowMinutes into the badge/status text
(e.g., replace "Enrichments in the last 60s" and "No enrichments in the last
60s" with strings that use stats.liveWindowMinutes) and keep using
DashboardMetricFormatter.lastEventString for badgeText so the displayed window
matches the logic that checks stats.eventIsLive.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 44-45: The default for activity window is still 30 minutes in the
BrainDatabase initializer signature (activityWindowMinutes: Int = 30); update
that default to 60 so callers that omit the parameter get a 60-minute dashboard
window, and make the same change for the other matching occurrence of the
initializer/parameter elsewhere in the file (the second activityWindowMinutes
default at the noted location).

In `@brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift`:
- Around line 123-137: The child process can deadlock because stderr is set to a
separate Pipe that is never read; change the code to redirect stderr into the
same Pipe used for stdout (replace the separate Pipe assignment for
process.standardError with process.standardError = output) so both streams are
read via output.fileHandleForReading before calling process.waitUntilExit()
(refer to the process, output, and fileHandleForReading symbols and ensure
process.standardError is not left as an unread Pipe).

In `@brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift`:
- Around line 84-86: The refresh(force:) method is sampling
daemonMonitor.sample() and agentActivityMonitor.sample() on the UI actor causing
jank; move the potentially blocking sampling off the `@MainActor` by performing
the calls in a background task (e.g., Task.detached or DispatchQueue.global) and
then marshal only the state assignment back to the MainActor. Concretely: in
refresh(force:) call Task.detached { let next = daemonMonitor.sample(); let
activity = agentActivityMonitor.sample(); await MainActor.run { self.nextDaemon
= next; self.agentActivity = activity } } (use the actual property names used in
this file) so sampling happens off-main and only the minimal UI updates run on
the main actor.

In `@brain-bar/Sources/BrainBar/InjectionFeedView.swift`:
- Around line 220-223: The Button handlers currently swallow errors by using
try? when calling store.expandedConversation(chunkID:), so database/read
failures become silent no-ops; change both occurrences (the Button action around
selectedConversation = try? store.expandedConversation(...) and the similar code
at the 257-258 site) to perform a do/catch: call try
store.expandedConversation(chunkID:) inside do, assign selectedConversation on
success, and in catch set an explicit error state (e.g., set a `@State`
errorMessage / showError flag or update an error enum) so the UI can surface a
visible failure (alert, overlay, or error view) instead of failing silently.
Ensure you reference the same selectedConversation and
store.expandedConversation(chunkID:) symbols so the handlers report the caught
error details to the UI.

In `@brain-bar/Sources/BrainBar/InjectionPresentation.swift`:
- Around line 84-95: parseDate is failing to parse non-fractional SQLite
timestamps because sqliteTimestampFormatter() only matches "yyyy-MM-dd
HH:mm:ss.SSSSSS" while BrainDatabase.sqliteDateFormatter emits "yyyy-MM-dd
HH:mm:ss"; add a new DateFormatter (e.g., sqliteNonFractionalFormatter or extend
sqliteTimestampFormatter to accept optional fractional seconds) that uses the
format "yyyy-MM-dd HH:mm:ss" and update parseDate to try this formatter (place
it before/after the fractional one as appropriate) so non-fractional sqlite
timestamps are parsed correctly; reference functions: parseDate(_:) and
sqliteTimestampFormatter().

In `@brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift`:
- Around line 57-60: The ring calculation is wrong (it keeps offsets 7 and 8 in
ring 1) causing overlapping nodes; replace the current ring/index math
(variables ring, indexInRing, countInRing, angle) with logic that computes the
smallest ring r such that offset < totalCellsUpToRing(r) where
totalCellsUpToRing(r) = 1 + 3*r*(r+1), then set indexInRing = offset -
totalCellsUpToRing(r-1) (with totalCellsUpToRing(-1)=0) and countInRing = max(6
* r, 1); keep angle = (Double(indexInRing) / Double(countInRing)) * 2*Double.pi.
This guarantees the first overflow ring starts at offset 7 and prevents angle
reuse/overlap.
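The corrected hex-ring arithmetic checks out in isolation; a Python sketch of the formula described above (function names illustrative):

```python
import math

def total_cells_up_to_ring(r: int) -> int:
    # Cumulative hex-grid cell count: 1 center cell plus 6*k cells in ring k,
    # so total(r) = 1 + 3*r*(r+1); total(-1) is defined as 0.
    if r < 0:
        return 0
    return 1 + 3 * r * (r + 1)

def ring_position(offset: int):
    # Smallest ring r such that offset < total_cells_up_to_ring(r).
    r = 0
    while offset >= total_cells_up_to_ring(r):
        r += 1
    index_in_ring = offset - total_cells_up_to_ring(r - 1)
    count_in_ring = max(6 * r, 1)
    angle = (index_in_ring / count_in_ring) * 2 * math.pi
    return r, index_in_ring, count_in_ring, angle
```

With this math, offsets 1-6 fill ring 1 and offset 7 is the first slot of ring 2 (index 0 of 12), so no two offsets in the same ring share an angle.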

In `@brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift`:
- Around line 32-56: The current grouping builds regions only from
orderedEntityTypes so any nodes whose entityType is not listed there get dropped
from the Snapshot; update the construction of regions (the use of grouped and
orderedEntityTypes) to also include Region entries for any remaining
grouped.keys not present in orderedEntityTypes (preserving the same node sort
logic and using title(for:) to generate the title), then ensure
Snapshot.visibleNodes and selectedRegion logic use the complete regions array;
mirror the same change in the other occurrence around lines 73-82 where regions
are built.
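The "don't drop unlisted entity types" grouping can be sketched as a small Python function (dict-shaped nodes and the sort key are stand-ins for the Swift types):

```python
def build_regions(nodes, ordered_entity_types):
    """Group nodes by entity type, keeping the canonical order for known
    types and appending regions for leftover types so no node is dropped."""
    grouped = {}
    for node in nodes:
        grouped.setdefault(node["entityType"], []).append(node)
    # Known types first, in their canonical order...
    region_types = [t for t in ordered_entity_types if t in grouped]
    # ...then any remaining grouped keys, preserving insertion order.
    region_types += [t for t in grouped if t not in region_types]
    return [
        {"entityType": t, "nodes": sorted(grouped[t], key=lambda n: n["id"])}
        for t in region_types
    ]
```

Every node ends up in exactly one region, so the snapshot's visible-node count matches the input even when an entity type is missing from the ordered list.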

In `@brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift`:
- Around line 56-67: The code is seeding KGAtlasLayout with geo.size (the outer
container which includes the sidebar) causing layout/hit-test drift when the
inspector is visible; change both occurrences (the onAppear/onChange block using
geo.size and the similar block at lines ~109-114) to use the actual inner canvas
size that setCanvas(size:) stores (e.g., the view's stored canvasSize or
whatever property setCanvas updates) when calling KGAtlasLayout.seededNodes and
when initially setting the canvas; update references from geo.size to that inner
canvasSize so seeding and hit-testing use the displayed viewport rather than the
outer container.

In `@brain-bar/Tests/BrainBarTests/KGAtlasLayoutTests.swift`:
- Around line 5-26: Update the test
testSeededNodesPlaceEntityTypesIntoStableRegions to include a regression case
with at least eight nodes of the same entityType (e.g., add more KGNode entries
with entityType "person") before calling KGAtlasLayout.seededNodes so the first
ring-to-second-ring transition is exercised; after seeding, assert there are >=8
person nodes and validate their x-positions do not overlap at the ring boundary
by checking ordering relative to project/tool nodes (use personNodes indices and
projectNode/toolNode from the same test) and also assert pairwise horizontal
spacing between adjacent personNodes exceeds a small positive threshold (to
catch overlap when the ring changes). Ensure references to
KGAtlasLayout.seededNodes and testSeededNodesPlaceEntityTypesIntoStableRegions
are used so the new case lives in the same test.

In `@docs/plans/2026-04-18-brainbar-dashboard-flow-collab.md`:
- Around line 3-36: The markdown violates MD022/MD058 because headings and the
task table are not separated by blank lines; update the document by inserting a
single blank line before and after each top-level section heading (e.g., "##
Goal", "## Agents", "## Constraints", "## Task Board", "## Gates", "## Notes")
and ensure the task table has a blank line above and below it so there is a
blank separator between the preceding heading and the table and between the
table and the following heading.

In `@docs/plans/2026-04-18-brainbar-dashboard-flow.md`:
- Around line 13-163: The document uses H3 (###) for top-level Task headings
which skips H2 and fails MD001; update each task heading text like "Task 1: Add
explicit dashboard time semantics", "Task 2: Replace the single enum with a
richer flow model", etc., to use H2 (##) instead of H3 so heading levels are
hierarchical and sequential, keeping the rest of the content unchanged; scan the
file for lines starting with "### Task" and replace them with "## Task" to
correct all task-level headings.

In `@docs/plans/2026-04-18-brainbar-injections-graph-hybrid.md`:
- Around line 102-106: Update the doc to match the implemented helper name:
replace the outdated symbol `KGPresentation` with `KGAtlasPresentation` in the
listed plan bullet (and any nearby references in this file) so the plan points
to the existing helper; ensure the description line remains the same (groups
nodes into atlas regions, computes altitude-filtered visible nodes, preserves
selected node visibility) but uses the `KGAtlasPresentation` identifier.

In `@src/brainlayer/pipeline/session_enrichment.py`:
- Around line 361-369: The enrichment flow that selects chunks (see
source_placeholders, source_query and T1_T2_SOURCES) currently drops
build_log/dir_listing chunks and frames prompts as a Claude Code conversation;
update the reconstruction/prompting logic that consumes rows from
cursor.execute(...) so that: (1) ai_code, stack_trace, and user_message chunk
types are preserved verbatim; (2) noise chunk types are skipped entirely; (3)
build_log and dir_listing chunks are not discarded but summarized to a
structural outline only (e.g., file names, sizes, key commands/output lines),
and (4) sessions from Codex/Cursor/Gemini are recognized and use a
neutral/generic multi-model prompt template instead of the Claude-specific
prompt so the enriched context is accurate across models. Ensure the
filtering/transforming happens immediately after fetching rows from the chunks
query so downstream enrichment receives the corrected chunk list.
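The per-chunk-type rules can be sketched independently of the enrichment code itself. A hedged Python illustration — the five-line outline cap, template names, and helper signatures are assumptions for the sketch, not the repo's implementation:

```python
OUTLINE_TYPES = {"build_log", "dir_listing"}

def transform_chunks(rows, outline_lines=5):
    """Apply the rules right after fetching rows: skip noise entirely,
    summarize build_log/dir_listing to a structural outline, and pass
    ai_code/stack_trace/user_message (and anything else) through verbatim."""
    result = []
    for chunk_type, text in rows:
        if chunk_type == "noise":
            continue  # noise chunks are dropped entirely
        if chunk_type in OUTLINE_TYPES:
            # Keep structure only: a capped outline of non-empty lines.
            outline = [line for line in text.splitlines() if line.strip()]
            result.append((chunk_type, "\n".join(outline[:outline_lines])))
        else:
            result.append((chunk_type, text))  # preserved verbatim
    return result

def prompt_template(source: str) -> str:
    # Claude sessions keep the Claude-specific framing; other agent CLIs
    # get a neutral multi-model template (template names are illustrative).
    return "claude_code_prompt" if source == "claude" else "generic_agent_prompt"
```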

In `@tests/test_session_enrichment.py`:
- Around line 529-559: In test_agent_cli_sessions_are_discovered_from_chunks,
add a negative case that inserts a chunk using an unrecognized source (e.g.,
"unknown_cli" or "bad_source") and then call list_sessions_for_enrichment(store)
and assert that the corresponding session id is NOT in the returned session_ids;
this ensures the allow-list logic in list_sessions_for_enrichment is enforced
and prevents accidental admission of arbitrary sources.
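The allow-list behavior under test reduces to a set-membership filter; a Python sketch of the check and its negative case (the source names are illustrative — the real allow-list lives in `list_sessions_for_enrichment`):

```python
ALLOWED_SOURCES = {"claude", "codex", "cursor", "gemini"}  # illustrative allow-list

def sessions_for_enrichment(chunks):
    """Return session ids whose chunks come from a recognized agent CLI source."""
    return {
        chunk["session_id"]
        for chunk in chunks
        if chunk["source"] in ALLOWED_SOURCES
    }
```

The negative assertion is the important part: a chunk with an unrecognized source must not admit its session.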
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 51ca8f04-fa23-4288-8c0a-f5fdc598250f

📥 Commits

Reviewing files that changed from the base of the PR and between 3cc52be and 2f28c06.

📒 Files selected for processing (34)
  • brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
  • brain-bar/Sources/BrainBar/BrainBarWindowState.swift
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
  • brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift
  • brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift
  • brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift
  • brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift
  • brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift
  • brain-bar/Sources/BrainBar/InjectionFeedView.swift
  • brain-bar/Sources/BrainBar/InjectionPresentation.swift
  • brain-bar/Sources/BrainBar/InjectionStore.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGEdgeView.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGNodeView.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGSidebarView.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGViewModel.swift
  • brain-bar/Sources/BrainBar/Views/Components/ChunkConversationSheet.swift
  • brain-bar/Tests/BrainBarTests/AgentActivityMonitorTests.swift
  • brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift
  • brain-bar/Tests/BrainBarTests/BrainBarWindowStateTests.swift
  • brain-bar/Tests/BrainBarTests/DashboardTests.swift
  • brain-bar/Tests/BrainBarTests/InjectionPresentationTests.swift
  • brain-bar/Tests/BrainBarTests/KGAtlasLayoutTests.swift
  • brain-bar/Tests/BrainBarTests/KGAtlasPresentationTests.swift
  • brain-bar/Tests/BrainBarTests/KnowledgeGraphTests.swift
  • docs/plans/2026-04-18-brainbar-dashboard-flow-collab.md
  • docs/plans/2026-04-18-brainbar-dashboard-flow.md
  • docs/plans/2026-04-18-brainbar-injections-graph-hybrid.md
  • src/brainlayer/pipeline/enrichment_tiers.py
  • src/brainlayer/pipeline/session_enrichment.py
  • tests/test_enrichment_tiers_agent_sources.py
  • tests/test_session_enrichment.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Cursor Bugbot
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.13)
  • GitHub Check: Macroscope - Correctness Check
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Flag risky DB or concurrency changes explicitly and do not hand-wave lock behavior
Enforce one-write-at-a-time concurrency constraint; reads are safe but brain_digest is write-heavy and must not run in parallel with other MCP work
Run pytest before claiming behavior changed safely; current test suite has 929 tests

**/*.py: Use paths.py:get_db_path() for all database path resolution; all scripts and CLI must use this function rather than hardcoding paths
When performing bulk database operations: stop enrichment workers first, checkpoint WAL before and after, drop FTS triggers before bulk deletes, batch deletes in 5-10K chunks, and checkpoint every 3 batches

Files:

  • src/brainlayer/pipeline/session_enrichment.py
  • tests/test_enrichment_tiers_agent_sources.py
  • tests/test_session_enrichment.py
  • src/brainlayer/pipeline/enrichment_tiers.py
src/brainlayer/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

src/brainlayer/**/*.py: Use retry logic on SQLITE_BUSY errors; each worker must use its own database connection to handle concurrency safely
Classification must preserve ai_code, stack_trace, and user_message verbatim; skip noise entries entirely and summarize build_log and dir_listing entries (structure only)
Use AST-aware chunking via tree-sitter; never split stack traces; mask large tool output
For enrichment backend selection: use Groq as primary backend (cloud, configured in launchd plist), Gemini as fallback via enrichment_controller.py, and Ollama as offline last-resort; allow override via BRAINLAYER_ENRICH_BACKEND env var
Configure enrichment rate via BRAINLAYER_ENRICH_RATE environment variable (default 0.2 = 12 RPM)
Implement chunk lifecycle columns: superseded_by, aggregated_into, archived_at on chunks table; exclude lifecycle-managed chunks from default search; allow include_archived=True to show history
Implement brain_supersede with safety gate for personal data (journals, notes, health/finance); use soft-delete for brain_archive with timestamp
Add supersedes parameter to brain_store for atomic store-and-replace operations
Run linting and formatting with: ruff check src/ && ruff format src/
Run tests with pytest
Use PRAGMA wal_checkpoint(FULL) before and after bulk database operations to prevent WAL bloat

Files:

  • src/brainlayer/pipeline/session_enrichment.py
  • src/brainlayer/pipeline/enrichment_tiers.py
🧠 Learnings (11)
📚 Learning: 2026-03-18T00:12:08.774Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 87
File: brain-bar/Sources/BrainBar/BrainBarServer.swift:118-129
Timestamp: 2026-03-18T00:12:08.774Z
Learning: In Swift files under brain-bar/Sources/BrainBar, enforce that when a critical dependency like the database is nil due to startup ordering (socket before DB), any tool handler that accesses the database must throw an explicit error (e.g., ToolError.noDatabase) instead of returning a default/empty value. Do not allow silent defaults (e.g., guard let db else { return ... }). Flag patterns that silently return defaults when db is nil, as this masks startup timing issues. This guidance applies broadly to similar Swift files in the BrainBar module, not just this one location.

Applied to files:

  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGViewModel.swift
  • brain-bar/Sources/BrainBar/InjectionStore.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGNodeView.swift
  • brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift
  • brain-bar/Sources/BrainBar/Views/Components/ChunkConversationSheet.swift
  • brain-bar/Sources/BrainBar/InjectionPresentation.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift
  • brain-bar/Sources/BrainBar/BrainBarWindowState.swift
  • brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift
  • brain-bar/Sources/BrainBar/InjectionFeedView.swift
  • brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift
  • brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift
  • brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGSidebarView.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
  • brain-bar/Sources/BrainBar/KnowledgeGraph/KGEdgeView.swift
  • brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
📚 Learning: 2026-03-29T18:45:40.988Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:40.988Z
Learning: In the BrainBar module’s Swift database layer (notably BrainDatabase.swift), ensure that the `search()` function’s `unreadOnly=true` path orders results by the delivery frontier cursor so the watermark `maxRowID` stays contiguous. Specifically, when `unreadOnly` is enabled, the query must include `ORDER BY c.rowid ASC` (e.g., via `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`). Do not replace the unread-only ordering with relevance-based sorting (e.g., `f.rank`) unconditionally or for the unread-only path, as it can introduce gaps in the watermark and incorrectly mark unseen rows as delivered. Flag any future change to the `ORDER BY` clause in this function that makes relevance sorting apply to the unread-only case.

Applied to files:

  • brain-bar/Sources/BrainBar/InjectionStore.swift
  • brain-bar/Sources/BrainBar/InjectionPresentation.swift
  • brain-bar/Sources/BrainBar/BrainBarWindowState.swift
  • brain-bar/Sources/BrainBar/InjectionFeedView.swift
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
  • brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
📚 Learning: 2026-04-11T23:47:49.756Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-04-11T23:47:49.756Z
Learning: Applies to `src/brainlayer/enrichment_controller.py`: `service_tier="flex"` is the intentional default for all Gemini enrichment calls. Pass-2 enrichment is asynchronous backlog work where 1–15 minute latency is acceptable, and the 50% Gemini Flex Inference discount materially reduces backlog cost. This is locked by R84b design (§8 Q2). The `BRAINLAYER_GEMINI_SERVICE_TIER` environment variable is purely an operational escape hatch (e.g. `standard`), not the intended runtime default. Do not flag `service_tier="flex"` as a concern on this code path.

Applied to files:

  • src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-01T01:24:44.281Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-01T01:24:44.281Z
Learning: Applies to src/brainlayer/*enrichment*.py : Enrichment backend priority: Groq (primary/cloud) → Gemini (fallback) → Ollama (offline last-resort), configurable via `BRAINLAYER_ENRICH_BACKEND` environment variable

Applied to files:

  • src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-02T23:32:14.543Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-02T23:32:14.543Z
Learning: Applies to src/brainlayer/*enrichment*.py : Enrichment rate configurable via `BRAINLAYER_ENRICH_RATE` environment variable (default 0.2 = 12 RPM)

Applied to files:

  • src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/**/*.py : Classification must preserve `ai_code`, `stack_trace`, and `user_message` verbatim; skip `noise` entries entirely and summarize `build_log` and `dir_listing` entries (structure only)

Applied to files:

  • src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/**/*.py : For enrichment backend selection: use Groq as primary backend (cloud, configured in launchd plist), Gemini as fallback via `enrichment_controller.py`, and Ollama as offline last-resort; allow override via `BRAINLAYER_ENRICH_BACKEND` env var

Applied to files:

  • src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-11T16:54:45.631Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-04-11T16:54:45.631Z
Learning: Applies to `src/brainlayer/enrichment_controller.py` and `src/brainlayer/pipeline/rate_limiter.py`: Gemini API calls in the enrichment pipeline are gated by a token bucket rate limiter. The rate is controlled by `BRAINLAYER_ENRICH_RATE` (default `5/s`, burst `10`) to keep throughput inside the Gemini Flex intended envelope. This default supersedes the earlier 0.2 (12 RPM) default for the Gemini Flex integration path.

Applied to files:

  • src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Pipeline architecture: Extract → Classify → Chunk → Embed → Index, with post-processing for enrichment, brain graph, and Obsidian export

Applied to files:

  • src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-13T15:04:00.631Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-13T15:04:00.631Z
Learning: Extract → Classify → Chunk → Embed → Index pipeline order must be preserved

Applied to files:

  • src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Use LaunchAgent `com.brainlayer.watch.plist` with KeepAlive=true and Nice=10 for persistent watcher process

Applied to files:

  • brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift
🪛 LanguageTool
docs/plans/2026-04-18-brainbar-injections-graph-hybrid.md

[style] ~61-~61: This phrase is redundant. Consider writing “space”.
Context: ...ession ID, query text, and chunk IDs. - Empty space is not acceptable; the rail should coll...

(EMPTY_HOLE)

🪛 markdownlint-cli2 (0.22.1)
docs/plans/2026-04-18-brainbar-dashboard-flow-collab.md

[warning] 3-3: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 6-6: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 18-18: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 24-24: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 25-25: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


[warning] 32-32: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 36-36: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)

docs/plans/2026-04-18-brainbar-dashboard-flow.md

[warning] 13-13: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)

🪛 SwiftLint (0.63.2)
brain-bar/Tests/BrainBarTests/InjectionPresentationTests.swift

[Warning] 4-4: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Tests/BrainBarTests/KGAtlasPresentationTests.swift

[Warning] 4-4: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Tests/BrainBarTests/KGAtlasLayoutTests.swift

[Warning] 4-4: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Tests/BrainBarTests/AgentActivityMonitorTests.swift

[Warning] 4-4: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift

[Warning] 71-71: Classes should have an explicit deinit method

(required_deinit)

🔇 Additional comments (15)
src/brainlayer/pipeline/enrichment_tiers.py (1)

47-47: Looks aligned with the expanded agent-session set.

The broadened source constant and the matching tier filters stay consistent with the new CLI families and the updated session-discovery tests.

Also applies to: 133-135

tests/test_enrichment_tiers_agent_sources.py (1)

6-33: Good coverage for the expanded agent CLI sources.

The recent/old split and the source-filter assertion cover the new tiering behavior cleanly.

brain-bar/Sources/BrainBar/Views/Components/ChunkConversationSheet.swift (1)

74-80: Clean close-path unification across sheet and overlay.

The onClose fallback design is solid and integrates cleanly with overlay-driven presentation.

Also applies to: 94-123

brain-bar/Sources/BrainBar/KnowledgeGraph/KGViewModel.swift (1)

37-51: Good integration of atlas seeding into graph load path.

Mapping raw entities to KGNode and then seeding positions in one place keeps initialization deterministic and easier to reason about.

brain-bar/Sources/BrainBar/KnowledgeGraph/KGSidebarView.swift (1)

74-93: Nice section decomposition and reusable styling primitives.

Breaking the inspector into focused section builders + shared card/chip helpers makes this view much easier to maintain.

Also applies to: 95-161, 163-249

brain-bar/Sources/BrainBar/KnowledgeGraph/KGEdgeView.swift (1)

13-35: Looks good.

The edge stroke and label colors now follow the canvas colorScheme and stay consistent with the rest of the KnowledgeGraph rendering pipeline.

brain-bar/Sources/BrainBar/InjectionStore.swift (1)

34-42: Looks good.

The new parameter check avoids unnecessary forced refreshes while preserving the initial refresh on first start.

brain-bar/Tests/BrainBarTests/KnowledgeGraphTests.swift (1)

8-8: Looks good.

Adding SQLite3 here is appropriate, and the new test explicitly covers the kg_entities.importance fallback path that the database now relies on.

Also applies to: 72-89

brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift (1)

341-365: Looks good.

The popover now reads from DashboardFlowSummary for the headline and sparkline, and the enrichment activity copy correctly follows the configured window.

Also applies to: 396-401

brain-bar/Tests/BrainBarTests/AgentActivityMonitorTests.swift (1)

1-57: Looks good.

This test suite covers the key parsing and snapshot-execution paths well, including helper noise filtering and the large-stdout case.

brain-bar/Tests/BrainBarTests/BrainBarWindowStateTests.swift (1)

34-52: Looks good.

Injecting a fixed now makes these assertions deterministic, and the expected badge/status text now matches the live-presentation behavior.

Also applies to: 54-74

brain-bar/Sources/BrainBar/KnowledgeGraph/KGNodeView.swift (1)

12-41: Looks good.

The node renderer now uses the canvas colorScheme consistently for its selected and unselected palette.

brain-bar/Tests/BrainBarTests/InjectionPresentationTests.swift (1)

1-86: Looks good.

These tests cover the snapshot grouping, bucketization, and filtering behavior that the new presentation model depends on.

brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift (1)

178-376: Good centralization of flow-state derivation.

Consolidating headline/detail/lane/queue derivation into DashboardFlowSummary with injectable now improves determinism and keeps UI state logic coherent.

brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift (1)

59-91: Timestamp-based age formatting looks solid.

lastEventString + window label helpers cleanly align the UI with real event timestamps instead of inferred bucket positions.

Comment on lines +57 to 74
```diff
     static func derive(stats: BrainDatabase.DashboardStats, now: Date = Date()) -> BrainBarLivePresentation {
         if stats.eventIsLive(stats.lastEnrichedAt, now: now) {
             return BrainBarLivePresentation(
                 sparklineStyle: .active,
-                badgeText: DashboardMetricFormatter.liveBadgeString(ratePerMinute: stats.enrichmentRatePerMinute),
-                statusText: "Live enrichment stream"
+                badgeText: "live now",
+                statusText: "Enrichments in the last 60s"
             )
         }

         return BrainBarLivePresentation(
             sparklineStyle: .idle,
-            badgeText: "idle",
-            statusText: "Idle — no enrichment in last 60s"
+            badgeText: DashboardMetricFormatter.lastEventString(
+                lastEventAt: stats.lastEnrichedAt,
+                activityWindowMinutes: stats.activityWindowMinutes,
+                now: now
+            ),
+            statusText: "No enrichments in the last 60s"
         )
```

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Use stats.liveWindowMinutes in the live/idle status copy.

The PR added explicit live-window metadata, but this still hardcodes 60s. If the database changes that window, the badge text stays stale while the logic uses the new value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainBarWindowState.swift` around lines 57 - 74,
The status copy in derive(stats:) hardcodes "60s" instead of using the new
live-window metadata; update the BrainBarLivePresentation initializers to
interpolate stats.liveWindowMinutes into the badge/status text (e.g., replace
"Enrichments in the last 60s" and "No enrichments in the last 60s" with strings
that use stats.liveWindowMinutes) and keep using
DashboardMetricFormatter.lastEventString for badgeText so the displayed window
matches the logic that checks stats.eventIsLive.

Comment on lines +44 to +45
activityWindowMinutes: Int = 30,
bucketCount: Int = 12,

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Default activity window still set to 30 minutes.

Both defaults remain 30, so callers that omit activityWindowMinutes won’t get the intended 60-minute dashboard window behavior.

💡 Suggested fix
-            activityWindowMinutes: Int = 30,
+            activityWindowMinutes: Int = 60,
@@
-    func dashboardStats(activityWindowMinutes: Int = 30, bucketCount: Int = 12) throws -> DashboardStats {
+    func dashboardStats(activityWindowMinutes: Int = 60, bucketCount: Int = 12) throws -> DashboardStats {

Also applies to: 1014-1014

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 44 - 45, The
default for activity window is still 30 minutes in the BrainDatabase initializer
signature (activityWindowMinutes: Int = 30); update that default to 60 so
callers that omit the parameter get a 60-minute dashboard window, and make the
same change for the other matching occurrence of the initializer/parameter
elsewhere in the file (the second activityWindowMinutes default at the noted
location).

Comment on lines +123 to +137
let output = Pipe()
process.standardOutput = output
process.standardError = Pipe()

do {
try process.run()
} catch {
return nil
}

// Drain stdout before waiting so verbose process lists cannot fill the pipe and deadlock tests.
let data = output.fileHandleForReading.readDataToEndOfFile()
process.waitUntilExit()
guard process.terminationStatus == 0 else { return nil }
return String(data: data, encoding: .utf8)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Verify the current wiring and drain behavior in runSnapshotCommand.
rg -n -A20 -B6 'static func runSnapshotCommand' brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift

Repository: EtanHey/brainlayer

Length of output: 1072


Redirect stderr to the same pipe as stdout to prevent deadlock.

The code creates a pipe for stderr but never reads from it. When the child process writes more than the pipe buffer can hold (~64KB), it blocks, causing waitUntilExit() to hang.

Suggested fix
         let output = Pipe()
         process.standardOutput = output
-        process.standardError = Pipe()
+        process.standardError = output
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift` around lines
123 - 137, The child process can deadlock because stderr is set to a separate
Pipe that is never read; change the code to redirect stderr into the same Pipe
used for stdout (replace the separate Pipe assignment for process.standardError
with process.standardError = output) so both streams are read via
output.fileHandleForReading before calling process.waitUntilExit() (refer to the
process, output, and fileHandleForReading symbols and ensure
process.standardError is not left as an unread Pipe).
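The same "never leave a child's pipe unread" rule applies in any language; Python's `subprocess` expresses the fix directly by merging stderr into stdout, which is the analogue of `process.standardError = output` in the Swift code:

```python
import subprocess

def run_snapshot_command(args):
    """Run a process-snapshot command, merging stderr into stdout so an
    unread stderr pipe can never fill up and block the child."""
    try:
        result = subprocess.run(
            args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, check=False
        )
    except OSError:
        return None  # command not found / not runnable
    if result.returncode != 0:
        return None
    return result.stdout.decode("utf-8", errors="replace")
```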

Comment on lines 84 to +86
func refresh(force: Bool = false) {
let nextDaemon = daemonMonitor.sample()
agentActivity = agentActivityMonitor.sample()

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Move agent-process sampling off the @MainActor refresh path.

With the default AgentActivityMonitor(), this now runs process-snapshot work synchronously once per second on the UI actor. Even small ps/process-enumeration latency will translate into visible dashboard jank.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift` around lines 84 -
86, The refresh(force:) method is sampling daemonMonitor.sample() and
agentActivityMonitor.sample() on the UI actor causing jank; move the potentially
blocking sampling off the `@MainActor` by performing the calls in a background
task (e.g., Task.detached or DispatchQueue.global) and then marshal only the
state assignment back to the MainActor. Concretely: in refresh(force:) call
Task.detached { let next = daemonMonitor.sample(); let activity =
agentActivityMonitor.sample(); await MainActor.run { self.nextDaemon = next;
self.agentActivity = activity } } (use the actual property names used in this
file) so sampling happens off-main and only the minimal UI updates run on the
main actor.
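The shape of the fix — do the slow sampling on a background worker and publish only the small state update back — can be modeled with plain threads. A Python sketch (names are illustrative; this is not the real StatsCollector):

```python
import threading

class StatsCollectorSketch:
    """Illustrative model of the refresh pattern: sample off the UI thread,
    then publish only the minimal state update."""

    def __init__(self):
        self.agent_activity = None
        self._updated = threading.Event()

    def refresh(self, sample_fn):
        def worker():
            activity = sample_fn()          # potentially slow process snapshot
            self.agent_activity = activity  # minimal state assignment
            self._updated.set()
        threading.Thread(target=worker, daemon=True).start()

    def wait_for_update(self, timeout=5.0):
        return self._updated.wait(timeout)
```

`refresh` returns immediately, so a once-per-second timer on the UI thread never blocks on process enumeration.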

Comment on lines +220 to +223
Button {
if let firstChunk = event.chunkIDs.first {
selectedConversation = try? store.expandedConversation(chunkID: firstChunk)
}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don’t swallow conversation-load failures behind a dead “Open” action.

Both handlers use try?, so any database/read error turns into a no-op and the user gets no feedback. This is especially painful during startup or DB churn because the new overlay path looks clickable but silently does nothing. Surface an explicit error state instead. Based on learnings: do not silently default on BrainBar database access failures; expose them explicitly so startup/runtime issues are visible.

Also applies to: 257-258

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/InjectionFeedView.swift` around lines 220 - 223,
The Button handlers currently swallow errors by using try? when calling
store.expandedConversation(chunkID:), so database/read failures become silent
no-ops; change both occurrences (the Button action around selectedConversation =
try? store.expandedConversation(...) and the similar code at the 257-258 site)
to perform a do/catch: call try store.expandedConversation(chunkID:) inside do,
assign selectedConversation on success, and in catch set an explicit error state
(e.g., set a `@State` errorMessage / showError flag or update an error enum) so
the UI can surface a visible failure (alert, overlay, or error view) instead of
failing silently. Ensure you reference the same selectedConversation and
store.expandedConversation(chunkID:) symbols so the handlers report the caught
error details to the UI.
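The same do/catch shape, sketched in Python with a hypothetical `FeedState` (the Swift version would set an `@State` error property instead of an attribute):

```python
class FailingStore:
    """Stand-in for the BrainBar store when the DB is locked or mid-churn."""

    def expanded_conversation(self, chunk_id):
        raise OSError("database is locked")

class FeedState:
    def __init__(self, store):
        self.store = store
        self.selected_conversation = None
        self.error_message = None

    def open_conversation(self, chunk_id):
        # do/catch instead of `try?`: failures become visible error state.
        try:
            self.selected_conversation = self.store.expanded_conversation(chunk_id)
            self.error_message = None
        except Exception as exc:
            self.error_message = f"Could not open conversation {chunk_id}: {exc}"

state = FeedState(FailingStore())
state.open_conversation("chunk-42")
print(state.error_message)
```

With the error surfaced as state, the UI can render an alert or inline error view rather than a dead button.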

Comment on lines +3 to +36
## Goal
Finish the second-pass BrainBar dashboard refactor without mixing `live agent CLIs` into `indexed writes` or letting the queue/runtime cards dominate the layout.

## Agents
- `main-codex`
Ownership:
- `brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift`
- `brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift`
- `src/brainlayer/pipeline/enrichment_tiers.py`
- `brain-bar/Tests/BrainBarTests/AgentActivityMonitorTests.swift`
- `tests/test_enrichment_tiers_agent_sources.py`
- `clean-claude-ui`
Ownership:
- `brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift`

## Constraints
- Do not edit files outside your ownership.
- Do not revert or restage another agent's work.
- `clean-claude-ui` must `git add` only its owned file before commit.
- `clean-claude-ui` may commit and push its owned UI change after tests pass.

## Task Board
| Task | Owner | Status |
|------|-------|--------|
| Add process-based agent family monitor | main-codex | done |
| Extend enrichment tiering to non-Claude agent sources | main-codex | done |
| Refactor dashboard layout to add agent strip and compact queue rail | clean-claude-ui | in_progress |
| Collapse/demote runtime details behind disclosure | clean-claude-ui | in_progress |

## Gates
- `main-codex`: `swift test --filter AgentActivityMonitorTests`, `swift test --filter BrainBarUXLogicTests`, `pytest tests/test_enrichment_tiers.py tests/test_enrichment_tiers_agent_sources.py -q`
- `clean-claude-ui`: `swift test --filter BrainBarUXLogicTests` before commit

## Notes

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add required blank lines around headings and the task table.

This document currently violates MD022/MD058 in several places; adding blank separators will clear lint warnings.

🧰 Tools
🪛 markdownlint-cli2 (0.22.1)

[warning] 3-3: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 6-6: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 18-18: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 24-24: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 25-25: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


[warning] 32-32: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 36-36: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/plans/2026-04-18-brainbar-dashboard-flow-collab.md` around lines 3 - 36,
The markdown violates MD022/MD058 because headings and the task table are not
separated by blank lines; update the document by inserting a single blank line
before and after each top-level section heading (e.g., "## Goal", "## Agents",
"## Constraints", "## Task Board", "## Gates", "## Notes") and ensure the task
table has a blank line above and below it so there is a blank separator between
the preceding heading and the table and between the table and the following
heading.
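A quick scripted version of that fix (headings only; the MD058 table padding would need an analogous check), assuming the plan file is plain ATX-heading Markdown:

```python
def pad_headings(markdown: str) -> str:
    """Insert the blank lines MD022 expects around ATX headings."""
    lines = markdown.splitlines()
    out = []
    for i, line in enumerate(lines):
        is_heading = line.lstrip().startswith("#")
        if is_heading and out and out[-1] != "":
            out.append("")  # blank line above the heading
        out.append(line)
        nxt = lines[i + 1] if i + 1 < len(lines) else ""
        if is_heading and nxt != "":
            out.append("")  # blank line below the heading
    return "\n".join(out)

doc = "## Goal\nFinish the refactor.\n## Agents\n- main-codex"
print(pad_headings(doc))
```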

Comment on lines +13 to +163
### Task 1: Add explicit dashboard time semantics

**Files:**
- Modify: `brain-bar/Sources/BrainBar/BrainDatabase.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift`
- Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift`

**Step 1: Write the failing tests**

Add tests that assert `DashboardStats` carries:
- `activityWindowMinutes`
- `bucketCount`
- `liveWindowMinutes`
- `lastWriteAt`
- `lastEnrichedAt`

Also add a database test proving:
- a chunk written now and enriched 90 seconds ago reports `lastWriteAt != nil`
- `lastEnrichedAt != nil`
- `enrichmentRatePerMinute == 0`
- enrichment trend still contains the older completion

**Step 2: Run test to verify it fails**

Run: `swift test --filter DashboardTests`
Expected: compile failure or assertion failure because `DashboardStats` does not yet expose the new time fields.

**Step 3: Write minimal implementation**

Update `DashboardStats` and `dashboardStats(...)` to include the new metadata and fetch `MAX(created_at)` / `MAX(enriched_at)` timestamps using the same SQLite date parsing already used for bucket generation.

**Step 4: Run test to verify it passes**

Run: `swift test --filter DashboardTests`
Expected: the new stats tests pass and existing dashboard database tests still pass.

### Task 2: Replace the single enum with a richer flow model

**Files:**
- Modify: `brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift`
- Modify: `brain-bar/Sources/BrainBar/BrainBarWindowState.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift`
- Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift`
- Test: `brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift`
- Test: `brain-bar/Tests/BrainBarTests/BrainBarWindowStateTests.swift`

**Step 1: Write the failing tests**

Add tests for a derived flow summary that can represent:
- writes live + enrichments live at the same time
- writes idle + enrichments draining backlog
- writes idle + enrichments idle + backlog present
- daemon unavailable

Add formatter tests that require:
- real last-event strings from actual timestamps, not bucket position
- a stable “live now” badge that only reflects the configured live window

**Step 2: Run test to verify it fails**

Run: `swift test --filter BrainBarUXLogicTests`
Run: `swift test --filter BrainBarWindowStateTests`
Expected: failures because the richer flow summary and timestamp-driven formatting do not exist yet.

**Step 3: Write minimal implementation**

Introduce a derived dashboard flow model with:
- a write lane state
- an enrichment lane state
- a backlog/queue state
- a top-level narrative for the hero

Keep `PipelineState` available for legacy consumers, but make it derive from the richer summary instead of directly from raw buckets.

Replace bucket-inferred `lastCompletionString(...)` with a timestamp-based formatter and add a generic “time ago” helper for both writes and enrichments.

**Step 4: Run test to verify it passes**

Run: `swift test --filter BrainBarUXLogicTests`
Run: `swift test --filter BrainBarWindowStateTests`
Expected: new derivation and formatter tests pass without breaking the existing layout/token tests.

### Task 3: Rebuild the dashboard into a dual-lane flow board

**Files:**
- Modify: `brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/SparklineRenderer.swift`
- Possibly modify: `brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift`
- Test: `brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift`

**Step 1: Write the failing tests**

Add logic-level tests for any new copy helpers or layout-state helpers needed by the dashboard, including:
- synchronized window labels
- lane summaries using the same window value
- idle copy that no longer contradicts a recent-trend graph

**Step 2: Run test to verify it fails**

Run: `swift test --filter BrainBarUXLogicTests`
Expected: failures because the new helpers or derived labels are not implemented yet.

**Step 3: Write minimal implementation**

Rebuild `BrainBarDashboardView` so the hero becomes a flow board with:
- a left lane for writes
- a center queue/backlog card
- a right lane for enrichments
- shared window labeling
- real last-write / last-enriched timestamps
- live badges that are explicitly “now”
- charts with stronger fill, endpoint emphasis, and separate colors for ingress vs enrichment

Reuse the same derived flow summary everywhere in the window instead of mixing independent heuristics.

If `StatusPopoverView` still shows contradictory wording after the model change, update it to consume the same derived summary.

**Step 4: Run test to verify it passes**

Run: `swift test --filter BrainBarUXLogicTests`
Expected: the helper/copy tests pass and the dashboard remains readable under compact layout tests.

### Task 4: Keep legacy/menu surfaces consistent

**Files:**
- Modify: `brain-bar/Sources/BrainBar/BrainBarApp.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift`
- Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift`

**Step 1: Write the failing tests**

Add or extend tests that verify the status popover still loads and that the live/menu surfaces can continue rendering with the updated state/formatter interfaces.

**Step 2: Run test to verify it fails**

Run: `swift test --filter DashboardTests`
Expected: failures if any legacy consumer still relies on removed copy or raw bucket inference.

**Step 3: Write minimal implementation**

Adapt menu-bar and popover consumers to:
- use timestamp-safe formatting
- use the derived summary or derived compatibility state
- keep their visual contract simple and backward-safe

**Step 4: Run test to verify it passes**

Run: `swift test --filter DashboardTests`
Expected: dashboard and popover tests pass together.

### Task 5: Full verification and finish

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Heading level increments skip from H1 to H3.

Task headings should be ## (not ###) to satisfy MD001 and keep structure consistent.

Suggested fix
-### Task 1: Add explicit dashboard time semantics
+## Task 1: Add explicit dashboard time semantics
...
-### Task 2: Replace the single enum with a richer flow model
+## Task 2: Replace the single enum with a richer flow model
...
-### Task 3: Rebuild the dashboard into a dual-lane flow board
+## Task 3: Rebuild the dashboard into a dual-lane flow board
...
-### Task 4: Keep legacy/menu surfaces consistent
+## Task 4: Keep legacy/menu surfaces consistent
...
-### Task 5: Full verification and finish
+## Task 5: Full verification and finish
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)

[warning] 13-13: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/plans/2026-04-18-brainbar-dashboard-flow.md` around lines 13 - 163, The
document uses H3 (###) for top-level Task headings which skips H2 and fails
MD001; update each task heading text like "Task 1: Add explicit dashboard time
semantics", "Task 2: Replace the single enum with a richer flow model", etc., to
use H2 (##) instead of H3 so heading levels are hierarchical and sequential,
keeping the rest of the content unchanged; scan the file for lines starting with
"### Task" and replace them with "## Task" to correct all task-level headings.
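Since every offending line starts with `### Task`, the repair is a single substitution; a sketch:

```python
import re

def promote_task_headings(markdown: str) -> str:
    # MD001 fix: task headings jump H1 -> H3, so promote them to H2.
    return re.sub(r"^### (Task \d+:)", r"## \1", markdown, flags=re.MULTILINE)

plan = (
    "### Task 1: Add explicit dashboard time semantics\n"
    "body\n"
    "### Task 5: Full verification and finish"
)
print(promote_task_headings(plan))
```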

Comment on lines +102 to +106
- `KGPresentation`
- groups nodes into atlas regions
- computes altitude-filtered visible nodes
- preserves selected node visibility


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Rename KGPresentation to KGAtlasPresentation here.

The implementation and tests in this PR use KGAtlasPresentation, so this plan currently sends follow-up work toward a nonexistent helper name.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/plans/2026-04-18-brainbar-injections-graph-hybrid.md` around lines 102 -
106, Update the doc to match the implemented helper name: replace the outdated
symbol `KGPresentation` with `KGAtlasPresentation` in the listed plan bullet
(and any nearby references in this file) so the plan points to the existing
helper; ensure the description line remains the same (groups nodes into atlas
regions, computes altitude-filtered visible nodes, preserves selected node
visibility) but uses the `KGAtlasPresentation` identifier.

Comment on lines +361 to +369
 source_placeholders = ",".join("?" for _ in T1_T2_SOURCES)
 source_query = f"""
     SELECT DISTINCT source_file, project, COUNT(*) as cnt
     FROM chunks
-    WHERE source IS NULL OR source = 'claude_code'
+    WHERE source IS NULL OR source IN ({source_placeholders})
     GROUP BY source_file
     HAVING cnt >= 3
 """
-for row in cursor.execute(source_query):
+for row in cursor.execute(source_query, tuple(T1_T2_SOURCES)):

⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Update reconstruction/prompting before broadening discovery.

This now admits Codex/Cursor/Gemini sessions, but the downstream path still drops build_log/dir_listing chunks and the prompt still frames the input as a Claude Code conversation. Those sessions will be enriched with incomplete context.

As per coding guidelines, Classification must preserve ai_code, stack_trace, and user_message verbatim; skip noise entries entirely and summarize build_log and dir_listing entries (structure only).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/brainlayer/pipeline/session_enrichment.py` around lines 361 - 369, The
enrichment flow that selects chunks (see source_placeholders, source_query and
T1_T2_SOURCES) currently drops build_log/dir_listing chunks and frames prompts
as a Claude Code conversation; update the reconstruction/prompting logic that
consumes rows from cursor.execute(...) so that: (1) ai_code, stack_trace, and
user_message chunk types are preserved verbatim; (2) noise chunk types are
skipped entirely; (3) build_log and dir_listing chunks are not discarded but
summarized to a structural outline only (e.g., file names, sizes, key
commands/output lines), and (4) sessions from Codex/Cursor/Gemini are recognized
and use a neutral/generic multi-model prompt template instead of the
Claude-specific prompt so the enriched context is accurate across models. Ensure
the filtering/transforming happens immediately after fetching rows from the
chunks query so downstream enrichment receives the corrected chunk list.
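One way to shape that pre-prompt filtering, sketched with hypothetical chunk dicts and a structural summarizer (names are illustrative, not the module's actual API):

```python
VERBATIM_TYPES = {"ai_code", "stack_trace", "user_message"}
SUMMARIZE_TYPES = {"build_log", "dir_listing"}

def summarize_structure(text: str, max_lines: int = 3) -> str:
    """Keep only a structural outline: the first few non-empty lines."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    return "\n".join(lines[:max_lines])

def prepare_for_enrichment(chunks):
    prepared = []
    for chunk in chunks:
        kind = chunk["content_type"]
        if kind == "noise":
            continue  # skip noise entries entirely
        if kind in SUMMARIZE_TYPES:
            chunk = {**chunk, "content": summarize_structure(chunk["content"])}
        prepared.append(chunk)  # VERBATIM_TYPES and the rest pass through unchanged
    return prepared

chunks = [
    {"content_type": "user_message", "content": "why is the build red?"},
    {"content_type": "noise", "content": "spinner output"},
    {"content_type": "build_log", "content": "cc main.c\nld main.o\nwarning: x\nerror: y\nmore"},
]
print(prepare_for_enrichment(chunks))
```

Running the filter immediately after the chunk fetch keeps every downstream consumer (prompt builder, reconstruction) on the corrected list.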

Comment on lines +529 to +559
@pytest.mark.parametrize("source", ["codex_cli", "cursor_cli", "gemini_cli"])
def test_agent_cli_sessions_are_discovered_from_chunks(self, store, source):
"""Agent-session sources beyond Claude are eligible for session enrichment."""
from brainlayer.pipeline.session_enrichment import list_sessions_for_enrichment

cursor = store.conn.cursor()
session_id = f"{source}-session"
source_file = f"/tmp/{session_id}.jsonl"

for idx in range(3):
cursor.execute(
"""INSERT INTO chunks
(id, content, metadata, source_file, project, content_type,
value_type, char_count, source, created_at)
VALUES (?, ?, '{}', ?, ?, 'assistant_text', 'HIGH', ?, ?, ?)""",
(
f"{session_id}:c{idx}",
f"{source} chunk {idx}",
source_file,
"brainlayer",
len(f"{source} chunk {idx}"),
source,
f"2026-04-18T0{idx}:00:00Z",
),
)

sessions = list_sessions_for_enrichment(store)
session_ids = [session[0] for session in sessions]

assert session_id in session_ids


🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

Add a negative assertion for an unrecognized source.

This test proves the new allow-list admits codex_cli/cursor_cli/gemini_cli, but it does not protect against accidentally admitting every source. One exclusion case would lock the SQL filter down and catch regressions earlier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_session_enrichment.py` around lines 529 - 559, In
test_agent_cli_sessions_are_discovered_from_chunks, add a negative case that
inserts a chunk using an unrecognized source (e.g., "unknown_cli" or
"bad_source") and then call list_sessions_for_enrichment(store) and assert that
the corresponding session id is NOT in the returned session_ids; this ensures
the allow-list logic in list_sessions_for_enrichment is enforced and prevents
accidental admission of arbitrary sources.
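A self-contained model of the suggested negative case (the real test would insert a chunk row and call `list_sessions_for_enrichment`; here the allow-list filter is modeled directly, and the `T1_T2_SOURCES` contents are assumed):

```python
T1_T2_SOURCES = ("claude_code", "codex_cli", "cursor_cli", "gemini_cli")

def eligible_session_ids(rows):
    # Mirrors the SQL filter: NULL source is legacy data, otherwise allow-list only.
    return [sid for sid, source in rows if source is None or source in T1_T2_SOURCES]

rows = [
    ("codex_cli-session", "codex_cli"),
    ("unknown_cli-session", "unknown_cli"),  # the negative case
    ("legacy-session", None),
]
ids = eligible_session_ids(rows)
print(ids)
```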


@EtanHey
Owner Author

EtanHey commented May 1, 2026

W2 update: ported PR #263 recall hardening into BrainBar Swift on commit c9c011e.

Implemented:
- exact chunk-id short-circuit before FTS
- trigram FTS fallback for identifier substrings
- lexical-defense + KG alias expansion with multi-word semantics preserved
- repair-only trigram FTS migration for existing DBs

Verification receipts:
- CodeRabbit local review run: 18 branch findings; fixed the two W2 BrainDatabase findings (blank query handling, alias normalization consistency); remaining findings are W1 UI/doc scope.
- swift test --filter targeted W2 tests: 5 tests, 0 failures.
- swift test: 312 tests, 0 failures.
- pre-push hook: 1811 passed, 9 skipped, 75 deselected, 1 xfailed; MCP registration 3 passed; isolated eval/hook routing 32 passed; bun 1 pass; test_fts5_determinism.sh PASS.

LaunchAgents paused only during pre-push to avoid the known SQLite lock issue, then restored: com.brainlayer.enrichment PID 38157, com.brainlayer.watch PID 38159.

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c9c011e2f0


 let sql = """
 SELECT e.id, e.name, e.entity_type, e.description,
-       COALESCE(CAST(json_extract(e.metadata, '$.importance') AS REAL), 5.0) AS importance
+       COALESCE(e.importance, CAST(json_extract(e.metadata, '$.importance') AS REAL), 5.0) AS importance

P1 Badge Preserve metadata importance in KG entity ranking

fetchKGEntities now prioritizes e.importance before metadata importance, but this commit also adds importance REAL DEFAULT 0.5 and insertEntity still does not write the column. That makes e.importance non-null for virtually all rows, so metadata-derived importance is effectively ignored, which flattens ranking and can hide most nodes behind the atlas default filter (minimumImportance = 3). This is a behavior regression for existing and newly inserted entities that only carry importance in metadata.

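The masking effect is easy to reproduce in plain SQLite. In this sketch, `meta_importance` is a stand-in column for `CAST(json_extract(e.metadata, '$.importance') AS REAL)`; the schema and values are illustrative, not the real BrainBar schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# importance REAL DEFAULT 0.5 mirrors the newly added column.
con.execute(
    "CREATE TABLE entities (name TEXT, importance REAL DEFAULT 0.5, meta_importance REAL)"
)
# insertEntity analogue: the importance column is never written,
# so the DEFAULT applies to every row.
con.execute("INSERT INTO entities (name, meta_importance) VALUES ('n1', 8.0)")
row = con.execute(
    "SELECT COALESCE(importance, meta_importance, 5.0) FROM entities"
).fetchone()
# row[0] is 0.5: the column default masks the metadata importance of 8.0,
# pushing the entity below an atlas filter of minimumImportance = 3.
```

Because COALESCE stops at the first non-NULL argument, a non-NULL default on `e.importance` means the metadata-derived value can never win the ranking.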

Comment on lines +33 to +34
let regions: [Region] = orderedEntityTypes.compactMap { entityType in
guard let values = grouped[entityType], !values.isEmpty else { return nil }

P1 Badge Keep unknown entity types visible in atlas regions

The atlas snapshot only builds regions from orderedEntityTypes, then derives visibleNodes from those regions, so any node whose entityType is not in that hardcoded list is dropped entirely (and its edges disappear) even if it passes the importance filter. In datasets with additional KG types, this silently removes valid graph data from the UI and makes those entities unselectable.

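The drop can be illustrated with a small Python sketch of the same grouping logic (field names and the allow-list contents are made up for the example):

```python
ordered_entity_types = ["person", "project"]  # hardcoded allow-list (illustrative)
nodes = [
    {"id": "a", "entity_type": "person", "importance": 9},
    {"id": "b", "entity_type": "ritual", "importance": 9},  # type not in the list
]

# Mirror of the compactMap over orderedEntityTypes: group, then keep only
# groups whose type is in the allow-list.
grouped = {}
for node in nodes:
    grouped.setdefault(node["entity_type"], []).append(node)
regions = [(t, grouped[t]) for t in ordered_entity_types if grouped.get(t)]

# visibleNodes is derived from regions, so node "b" vanishes even though
# it passes the importance filter.
visible = [node for _, values in regions for node in values]
```

Any free-form `entityType` outside the hardcoded list silently disappears from the atlas this way, along with its edges.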

Comment on lines +29 to +30
let visibleIDs = Set(visibleNodes.map(\.id))
let visibleEdges = edges.filter { visibleIDs.contains($0.sourceId) && visibleIDs.contains($0.targetId) }

🟢 Low KnowledgeGraph/KGAtlasPresentation.swift:29

When a node has an entityType not in orderedEntityTypes, it is excluded from regions (line 33-35) and therefore from snapshot.visibleNodes (line 53). However, visibleEdges (line 30) still includes edges referencing that node because it was filtered only by minimumImportance, not by entity type. This causes the snapshot to report edges whose endpoints are missing from visibleNodes, making the edge count misleading and potentially causing rendering issues when the UI tries to draw edges to non-existent nodes.

-        let visibleIDs = Set(visibleNodes.map(\.id))
-        let visibleEdges = edges.filter { visibleIDs.contains($0.sourceId) && visibleIDs.contains($0.targetId) }
+        let regionVisibleIDs = Set(regions.flatMap { $0.nodes }.map(\.id))
+        let visibleEdges = edges.filter { regionVisibleIDs.contains($0.sourceId) && regionVisibleIDs.contains($0.targetId) }
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift around lines 29-30:

When a node has an `entityType` not in `orderedEntityTypes`, it is excluded from `regions` (line 33-35) and therefore from `snapshot.visibleNodes` (line 53). However, `visibleEdges` (line 30) still includes edges referencing that node because it was filtered only by `minimumImportance`, not by entity type. This causes the snapshot to report edges whose endpoints are missing from `visibleNodes`, making the edge count misleading and potentially causing rendering issues when the UI tries to draw edges to non-existent nodes.

Evidence trail:
brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift lines 27-54 (visibleNodes filtering, visibleEdges filtering, regions built from orderedEntityTypes, Snapshot construction); brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift lines 77-85 (orderedEntityTypes list); brain-bar/Sources/BrainBar/KnowledgeGraph/KGNode.swift line 6 (entityType is free-form String), lines 18-29 (default color case); brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift line 80 (nodeIndex built from snapshot.visibleNodes), line 87 (guard+continue on missing node), line 158 (visibleEdges.count displayed in UI)
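The node/edge mismatch can be modeled in a few lines of Python (threshold, allow-list, and ids are illustrative), showing that the importance-only edge filter keeps an edge whose endpoint never makes it into a region:

```python
nodes = [
    {"id": "a", "entity_type": "person", "importance": 9},
    {"id": "b", "entity_type": "ritual", "importance": 9},  # type outside allow-list
]
edges = [("a", "b")]
ordered_entity_types = ["person"]

# Importance-only filter (current visibleEdges behavior): both endpoints pass.
importance_visible = {n["id"] for n in nodes if n["importance"] >= 3}
visible_edges = [e for e in edges
                 if e[0] in importance_visible and e[1] in importance_visible]

# Region-based filter (what actually gets drawn): "b" is dropped.
region_visible = {n["id"] for n in nodes
                  if n["entity_type"] in ordered_entity_types}
region_edges = [e for e in edges
                if e[0] in region_visible and e[1] in region_visible]
# visible_edges has one edge while region_edges has none: the reported edge
# count includes an edge whose endpoint is missing from the drawn nodes.
```

Filtering edges against the region-derived id set, as the suggested diff does, makes the edge count match what is actually rendered.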

return "Just now"
}
if secondsAgo < 3600 {
let minutesAgo = Int((secondsAgo / 60).rounded(.up))

🟠 High Dashboard/DashboardMetricFormatter.swift:73

In lastEventString, the ceiling rounding .rounded(.up) causes off-by-one errors in time-ago display: at 61 seconds it returns "2m ago" instead of "1m ago", and at 3601 seconds it returns "2h ago" instead of "1h ago". Consider using truncation (Int(secondsAgo / 60)) instead so the minute/hour thresholds align with intuitive time display.

-            let minutesAgo = Int((secondsAgo / 60).rounded(.up))
+            let minutesAgo = Int(secondsAgo / 60)
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift around line 73:

In `lastEventString`, the ceiling rounding `.rounded(.up)` causes off-by-one errors in time-ago display: at 61 seconds it returns "2m ago" instead of "1m ago", and at 3601 seconds it returns "2h ago" instead of "1h ago". Consider using truncation (`Int(secondsAgo / 60)`) instead so the minute/hour thresholds align with intuitive time display.

Evidence trail:
brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift lines 68-79 at REVIEWED_COMMIT: line 73 `let minutesAgo = Int((secondsAgo / 60).rounded(.up))` and line 77 `let hoursAgo = Int((secondsAgo / 3600).rounded(.up))`. Verified: 61/60=1.0167 rounds up to 2 ("2m ago"); 3601/3600=1.00028 rounds up to 2 ("2h ago").
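The off-by-one is easy to confirm numerically with Python equivalents of the two rounding choices (Swift's `Int(Double)` truncates toward zero, like Python's `int()` for positive values):

```python
import math

def minutes_ago_ceiling(seconds_ago):
    # current behavior: Int((secondsAgo / 60).rounded(.up))
    return math.ceil(seconds_ago / 60)

def minutes_ago_truncating(seconds_ago):
    # suggested fix: Int(secondsAgo / 60)
    return int(seconds_ago / 60)

# 61 seconds: ceiling reports "2m ago", truncation reports "1m ago".
# The same off-by-one hits the hour branch at 3601 seconds.
```

With truncation, the displayed value only ticks over once a full minute (or hour) has actually elapsed, matching intuitive time-ago display.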

hasLoadedGraph = true
}
}
.onChange(of: geo.size) { _, newSize in

🟡 Medium KnowledgeGraph/KGCanvasView.swift:63

When the sidebar is visible, onChange(of: geo.size) passes the full view size to KGAtlasLayout.seededNodes(), including the sidebar width. This causes nodes to be positioned outside the visible canvas area since the layout assumes more horizontal space than actually available. Consider using the canvas size from the inner GeometryReader (lines 109–114) instead of the outer GeometryReader's size.

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift around line 63:

When the sidebar is visible, `onChange(of: geo.size)` passes the full view size to `KGAtlasLayout.seededNodes()`, including the sidebar width. This causes nodes to be positioned outside the visible canvas area since the layout assumes more horizontal space than actually available. Consider using the canvas size from the inner `GeometryReader` (lines 109–114) instead of the outer `GeometryReader`'s size.

Evidence trail:
KGCanvasView.swift lines 26-67: outer GeometryReader wraps the full HStack (canvas + sidebar), and onChange(of: geo.size) passes newSize to KGAtlasLayout.seededNodes(). Inner GeometryReader at lines 109-114 captures actual canvas size but is not used for seededNodes call. KGAtlasLayout.swift lines 37-50: makeRegionCenters positions nodes at fractional multiples of canvasSize.width (up to 0.78), meaning nodes would be placed at positions that include sidebar width.

Comment on lines +329 to +339
private func nodeAt(point: CGPoint, visibleNodes: [KGNode]) -> KGNode? {
for node in visibleNodes {
let dx = point.x - node.position.x
let dy = point.y - node.position.y
let dist = sqrt(dx * dx + dy * dy)
if dist <= node.radius + 4 {
return node
}
}
return nil
}

🟡 Medium KnowledgeGraph/KGCanvasView.swift:329

When nodes overlap, nodeAt returns the first matching node in iteration order, but graphCanvas draws nodes in the same order so later nodes render on top. Tapping an overlapping area selects the visually-hidden node underneath instead of the topmost visible node. Consider iterating visibleNodes in reverse order so the topmost (last-drawn) node is selected first.

    private func nodeAt(point: CGPoint, visibleNodes: [KGNode]) -> KGNode? {
-        for node in visibleNodes {
+        for node in visibleNodes.reversed() {
            let dx = point.x - node.position.x
            let dy = point.y - node.position.y
            let dist = sqrt(dx * dx + dy * dy)
            if dist <= node.radius + 4 {
                return node
            }
        }
        return nil
    }
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift around lines 329-339:

When nodes overlap, `nodeAt` returns the first matching node in iteration order, but `graphCanvas` draws nodes in the same order so later nodes render on top. Tapping an overlapping area selects the visually-hidden node underneath instead of the topmost visible node. Consider iterating `visibleNodes` in reverse order so the topmost (last-drawn) node is selected first.

Evidence trail:
brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift lines 101-107 (draw loop iterates forward, later nodes on top), lines 329-337 (nodeAt iterates forward, returns first match = bottom-most node), line 289 (tapGesture calls nodeAt with snapshot.visibleNodes)
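A Python sketch of the hit-test with two overlapping nodes shows the difference (coordinates and radii are made up; `topmost_first` models the suggested `visibleNodes.reversed()` fix):

```python
def node_at(point, visible_nodes, topmost_first=False):
    # Mirrors nodeAt(point:visibleNodes:): first node within radius + 4 wins.
    ordered = reversed(visible_nodes) if topmost_first else visible_nodes
    for node in ordered:
        dx = point[0] - node["x"]
        dy = point[1] - node["y"]
        if (dx * dx + dy * dy) ** 0.5 <= node["r"] + 4:
            return node["id"]
    return None

# "top" is drawn last, so it renders above "bottom". Forward iteration still
# selects the hidden node; reverse iteration selects the visible one.
overlapping = [
    {"id": "bottom", "x": 0, "y": 0, "r": 10},
    {"id": "top", "x": 2, "y": 0, "r": 10},
]
```

Reversing the iteration order is the standard fix whenever hit-testing must agree with painter's-order rendering.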

Comment on lines +57 to +58
let ring = Int(ceil((sqrt(Double(offset + 1)) - 1) / 2))
let indexInRing = offset - (ring * ring)

🟡 Medium KnowledgeGraph/KGAtlasLayout.swift:57

The position(for:total:center:) formula incorrectly keeps offset=7 and offset=8 in ring 1, causing nodes to overlap. For offset=7, ring = ceil((sqrt(8)-1)/2) = 1 and indexInRing = 7 - 1*1 = 6 with countInRing = 6, yielding angle = 2π — identical to offset=1's angle of 0. Ring 1 should only hold offsets 1-6, but the quadratic indexing calculation breaks down beyond that.

-        let ring = Int(ceil((sqrt(Double(offset + 1)) - 1) / 2))
-        let indexInRing = offset - (ring * ring)
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift around lines 57-58:

The `position(for:total:center:)` formula incorrectly keeps `offset=7` and `offset=8` in ring 1, causing nodes to overlap. For `offset=7`, `ring = ceil((sqrt(8)-1)/2) = 1` and `indexInRing = 7 - 1*1 = 6` with `countInRing = 6`, yielding `angle = 2π` — identical to `offset=1`'s angle of `0`. Ring 1 should only hold offsets 1-6, but the quadratic indexing calculation breaks down beyond that.

Evidence trail:
brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift lines 57-61 (REVIEWED_COMMIT): ring calculation `Int(ceil((sqrt(Double(offset + 1)) - 1) / 2))`, indexInRing = `offset - (ring * ring)`, countInRing = `max(ring * 6, 1)`, angle = `(Double(indexInRing) / Double(countInRing)) * 2π`. For offset=7: ring=1, indexInRing=6, countInRing=6, angle=2π (same as offset=1's angle=0). For offset=8: ring=1, indexInRing=7, countInRing=6, angle=7π/3 (same as offset=2's angle=π/3).
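Transcribing the formula into Python confirms the collision (same constants as the Swift code; a correct ring assignment would put offsets 1–6 in ring 1 and offsets 7–18 in ring 2):

```python
import math

def ring_for(offset):
    # Same formula as KGAtlasLayout: Int(ceil((sqrt(offset + 1) - 1) / 2))
    return math.ceil((math.sqrt(offset + 1) - 1) / 2)

def angle_for(offset):
    ring = ring_for(offset)
    index_in_ring = offset - ring * ring
    count_in_ring = max(ring * 6, 1)
    return (index_in_ring / count_in_ring) * 2 * math.pi

# Offsets 7 and 8 are also assigned ring 1, and their angles wrap onto
# offsets 1 and 2 (an angle of 2*pi is the same direction as 0).
```

With 6·r slots per ring, the total capacity through ring r is 1 + 3r(r+1), so the ring for an offset should come from inverting that cumulative count rather than the quadratic `ring * ring` indexing used here.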

Comment on lines +252 to +256
let enrichmentStatus: DashboardFlowLaneStatus
if enrichmentsLive {
enrichmentStatus = .live
} else if backlogCount > 0 {
enrichmentStatus = stats.recentEnrichmentCount > 0 ? .recent : .queued

🟡 Medium Dashboard/PipelineState.swift:252

enrichmentStatus is assigned .live, .recent, .queued, or .idle, but never .draining. However, line 288 checks enrichmentStatus == .draining and line 306-307 returns text for .draining — this case is unreachable dead code. If .draining is meant to represent enrichment outpacing writes while backlog exists, the assignment logic at lines 252-261 needs to detect this (e.g., enrichmentsLive && backlogCount > 0 && !writesLive). If .draining shouldn't exist, remove the dead switch case and condition check.

         let enrichmentStatus: DashboardFlowLaneStatus
         if enrichmentsLive {
-            enrichmentStatus = .live
+            if backlogCount > 0 && !writesLive {
+                enrichmentStatus = .draining
+            } else {
+                enrichmentStatus = .live
+            }
         } else if backlogCount > 0 {
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift around lines 252-256:

`enrichmentStatus` is assigned `.live`, `.recent`, `.queued`, or `.idle`, but never `.draining`. However, line 288 checks `enrichmentStatus == .draining` and line 306-307 returns text for `.draining` — this case is unreachable dead code. If `.draining` is meant to represent enrichment outpacing writes while backlog exists, the assignment logic at lines 252-261 needs to detect this (e.g., `enrichmentsLive && backlogCount > 0 && !writesLive`). If `.draining` shouldn't exist, remove the dead switch case and condition check.

Evidence trail:
brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift lines 95-103 (enum DashboardFlowLaneStatus with .draining case), lines 252-261 (enrichmentStatus assignment logic — never assigns .draining), lines 287-289 (enrichmentStatus == .draining always false in condition), lines 304-307 (.draining switch case unreachable with text 'Recent enrichments are draining backlog'). All at REVIEWED_COMMIT.
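The suggested assignment logic, sketched in Python (status names mirror DashboardFlowLaneStatus; the draining condition is the reviewer's proposal, not confirmed intended behavior):

```python
def enrichment_status(enrichments_live, writes_live, backlog_count,
                      recent_enrichment_count):
    if enrichments_live:
        # Proposed: enrichment is running against a backlog while writes
        # are quiet, i.e. the backlog is being drained.
        if backlog_count > 0 and not writes_live:
            return "draining"
        return "live"
    if backlog_count > 0:
        return "recent" if recent_enrichment_count > 0 else "queued"
    return "idle"
```

This makes every case of the downstream switch reachable; the alternative fix is to delete the `.draining` case and its condition check entirely.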

@greptile-apps greptile-apps Bot left a comment

Your free trial has ended. If you'd like to continue receiving code reviews, you can add a payment method here.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (2)
src/brainlayer/pipeline/session_enrichment.py (1)

361-369: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Broadened source intake is ahead of reconstruction/prompt parity.

Line 361–369 now admits Codex/Cursor/Gemini sessions, but downstream handling still drops build_log/dir_listing content and frames the prompt as a Claude-only conversation. This will enrich new source sessions with incomplete/misframed context.

As per coding guidelines, "Classification must preserve ai_code, stack_trace, and user_message verbatim; skip noise entries entirely and summarize build_log and dir_listing entries (structure only)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/brainlayer/pipeline/session_enrichment.py` around lines 361 - 369, The
new source inclusion around T1_T2_SOURCES via source_query/cursor.execute
broadens accepted session sources (Codex/Cursor/Gemini) but downstream
enrichment still assumes Claude-only framing and drops or mishandles
build_log/dir_listing; update the session enrichment/classification and prompt
construction code so that when processing chunks for these sources you: preserve
ai_code, stack_trace, and user_message verbatim (do not alter or summarize
them), skip any chunk labeled noise entirely, and replace build_log and
dir_listing content with a structural summary only (no verbatim logs); also make
the prompt construction logic (the routine that assembles the conversation for
model calls) respect the source type instead of forcing Claude-style roles so
non-Claude sessions keep correct role labels and included summarized build/dir
structure.
brain-bar/Sources/BrainBar/BrainDatabase.swift (1)

49-50: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

The dashboard default window is still 30 minutes.

Both defaults still use 30, so callers that omit activityWindowMinutes will not get the intended 60-minute dashboard window. Please update both call sites together.

Also applies to: 1105-1105

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 49 - 50, The
default for the dashboard window is still set to 30 minutes: update the default
parameter value for activityWindowMinutes from 30 to 60 wherever it’s
declared/used so callers that omit activityWindowMinutes get a 60-minute window;
ensure you change both call sites together (look for the activityWindowMinutes
parameter in the BrainDatabase initializer/signature and the other occurrence
mentioned) and leave bucketCount unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift`:
- Around line 18-19: initialFrame() currently centers a 1_348×1_078 frame on
NSScreen.main without clamping to visible bounds or choosing the screen under
the mouse, so on small displays or multi-monitor setups the panel can land
off-screen or on the wrong display; update initialFrame() to build the desired
rect from BrainBarDashboardPanelController.defaultSize/minSize, pick the screen
using BrainBarApp.orderedScreensByMouseLocation() (or the same mouse-aware
ordering used elsewhere), then pass the candidate rect through
BrainBarWindowPlacement.clamp() (or resolvedFrame()) to constrain it to the
screen's visibleFrame before returning; do the same adjustment for centerPanel()
(lines noted) so both creation and recenters use the mouse-aware screen
selection and clamped frame.

In `@brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift`:
- Around line 990-1001: The expensive SparklineRenderer.render call is happening
inside the animated SwiftUI view body and re-rasterizes on every frame; change
this so the sparkline image is computed once and reused by memoizing by (values,
renderSize, accentColor) or moving the render into a non-animating layer.
Concretely, compute/cache the rendered NSImage outside of body (e.g., in a
`@State/`@StateObject or a computed cached property keyed on values, renderSize
and accentColor) and reference that cached image in body while keeping ringScale
and ringOpacity animations as a separate overlay so only the ring animates.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 497-542: The unread-only delivery frontier is advanced incorrectly
across multiple expansion passes because maxRowID is taken from each pass (and
uses max(limit, 1)) rather than from the final rowid-ordered union of returned
rows; update the logic in the loop that calls
expandedSearchQueries/runFTSSearch/sanitizeTrigramFTS5Query and appendDeduped so
that when unreadOnly is true you (a) do not force max(limit, 1) on runFTSSearch
calls, (b) collect all returned rows from expansion + trigram passes, then
compute the watermark by globally sorting the merged results by c.rowid ASC and
taking the last row's rowid as maxRowID (or nil if no rows), and only then call
markDelivered(agentID: subscriberID, seq: maxRowID); this preserves c.rowid ASC
frontier semantics and prevents advancing the cursor past unread results the
caller never received.

In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift`:
- Around line 224-340: Add a legacy-migration test that simulates an upgraded DB
missing the trigram FTS by using db.exec to remove or rename chunks_fts_trigram
and its triggers, then call ensureMigrations() (or the migration entrypoint) to
trigger repair, insert a chunk with id "trigram-hit" via db.insertChunk and
verify db.search(query: "alker-go", limit: 10) returns that chunk; reference the
ensureMigrations() call, db.exec, db.insertChunk(id:), and
db.search(query:limit:) to locate where to implement this new test case.
- Around line 239-260: The test testSearchExactChunkIDShortCircuitsFTS currently
only asserts the top-ranked result, so add an assertion that the search results
length is exactly one to prove the fast-path short-circuited the FTS;
specifically, after calling let results = try db.search(query:
"brainbar-5ac50aa1-ed5", limit: 10) add XCTAssertEqual(results.count, 1) (then
keep the existing XCTAssertEqual on results.first?["chunk_id"]) to ensure the
decoy did not get returned or appended.

---

Duplicate comments:
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 49-50: The default for the dashboard window is still set to 30
minutes: update the default parameter value for activityWindowMinutes from 30 to
60 wherever it’s declared/used so callers that omit activityWindowMinutes get a
60-minute window; ensure you change both call sites together (look for the
activityWindowMinutes parameter in the BrainDatabase initializer/signature and
the other occurrence mentioned) and leave bucketCount unchanged.

In `@src/brainlayer/pipeline/session_enrichment.py`:
- Around line 361-369: The new source inclusion around T1_T2_SOURCES via
source_query/cursor.execute broadens accepted session sources
(Codex/Cursor/Gemini) but downstream enrichment still assumes Claude-only
framing and drops or mishandles build_log/dir_listing; update the session
enrichment/classification and prompt construction code so that when processing
chunks for these sources you: preserve ai_code, stack_trace, and user_message
verbatim (do not alter or summarize them), skip any chunk labeled noise
entirely, and replace build_log and dir_listing content with a structural
summary only (no verbatim logs); also make the prompt construction logic (the
routine that assembles the conversation for model calls) respect the source type
instead of forcing Claude-style roles so non-Claude sessions keep correct role
labels and included summarized build/dir structure.
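The unread-delivery watermark rule described in the inline comment above can be sketched in Python (rowids and the `deliver_unread` name are made up; this models the merge/sort/truncate order, not the actual BrainDatabase API):

```python
def deliver_unread(pass_results, limit):
    # Merge rows from all expansion/trigram passes, dedupe, order by
    # rowid ASC, truncate to the caller's limit, and take the last
    # *returned* rowid as the delivery watermark.
    merged = sorted({rowid for rows in pass_results for rowid in rows})
    returned = merged[:limit]
    watermark = returned[-1] if returned else None
    return returned, watermark

rows, watermark = deliver_unread([[10, 42], [13, 57]], limit=3)
# The caller receives [10, 13, 42]; the watermark is 42, not 57, so rowid 57
# stays unread instead of being marked delivered without ever being returned.
```

Taking the watermark per pass (or from an unsorted union) could advance the cursor past rows the caller never saw, which is exactly the frontier-gap bug the earlier `ORDER BY c.rowid ASC` learnings warn about.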

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: a50114ed-725e-49db-ad9e-4db2d90f9147

📥 Commits

Reviewing files that changed from the base of the PR and between 2f28c06 and 2d82873.

📒 Files selected for processing (7)
  • brain-bar/Sources/BrainBar/BrainBarApp.swift
  • brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift
  • brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
  • brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift
  • brain-bar/Tests/BrainBarTests/DatabaseTests.swift
  • src/brainlayer/pipeline/session_enrichment.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Cursor Bugbot
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
  • GitHub Check: Macroscope - Correctness Check
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Flag risky DB or concurrency changes explicitly and do not hand-wave lock behavior
Enforce one-write-at-a-time concurrency constraint; reads are safe but brain_digest is write-heavy and must not run in parallel with other MCP work
Run pytest before claiming behavior changed safely; current test suite has 929 tests

**/*.py: Use paths.py:get_db_path() for all database path resolution; all scripts and CLI must use this function rather than hardcoding paths
When performing bulk database operations: stop enrichment workers first, checkpoint WAL before and after, drop FTS triggers before bulk deletes, batch deletes in 5-10K chunks, and checkpoint every 3 batches

Files:

  • src/brainlayer/pipeline/session_enrichment.py
src/brainlayer/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

src/brainlayer/**/*.py: Use retry logic on SQLITE_BUSY errors; each worker must use its own database connection to handle concurrency safely
Classification must preserve ai_code, stack_trace, and user_message verbatim; skip noise entries entirely and summarize build_log and dir_listing entries (structure only)
Use AST-aware chunking via tree-sitter; never split stack traces; mask large tool output
For enrichment backend selection: use Groq as primary backend (cloud, configured in launchd plist), Gemini as fallback via enrichment_controller.py, and Ollama as offline last-resort; allow override via BRAINLAYER_ENRICH_BACKEND env var
Configure enrichment rate via BRAINLAYER_ENRICH_RATE environment variable (default 0.2 = 12 RPM)
Implement chunk lifecycle columns: superseded_by, aggregated_into, archived_at on chunks table; exclude lifecycle-managed chunks from default search; allow include_archived=True to show history
Implement brain_supersede with safety gate for personal data (journals, notes, health/finance); use soft-delete for brain_archive with timestamp
Add supersedes parameter to brain_store for atomic store-and-replace operations
Run linting and formatting with: ruff check src/ && ruff format src/
Run tests with pytest
Use PRAGMA wal_checkpoint(FULL) before and after bulk database operations to prevent WAL bloat

Files:

  • src/brainlayer/pipeline/session_enrichment.py
🧠 Learnings (12)
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/**/*.py : Classification must preserve `ai_code`, `stack_trace`, and `user_message` verbatim; skip `noise` entries entirely and summarize `build_log` and `dir_listing` entries (structure only)

Applied to files:

  • src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Pipeline architecture: Extract → Classify → Chunk → Embed → Index, with post-processing for enrichment, brain graph, and Obsidian export

Applied to files:

  • src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-04-04T15:16:13.883Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-04T15:16:13.883Z
Learning: Applies to src/brainlayer/hooks/dedup_coordination.py : Session dedup coordination: SessionStart hook writes injected chunk_ids to `/tmp/brainlayer_session_{id}.json`; UserPromptSubmit hook skips already-injected chunks; skip auto-search on handoff prompts

Applied to files:

  • src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-04-13T15:04:00.631Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-13T15:04:00.631Z
Learning: Applies to src/brainlayer/*classify*.py : Classification must preserve verbatim: `ai_code`, `stack_trace`, `user_message` content

Applied to files:

  • src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/hooks/dedup_coordination.py : Use `/tmp/brainlayer_session_{id}.json` for SessionStart and UserPromptSubmit hook coordination; SessionStart writes injected chunk_ids, UserPromptSubmit skips already-injected; detect handoffs with 'handoff' or 'session-handoff' keywords

Applied to files:

  • src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-03-18T00:12:08.774Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 87
File: brain-bar/Sources/BrainBar/BrainBarServer.swift:118-129
Timestamp: 2026-03-18T00:12:08.774Z
Learning: In Swift files under brain-bar/Sources/BrainBar, enforce that when a critical dependency like the database is nil due to startup ordering (socket before DB), any tool handler that accesses the database must throw an explicit error (e.g., ToolError.noDatabase) instead of returning a default/empty value. Do not allow silent defaults (e.g., guard let db else { return ... }). Flag patterns that silently return defaults when db is nil, as this masks startup timing issues. This guidance applies broadly to similar Swift files in the BrainBar module, not just this one location.

Applied to files:

  • brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift
  • brain-bar/Sources/BrainBar/BrainBarApp.swift
  • brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-03-29T18:45:40.988Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:40.988Z
Learning: In the BrainBar module’s Swift database layer (notably BrainDatabase.swift), ensure that the `search()` function’s `unreadOnly=true` path orders results by the delivery frontier cursor so the watermark `maxRowID` stays contiguous. Specifically, when `unreadOnly` is enabled, the query must include `ORDER BY c.rowid ASC` (e.g., via `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`). Do not replace the unread-only ordering with relevance-based sorting (e.g., `f.rank`) unconditionally or for the unread-only path, as it can introduce gaps in the watermark and incorrectly mark unseen rows as delivered. Flag any future change to the `ORDER BY` clause in this function that makes relevance sorting apply to the unread-only case.

Applied to files:

  • brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift
  • brain-bar/Sources/BrainBar/BrainBarApp.swift
  • brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-03-29T18:45:42.532Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:42.532Z
Learning: In `brain-bar/Sources/BrainBar/BrainDatabase.swift` (Swift, BrainBar module), the `search()` function uses `maxRowID` as a delivery watermark cursor in the `unreadOnly` path. The result set **must** be ordered by `c.rowid ASC` when `unreadOnly=true`, so that `maxRowID` correctly represents the contiguous delivery frontier. Using a relevance-based ordering (e.g., `ORDER BY f.rank`) in the `unreadOnly` path would cause gaps in the watermark and mark unseen rows as delivered. The correct pattern is: `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`. Flag any future change to the `ORDER BY` clause in this function that applies relevance sorting unconditionally.

Applied to files:

  • brain-bar/Tests/BrainBarTests/DatabaseTests.swift
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/**/*.py : Implement chunk lifecycle columns: `superseded_by`, `aggregated_into`, `archived_at` on chunks table; exclude lifecycle-managed chunks from default search; allow `include_archived=True` to show history

Applied to files:

  • brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-04-04T23:24:03.159Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-04T23:24:03.159Z
Learning: Applies to src/brainlayer/{vector_store,search}*.py : Chunk lifecycle: implement columns `superseded_by`, `aggregated_into`, `archived_at` on chunks table; exclude lifecycle-managed chunks from default search

Applied to files:

  • brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-04-03T11:43:08.915Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-03T11:43:08.915Z
Learning: Applies to src/brainlayer/*bulk*.py : Before bulk database operations: stop enrichment workers, checkpoint WAL with `PRAGMA wal_checkpoint(FULL)`, drop FTS triggers before bulk deletes

Applied to files:

  • brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to **/*.py : When performing bulk database operations: stop enrichment workers first, checkpoint WAL before and after, drop FTS triggers before bulk deletes, batch deletes in 5-10K chunks, and checkpoint every 3 batches

Applied to files:

  • brain-bar/Sources/BrainBar/BrainDatabase.swift
🪛 SwiftLint (0.63.2)
brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift

[Warning] 6-6: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift

[Warning] 4-4: Classes should have an explicit deinit method

(required_deinit)


[Warning] 16-16: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Sources/BrainBar/BrainDatabase.swift

[Warning] 1691-1691: Prefer empty collection over optional collection

(discouraged_optional_collection)

🔇 Additional comments (1)
src/brainlayer/pipeline/session_enrichment.py (1)

27-27: Good use of centralized source policy.

Importing T1_T2_SOURCES here keeps session discovery aligned with tier classification rules.

Comment on lines +18 to +19
static let defaultSize = NSSize(width: 1_348, height: 1_078)
static let minSize = NSSize(width: 760, height: 560)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift

Repository: EtanHey/brainlayer

Length of output: 3883


🏁 Script executed:

rg -i "screen.*placement\|window.*placement\|BrainBarWindowPlacement" --type swift -A 3 -B 1

Repository: EtanHey/brainlayer

Length of output: 44


🏁 Script executed:

rg "NSScreen" --type swift -B 2 -A 2

Repository: EtanHey/brainlayer

Length of output: 9090


🏁 Script executed:

sed -n '1,100p' brain-bar/Sources/BrainBar/BrainBarWindowState.swift

Repository: EtanHey/brainlayer

Length of output: 2673


🏁 Script executed:

sed -n '100,250p' brain-bar/Sources/BrainBar/BrainBarWindowState.swift

Repository: EtanHey/brainlayer

Length of output: 5282


🏁 Script executed:

sed -n '230,300p' brain-bar/Sources/BrainBar/BrainBarWindowState.swift

Repository: EtanHey/brainlayer

Length of output: 2465


🏁 Script executed:

sed -n '280,350p' brain-bar/Sources/BrainBar/BrainBarWindowState.swift

Repository: EtanHey/brainlayer

Length of output: 2516


🏁 Script executed:

sed -n '1,50p' brain-bar/Sources/BrainBar/SearchPanelController.swift

Repository: EtanHey/brainlayer

Length of output: 1477


🏁 Script executed:

sed -n '40,90p' brain-bar/Sources/BrainBar/SearchPanelController.swift

Repository: EtanHey/brainlayer

Length of output: 1792


🏁 Script executed:

sed -n '80,110p' brain-bar/Sources/BrainBar/SearchPanelController.swift

Repository: EtanHey/brainlayer

Length of output: 1071


Clamp the initial panel frame to the visible screen bounds and use mouse-aware screen selection.

The initialFrame() method centers a 1348×1078 frame on NSScreen.main without constraining to the visible bounds. On displays smaller than that size, the panel will open partly off-screen. On multi-monitor setups, the panel may appear on a different screen than where the user clicked the menu bar.

Use BrainBarWindowPlacement.clamp() or resolvedFrame() to constrain the frame to available screen space, and apply BrainBarApp's orderedScreensByMouseLocation() pattern to select the appropriate display context.

Also applies to: lines 98–100 (centerPanel())

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift` around
lines 18 - 19, initialFrame() currently centers a 1_348×1_078 frame on
NSScreen.main without clamping to visible bounds or choosing the screen under
the mouse, so on small displays or multi-monitor setups the panel can land
off-screen or on the wrong display; update initialFrame() to build the desired
rect from BrainBarDashboardPanelController.defaultSize/minSize, pick the screen
using BrainBarApp.orderedScreensByMouseLocation() (or the same mouse-aware
ordering used elsewhere), then pass the candidate rect through
BrainBarWindowPlacement.clamp() (or resolvedFrame()) to constrain it to the
screen's visibleFrame before returning; do the same adjustment for centerPanel()
(lines noted) so both creation and recenters use the mouse-aware screen
selection and clamped frame.
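The clamping the reviewer asks for can be sketched independently of AppKit. This is a minimal illustration, not the repo's `BrainBarWindowPlacement.clamp()` (whose signature is not shown in this review); the `Rect` type stands in for `NSRect`, and the logic simply shrinks an oversized frame and shifts its origin to stay inside the visible bounds.

```swift
// Minimal sketch of visible-bounds clamping. `Rect` stands in for NSRect;
// `clamp` mirrors what a helper like BrainBarWindowPlacement.clamp() would
// do (its real signature is an assumption here, not shown in the review).
struct Rect {
    var x: Double, y: Double, width: Double, height: Double
}

func clamp(_ frame: Rect, to visible: Rect) -> Rect {
    var result = frame
    // Shrink the frame if it is larger than the visible area.
    result.width = min(result.width, visible.width)
    result.height = min(result.height, visible.height)
    // Shift the origin so the frame stays fully inside the visible bounds.
    result.x = min(max(result.x, visible.x), visible.x + visible.width - result.width)
    result.y = min(max(result.y, visible.y), visible.y + visible.height - result.height)
    return result
}

// A 1348x1078 default frame centered on a 1280x800 laptop display
// must be shrunk and repositioned to fit.
let screen = Rect(x: 0, y: 0, width: 1280, height: 800)
let desired = Rect(x: -34, y: -139, width: 1348, height: 1078)
let fitted = clamp(desired, to: screen)
```

In the real controller this would run against the `visibleFrame` of the screen chosen by the mouse-aware ordering, so both `initialFrame()` and `centerPanel()` land on the display the user is actually using.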

Comment on lines 990 to +1001
var body: some View {
    GeometryReader { proxy in
        let renderSize = NSSize(
            width: max(proxy.size.width.rounded(.up), 1),
            height: max(proxy.size.height.rounded(.up), 1)
        )
        let image = SparklineRenderer.render(
            state: .idle,
            values: values,
            size: renderSize,
            accentColor: accentColor
        )

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check the file and examine the context around the mentioned lines
wc -l brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift

Repository: EtanHey/brainlayer

Length of output: 123


🏁 Script executed:

# Read the context around lines 990-1001 and 1034-1039
sed -n '980,1010p' brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift

Repository: EtanHey/brainlayer

Length of output: 1009


🏁 Script executed:

# Read the context around lines 1030-1045
sed -n '1030,1045p' brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift

Repository: EtanHey/brainlayer

Length of output: 427


🏁 Script executed:

# Look for `@State` declarations and animation-related code to understand the animation mechanism
rg -n "@State|\.animation\(|withAnimation" brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift

Repository: EtanHey/brainlayer

Length of output: 987


🏁 Script executed:

# Search for SparklineRenderer to understand all usages
rg -n "SparklineRenderer" brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift -A 3 -B 1

Repository: EtanHey/brainlayer

Length of output: 418


🏁 Script executed:

# Read more context of the body implementation to see the full structure
sed -n '988,1050p' brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift

Repository: EtanHey/brainlayer

Length of output: 2090


🏁 Script executed:

# Also check what SparklineRenderer.render does to understand the cost
fd SparklineRenderer -t f

Repository: EtanHey/brainlayer

Length of output: 123


🏁 Script executed:

# Look for the implementation of SparklineRenderer
rg -l "class SparklineRenderer|struct SparklineRenderer" -t swift

Repository: EtanHey/brainlayer

Length of output: 44


🏁 Script executed:

# Examine the SparklineRenderer implementation to understand rendering cost
cat brain-bar/Sources/BrainBar/Dashboard/SparklineRenderer.swift

Repository: EtanHey/brainlayer

Length of output: 5028


Cache the sparkline image outside the animated body.

SparklineRenderer.render(...) performs expensive rasterization (CGContext setup, path drawing, antialiasing). Calling it at line 996 inside body causes it to re-execute on every animation frame during the 0.75-second pulse. The sparkline content itself never changes—only the ring overlay's ringScale and ringOpacity animate—yet the image regenerates ~45 times per pulse across multiple cards. Memoize by (values, renderSize, accentColor) to eliminate redundant rendering, or refactor the ring into a separate animated layer so the sparkline rasterizes once.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift` around lines 990 -
1001, The expensive SparklineRenderer.render call is happening inside the
animated SwiftUI view body and re-rasterizes on every frame; change this so the
sparkline image is computed once and reused by memoizing by (values, renderSize,
accentColor) or moving the render into a non-animating layer. Concretely,
compute/cache the rendered NSImage outside of body (e.g., in a
`@State/`@StateObject or a computed cached property keyed on values, renderSize
and accentColor) and reference that cached image in body while keeping ringScale
and ringOpacity animations as a separate overlay so only the ring animates.
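The memoization pattern described above can be sketched without SwiftUI or AppKit. The names below are illustrative stand-ins (the real code would key on `values`, `renderSize`, and `accentColor` and cache an `NSImage`); the point is that the expensive render runs once per distinct key, however many animation frames re-read it.

```swift
// Sketch of memoizing an expensive render by its inputs so an animated
// body can re-read the cached value without re-rasterizing. Types are
// illustrative: `String` stands in for NSImage and the accent color.
struct SparklineKey: Hashable {
    let values: [Int]          // stands in for the sampled series
    let width: Int, height: Int
    let accent: String         // stands in for the accent color
}

final class SparklineCache {
    private var cache: [SparklineKey: String] = [:]
    private(set) var renderCount = 0

    func image(for key: SparklineKey) -> String {
        if let cached = cache[key] { return cached }
        renderCount += 1
        // Placeholder for the expensive SparklineRenderer.render(...) call.
        let rendered = "sparkline(\(key.values.count)pts @\(key.width)x\(key.height))"
        cache[key] = rendered
        return rendered
    }
}

let cache = SparklineCache()
let key = SparklineKey(values: [1, 2, 3], width: 120, height: 32, accent: "blue")
// Simulate ~45 animation frames re-reading the same sparkline.
for _ in 0..<45 { _ = cache.image(for: key) }
```

In SwiftUI terms, the cached image would live in `@State` (or a small model object) keyed the same way, while the pulsing ring stays in its own overlay so only the ring participates in the animation.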

Comment on lines +497 to +542
let expandedQueries = expandedSearchQueries(trimmedQuery)
var results: [[String: Any]] = []
var seenChunkIDs = Set<String>()
var maxRowID: Int64 = 0

for expandedQuery in expandedQueries {
    let searchResult = try runFTSSearch(
        tableName: "chunks_fts",
        matchQuery: sanitizeFTS5Query(expandedQuery),
        limit: max(limit, 1),
        project: project,
        tag: tag,
        importanceMin: importanceMin,
        subscribedTags: subscribedTags,
        ackFloor: ackFloor,
        unreadOnly: unreadOnly
    )
    maxRowID = max(maxRowID, searchResult.maxRowID)
    appendDeduped(searchResult.rows, to: &results, seenChunkIDs: &seenChunkIDs, limit: limit)
    if results.count >= limit { break }
}

if results.count < limit {
    for expandedQuery in expandedQueries {
        let trigramQuery = sanitizeTrigramFTS5Query(expandedQuery)
        guard !trigramQuery.isEmpty else { continue }
        let searchResult = try runFTSSearch(
            tableName: "chunks_fts_trigram",
            matchQuery: trigramQuery,
            limit: max(limit, 1),
            project: project,
            tag: tag,
            importanceMin: importanceMin,
            subscribedTags: subscribedTags,
            ackFloor: ackFloor,
            unreadOnly: unreadOnly
        )
        maxRowID = max(maxRowID, searchResult.maxRowID)
        appendDeduped(searchResult.rows, to: &results, seenChunkIDs: &seenChunkIDs, limit: limit)
        if results.count >= limit { break }
    }
}

if unreadOnly, let subscriberID, maxRowID > 0 {
    try markDelivered(agentID: subscriberID, seq: maxRowID)
}

⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Preserve the unread delivery frontier across expansion passes.

In unreadOnly mode, this now runs multiple expanded/trigram searches, applies limit per pass, and advances markDelivered to the highest rowid seen. Those rows are not globally unioned and re-sorted by c.rowid ASC before the watermark is derived, so a later alias/trigram pass can move the cursor past unread matches the caller never received. The forced max(limit, 1) also means limit <= 0 can still advance the cursor. Either keep unread delivery on a single rowid-ordered query or compute the watermark from the final returned row after a global rowid merge. Based on learnings, the unread-only search() path uses maxRowID as a contiguous delivery watermark and must preserve c.rowid ASC frontier semantics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 497 - 542, The
unread-only delivery frontier is advanced incorrectly across multiple expansion
passes because maxRowID is taken from each pass (and uses max(limit, 1)) rather
than from the final rowid-ordered union of returned rows; update the logic in
the loop that calls expandedSearchQueries/runFTSSearch/sanitizeTrigramFTS5Query
and appendDeduped so that when unreadOnly is true you (a) do not force
max(limit, 1) on runFTSSearch calls, (b) collect all returned rows from
expansion + trigram passes, then compute the watermark by globally sorting the
merged results by c.rowid ASC and taking the last row's rowid as maxRowID (or
nil if no rows), and only then call markDelivered(agentID: subscriberID, seq:
maxRowID); this preserves c.rowid ASC frontier semantics and prevents advancing
the cursor past unread results the caller never received.
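The fix the reviewer describes can be sketched as pure merge logic, separate from the FTS plumbing. `Row` and the per-pass inputs below are illustrative stand-ins; the essential invariant is that the watermark is taken from the last row actually returned after a global rowid-ASC merge, never from rows a later pass saw but the caller did not receive.

```swift
// Sketch of deriving the unread delivery watermark from the final
// returned set: union rows from all passes, dedupe, sort by rowid ASC,
// cap at `limit`, then take the last returned rowid. `Row` is a stand-in.
struct Row { let rowID: Int64; let chunkID: String }

func unreadResults(passes: [[Row]], limit: Int) -> (rows: [Row], watermark: Int64?) {
    guard limit > 0 else { return ([], nil) }  // limit <= 0 must not advance the cursor
    var seen = Set<String>()
    var merged: [Row] = []
    for pass in passes {
        for row in pass where seen.insert(row.chunkID).inserted {
            merged.append(row)
        }
    }
    // Global rowid-ASC sort keeps the delivery frontier contiguous.
    merged.sort { $0.rowID < $1.rowID }
    let returned = Array(merged.prefix(limit))
    return (returned, returned.last?.rowID)    // watermark covers only delivered rows
}

// Two passes where the trigram pass found a higher rowid (12) than the
// caller can receive under limit 2: the watermark must stop at 7.
let ftsPass = [Row(rowID: 5, chunkID: "a"), Row(rowID: 7, chunkID: "b")]
let trigramPass = [Row(rowID: 12, chunkID: "c")]
let result = unreadResults(passes: [ftsPass, trigramPass], limit: 2)
```

With this shape, `markDelivered` is called with `result.watermark` only when it is non-nil, so rowid 12 stays unread until a later query actually delivers it.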

Comment on lines +224 to +340
func testSearchReturnsEmptyForBlankQuery() throws {
    try db.insertChunk(
        id: "blank-query-decoy",
        content: "Blank searches should not hit arbitrary content.",
        sessionId: "session-blank",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 5
    )

    let results = try db.search(query: " ", limit: 10)

    XCTAssertTrue(results.isEmpty)
}

func testSearchExactChunkIDShortCircuitsFTS() throws {
    try db.insertChunk(
        id: "brainbar-5ac50aa1-ed5",
        content: "Exact chunk lookup should not require the identifier to appear in content.",
        sessionId: "session-exact",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 9
    )
    try db.insertChunk(
        id: "brainbar-d01",
        content: "Decoy chunk mentions brainbar-5ac50aa1-ed5 but must not outrank the exact chunk id.",
        sessionId: "session-decoy",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 9
    )

    let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10)

    XCTAssertEqual(results.first?["chunk_id"] as? String, "brainbar-5ac50aa1-ed5")
}

func testSearchExactChunkIDRespectsProjectScope() throws {
    try db.insertChunk(
        id: "brainbar-scoped-001",
        content: "Project-scoped exact id lookup should return only matching project chunks.",
        sessionId: "session-scope",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 8
    )

    let scoped = try db.search(query: "brainbar-scoped-001", limit: 10, project: "brainlayer")
    let wrongScope = try db.search(query: "brainbar-scoped-001", limit: 10, project: "voicelayer")

    XCTAssertEqual(scoped.first?["chunk_id"] as? String, "brainbar-scoped-001")
    XCTAssertTrue(wrongScope.isEmpty)
}

func testSearchUsesTrigramFTSForIdentifierSubstrings() throws {
    try db.insertChunk(
        id: "trigram-hit",
        content: "stalker-golem queue note",
        sessionId: "session-trigram",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 7
    )

    let results = try db.search(query: "alker-go", limit: 10)

    XCTAssertEqual(results.first?["chunk_id"] as? String, "trigram-hit")
}

func testSearchAliasExpansionPreservesMultiwordSemantics() throws {
    try db.insertChunk(
        id: "alias-good",
        content: "Hershkovits reviewed the release plan yesterday.",
        sessionId: "session-alias-good",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 8
    )
    try db.insertChunk(
        id: "alias-bad",
        content: "Met with Hershkovits yesterday.",
        sessionId: "session-alias-bad",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 8
    )

    let results = try db.search(query: "Hershkovitz release plan", limit: 10)
    let resultIDs = results.compactMap { $0["chunk_id"] as? String }

    XCTAssertTrue(resultIDs.contains("alias-good"))
    XCTAssertFalse(resultIDs.contains("alias-bad"))
}

func testSearchAliasExpansionNormalizesDotsConsistently() throws {
    db.exec("""
        INSERT INTO kg_entities (id, entity_type, name)
        VALUES ('entity-openai', 'org', 'Open.AI');
        """)
    db.exec("""
        INSERT INTO kg_entity_aliases (alias, entity_id)
        VALUES ('OpenAI', 'entity-openai');
        """)
    try db.insertChunk(
        id: "alias-dot",
        content: "Open.AI roadmap review notes.",
        sessionId: "session-alias-dot",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 8
    )

    let results = try db.search(query: "OpenAI roadmap", limit: 10)

    XCTAssertEqual(results.first?["chunk_id"] as? String, "alias-dot")
}

🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

Please add a legacy-database migration case for trigram FTS.

These new search cases only exercise fresh databases from setUp(), but the risky rollout path is opening an existing chunks DB and letting ensureMigrations() repair chunks_fts_trigram plus its maintenance triggers. A regression there would slip past this suite and only fail for upgraded users.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift` around lines 224 - 340,
Add a legacy-migration test that simulates an upgraded DB missing the trigram
FTS by using db.exec to remove or rename chunks_fts_trigram and its triggers,
then call ensureMigrations() (or the migration entrypoint) to trigger repair,
insert a chunk with id "trigram-hit" via db.insertChunk and verify
db.search(query: "alker-go", limit: 10) returns that chunk; reference the
ensureMigrations() call, db.exec, db.insertChunk(id:), and
db.search(query:limit:) to locate where to implement this new test case.

Comment on lines +239 to +260
func testSearchExactChunkIDShortCircuitsFTS() throws {
    try db.insertChunk(
        id: "brainbar-5ac50aa1-ed5",
        content: "Exact chunk lookup should not require the identifier to appear in content.",
        sessionId: "session-exact",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 9
    )
    try db.insertChunk(
        id: "brainbar-d01",
        content: "Decoy chunk mentions brainbar-5ac50aa1-ed5 but must not outrank the exact chunk id.",
        sessionId: "session-decoy",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 9
    )

    let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10)

    XCTAssertEqual(results.first?["chunk_id"] as? String, "brainbar-5ac50aa1-ed5")
}

🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

Assert the short-circuit, not just the top rank.

This still passes if the exact-ID fast path falls through to FTS and the decoy is appended after the exact hit. Add a count assertion so the test proves the fast path actually skipped the broader search.

Suggested assertion
         let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10)
 
+        XCTAssertEqual(results.count, 1)
         XCTAssertEqual(results.first?["chunk_id"] as? String, "brainbar-5ac50aa1-ed5")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     func testSearchExactChunkIDShortCircuitsFTS() throws {
         try db.insertChunk(
             id: "brainbar-5ac50aa1-ed5",
             content: "Exact chunk lookup should not require the identifier to appear in content.",
             sessionId: "session-exact",
             project: "brainlayer",
             contentType: "assistant_text",
             importance: 9
         )
         try db.insertChunk(
             id: "brainbar-d01",
             content: "Decoy chunk mentions brainbar-5ac50aa1-ed5 but must not outrank the exact chunk id.",
             sessionId: "session-decoy",
             project: "brainlayer",
             contentType: "assistant_text",
             importance: 9
         )

         let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10)

+        XCTAssertEqual(results.count, 1)
         XCTAssertEqual(results.first?["chunk_id"] as? String, "brainbar-5ac50aa1-ed5")
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift` around lines 239 - 260,
The test testSearchExactChunkIDShortCircuitsFTS currently only asserts the
top-ranked result, so add an assertion that the search results length is exactly
one to prove the fast-path short-circuited the FTS; specifically, after calling
let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10) add
XCTAssertEqual(results.count, 1) (then keep the existing XCTAssertEqual on
results.first?["chunk_id"]) to ensure the decoy did not get returned or
appended.

BrainBar could block opening /tmp/brainbar.sock while synchronously rebuilding chunks_fts_trigram on a live ~360K-chunk database. That startup rebuild cascaded into lock pressure for enrichment and MCP clients because the canonical Unix socket was not ready while the DB was busy.

Add a testable startup repair decision and cap synchronous trigram backfill to small databases with synchronousTrigramBackfillChunkLimit = 10_000. Large desynced trigram indexes now keep schema/triggers current and defer the heavy backfill to explicit maintenance.

Verification: RED->GREEN testLargeTrigramDesyncDoesNotForceSynchronousStartupRebuild; swift test --filter DatabaseTests; bash scripts/run_tests.sh; swift test.

Follow-ups tracked in docs.local/plans/2026-05-02-brainlayer-reliability-hardening/: PR B explicit trigram maintenance command, PR D boot health contract.
@greptile-apps greptile-apps Bot left a comment

let columns = Array(
    repeating: GridItem(.flexible(minimum: 130), spacing: layout.gridSpacing, alignment: .leading),
    count: 2
)

Layout column count hardcoded instead of using computed value

Medium Severity

The overviewMetaRow function hardcodes count: 2 for the grid columns, ignoring the layout.overviewMetricColumns property that's computed to be 4 on wide layouts (≥980px). This defeats the responsive design intent — the overview stats section will always display in a 2-column grid regardless of available width.

Additional Locations (1)
Reviewed by Cursor Bugbot for commit 59b4025.

height: max(proxy.size.height.rounded(.up), 1)
)
let image = SparklineRenderer.render(
state: .idle,

Sparkline always renders with idle state ignoring actual pipeline

Low Severity

BrainBarHeroSparkline always passes .idle as the state parameter to SparklineRenderer.render, regardless of the actual pipeline state. The lane-level sparklines will never show the active/indexing visual treatment even when the pipeline is actively processing.


@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
brain-bar/Tests/BrainBarTests/DatabaseTests.swift (2)

268-270: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Assert the short-circuit, not just the top hit.

This still passes if the exact-ID lookup falls through to FTS and the decoy is appended after the exact match. Add XCTAssertEqual(results.count, 1) before checking first.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift` around lines 268 - 270,
The test currently only asserts the top hit but not that the exact-ID
short-circuit prevented FTS results from being appended; update the test around
the db.search call and the results variable by inserting an assertion that
results.count == 1 before checking results.first?["chunk_id"], so that the test
fails if any additional FTS matches were included (referencing
db.search(query:limit:), results, and the "chunk_id" check).

189-198: 🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

Add a legacy-schema repair test, not just the decision helper.

This only proves .skipBackfill for one large desync case. The risky rollout path is opening an existing database with a stale chunks_fts_trigram definition and letting the migration entrypoint repair it. Without a fixture that creates the old trigram table/triggers and verifies substring search after migration, a broken repair path will still ship unnoticed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift` around lines 189 - 198,
Add an integration-style test that actually exercises the migration entrypoint
rather than only the helper decision: create a test (e.g., alongside
testLargeTrigramDesyncDoesNotForceSynchronousStartupRebuild) that constructs a
database using the legacy/stale trigram FTS schema (old chunks_fts_trigram table
and associated triggers), then call the real migration path that BrainDatabase
uses on open (the same entrypoint that would run startup repairs), and finally
verify substring search works against the migrated DB and that the trigram
table/triggers have been replaced or updated as expected; use the BrainDatabase
open/migrate functions and existing query helpers to locate where to hook the
test.
brain-bar/Sources/BrainBar/BrainDatabase.swift (1)

509-549: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Preserve the unread delivery frontier across expansion passes.

unreadOnly still runs multiple expansion/trigram searches with limit: max(limit, 1), updates maxRowID from each pass, and then marks that maximum as delivered. Because the rows are deduped and capped incrementally instead of being globally merged and re-sorted by c.rowid ASC first, a later pass can advance last_delivered_seq past rows that were never returned. limit <= 0 can also advance the cursor without returning anything.

Based on learnings: when unreadOnly=true, the search() path must preserve contiguous c.rowid ASC frontier semantics and derive the watermark from the final returned set only.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 509 - 549, The
current unread-only flow updates maxRowID during each expansion/trigram pass
(using runFTSSearch and appendDeduped) and then marks delivered, which can
advance the unread frontier past rows that were never returned; instead, when
unreadOnly is true, collect rows from all passes (expandedQueries and trigram
pass) without updating maxRowID per pass, perform global dedup/sort by c.rowid
ASC and enforce the final limit, then compute maxRowID from the final returned
set only (e.g., highest rowid in results) and call markDelivered(subscriberID,
seq: maxRowID) only if that computed maxRowID > 0; keep runFTSSearch and
sanitizeTrigramFTS5Query calls but remove per-pass maxRowID updates and avoid
using max(limit, 1) to advance cursor when no rows are returned.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 1908-1912: rebuildTrigramFTSTable currently only calls
ensureTrigramFTSSchemaAndTriggers() which uses CREATE VIRTUAL TABLE IF NOT
EXISTS, leaving an old chunks_fts_trigram with stale schema/tokenizer; modify
rebuildTrigramFTSTable (and the same repair path invoked by
rebuildSynchronously) to DROP TABLE IF EXISTS chunks_fts_trigram before calling
ensureTrigramFTSSchemaAndTriggers() so the virtual table is recreated cleanly
and backfillTrigramFTSTable() inserts match the new schema. Ensure you reference
rebuildTrigramFTSTable(), ensureTrigramFTSSchemaAndTriggers(),
backfillTrigramFTSTable(), and the repair path via rebuildSynchronously when
applying the change.

---

Duplicate comments:
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 509-549: The current unread-only flow updates maxRowID during each
expansion/trigram pass (using runFTSSearch and appendDeduped) and then marks
delivered, which can advance the unread frontier past rows that were never
returned; instead, when unreadOnly is true, collect rows from all passes
(expandedQueries and trigram pass) without updating maxRowID per pass, perform
global dedup/sort by c.rowid ASC and enforce the final limit, then compute
maxRowID from the final returned set only (e.g., highest rowid in results) and
call markDelivered(subscriberID, seq: maxRowID) only if that computed maxRowID >
0; keep runFTSSearch and sanitizeTrigramFTS5Query calls but remove per-pass
maxRowID updates and avoid using max(limit, 1) to advance cursor when no rows
are returned.

In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift`:
- Around line 268-270: The test currently only asserts the top hit but not that
the exact-ID short-circuit prevented FTS results from being appended; update the
test around the db.search call and the results variable by inserting an
assertion that results.count == 1 before checking results.first?["chunk_id"], so
that the test fails if any additional FTS matches were included (referencing
db.search(query:limit:), results, and the "chunk_id" check).
- Around line 189-198: Add an integration-style test that actually exercises the
migration entrypoint rather than only the helper decision: create a test (e.g.,
alongside testLargeTrigramDesyncDoesNotForceSynchronousStartupRebuild) that
constructs a database using the legacy/stale trigram FTS schema (old
chunks_fts_trigram table and associated triggers), then call the real migration
path that BrainDatabase uses on open (the same entrypoint that would run startup
repairs), and finally verify substring search works against the migrated DB and
that the trigram table/triggers have been replaced or updated as expected; use
the BrainDatabase open/migrate functions and existing query helpers to locate
where to hook the test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 4309d710-18a7-44c0-899d-cddbe5a67457

📥 Commits

Reviewing files that changed from the base of the PR and between 2d82873 and 59b4025.

📒 Files selected for processing (2)
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
  • brain-bar/Tests/BrainBarTests/DatabaseTests.swift
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Cursor Bugbot
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
  • GitHub Check: Macroscope - Correctness Check
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-03-18T00:12:08.774Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 87
File: brain-bar/Sources/BrainBar/BrainBarServer.swift:118-129
Timestamp: 2026-03-18T00:12:08.774Z
Learning: In Swift files under brain-bar/Sources/BrainBar, enforce that when a critical dependency like the database is nil due to startup ordering (socket before DB), any tool handler that accesses the database must throw an explicit error (e.g., ToolError.noDatabase) instead of returning a default/empty value. Do not allow silent defaults (e.g., guard let db else { return ... }). Flag patterns that silently return defaults when db is nil, as this masks startup timing issues. This guidance applies broadly to similar Swift files in the BrainBar module, not just this one location.

Applied to files:

  • brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-03-29T18:45:40.988Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:40.988Z
Learning: In the BrainBar module’s Swift database layer (notably BrainDatabase.swift), ensure that the `search()` function’s `unreadOnly=true` path orders results by the delivery frontier cursor so the watermark `maxRowID` stays contiguous. Specifically, when `unreadOnly` is enabled, the query must include `ORDER BY c.rowid ASC` (e.g., via `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`). Do not replace the unread-only ordering with relevance-based sorting (e.g., `f.rank`) unconditionally or for the unread-only path, as it can introduce gaps in the watermark and incorrectly mark unseen rows as delivered. Flag any future change to the `ORDER BY` clause in this function that makes relevance sorting apply to the unread-only case.

Applied to files:

  • brain-bar/Sources/BrainBar/BrainDatabase.swift
🪛 SwiftLint (0.63.2)
brain-bar/Sources/BrainBar/BrainDatabase.swift

[Warning] 1698-1698: Prefer empty collection over optional collection

(discouraged_optional_collection)

Comment on lines +1908 to +1912
private func rebuildTrigramFTSTable() throws {
try ensureTrigramFTSSchemaAndTriggers()
try execute("DELETE FROM chunks_fts_trigram")
try backfillTrigramFTSTable()
}

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Drop the stale trigram table before running the “repair” path.

When schemaIsValid is false, .rebuildSynchronously currently calls rebuildTrigramFTSTable(), but that method only does CREATE VIRTUAL TABLE IF NOT EXISTS. If an upgraded DB already has an old chunks_fts_trigram, the old schema/tokenizer survives and the backfill either keeps the broken layout or fails when inserting summary / resolved_query. That makes the legacy repair path unable to repair some existing databases and can fail startup entirely.

Suggested fix
 private func rebuildTrigramFTSTable() throws {
+        try execute("DROP TRIGGER IF EXISTS chunks_fts_trigram_insert")
+        try execute("DROP TRIGGER IF EXISTS chunks_fts_trigram_delete")
+        try execute("DROP TRIGGER IF EXISTS chunks_fts_trigram_update")
+        try execute("DROP TABLE IF EXISTS chunks_fts_trigram")
         try ensureTrigramFTSSchemaAndTriggers()
-        try execute("DELETE FROM chunks_fts_trigram")
         try backfillTrigramFTSTable()
     }

Also applies to: 1954-1968
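The root cause is a SQLite semantic: `CREATE TABLE IF NOT EXISTS` (and its virtual-table form) is a no-op when a table of that name already exists, even with a different layout. A minimal Python `sqlite3` sketch of that behavior (plain tables standing in for the FTS5 trigram table; the real code is Swift):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Simulate an upgraded database that still carries the old table layout.
con.execute("CREATE TABLE chunks_fts_trigram (body)")

# CREATE ... IF NOT EXISTS is a no-op here: the old single-column layout
# survives, so backfill inserts of summary / resolved_query would fail.
con.execute(
    "CREATE TABLE IF NOT EXISTS chunks_fts_trigram (body, summary, resolved_query)"
)
cols = [row[1] for row in con.execute("PRAGMA table_info(chunks_fts_trigram)")]
print(cols)  # ['body']

# Dropping first lets the new schema actually take effect.
con.execute("DROP TABLE IF EXISTS chunks_fts_trigram")
con.execute(
    "CREATE TABLE IF NOT EXISTS chunks_fts_trigram (body, summary, resolved_query)"
)
cols = [row[1] for row in con.execute("PRAGMA table_info(chunks_fts_trigram)")]
print(cols)  # ['body', 'summary', 'resolved_query']
```

The same no-op applies to the trigram triggers, which is why the suggested fix drops them alongside the table before recreating the schema.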

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 1908 - 1912,
rebuildTrigramFTSTable currently only calls ensureTrigramFTSSchemaAndTriggers()
which uses CREATE VIRTUAL TABLE IF NOT EXISTS, leaving an old chunks_fts_trigram
with stale schema/tokenizer; modify rebuildTrigramFTSTable (and the same repair
path invoked by rebuildSynchronously) to DROP TABLE IF EXISTS chunks_fts_trigram
before calling ensureTrigramFTSSchemaAndTriggers() so the virtual table is
recreated cleanly and backfillTrigramFTSTable() inserts match the new schema.
Ensure you reference rebuildTrigramFTSTable(),
ensureTrigramFTSSchemaAndTriggers(), backfillTrigramFTSTable(), and the repair
path via rebuildSynchronously when applying the change.

…atabase

Fix the SwiftUI lifecycle gap where BrainBarRuntime.database could be installed while the NSPanel/root view was hidden, leaving the command bar's local @State view model nil and rendering “Warming memory” until a later activation refreshed the view.

Replace the subscription-dependent local @State assignment with a stable provider that derives the QuickCaptureViewModel from the current runtime database, so already-set database values are handled on first render as well as later readiness changes.

Audit reference: docs.local/plans/2026-05-02-brainlayer-reliability-hardening/phase-1/findings.md § WARMING-MEMORY AUDIT. New RED→GREEN coverage: BrainBarDashboardPanelControllerTests/testCommandBarBecomesReadyWhenDatabaseWasInstalledWhilePanelWasHidden. Search input latency and graph tab stalls stay deferred to polish-pass Phase 1 / reliability-hardening Phase 2.


@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

There are 5 total unresolved issues (including 4 from previous reviews).


Reviewed by Cursor Bugbot for commit b8588f4.

return .rebuildSynchronously
}
return .skipBackfill
}

Trigram backfill ignores chunk limit on first migration

High Severity

trigramStartupRepairDecision returns .rebuildSynchronously whenever tableExists is false, regardless of chunkCount. This means the first time an existing large database (e.g., 360K chunks) gets this migration via ensureMigrations, the synchronousTrigramBackfillChunkLimit guard is bypassed and a full synchronous backfill occurs — the exact startup-blocking scenario the stability addendum describes fixing.
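A hypothetical Python sketch of the decision function with the guard this finding asks for (the real implementation is Swift; the function and constant names, and the 10K threshold from the PR notes, are assumptions here):

```python
# Assumed threshold; the PR notes mention a 10K-chunk startup-safety guard.
SYNCHRONOUS_BACKFILL_CHUNK_LIMIT = 10_000

def trigram_startup_repair_decision(table_exists, schema_is_valid, chunk_count):
    if not table_exists or not schema_is_valid:
        # Guard added per the finding: a missing table on a large existing
        # install defers the backfill instead of blocking startup with a
        # full synchronous rebuild.
        if chunk_count > SYNCHRONOUS_BACKFILL_CHUNK_LIMIT:
            return "deferBackfill"
        return "rebuildSynchronously"
    return "skipBackfill"

print(trigram_startup_repair_decision(False, True, 360_000))  # deferBackfill
print(trigram_startup_repair_decision(False, True, 500))      # rebuildSynchronously
print(trigram_startup_repair_decision(True, True, 360_000))   # skipBackfill
```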

Additional Locations (1)



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b8588f4f2f


ackFloor: ackFloor,
unreadOnly: unreadOnly
)
maxRowID = max(maxRowID, searchResult.maxRowID)

P1 Badge Advance unread watermark only for delivered rows

In unread mode, maxRowID is updated from every runFTSSearch result set before dedupe/limit truncation, so alias-expanded queries can mark rows as delivered that were never returned to the caller. A concrete case is results.count == limit - 1: the next expanded query fetches up to limit rows, appendDeduped adds only one row, but markDelivered still advances to that query’s highest rowid, skipping unseen unread chunks on subsequent polls.
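A toy Python model of the failure mode (hypothetical names; the real code is Swift): each batch is the ascending-rowid result set of one alias-expanded query, and the watermark must only advance over rows actually delivered, not over everything fetched.

```python
def deliver(batches, limit):
    """Each batch: ascending rowids from one expanded query.
    Returns (delivered rows, watermark-over-fetched, watermark-over-delivered)."""
    delivered, seen = [], set()
    fetched_watermark = 0
    for batch in batches:
        # Buggy behavior: advance over every fetched row, pre-dedupe/limit.
        fetched_watermark = max(fetched_watermark, max(batch))
        for rowid in batch:
            if rowid not in seen and len(delivered) < limit:
                seen.add(rowid)
                delivered.append(rowid)
    # Correct behavior: advance only over rows returned to the caller.
    delivered_watermark = max(delivered)
    return delivered, fetched_watermark, delivered_watermark

# limit=3: the first query returns 2 rows, the expanded query fetches 3 more,
# but only one fits before the limit truncates.
d, fetched, delivered = deliver([[1, 2], [3, 4, 5]], limit=3)
print(d, fetched, delivered)  # [1, 2, 3] 5 3
```

Marking delivery up to rowid 5 here would skip unread rows 4 and 5 on the next poll; the delivered-only watermark (3) keeps the frontier contiguous.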


Comment on lines +1984 to +1985
if !tableExists || !schemaIsValid {
return .rebuildSynchronously

P1 Badge Avoid synchronous trigram rebuild on first large migration

When chunks_fts_trigram is missing (tableExists == false), startup always chooses .rebuildSynchronously regardless of database size. On large existing installs upgrading to this build, that forces a full trigram backfill during app startup, which can hold long write locks and delay availability (the same class of lock-pressure issue this patch is trying to mitigate for desynced tables).



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift`:
- Around line 129-130: The current implementation keeps returning a cached
QuickCaptureViewModel from commandBarProvider.viewModel(database:
runtime.database) even after runtime.database changes; update the code so the
commandBarViewModel is recreated when runtime.database is replaced or cleared —
either make commandBarViewModel a purely computed property that calls
commandBarProvider.viewModel(database: runtime.database) each access, or add
explicit invalidation logic that compares the cached QuickCaptureViewModel's
database to runtime.database and discards it when they differ; update the same
pattern referenced around the other occurrence (lines ~160-165) so no stale view
model is returned after a database reopen/swap.

In `@brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift`:
- Around line 44-84: The test
testCommandBarBecomesReadyWhenDatabaseWasInstalledWhilePanelWasHidden should
stop using fixed 50ms spins (runMainRunLoop) and instead poll until the
KeyHandlingCommandBarField appears or a timeout elapses; add a helper like
waitForCommandBarField(timeout:) that repeatedly runs the main run loop a short
interval and checks controller.panelForTesting.contentView (using
findSubview(ofType: KeyHandlingCommandBarField.self, in:)) and return the field
when found (or nil after timeout), then replace the explicit runMainRunLoop
calls and the final direct lookup with a call to this helper and XCTAssertNotNil
on its result so the test becomes robust under slower CI scheduling.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 0f7cec59-61fb-49ff-a81c-8dfef1e401e4

📥 Commits

Reviewing files that changed from the base of the PR and between 59b4025 and b8588f4.

📒 Files selected for processing (2)
  • brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
  • brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: Cursor Bugbot
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-03-18T00:12:08.774Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 87
File: brain-bar/Sources/BrainBar/BrainBarServer.swift:118-129
Timestamp: 2026-03-18T00:12:08.774Z
Learning: In Swift files under brain-bar/Sources/BrainBar, enforce that when a critical dependency like the database is nil due to startup ordering (socket before DB), any tool handler that accesses the database must throw an explicit error (e.g., ToolError.noDatabase) instead of returning a default/empty value. Do not allow silent defaults (e.g., guard let db else { return ... }). Flag patterns that silently return defaults when db is nil, as this masks startup timing issues. This guidance applies broadly to similar Swift files in the BrainBar module, not just this one location.

Applied to files:

  • brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
📚 Learning: 2026-03-29T18:45:40.988Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:40.988Z
Learning: In the BrainBar module’s Swift database layer (notably BrainDatabase.swift), ensure that the `search()` function’s `unreadOnly=true` path orders results by the delivery frontier cursor so the watermark `maxRowID` stays contiguous. Specifically, when `unreadOnly` is enabled, the query must include `ORDER BY c.rowid ASC` (e.g., via `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`). Do not replace the unread-only ordering with relevance-based sorting (e.g., `f.rank`) unconditionally or for the unread-only path, as it can introduce gaps in the watermark and incorrectly mark unseen rows as delivered. Flag any future change to the `ORDER BY` clause in this function that makes relevance sorting apply to the unread-only case.

Applied to files:

  • brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
🪛 SwiftLint (0.63.2)
brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift

[Warning] 6-6: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift

[Warning] 156-156: Classes should have an explicit deinit method

(required_deinit)

Comment on lines +129 to +130
private var commandBarViewModel: QuickCaptureViewModel? {
commandBarProvider.viewModel(database: runtime.database)

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Recreate the command-bar view model when the database instance changes.

This provider only initializes QuickCaptureViewModel once, then keeps returning that same instance even if runtime.database is later replaced or cleared. That leaves the command bar bound to a stale database handle after a reopen/swap.

Suggested fix
 `@MainActor`
 private final class BrainBarCommandBarViewModelProvider {
     private let panelState = QuickCapturePanelState()
+    private var currentDatabaseID: ObjectIdentifier?
     private var currentViewModel: QuickCaptureViewModel?
 
     func viewModel(database: BrainDatabase?) -> QuickCaptureViewModel? {
-        guard let database else { return currentViewModel }
-        if currentViewModel == nil {
+        guard let database else {
+            currentDatabaseID = nil
+            currentViewModel = nil
+            return nil
+        }
+
+        let databaseID = ObjectIdentifier(database)
+        if currentDatabaseID != databaseID {
+            currentDatabaseID = databaseID
             currentViewModel = QuickCaptureViewModel(db: database, panelState: panelState)
         }
         return currentViewModel
     }
 }

Based on learnings: when a critical dependency like the database is nil due to startup ordering, do not silently return defaults that mask the lifecycle problem.

Also applies to: 160-165
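The suggested fix is identity-keyed cache invalidation: key the cached view model by the database object's identity (Swift's `ObjectIdentifier`, roughly Python's `id()`), rebuild on a new identity, and drop the cache when the dependency is cleared. A hypothetical Python analogue (a dict stands in for `QuickCaptureViewModel`):

```python
class CommandBarViewModelProvider:
    """Sketch of identity-keyed invalidation: a reopened/swapped database
    yields a fresh view model; a cleared database drops the cache instead
    of silently returning a stale instance."""

    def __init__(self):
        self._db_id = None
        self._view_model = None

    def view_model(self, database):
        if database is None:
            self._db_id = None
            self._view_model = None
            return None
        if self._db_id != id(database):
            self._db_id = id(database)
            # Stand-in for QuickCaptureViewModel(db:panelState:).
            self._view_model = {"database": database}
        return self._view_model

provider = CommandBarViewModelProvider()
db_a, db_b = object(), object()
vm1 = provider.view_model(db_a)
vm2 = provider.view_model(db_a)  # same identity: cache hit
vm3 = provider.view_model(db_b)  # new identity: rebuilt
print(vm1 is vm2, vm2 is vm3, provider.view_model(None))  # True False None
```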

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift` around lines 129 -
130, The current implementation keeps returning a cached QuickCaptureViewModel
from commandBarProvider.viewModel(database: runtime.database) even after
runtime.database changes; update the code so the commandBarViewModel is
recreated when runtime.database is replaced or cleared — either make
commandBarViewModel a purely computed property that calls
commandBarProvider.viewModel(database: runtime.database) each access, or add
explicit invalidation logic that compares the cached QuickCaptureViewModel's
database to runtime.database and discards it when they differ; update the same
pattern referenced around the other occurrence (lines ~160-165) so no stale view
model is returned after a database reopen/swap.

Comment on lines +44 to +84
func testCommandBarBecomesReadyWhenDatabaseWasInstalledWhilePanelWasHidden() {
let runtime = BrainBarRuntime()
let controller = BrainBarDashboardPanelController(runtime: runtime)
let tempDBPath = NSTemporaryDirectory() + "brainbar-commandbar-ready-\(UUID().uuidString).db"
let db = BrainDatabase(path: tempDBPath)
let collector = StatsCollector(
dbPath: tempDBPath,
daemonMonitor: DaemonHealthMonitor(targetPID: getpid())
)
defer {
collector.stop()
db.close()
controller.dismiss()
try? FileManager.default.removeItem(atPath: tempDBPath)
try? FileManager.default.removeItem(atPath: tempDBPath + "-wal")
try? FileManager.default.removeItem(atPath: tempDBPath + "-shm")
}

// Match launch order: AppDelegate creates the hidden NSPanel before the
// async database install lands, then the user opens BrainBar later.
_ = controller.panelForTesting.contentViewController?.view
runMainRunLoop()

runtime.install(collector: collector, injectionStore: nil, database: db)
runMainRunLoop()

controller.show()
runMainRunLoop()

let field = controller.panelForTesting.contentView.flatMap {
findSubview(ofType: KeyHandlingCommandBarField.self, in: $0)
}
XCTAssertNotNil(
field,
"Command bar should create its ready text field when runtime.database was installed before the panel became visible."
)
}

private func runMainRunLoop() {
RunLoop.main.run(until: Date().addingTimeInterval(0.05))
}

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Replace the fixed 50 ms run-loop sleeps with a timeout-based wait.

This test currently assumes AppKit/SwiftUI finishes panel and field setup within three 50 ms spins. That is likely to flake under slower CI scheduling. Poll for KeyHandlingCommandBarField until a reasonable timeout instead of relying on fixed delays.

Suggested fix
-        runMainRunLoop()
+        waitUntil {
+            controller.panelForTesting.contentView.flatMap {
+                findSubview(ofType: KeyHandlingCommandBarField.self, in: $0)
+            } != nil
+        }
 
         let field = controller.panelForTesting.contentView.flatMap {
             findSubview(ofType: KeyHandlingCommandBarField.self, in: $0)
         }
         XCTAssertNotNil(
@@
-    private func runMainRunLoop() {
-        RunLoop.main.run(until: Date().addingTimeInterval(0.05))
+    private func waitUntil(
+        timeout: TimeInterval = 1.0,
+        condition: () -> Bool
+    ) {
+        let deadline = Date().addingTimeInterval(timeout)
+        while !condition(), Date() < deadline {
+            RunLoop.main.run(until: Date().addingTimeInterval(0.01))
+        }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift`
around lines 44 - 84, The test
testCommandBarBecomesReadyWhenDatabaseWasInstalledWhilePanelWasHidden should
stop using fixed 50ms spins (runMainRunLoop) and instead poll until the
KeyHandlingCommandBarField appears or a timeout elapses; add a helper like
waitForCommandBarField(timeout:) that repeatedly runs the main run loop a short
interval and checks controller.panelForTesting.contentView (using
findSubview(ofType: KeyHandlingCommandBarField.self, in:)) and return the field
when found (or nil after timeout), then replace the explicit runMainRunLoop
calls and the final direct lookup with a call to this helper and XCTAssertNotNil
on its result so the test becomes robust under slower CI scheduling.

@EtanHey EtanHey merged commit 74693e5 into main May 2, 2026
7 checks passed
EtanHey pushed a commit that referenced this pull request May 2, 2026
- Recent Hardening section traces each claim to a merged PR
- BrainBar build-script guards (#264, #265) called out at the install step
- Phase B preventive infra block (orchestrator#58, #60) connects deploy registry
  to the BrainBar build-stamp + canonical-build refuse layer
- In-flight PR #251 entry documents NSPanel revival + trigram FTS5 startup-safety
  guard (10K-chunk threshold) + preserved /tmp/brainbar.sock pub/sub plane

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
EtanHey pushed a commit that referenced this pull request May 3, 2026
EtanHey added a commit that referenced this pull request May 3, 2026
…267)

Co-authored-by: Test User <test@example.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>