feat(brainbar): restore dashboard flow UI on Phase B main (#251)
Conversation
📝 Walkthrough

Adds a dashboard NSPanel and menu popover; refactors root view/tab rendering and the dashboard into scrollable cards; adds trigram FTS with alias expansion and an exact-id fast path; extends dashboard stats with epoch timestamps and time-window semantics; introduces a deterministic KG atlas/presentation; expands enrichment CLI sources; adds agent-activity sampling and many tests.

Changes

- App & Windowing
- Dashboard UI & Metrics
- Agent Activity
- Injection Feed & Presentation
- Knowledge Graph — Layout & Presentation
- Knowledge Graph — Views & Theming
- Database & Search
- Backend Enrichment & Session Discovery (brainlayer)
- Tests, Docs & Miscellaneous
Sequence Diagram(s): Skipped — changes span multiple independent flows and features rather than a single concentrated multi-component sequence that would benefit from one diagram.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c23bd8b9d7
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
func refresh(force: Bool = false) {
    let nextDaemon = daemonMonitor.sample()
    agentActivity = agentActivityMonitor.sample()
Move agent snapshot sampling off the main actor
StatsCollector is @MainActor and refresh() runs every second, but this line invokes AgentActivityMonitor.sample(), which synchronously shells out to /bin/ps and waits for completion in runSnapshotCommand. On systems with large process tables, this blocks the UI thread on each poll and can cause visible stutter or delayed interaction; sampling should happen off-main and publish back to the actor.
defer { self.simulationTask = nil }

while self.timerActive && !Task.isCancelled {
🟢 Low KnowledgeGraph/KGSimulationController.swift:36
defer { self.simulationTask = nil } at line 36 nils simulationTask unconditionally when the task completes. If stop() is called (cancelling task A and niling the property) followed immediately by start() (creating task B), task A's defer block still executes and sets simulationTask = nil, discarding the reference to task B. Subsequent stop() calls then fail to cancel task B. Consider removing the defer block and letting stop() exclusively manage the property lifecycle, or capture task identity in the defer to avoid overwriting a newer task.
- simulationTask = Task { @MainActor [weak self] in
- guard let self else { return }
- defer { self.simulationTask = nil }

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/KnowledgeGraph/KGSimulationController.swift around lines 36-38:
`defer { self.simulationTask = nil }` at line 36 nils `simulationTask` unconditionally when the task completes. If `stop()` is called (cancelling task A and niling the property) followed immediately by `start()` (creating task B), task A's defer block still executes and sets `simulationTask = nil`, discarding the reference to task B. Subsequent `stop()` calls then fail to cancel task B. Consider removing the `defer` block and letting `stop()` exclusively manage the property lifecycle, or capture task identity in the defer to avoid overwriting a newer task.
Evidence trail:
brain-bar/Sources/BrainBar/KnowledgeGraph/KGSimulationController.swift (REVIEWED_COMMIT):
- Line 34-45: `start()` creates Task and assigns to `simulationTask`
- Line 36: `defer { self.simulationTask = nil }` inside the Task closure
- Line 38: `await self.sleep(...)` - suspension point where Task A yields control
- Lines 48-51: `stop()` cancels task and nils `simulationTask`
- The defer block does not capture or check task identity, so it will nil any value in `simulationTask` including a newer Task B created after Task A was cancelled.
…ments dominant

- Move agent presence strip below chart cards so Writes/Enrichments sit directly under the hero card as the visually dominant section.
- Demote the queue rail into a compact single-line header (icon + status pill) with only Backlog and Coverage metrics; drop the redundant Direction metric and the verbose secondary description.
- Preserve the disclosure-collapsed runtime/details panel so the dashboard stays calm by default and scales gracefully on narrow windows.

Runs swift test --filter BrainBarUXLogicTests (16 tests, all passing).
Force-pushed from c23bd8b to 2f28c06.
accentColor: livePresentation.accentColor
)

private var flowSummary: DashboardFlowSummary {
    DashboardFlowSummary.derive(daemon: collector.daemon, stats: collector.stats)
Computed flowSummary recalculated many times per render
Medium Severity
The flowSummary computed property calls DashboardFlowSummary.derive(daemon:stats:) on every access, and it's accessed dozens of times during a single body evaluation — from overviewCard, overviewNarrative, overviewBadgeRow, overviewCaption, writesCard, enrichmentsCard, queueRail, and diagnostics. Each access triggers a full re-derive including status computation and string formatting. Storing the result in a local let at the top of body would eliminate the redundant work.
Reviewed by Cursor Bugbot for commit 2f28c06.
let indexInRing = offset - (ring * ring)
let countInRing = max(ring * 6, 1)
let angle = (Double(indexInRing) / Double(countInRing)) * Double.pi * 2
let radius = CGFloat(42 + ring * 30)
Atlas ring layout allows more nodes than positions
Low Severity
The ring formula assigns offsets 1–8 to ring 1 (since ceil((sqrt(9)-1)/2) still equals 1), but countInRing for ring 1 is 1 * 6 = 6. Nodes at offsets 7 and 8 get indexInRing values of 6 and 7, producing angles ≥ 2π that wrap around and overlap with earlier nodes in the same ring, causing visual collisions in the knowledge atlas.
Reviewed by Cursor Bugbot for commit 2f28c06.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2f28c06161
WHEN \(column) IS NULL OR TRIM(\(column)) = '' THEN NULL
WHEN substr(\(column), -1) = 'Z' THEN unixepoch(\(column))
WHEN substr(\(column), -6, 1) IN ('+', '-') THEN unixepoch(\(column))
WHEN instr(\(column), 'T') > 0 THEN unixepoch(\(column), 'utc')
Parse timezone-naive timestamps without applying `utc`
The new instr(...,'T') branch converts naive ISO timestamps with unixepoch(column, 'utc'), but in SQLite the utc modifier assumes the input is local time and shifts it by the machine timezone. On non-UTC hosts, legacy/naive values like 2026-04-18T09:58:00 are moved by several hours, which mis-buckets dashboard activity and can make lastWriteAt/lastEnrichedAt appear in the future (causing false “live now” status via eventIsLive).
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift (1)
99-115: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win — Don't silently keep the last healthy stats after a database read failure.

When refresh(force: false) throws, the collector reuses the previous stats and still derives state from them. That can leave the dashboard showing a healthy pipeline even though every current DB read is failing. Surface the failure explicitly or force a degraded/unavailable state instead of masking it with stale data. Based on learnings: do not allow BrainBar database failures to degrade into silent defaults, because that hides real startup/runtime issues.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@brain-bar/Sources/BrainBar/BrainBarWindowState.swift`:
- Around line 57-74: The status copy in derive(stats:) hardcodes "60s" instead
of using the new live-window metadata; update the BrainBarLivePresentation
initializers to interpolate stats.liveWindowMinutes into the badge/status text
(e.g., replace "Enrichments in the last 60s" and "No enrichments in the last
60s" with strings that use stats.liveWindowMinutes) and keep using
DashboardMetricFormatter.lastEventString for badgeText so the displayed window
matches the logic that checks stats.eventIsLive.
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 44-45: The default for activity window is still 30 minutes in the
BrainDatabase initializer signature (activityWindowMinutes: Int = 30); update
that default to 60 so callers that omit the parameter get a 60-minute dashboard
window, and make the same change for the other matching occurrence of the
initializer/parameter elsewhere in the file (the second activityWindowMinutes
default at the noted location).
In `@brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift`:
- Around line 123-137: The child process can deadlock because stderr is set to a
separate Pipe that is never read; change the code to redirect stderr into the
same Pipe used for stdout (replace the separate Pipe assignment for
process.standardError with process.standardError = output) so both streams are
read via output.fileHandleForReading before calling process.waitUntilExit()
(refer to the process, output, and fileHandleForReading symbols and ensure
process.standardError is not left as an unread Pipe).
In `@brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift`:
- Around line 84-86: The refresh(force:) method is sampling
daemonMonitor.sample() and agentActivityMonitor.sample() on the UI actor causing
jank; move the potentially blocking sampling off the `@MainActor` by performing
the calls in a background task (e.g., Task.detached or DispatchQueue.global) and
then marshal only the state assignment back to the MainActor. Concretely: in
refresh(force:) call Task.detached { let next = daemonMonitor.sample(); let
activity = agentActivityMonitor.sample(); await MainActor.run { self.nextDaemon
= next; self.agentActivity = activity } } (use the actual property names used in
this file) so sampling happens off-main and only the minimal UI updates run on
the main actor.
In `@brain-bar/Sources/BrainBar/InjectionFeedView.swift`:
- Around line 220-223: The Button handlers currently swallow errors by using
try? when calling store.expandedConversation(chunkID:), so database/read
failures become silent no-ops; change both occurrences (the Button action around
selectedConversation = try? store.expandedConversation(...) and the similar code
at the 257-258 site) to perform a do/catch: call try
store.expandedConversation(chunkID:) inside do, assign selectedConversation on
success, and in catch set an explicit error state (e.g., set a `@State`
errorMessage / showError flag or update an error enum) so the UI can surface a
visible failure (alert, overlay, or error view) instead of failing silently.
Ensure you reference the same selectedConversation and
store.expandedConversation(chunkID:) symbols so the handlers report the caught
error details to the UI.
In `@brain-bar/Sources/BrainBar/InjectionPresentation.swift`:
- Around line 84-95: parseDate is failing to parse non-fractional SQLite
timestamps because sqliteTimestampFormatter() only matches "yyyy-MM-dd
HH:mm:ss.SSSSSS" while BrainDatabase.sqliteDateFormatter emits "yyyy-MM-dd
HH:mm:ss"; add a new DateFormatter (e.g., sqliteNonFractionalFormatter or extend
sqliteTimestampFormatter to accept optional fractional seconds) that uses the
format "yyyy-MM-dd HH:mm:ss" and update parseDate to try this formatter (place
it before/after the fractional one as appropriate) so non-fractional sqlite
timestamps are parsed correctly; reference functions: parseDate(_:) and
sqliteTimestampFormatter().
In `@brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift`:
- Around line 57-60: The ring calculation is wrong (it keeps offsets 7 and 8 in
ring 1) causing overlapping nodes; replace the current ring/index math
(variables ring, indexInRing, countInRing, angle) with logic that computes the
smallest ring r such that offset < totalCellsUpToRing(r) where
totalCellsUpToRing(r) = 1 + 3*r*(r+1), then set indexInRing = offset -
totalCellsUpToRing(r-1) (with totalCellsUpToRing(-1)=0) and countInRing = max(6
* r, 1); keep angle = (Double(indexInRing) / Double(countInRing)) * 2*Double.pi.
This guarantees the first overflow ring starts at offset 7 and prevents angle
reuse/overlap.
In `@brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift`:
- Around line 32-56: The current grouping builds regions only from
orderedEntityTypes so any nodes whose entityType is not listed there get dropped
from the Snapshot; update the construction of regions (the use of grouped and
orderedEntityTypes) to also include Region entries for any remaining
grouped.keys not present in orderedEntityTypes (preserving the same node sort
logic and using title(for:) to generate the title), then ensure
Snapshot.visibleNodes and selectedRegion logic use the complete regions array;
mirror the same change in the other occurrence around lines 73-82 where regions
are built.
In `@brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift`:
- Around line 56-67: The code is seeding KGAtlasLayout with geo.size (the outer
container which includes the sidebar) causing layout/hit-test drift when the
inspector is visible; change both occurrences (the onAppear/onChange block using
geo.size and the similar block at lines ~109-114) to use the actual inner canvas
size that setCanvas(size:) stores (e.g., the view's stored canvasSize or
whatever property setCanvas updates) when calling KGAtlasLayout.seededNodes and
when initially setting the canvas; update references from geo.size to that inner
canvasSize so seeding and hit-testing use the displayed viewport rather than the
outer container.
In `@brain-bar/Tests/BrainBarTests/KGAtlasLayoutTests.swift`:
- Around line 5-26: Update the test
testSeededNodesPlaceEntityTypesIntoStableRegions to include a regression case
with at least eight nodes of the same entityType (e.g., add more KGNode entries
with entityType "person") before calling KGAtlasLayout.seededNodes so the first
ring-to-second-ring transition is exercised; after seeding, assert there are >=8
person nodes and validate their x-positions do not overlap at the ring boundary
by checking ordering relative to project/tool nodes (use personNodes indices and
projectNode/toolNode from the same test) and also assert pairwise horizontal
spacing between adjacent personNodes exceeds a small positive threshold (to
catch overlap when the ring changes). Ensure references to
KGAtlasLayout.seededNodes and testSeededNodesPlaceEntityTypesIntoStableRegions
are used so the new case lives in the same test.
In `@docs/plans/2026-04-18-brainbar-dashboard-flow-collab.md`:
- Around line 3-36: The markdown violates MD022/MD058 because headings and the
task table are not separated by blank lines; update the document by inserting a
single blank line before and after each top-level section heading (e.g., "##
Goal", "## Agents", "## Constraints", "## Task Board", "## Gates", "## Notes")
and ensure the task table has a blank line above and below it so there is a
blank separator between the preceding heading and the table and between the
table and the following heading.
In `@docs/plans/2026-04-18-brainbar-dashboard-flow.md`:
- Around line 13-163: The document uses H3 (###) for top-level Task headings
which skips H2 and fails MD001; update each task heading text like "Task 1: Add
explicit dashboard time semantics", "Task 2: Replace the single enum with a
richer flow model", etc., to use H2 (##) instead of H3 so heading levels are
hierarchical and sequential, keeping the rest of the content unchanged; scan the
file for lines starting with "### Task" and replace them with "## Task" to
correct all task-level headings.
In `@docs/plans/2026-04-18-brainbar-injections-graph-hybrid.md`:
- Around line 102-106: Update the doc to match the implemented helper name:
replace the outdated symbol `KGPresentation` with `KGAtlasPresentation` in the
listed plan bullet (and any nearby references in this file) so the plan points
to the existing helper; ensure the description line remains the same (groups
nodes into atlas regions, computes altitude-filtered visible nodes, preserves
selected node visibility) but uses the `KGAtlasPresentation` identifier.
In `@src/brainlayer/pipeline/session_enrichment.py`:
- Around line 361-369: The enrichment flow that selects chunks (see
source_placeholders, source_query and T1_T2_SOURCES) currently drops
build_log/dir_listing chunks and frames prompts as a Claude Code conversation;
update the reconstruction/prompting logic that consumes rows from
cursor.execute(...) so that: (1) ai_code, stack_trace, and user_message chunk
types are preserved verbatim; (2) noise chunk types are skipped entirely; (3)
build_log and dir_listing chunks are not discarded but summarized to a
structural outline only (e.g., file names, sizes, key commands/output lines),
and (4) sessions from Codex/Cursor/Gemini are recognized and use a
neutral/generic multi-model prompt template instead of the Claude-specific
prompt so the enriched context is accurate across models. Ensure the
filtering/transforming happens immediately after fetching rows from the chunks
query so downstream enrichment receives the corrected chunk list.
In `@tests/test_session_enrichment.py`:
- Around line 529-559: In test_agent_cli_sessions_are_discovered_from_chunks,
add a negative case that inserts a chunk using an unrecognized source (e.g.,
"unknown_cli" or "bad_source") and then call list_sessions_for_enrichment(store)
and assert that the corresponding session id is NOT in the returned session_ids;
this ensures the allow-list logic in list_sessions_for_enrichment is enforced
and prevents accidental admission of arbitrary sources.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 51ca8f04-fa23-4288-8c0a-f5fdc598250f
📒 Files selected for processing (34)
- brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
- brain-bar/Sources/BrainBar/BrainBarWindowState.swift
- brain-bar/Sources/BrainBar/BrainDatabase.swift
- brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift
- brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift
- brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift
- brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift
- brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift
- brain-bar/Sources/BrainBar/InjectionFeedView.swift
- brain-bar/Sources/BrainBar/InjectionPresentation.swift
- brain-bar/Sources/BrainBar/InjectionStore.swift
- brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift
- brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift
- brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift
- brain-bar/Sources/BrainBar/KnowledgeGraph/KGEdgeView.swift
- brain-bar/Sources/BrainBar/KnowledgeGraph/KGNodeView.swift
- brain-bar/Sources/BrainBar/KnowledgeGraph/KGSidebarView.swift
- brain-bar/Sources/BrainBar/KnowledgeGraph/KGViewModel.swift
- brain-bar/Sources/BrainBar/Views/Components/ChunkConversationSheet.swift
- brain-bar/Tests/BrainBarTests/AgentActivityMonitorTests.swift
- brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift
- brain-bar/Tests/BrainBarTests/BrainBarWindowStateTests.swift
- brain-bar/Tests/BrainBarTests/DashboardTests.swift
- brain-bar/Tests/BrainBarTests/InjectionPresentationTests.swift
- brain-bar/Tests/BrainBarTests/KGAtlasLayoutTests.swift
- brain-bar/Tests/BrainBarTests/KGAtlasPresentationTests.swift
- brain-bar/Tests/BrainBarTests/KnowledgeGraphTests.swift
- docs/plans/2026-04-18-brainbar-dashboard-flow-collab.md
- docs/plans/2026-04-18-brainbar-dashboard-flow.md
- docs/plans/2026-04-18-brainbar-injections-graph-hybrid.md
- src/brainlayer/pipeline/enrichment_tiers.py
- src/brainlayer/pipeline/session_enrichment.py
- tests/test_enrichment_tiers_agent_sources.py
- tests/test_session_enrichment.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Cursor Bugbot
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: Macroscope - Correctness Check
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Flag risky DB or concurrency changes explicitly and do not hand-wave lock behavior
Enforce one-write-at-a-time concurrency constraint; reads are safe but brain_digest is write-heavy and must not run in parallel with other MCP work
Run pytest before claiming behavior changed safely; current test suite has 929 tests
**/*.py: Use paths.py:get_db_path() for all database path resolution; all scripts and the CLI must use this function rather than hardcoding paths
When performing bulk database operations: stop enrichment workers first, checkpoint WAL before and after, drop FTS triggers before bulk deletes, batch deletes in 5-10K chunks, and checkpoint every 3 batches
Files: src/brainlayer/pipeline/session_enrichment.py, tests/test_enrichment_tiers_agent_sources.py, tests/test_session_enrichment.py, src/brainlayer/pipeline/enrichment_tiers.py
src/brainlayer/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
src/brainlayer/**/*.py: Use retry logic on SQLITE_BUSY errors; each worker must use its own database connection to handle concurrency safely
Classification must preserve ai_code, stack_trace, and user_message verbatim; skip noise entries entirely and summarize build_log and dir_listing entries (structure only)
Use AST-aware chunking via tree-sitter; never split stack traces; mask large tool output
For enrichment backend selection: use Groq as primary backend (cloud, configured in launchd plist), Gemini as fallback via enrichment_controller.py, and Ollama as offline last-resort; allow override via the BRAINLAYER_ENRICH_BACKEND env var
Configure enrichment rate via the BRAINLAYER_ENRICH_RATE environment variable (default 0.2 = 12 RPM)
Implement chunk lifecycle columns superseded_by, aggregated_into, and archived_at on the chunks table; exclude lifecycle-managed chunks from default search; allow include_archived=True to show history
Implement brain_supersede with a safety gate for personal data (journals, notes, health/finance); use soft-delete for brain_archive with a timestamp
Add a supersedes parameter to brain_store for atomic store-and-replace operations
Run linting and formatting with: ruff check src/ && ruff format src/
Run tests with pytest
Use PRAGMA wal_checkpoint(FULL) before and after bulk database operations to prevent WAL bloat
Files: src/brainlayer/pipeline/session_enrichment.py, src/brainlayer/pipeline/enrichment_tiers.py
🧠 Learnings (11)
📚 Learning: 2026-03-18T00:12:08.774Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 87
File: brain-bar/Sources/BrainBar/BrainBarServer.swift:118-129
Timestamp: 2026-03-18T00:12:08.774Z
Learning: In Swift files under brain-bar/Sources/BrainBar, enforce that when a critical dependency like the database is nil due to startup ordering (socket before DB), any tool handler that accesses the database must throw an explicit error (e.g., ToolError.noDatabase) instead of returning a default/empty value. Do not allow silent defaults (e.g., guard let db else { return ... }). Flag patterns that silently return defaults when db is nil, as this masks startup timing issues. This guidance applies broadly to similar Swift files in the BrainBar module, not just this one location.
Applied to files:
brain-bar/Sources/BrainBar/KnowledgeGraph/KGViewModel.swift, brain-bar/Sources/BrainBar/InjectionStore.swift, brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift, brain-bar/Sources/BrainBar/KnowledgeGraph/KGNodeView.swift, brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift, brain-bar/Sources/BrainBar/Views/Components/ChunkConversationSheet.swift, brain-bar/Sources/BrainBar/InjectionPresentation.swift, brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift, brain-bar/Sources/BrainBar/BrainBarWindowState.swift, brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift, brain-bar/Sources/BrainBar/InjectionFeedView.swift, brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift, brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift, brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift, brain-bar/Sources/BrainBar/KnowledgeGraph/KGSidebarView.swift, brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift, brain-bar/Sources/BrainBar/BrainDatabase.swift, brain-bar/Sources/BrainBar/KnowledgeGraph/KGEdgeView.swift, brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
📚 Learning: 2026-03-29T18:45:40.988Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:40.988Z
Learning: In the BrainBar module’s Swift database layer (notably BrainDatabase.swift), ensure that the `search()` function’s `unreadOnly=true` path orders results by the delivery frontier cursor so the watermark `maxRowID` stays contiguous. Specifically, when `unreadOnly` is enabled, the query must include `ORDER BY c.rowid ASC` (e.g., via `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`). Do not replace the unread-only ordering with relevance-based sorting (e.g., `f.rank`) unconditionally or for the unread-only path, as it can introduce gaps in the watermark and incorrectly mark unseen rows as delivered. Flag any future change to the `ORDER BY` clause in this function that makes relevance sorting apply to the unread-only case.
Applied to files:
brain-bar/Sources/BrainBar/InjectionStore.swift, brain-bar/Sources/BrainBar/InjectionPresentation.swift, brain-bar/Sources/BrainBar/BrainBarWindowState.swift, brain-bar/Sources/BrainBar/InjectionFeedView.swift, brain-bar/Sources/BrainBar/BrainDatabase.swift, brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
📚 Learning: 2026-04-11T23:47:49.756Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-04-11T23:47:49.756Z
Learning: Applies to `src/brainlayer/enrichment_controller.py`: `service_tier="flex"` is the intentional default for all Gemini enrichment calls. Pass-2 enrichment is asynchronous backlog work where 1–15 minute latency is acceptable, and the 50% Gemini Flex Inference discount materially reduces backlog cost. This is locked by R84b design (§8 Q2). The `BRAINLAYER_GEMINI_SERVICE_TIER` environment variable is purely an operational escape hatch (e.g. `standard`), not the intended runtime default. Do not flag `service_tier="flex"` as a concern on this code path.
Applied to files:
src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-01T01:24:44.281Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-01T01:24:44.281Z
Learning: Applies to src/brainlayer/*enrichment*.py : Enrichment backend priority: Groq (primary/cloud) → Gemini (fallback) → Ollama (offline last-resort), configurable via `BRAINLAYER_ENRICH_BACKEND` environment variable
Applied to files:
src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-02T23:32:14.543Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-02T23:32:14.543Z
Learning: Applies to src/brainlayer/*enrichment*.py : Enrichment rate configurable via `BRAINLAYER_ENRICH_RATE` environment variable (default 0.2 = 12 RPM)
Applied to files:
src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/**/*.py : Classification must preserve `ai_code`, `stack_trace`, and `user_message` verbatim; skip `noise` entries entirely and summarize `build_log` and `dir_listing` entries (structure only)
Applied to files:
src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/**/*.py : For enrichment backend selection: use Groq as primary backend (cloud, configured in launchd plist), Gemini as fallback via `enrichment_controller.py`, and Ollama as offline last-resort; allow override via `BRAINLAYER_ENRICH_BACKEND` env var
Applied to files:
src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-11T16:54:45.631Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-04-11T16:54:45.631Z
Learning: Applies to `src/brainlayer/enrichment_controller.py` and `src/brainlayer/pipeline/rate_limiter.py`: Gemini API calls in the enrichment pipeline are gated by a token bucket rate limiter. The rate is controlled by `BRAINLAYER_ENRICH_RATE` (default `5/s`, burst `10`) to keep throughput inside the Gemini Flex intended envelope. This default supersedes the earlier 0.2 (12 RPM) default for the Gemini Flex integration path.
Applied to files:
src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Pipeline architecture: Extract → Classify → Chunk → Embed → Index, with post-processing for enrichment, brain graph, and Obsidian export
Applied to files:
src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-13T15:04:00.631Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-13T15:04:00.631Z
Learning: Extract → Classify → Chunk → Embed → Index pipeline order must be preserved
Applied to files:
src/brainlayer/pipeline/enrichment_tiers.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Use LaunchAgent `com.brainlayer.watch.plist` with KeepAlive=true and Nice=10 for persistent watcher process
Applied to files:
brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift
🪛 LanguageTool
docs/plans/2026-04-18-brainbar-injections-graph-hybrid.md
[style] ~61-~61: This phrase is redundant. Consider writing “space”.
Context: ...ession ID, query text, and chunk IDs. - Empty space is not acceptable; the rail should coll...
(EMPTY_HOLE)
🪛 markdownlint-cli2 (0.22.1)
docs/plans/2026-04-18-brainbar-dashboard-flow-collab.md
[warning] 3-3: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 6-6: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 18-18: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 24-24: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 25-25: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
[warning] 32-32: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 36-36: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
docs/plans/2026-04-18-brainbar-dashboard-flow.md
[warning] 13-13: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3
(MD001, heading-increment)
🪛 SwiftLint (0.63.2)
brain-bar/Tests/BrainBarTests/InjectionPresentationTests.swift
[Warning] 4-4: Classes should have an explicit deinit method
(required_deinit)
brain-bar/Tests/BrainBarTests/KGAtlasPresentationTests.swift
[Warning] 4-4: Classes should have an explicit deinit method
(required_deinit)
brain-bar/Tests/BrainBarTests/KGAtlasLayoutTests.swift
[Warning] 4-4: Classes should have an explicit deinit method
(required_deinit)
brain-bar/Tests/BrainBarTests/AgentActivityMonitorTests.swift
[Warning] 4-4: Classes should have an explicit deinit method
(required_deinit)
brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift
[Warning] 71-71: Classes should have an explicit deinit method
(required_deinit)
🔇 Additional comments (15)
src/brainlayer/pipeline/enrichment_tiers.py (1)
47-47: Looks aligned with the expanded agent-session set. The broadened source constant and the matching tier filters stay consistent with the new CLI families and the updated session-discovery tests.
Also applies to: 133-135
tests/test_enrichment_tiers_agent_sources.py (1)
6-33: Good coverage for the expanded agent CLI sources. The recent/old split and the source-filter assertion cover the new tiering behavior cleanly.
brain-bar/Sources/BrainBar/Views/Components/ChunkConversationSheet.swift (1)
74-80: Clean close-path unification across sheet and overlay. The `onClose` fallback design is solid and integrates cleanly with overlay-driven presentation.
Also applies to: 94-123
brain-bar/Sources/BrainBar/KnowledgeGraph/KGViewModel.swift (1)
37-51: Good integration of atlas seeding into graph load path. Mapping raw entities to `KGNode` and then seeding positions in one place keeps initialization deterministic and easier to reason about.
brain-bar/Sources/BrainBar/KnowledgeGraph/KGSidebarView.swift (1)
74-93: Nice section decomposition and reusable styling primitives. Breaking the inspector into focused section builders + shared card/chip helpers makes this view much easier to maintain.
Also applies to: 95-161, 163-249
brain-bar/Sources/BrainBar/KnowledgeGraph/KGEdgeView.swift (1)
13-35: Looks good. The edge stroke and label colors now follow the canvas `colorScheme` and stay consistent with the rest of the KnowledgeGraph rendering pipeline.
brain-bar/Sources/BrainBar/InjectionStore.swift (1)
34-42: Looks good. The new parameter check avoids unnecessary forced refreshes while preserving the initial refresh on first start.
brain-bar/Tests/BrainBarTests/KnowledgeGraphTests.swift (1)
8-8: Looks good. Adding `SQLite3` here is appropriate, and the new test explicitly covers the `kg_entities.importance` fallback path that the database now relies on.
Also applies to: 72-89
brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift (1)
341-365: Looks good. The popover now reads from `DashboardFlowSummary` for the headline and sparkline, and the enrichment activity copy correctly follows the configured window.
Also applies to: 396-401
brain-bar/Tests/BrainBarTests/AgentActivityMonitorTests.swift (1)
1-57: Looks good. This test suite covers the key parsing and snapshot-execution paths well, including helper noise filtering and the large-stdout case.
brain-bar/Tests/BrainBarTests/BrainBarWindowStateTests.swift (1)
34-52: Looks good. Injecting a fixed `now` makes these assertions deterministic, and the expected badge/status text now matches the live-presentation behavior.
Also applies to: 54-74
brain-bar/Sources/BrainBar/KnowledgeGraph/KGNodeView.swift (1)
12-41: Looks good. The node renderer now uses the canvas `colorScheme` consistently for its selected and unselected palette.
1-86: Looks good. These tests cover the snapshot grouping, bucketization, and filtering behavior that the new presentation model depends on.
brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift (1)
178-376: Good centralization of flow-state derivation. Consolidating headline/detail/lane/queue derivation into `DashboardFlowSummary` with injectable `now` improves determinism and keeps UI state logic coherent.
brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift (1)
59-91: Timestamp-based age formatting looks solid. `lastEventString` + window label helpers cleanly align the UI with real event timestamps instead of inferred bucket positions.
```diff
     static func derive(stats: BrainDatabase.DashboardStats, now: Date = Date()) -> BrainBarLivePresentation {
         if stats.eventIsLive(stats.lastEnrichedAt, now: now) {
             return BrainBarLivePresentation(
                 sparklineStyle: .active,
-                badgeText: DashboardMetricFormatter.liveBadgeString(ratePerMinute: stats.enrichmentRatePerMinute),
-                statusText: "Live enrichment stream"
+                badgeText: "live now",
+                statusText: "Enrichments in the last 60s"
             )
         }

         return BrainBarLivePresentation(
             sparklineStyle: .idle,
-            badgeText: "idle",
-            statusText: "Idle — no enrichment in last 60s"
+            badgeText: DashboardMetricFormatter.lastEventString(
+                lastEventAt: stats.lastEnrichedAt,
+                activityWindowMinutes: stats.activityWindowMinutes,
+                now: now
+            ),
+            statusText: "No enrichments in the last 60s"
         )
```
Use `stats.liveWindowMinutes` in the live/idle status copy.
The PR added explicit live-window metadata, but this still hardcodes 60s. If the database changes that window, the badge text stays stale while the logic uses the new value.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/BrainBarWindowState.swift` around lines 57 - 74,
The status copy in derive(stats:) hardcodes "60s" instead of using the new
live-window metadata; update the BrainBarLivePresentation initializers to
interpolate stats.liveWindowMinutes into the badge/status text (e.g., replace
"Enrichments in the last 60s" and "No enrichments in the last 60s" with strings
that use stats.liveWindowMinutes) and keep using
DashboardMetricFormatter.lastEventString for badgeText so the displayed window
matches the logic that checks stats.eventIsLive.
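The fix the reviewer is asking for amounts to deriving the copy from the same window value the liveness check uses. A minimal Python sketch of that idea (hypothetical names; the real code is Swift and lives in `BrainBarWindowState.swift`):

```python
from datetime import datetime, timedelta
from typing import Optional


def live_status(last_enriched_at: Optional[datetime],
                live_window_minutes: int,
                now: datetime) -> tuple:
    """Derive (badge, status) so the copy and the liveness check share one window."""
    window = timedelta(minutes=live_window_minutes)
    is_live = last_enriched_at is not None and now - last_enriched_at <= window
    # Label the window from the same configured value instead of hardcoding "60s".
    label = f"{live_window_minutes * 60}s" if live_window_minutes < 2 else f"{live_window_minutes}m"
    if is_live:
        return "live now", f"Enrichments in the last {label}"
    return "idle", f"No enrichments in the last {label}"


now = datetime(2026, 4, 18, 12, 0, 0)
badge, status = live_status(now - timedelta(seconds=30), 1, now)
print(badge, "|", status)
```

If the database later widens the live window, both the check and the copy change together.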
```swift
        activityWindowMinutes: Int = 30,
        bucketCount: Int = 12,
```
Default activity window still set to 30 minutes.
Both defaults remain 30, so callers that omit activityWindowMinutes won’t get the intended 60-minute dashboard window behavior.
💡 Suggested fix

```diff
-    activityWindowMinutes: Int = 30,
+    activityWindowMinutes: Int = 60,
@@
-    func dashboardStats(activityWindowMinutes: Int = 30, bucketCount: Int = 12) throws -> DashboardStats {
+    func dashboardStats(activityWindowMinutes: Int = 60, bucketCount: Int = 12) throws -> DashboardStats {
```

Also applies to: 1014-1014
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 44 - 45, The
default for activity window is still 30 minutes in the BrainDatabase initializer
signature (activityWindowMinutes: Int = 30); update that default to 60 so
callers that omit the parameter get a 60-minute dashboard window, and make the
same change for the other matching occurrence of the initializer/parameter
elsewhere in the file (the second activityWindowMinutes default at the noted
location).
```swift
        let output = Pipe()
        process.standardOutput = output
        process.standardError = Pipe()

        do {
            try process.run()
        } catch {
            return nil
        }

        // Drain stdout before waiting so verbose process lists cannot fill the pipe and deadlock tests.
        let data = output.fileHandleForReading.readDataToEndOfFile()
        process.waitUntilExit()
        guard process.terminationStatus == 0 else { return nil }
        return String(data: data, encoding: .utf8)
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# Verify the current wiring and drain behavior in runSnapshotCommand.
rg -n -A20 -B6 'static func runSnapshotCommand' brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift
```

Repository: EtanHey/brainlayer

Length of output: 1072
Redirect stderr to the same pipe as stdout to prevent deadlock.
The code creates a pipe for stderr but never reads from it. When the child process writes more than the pipe buffer can hold (~64KB), it blocks, causing waitUntilExit() to hang.
Suggested fix

```diff
 let output = Pipe()
 process.standardOutput = output
-process.standardError = Pipe()
+process.standardError = output
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift` around lines
123 - 137, The child process can deadlock because stderr is set to a separate
Pipe that is never read; change the code to redirect stderr into the same Pipe
used for stdout (replace the separate Pipe assignment for process.standardError
with process.standardError = output) so both streams are read via
output.fileHandleForReading before calling process.waitUntilExit() (refer to the
process, output, and fileHandleForReading symbols and ensure
process.standardError is not left as an unread Pipe).
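The same drain-before-wait ordering matters in any language. A Python `subprocess` sketch of the failure mode and the merged-stream fix (the child command here is synthetic):

```python
import subprocess
import sys

# Spawn a child that writes well past the ~64 KB pipe buffer on stderr.
code = "import sys; sys.stderr.write('x' * 200_000); sys.stdout.write('done')"
proc = subprocess.Popen(
    [sys.executable, "-c", code],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # merge stderr into stdout so one read drains both
)
# Drain before waiting; waiting first with an unread PIPE could hang on a full buffer.
out, _ = proc.communicate()
print(proc.returncode, len(out))
```

With `stderr` left as a separate, unread `PIPE` and a bare `wait()`, the child blocks once its stderr fills the buffer, which is exactly the deadlock flagged above.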
```swift
    func refresh(force: Bool = false) {
        let nextDaemon = daemonMonitor.sample()
        agentActivity = agentActivityMonitor.sample()
```
Move agent-process sampling off the @MainActor refresh path.
With the default AgentActivityMonitor(), this now runs process-snapshot work synchronously once per second on the UI actor. Even small ps/process-enumeration latency will translate into visible dashboard jank.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift` around lines 84 -
86, The refresh(force:) method is sampling daemonMonitor.sample() and
agentActivityMonitor.sample() on the UI actor causing jank; move the potentially
blocking sampling off the `@MainActor` by performing the calls in a background
task (e.g., Task.detached or DispatchQueue.global) and then marshal only the
state assignment back to the MainActor. Concretely: in refresh(force:) call
Task.detached { let next = daemonMonitor.sample(); let activity =
agentActivityMonitor.sample(); await MainActor.run { self.nextDaemon = next;
self.agentActivity = activity } } (use the actual property names used in this
file) so sampling happens off-main and only the minimal UI updates run on the
main actor.
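The suggested restructuring boils down to sampling on a background worker and handing only the finished snapshot back to the UI thread. A language-neutral sketch in Python (names are illustrative, not the `StatsCollector` API):

```python
import queue
import threading
import time


def slow_sample() -> dict:
    # Stand-in for process enumeration that may block for tens of milliseconds.
    time.sleep(0.05)
    return {"agents": 3}


results = queue.Queue()


def refresh() -> None:
    # Kick sampling to a background thread; the "UI thread" only dequeues results.
    threading.Thread(target=lambda: results.put(slow_sample()), daemon=True).start()


refresh()
snapshot = results.get(timeout=1.0)  # in real UI code this would be an async callback
print(snapshot)
```

The Swift equivalent is the `Task.detached` + `MainActor.run` pattern described in the prompt: only the final assignment touches UI state.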
```swift
                Button {
                    if let firstChunk = event.chunkIDs.first {
                        selectedConversation = try? store.expandedConversation(chunkID: firstChunk)
                    }
```
Don’t swallow conversation-load failures behind a dead “Open” action.
Both handlers use try?, so any database/read error turns into a no-op and the user gets no feedback. This is especially painful during startup or DB churn because the new overlay path looks clickable but silently does nothing. Surface an explicit error state instead. Based on learnings: do not silently default on BrainBar database access failures; expose them explicitly so startup/runtime issues are visible.
Also applies to: 257-258
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/InjectionFeedView.swift` around lines 220 - 223,
The Button handlers currently swallow errors by using try? when calling
store.expandedConversation(chunkID:), so database/read failures become silent
no-ops; change both occurrences (the Button action around selectedConversation =
try? store.expandedConversation(...) and the similar code at the 257-258 site)
to perform a do/catch: call try store.expandedConversation(chunkID:) inside do,
assign selectedConversation on success, and in catch set an explicit error state
(e.g., set a `@State` errorMessage / showError flag or update an error enum) so
the UI can surface a visible failure (alert, overlay, or error view) instead of
failing silently. Ensure you reference the same selectedConversation and
store.expandedConversation(chunkID:) symbols so the handlers report the caught
error details to the UI.
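The do/catch pattern the prompt describes can be sketched generically: record the failure in view state instead of discarding it. A Python stand-in with hypothetical names:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ViewState:
    conversation: Optional[str] = None
    error_message: Optional[str] = None


def load_conversation(chunk_id: int) -> str:
    # Stand-in for a database read that can fail.
    if chunk_id < 0:
        raise ValueError(f"chunk {chunk_id} not found")
    return f"conversation for chunk {chunk_id}"


def open_conversation(state: ViewState, chunk_id: int) -> None:
    # Equivalent of do/catch: record the failure instead of a silent no-op.
    try:
        state.conversation = load_conversation(chunk_id)
        state.error_message = None
    except ValueError as exc:
        state.error_message = str(exc)  # the UI can now render an alert


state = ViewState()
open_conversation(state, -1)
print(state.error_message)
```

With `try?`-style swallowing, `state` would be unchanged and the user would see a button that does nothing.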
```markdown
## Goal
Finish the second-pass BrainBar dashboard refactor without mixing `live agent CLIs` into `indexed writes` or letting the queue/runtime cards dominate the layout.

## Agents
- `main-codex`
  Ownership:
  - `brain-bar/Sources/BrainBar/Dashboard/AgentActivityMonitor.swift`
  - `brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift`
  - `src/brainlayer/pipeline/enrichment_tiers.py`
  - `brain-bar/Tests/BrainBarTests/AgentActivityMonitorTests.swift`
  - `tests/test_enrichment_tiers_agent_sources.py`
- `clean-claude-ui`
  Ownership:
  - `brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift`

## Constraints
- Do not edit files outside your ownership.
- Do not revert or restage another agent's work.
- `clean-claude-ui` must `git add` only its owned file before commit.
- `clean-claude-ui` may commit and push its owned UI change after tests pass.

## Task Board
| Task | Owner | Status |
|------|-------|--------|
| Add process-based agent family monitor | main-codex | done |
| Extend enrichment tiering to non-Claude agent sources | main-codex | done |
| Refactor dashboard layout to add agent strip and compact queue rail | clean-claude-ui | in_progress |
| Collapse/demote runtime details behind disclosure | clean-claude-ui | in_progress |

## Gates
- `main-codex`: `swift test --filter AgentActivityMonitorTests`, `swift test --filter BrainBarUXLogicTests`, `pytest tests/test_enrichment_tiers.py tests/test_enrichment_tiers_agent_sources.py -q`
- `clean-claude-ui`: `swift test --filter BrainBarUXLogicTests` before commit

## Notes
```
Add required blank lines around headings and the task table.
This document currently violates MD022/MD058 in several places; adding blank separators will clear lint warnings.
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 3-3: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 6-6: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 18-18: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 24-24: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 25-25: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
[warning] 32-32: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 36-36: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/plans/2026-04-18-brainbar-dashboard-flow-collab.md` around lines 3 - 36,
The markdown violates MD022/MD058 because headings and the task table are not
separated by blank lines; update the document by inserting a single blank line
before and after each top-level section heading (e.g., "## Goal", "## Agents",
"## Constraints", "## Task Board", "## Gates", "## Notes") and ensure the task
table has a blank line above and below it so there is a blank separator between
the preceding heading and the table and between the table and the following
heading.
```markdown
### Task 1: Add explicit dashboard time semantics

**Files:**
- Modify: `brain-bar/Sources/BrainBar/BrainDatabase.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift`
- Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift`

**Step 1: Write the failing tests**

Add tests that assert `DashboardStats` carries:
- `activityWindowMinutes`
- `bucketCount`
- `liveWindowMinutes`
- `lastWriteAt`
- `lastEnrichedAt`

Also add a database test proving:
- a chunk written now and enriched 90 seconds ago reports `lastWriteAt != nil`
- `lastEnrichedAt != nil`
- `enrichmentRatePerMinute == 0`
- enrichment trend still contains the older completion

**Step 2: Run test to verify it fails**

Run: `swift test --filter DashboardTests`
Expected: compile failure or assertion failure because `DashboardStats` does not yet expose the new time fields.

**Step 3: Write minimal implementation**

Update `DashboardStats` and `dashboardStats(...)` to include the new metadata and fetch `MAX(created_at)` / `MAX(enriched_at)` timestamps using the same SQLite date parsing already used for bucket generation.

**Step 4: Run test to verify it passes**

Run: `swift test --filter DashboardTests`
Expected: the new stats tests pass and existing dashboard database tests still pass.

### Task 2: Replace the single enum with a richer flow model

**Files:**
- Modify: `brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift`
- Modify: `brain-bar/Sources/BrainBar/BrainBarWindowState.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift`
- Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift`
- Test: `brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift`
- Test: `brain-bar/Tests/BrainBarTests/BrainBarWindowStateTests.swift`

**Step 1: Write the failing tests**

Add tests for a derived flow summary that can represent:
- writes live + enrichments live at the same time
- writes idle + enrichments draining backlog
- writes idle + enrichments idle + backlog present
- daemon unavailable

Add formatter tests that require:
- real last-event strings from actual timestamps, not bucket position
- a stable “live now” badge that only reflects the configured live window

**Step 2: Run test to verify it fails**

Run: `swift test --filter BrainBarUXLogicTests`
Run: `swift test --filter BrainBarWindowStateTests`
Expected: failures because the richer flow summary and timestamp-driven formatting do not exist yet.

**Step 3: Write minimal implementation**

Introduce a derived dashboard flow model with:
- a write lane state
- an enrichment lane state
- a backlog/queue state
- a top-level narrative for the hero

Keep `PipelineState` available for legacy consumers, but make it derive from the richer summary instead of directly from raw buckets.

Replace bucket-inferred `lastCompletionString(...)` with a timestamp-based formatter and add a generic “time ago” helper for both writes and enrichments.

**Step 4: Run test to verify it passes**

Run: `swift test --filter BrainBarUXLogicTests`
Run: `swift test --filter BrainBarWindowStateTests`
Expected: new derivation and formatter tests pass without breaking the existing layout/token tests.

### Task 3: Rebuild the dashboard into a dual-lane flow board

**Files:**
- Modify: `brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/SparklineRenderer.swift`
- Possibly modify: `brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift`
- Test: `brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift`

**Step 1: Write the failing tests**

Add logic-level tests for any new copy helpers or layout-state helpers needed by the dashboard, including:
- synchronized window labels
- lane summaries using the same window value
- idle copy that no longer contradicts a recent-trend graph

**Step 2: Run test to verify it fails**

Run: `swift test --filter BrainBarUXLogicTests`
Expected: failures because the new helpers or derived labels are not implemented yet.

**Step 3: Write minimal implementation**

Rebuild `BrainBarDashboardView` so the hero becomes a flow board with:
- a left lane for writes
- a center queue/backlog card
- a right lane for enrichments
- shared window labeling
- real last-write / last-enriched timestamps
- live badges that are explicitly “now”
- charts with stronger fill, endpoint emphasis, and separate colors for ingress vs enrichment

Reuse the same derived flow summary everywhere in the window instead of mixing independent heuristics.

If `StatusPopoverView` still shows contradictory wording after the model change, update it to consume the same derived summary.

**Step 4: Run test to verify it passes**

Run: `swift test --filter BrainBarUXLogicTests`
Expected: the helper/copy tests pass and the dashboard remains readable under compact layout tests.

### Task 4: Keep legacy/menu surfaces consistent

**Files:**
- Modify: `brain-bar/Sources/BrainBar/BrainBarApp.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift`
- Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift`

**Step 1: Write the failing tests**

Add or extend tests that verify the status popover still loads and that the live/menu surfaces can continue rendering with the updated state/formatter interfaces.

**Step 2: Run test to verify it fails**

Run: `swift test --filter DashboardTests`
Expected: failures if any legacy consumer still relies on removed copy or raw bucket inference.

**Step 3: Write minimal implementation**

Adapt menu-bar and popover consumers to:
- use timestamp-safe formatting
- use the derived summary or derived compatibility state
- keep their visual contract simple and backward-safe

**Step 4: Run test to verify it passes**

Run: `swift test --filter DashboardTests`
Expected: dashboard and popover tests pass together.

### Task 5: Full verification and finish
```
Heading level increments skip from H1 to H3.
Task headings should be ## (not ###) to satisfy MD001 and keep structure consistent.
Suggested fix
```diff
-### Task 1: Add explicit dashboard time semantics
+## Task 1: Add explicit dashboard time semantics
...
-### Task 2: Replace the single enum with a richer flow model
+## Task 2: Replace the single enum with a richer flow model
...
-### Task 3: Rebuild the dashboard into a dual-lane flow board
+## Task 3: Rebuild the dashboard into a dual-lane flow board
...
-### Task 4: Keep legacy/menu surfaces consistent
+## Task 4: Keep legacy/menu surfaces consistent
...
-### Task 5: Full verification and finish
+## Task 5: Full verification and finish
```
+## Task 5: Full verification and finish📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| ### Task 1: Add explicit dashboard time semantics | |
| **Files:** | |
| - Modify: `brain-bar/Sources/BrainBar/BrainDatabase.swift` | |
| - Modify: `brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift` | |
| - Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift` | |
| **Step 1: Write the failing tests** | |
| Add tests that assert `DashboardStats` carries: | |
| - `activityWindowMinutes` | |
| - `bucketCount` | |
| - `liveWindowMinutes` | |
| - `lastWriteAt` | |
| - `lastEnrichedAt` | |
| Also add a database test proving: | |
| - a chunk written now and enriched 90 seconds ago reports `lastWriteAt != nil` | |
| - `lastEnrichedAt != nil` | |
| - `enrichmentRatePerMinute == 0` | |
| - enrichment trend still contains the older completion | |
| **Step 2: Run test to verify it fails** | |
| Run: `swift test --filter DashboardTests` | |
| Expected: compile failure or assertion failure because `DashboardStats` does not yet expose the new time fields. | |
| **Step 3: Write minimal implementation** | |
| Update `DashboardStats` and `dashboardStats(...)` to include the new metadata and fetch `MAX(created_at)` / `MAX(enriched_at)` timestamps using the same SQLite date parsing already used for bucket generation. | |
| **Step 4: Run test to verify it passes** | |
| Run: `swift test --filter DashboardTests` | |
| Expected: the new stats tests pass and existing dashboard database tests still pass. | |
| ### Task 2: Replace the single enum with a richer flow model | |
| **Files:** | |
| - Modify: `brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift` | |
| - Modify: `brain-bar/Sources/BrainBar/BrainBarWindowState.swift` | |
| - Modify: `brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift` | |
| - Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift` | |
| - Test: `brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift` | |
| - Test: `brain-bar/Tests/BrainBarTests/BrainBarWindowStateTests.swift` | |
| **Step 1: Write the failing tests** | |
| Add tests for a derived flow summary that can represent: | |
| - writes live + enrichments live at the same time | |
| - writes idle + enrichments draining backlog | |
| - writes idle + enrichments idle + backlog present | |
| - daemon unavailable | |
| Add formatter tests that require: | |
| - real last-event strings from actual timestamps, not bucket position | |
| - a stable “live now” badge that only reflects the configured live window | |
| **Step 2: Run test to verify it fails** | |
| Run: `swift test --filter BrainBarUXLogicTests` | |
| Run: `swift test --filter BrainBarWindowStateTests` | |
| Expected: failures because the richer flow summary and timestamp-driven formatting do not exist yet. | |
| **Step 3: Write minimal implementation** | |
| Introduce a derived dashboard flow model with: | |
| - a write lane state | |
| - an enrichment lane state | |
| - a backlog/queue state | |
| - a top-level narrative for the hero | |
| Keep `PipelineState` available for legacy consumers, but make it derive from the richer summary instead of directly from raw buckets. | |
| Replace bucket-inferred `lastCompletionString(...)` with a timestamp-based formatter and add a generic “time ago” helper for both writes and enrichments. | |
| **Step 4: Run test to verify it passes** | |
| Run: `swift test --filter BrainBarUXLogicTests` | |
| Run: `swift test --filter BrainBarWindowStateTests` | |
| Expected: new derivation and formatter tests pass without breaking the existing layout/token tests. | |
| ### Task 3: Rebuild the dashboard into a dual-lane flow board | |
| **Files:** | |
| - Modify: `brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift` | |
| - Modify: `brain-bar/Sources/BrainBar/Dashboard/SparklineRenderer.swift` | |
| - Possibly modify: `brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift` | |
| - Test: `brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift` | |
| **Step 1: Write the failing tests** | |
| Add logic-level tests for any new copy helpers or layout-state helpers needed by the dashboard, including: | |
| - synchronized window labels | |
| - lane summaries using the same window value | |
| - idle copy that no longer contradicts a recent-trend graph | |
| **Step 2: Run test to verify it fails** | |
| Run: `swift test --filter BrainBarUXLogicTests` | |
| Expected: failures because the new helpers or derived labels are not implemented yet. | |
| **Step 3: Write minimal implementation** | |
| Rebuild `BrainBarDashboardView` so the hero becomes a flow board with: | |
| - a left lane for writes | |
| - a center queue/backlog card | |
| - a right lane for enrichments | |
| - shared window labeling | |
| - real last-write / last-enriched timestamps | |
| - live badges that are explicitly “now” | |
| - charts with stronger fill, endpoint emphasis, and separate colors for ingress vs enrichment | |
| Reuse the same derived flow summary everywhere in the window instead of mixing independent heuristics. | |
| If `StatusPopoverView` still shows contradictory wording after the model change, update it to consume the same derived summary. | |
| **Step 4: Run test to verify it passes** | |
| Run: `swift test --filter BrainBarUXLogicTests` | |
| Expected: the helper/copy tests pass and the dashboard remains readable under compact layout tests. | |
| ### Task 4: Keep legacy/menu surfaces consistent | |
| **Files:** | |
| - Modify: `brain-bar/Sources/BrainBar/BrainBarApp.swift` | |
| - Modify: `brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift` | |
| - Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift` | |
| **Step 1: Write the failing tests** | |
| Add or extend tests that verify the status popover still loads and that the live/menu surfaces can continue rendering with the updated state/formatter interfaces. | |
| **Step 2: Run test to verify it fails** | |
| Run: `swift test --filter DashboardTests` | |
| Expected: failures if any legacy consumer still relies on removed copy or raw bucket inference. | |
| **Step 3: Write minimal implementation** | |
| Adapt menu-bar and popover consumers to: | |
| - use timestamp-safe formatting | |
| - use the derived summary or derived compatibility state | |
| - keep their visual contract simple and backward-safe | |
| **Step 4: Run test to verify it passes** | |
| Run: `swift test --filter DashboardTests` | |
| Expected: dashboard and popover tests pass together. | |
| ### Task 5: Full verification and finish | |
## Task 1: Add explicit dashboard time semantics

**Files:**
- Modify: `brain-bar/Sources/BrainBar/BrainDatabase.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/StatsCollector.swift`
- Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift`

**Step 1: Write the failing tests**
Add tests that assert `DashboardStats` carries:
- `activityWindowMinutes`
- `bucketCount`
- `liveWindowMinutes`
- `lastWriteAt`
- `lastEnrichedAt`

Also add a database test proving:
- a chunk written now and enriched 90 seconds ago reports `lastWriteAt != nil`
- `lastEnrichedAt != nil`
- `enrichmentRatePerMinute == 0`
- enrichment trend still contains the older completion

**Step 2: Run test to verify it fails**
Run: `swift test --filter DashboardTests`
Expected: compile failure or assertion failure because `DashboardStats` does not yet expose the new time fields.

**Step 3: Write minimal implementation**
Update `DashboardStats` and `dashboardStats(...)` to include the new metadata and fetch `MAX(created_at)` / `MAX(enriched_at)` timestamps using the same SQLite date parsing already used for bucket generation.

**Step 4: Run test to verify it passes**
Run: `swift test --filter DashboardTests`
Expected: the new stats tests pass and existing dashboard database tests still pass.

## Task 2: Replace the single enum with a richer flow model

**Files:**
- Modify: `brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift`
- Modify: `brain-bar/Sources/BrainBar/BrainBarWindowState.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift`
- Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift`
- Test: `brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift`
- Test: `brain-bar/Tests/BrainBarTests/BrainBarWindowStateTests.swift`

**Step 1: Write the failing tests**
Add tests for a derived flow summary that can represent:
- writes live + enrichments live at the same time
- writes idle + enrichments draining backlog
- writes idle + enrichments idle + backlog present
- daemon unavailable

Add formatter tests that require:
- real last-event strings from actual timestamps, not bucket position
- a stable "live now" badge that only reflects the configured live window

**Step 2: Run test to verify it fails**
Run: `swift test --filter BrainBarUXLogicTests`
Run: `swift test --filter BrainBarWindowStateTests`
Expected: failures because the richer flow summary and timestamp-driven formatting do not exist yet.

**Step 3: Write minimal implementation**
Introduce a derived dashboard flow model with:
- a write lane state
- an enrichment lane state
- a backlog/queue state
- a top-level narrative for the hero

Keep `PipelineState` available for legacy consumers, but make it derive from the richer summary instead of directly from raw buckets.
Replace bucket-inferred `lastCompletionString(...)` with a timestamp-based formatter and add a generic "time ago" helper for both writes and enrichments.

**Step 4: Run test to verify it passes**
Run: `swift test --filter BrainBarUXLogicTests`
Run: `swift test --filter BrainBarWindowStateTests`
Expected: new derivation and formatter tests pass without breaking the existing layout/token tests.

## Task 3: Rebuild the dashboard into a dual-lane flow board

**Files:**
- Modify: `brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/SparklineRenderer.swift`
- Possibly modify: `brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift`
- Test: `brain-bar/Tests/BrainBarTests/BrainBarUXLogicTests.swift`

**Step 1: Write the failing tests**
Add logic-level tests for any new copy helpers or layout-state helpers needed by the dashboard, including:
- synchronized window labels
- lane summaries using the same window value
- idle copy that no longer contradicts a recent-trend graph

**Step 2: Run test to verify it fails**
Run: `swift test --filter BrainBarUXLogicTests`
Expected: failures because the new helpers or derived labels are not implemented yet.

**Step 3: Write minimal implementation**
Rebuild `BrainBarDashboardView` so the hero becomes a flow board with:
- a left lane for writes
- a center queue/backlog card
- a right lane for enrichments
- shared window labeling
- real last-write / last-enriched timestamps
- live badges that are explicitly "now"
- charts with stronger fill, endpoint emphasis, and separate colors for ingress vs enrichment

Reuse the same derived flow summary everywhere in the window instead of mixing independent heuristics.
If `StatusPopoverView` still shows contradictory wording after the model change, update it to consume the same derived summary.

**Step 4: Run test to verify it passes**
Run: `swift test --filter BrainBarUXLogicTests`
Expected: the helper/copy tests pass and the dashboard remains readable under compact layout tests.

## Task 4: Keep legacy/menu surfaces consistent

**Files:**
- Modify: `brain-bar/Sources/BrainBar/BrainBarApp.swift`
- Modify: `brain-bar/Sources/BrainBar/Dashboard/StatusPopoverView.swift`
- Test: `brain-bar/Tests/BrainBarTests/DashboardTests.swift`

**Step 1: Write the failing tests**
Add or extend tests that verify the status popover still loads and that the live/menu surfaces can continue rendering with the updated state/formatter interfaces.

**Step 2: Run test to verify it fails**
Run: `swift test --filter DashboardTests`
Expected: failures if any legacy consumer still relies on removed copy or raw bucket inference.

**Step 3: Write minimal implementation**
Adapt menu-bar and popover consumers to:
- use timestamp-safe formatting
- use the derived summary or derived compatibility state
- keep their visual contract simple and backward-safe

**Step 4: Run test to verify it passes**
Run: `swift test --filter DashboardTests`
Expected: dashboard and popover tests pass together.

## Task 5: Full verification and finish
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 13-13: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3
(MD001, heading-increment)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/plans/2026-04-18-brainbar-dashboard-flow.md` around lines 13 - 163, The
document uses H3 (###) for top-level Task headings which skips H2 and fails
MD001; update each task heading text like "Task 1: Add explicit dashboard time
semantics", "Task 2: Replace the single enum with a richer flow model", etc., to
use H2 (##) instead of H3 so heading levels are hierarchical and sequential,
keeping the rest of the content unchanged; scan the file for lines starting with
"### Task" and replace them with "## Task" to correct all task-level headings.
- `KGPresentation`
  - groups nodes into atlas regions
  - computes altitude-filtered visible nodes
  - preserves selected node visibility
Rename KGPresentation to KGAtlasPresentation here.
The implementation and tests in this PR use KGAtlasPresentation, so this plan currently sends follow-up work toward a nonexistent helper name.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/plans/2026-04-18-brainbar-injections-graph-hybrid.md` around lines 102 -
106, Update the doc to match the implemented helper name: replace the outdated
symbol `KGPresentation` with `KGAtlasPresentation` in the listed plan bullet
(and any nearby references in this file) so the plan points to the existing
helper; ensure the description line remains the same (groups nodes into atlas
regions, computes altitude-filtered visible nodes, preserves selected node
visibility) but uses the `KGAtlasPresentation` identifier.
  source_placeholders = ",".join("?" for _ in T1_T2_SOURCES)
  source_query = f"""
      SELECT DISTINCT source_file, project, COUNT(*) as cnt
      FROM chunks
-     WHERE source IS NULL OR source = 'claude_code'
+     WHERE source IS NULL OR source IN ({source_placeholders})
      GROUP BY source_file
      HAVING cnt >= 3
  """
- for row in cursor.execute(source_query):
+ for row in cursor.execute(source_query, tuple(T1_T2_SOURCES)):
Update reconstruction/prompting before broadening discovery.
This now admits Codex/Cursor/Gemini sessions, but the downstream path still drops build_log/dir_listing chunks and the prompt still frames the input as a Claude Code conversation. Those sessions will be enriched with incomplete context.
As per coding guidelines, Classification must preserve ai_code, stack_trace, and user_message verbatim; skip noise entries entirely and summarize build_log and dir_listing entries (structure only).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/brainlayer/pipeline/session_enrichment.py` around lines 361 - 369, The
enrichment flow that selects chunks (see source_placeholders, source_query and
T1_T2_SOURCES) currently drops build_log/dir_listing chunks and frames prompts
as a Claude Code conversation; update the reconstruction/prompting logic that
consumes rows from cursor.execute(...) so that: (1) ai_code, stack_trace, and
user_message chunk types are preserved verbatim; (2) noise chunk types are
skipped entirely; (3) build_log and dir_listing chunks are not discarded but
summarized to a structural outline only (e.g., file names, sizes, key
commands/output lines), and (4) sessions from Codex/Cursor/Gemini are recognized
and use a neutral/generic multi-model prompt template instead of the
Claude-specific prompt so the enriched context is accurate across models. Ensure
the filtering/transforming happens immediately after fetching rows from the
chunks query so downstream enrichment receives the corrected chunk list.
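The filtering rule the prompt describes can be sketched in Python. This is an illustrative helper, not the real brainlayer API: the function name `prepare_chunks`, the `(chunk_type, content)` tuple shape, and the three-line outline limit are all assumptions.

```python
# Sketch of the chunk-handling rule: preserve ai_code/stack_trace/user_message
# verbatim, skip noise entirely, and reduce build_log/dir_listing to a
# structural outline only. Names and shapes are illustrative.
VERBATIM_TYPES = {"ai_code", "stack_trace", "user_message"}
SUMMARIZE_TYPES = {"build_log", "dir_listing"}


def prepare_chunks(chunks):
    """Filter/transform (chunk_type, content) pairs before enrichment."""
    prepared = []
    for chunk_type, content in chunks:
        if chunk_type == "noise":
            continue  # skip noise entries entirely
        if chunk_type in SUMMARIZE_TYPES:
            lines = content.splitlines()
            outline = " | ".join(lines[:3])  # structure only, no verbatim log
            prepared.append((chunk_type, f"[{chunk_type}: {len(lines)} lines] {outline}"))
        else:
            # verbatim types (and any other recognized type) pass through untouched
            prepared.append((chunk_type, content))
    return prepared
```

The key property is that a `build_log` chunk survives as a short placeholder instead of being dropped, so downstream reconstruction still sees that a build happened.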
@pytest.mark.parametrize("source", ["codex_cli", "cursor_cli", "gemini_cli"])
def test_agent_cli_sessions_are_discovered_from_chunks(self, store, source):
    """Agent-session sources beyond Claude are eligible for session enrichment."""
    from brainlayer.pipeline.session_enrichment import list_sessions_for_enrichment

    cursor = store.conn.cursor()
    session_id = f"{source}-session"
    source_file = f"/tmp/{session_id}.jsonl"

    for idx in range(3):
        cursor.execute(
            """INSERT INTO chunks
               (id, content, metadata, source_file, project, content_type,
                value_type, char_count, source, created_at)
               VALUES (?, ?, '{}', ?, ?, 'assistant_text', 'HIGH', ?, ?, ?)""",
            (
                f"{session_id}:c{idx}",
                f"{source} chunk {idx}",
                source_file,
                "brainlayer",
                len(f"{source} chunk {idx}"),
                source,
                f"2026-04-18T0{idx}:00:00Z",
            ),
        )

    sessions = list_sessions_for_enrichment(store)
    session_ids = [session[0] for session in sessions]

    assert session_id in session_ids
🧹 Nitpick | 🔵 Trivial | ⚡ Quick win
Add a negative assertion for an unrecognized source.
This test proves the new allow-list admits codex_cli/cursor_cli/gemini_cli, but it does not protect against accidentally admitting every source. One exclusion case would lock the SQL filter down and catch regressions earlier.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_session_enrichment.py` around lines 529 - 559, In
test_agent_cli_sessions_are_discovered_from_chunks, add a negative case that
inserts a chunk using an unrecognized source (e.g., "unknown_cli" or
"bad_source") and then call list_sessions_for_enrichment(store) and assert that
the corresponding session id is NOT in the returned session_ids; this ensures
the allow-list logic in list_sessions_for_enrichment is enforced and prevents
accidental admission of arbitrary sources.
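The allow-list behavior the nitpick wants locked down can be expressed in a few lines. This is a standalone Python sketch, not the actual `list_sessions_for_enrichment` SQL: the predicate name and the assumption that a NULL source counts as legacy Claude Code data are both illustrative.

```python
# Illustrative allow-list predicate mirroring the SQL filter: only known
# agent-session sources (or NULL legacy rows) are admitted for enrichment.
T1_T2_SOURCES = ("claude_code", "codex_cli", "cursor_cli", "gemini_cli")  # assumed values


def is_enrichable_source(source):
    # NULL/None is treated as legacy Claude Code data, per the
    # "source IS NULL OR source IN (...)" shape of the query.
    return source is None or source in T1_T2_SOURCES
```

The negative case (`"unknown_cli"` rejected) is exactly the exclusion assertion the review suggests adding to the test.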
W2 update: ported PR #263 recall hardening into BrainBar Swift on commit c9c011e.

Implemented:
- exact chunk-id short-circuit before FTS
- trigram FTS fallback for identifier substrings
- lexical-defense + KG alias expansion with multi-word semantics preserved
- repair-only trigram FTS migration for existing DBs

Verification receipts:
- CodeRabbit local review run: 18 branch findings; fixed the two W2 BrainDatabase findings (blank query handling, alias normalization consistency); remaining findings are W1 UI/doc scope.
- swift test --filter targeted W2 tests: 5 tests, 0 failures.
- swift test: 312 tests, 0 failures.
- pre-push hook: 1811 passed, 9 skipped, 75 deselected, 1 xfailed; MCP registration 3 passed; isolated eval/hook routing 32 passed; bun 1 pass; test_fts5_determinism.sh PASS.

LaunchAgents paused only during pre-push to avoid the known SQLite lock issue, then restored: com.brainlayer.enrichment PID 38157, com.brainlayer.watch PID 38159.
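The recall order described in this update can be illustrated with a small Python analogue. The real implementation is Swift over SQLite FTS5; this in-memory sketch only shows the ordering of the passes (exact-id fast path, then word match, then trigram fallback), with a naive set-based trigram check standing in for the FTS5 trigram tokenizer.

```python
# In-memory analogue of the recall flow: exact chunk-id short-circuit,
# word-level match, then trigram fallback for identifier substrings.
def trigrams(text):
    t = text.lower()
    return {t[i:i + 3] for i in range(len(t) - 2)} if len(t) >= 3 else {t}


def search(query, index):
    """index: {chunk_id: content}. Returns matching chunk ids."""
    if query in index:
        return [query]  # exact chunk-id fast path, no FTS at all
    q = query.lower()
    word_hits = [cid for cid, text in index.items() if q in text.lower().split()]
    if word_hits:
        return word_hits
    # trigram fallback: substring-style recall for identifier fragments
    qt = trigrams(q)
    return [cid for cid, text in index.items() if qt <= trigrams(text)]
```

A query like `alker-go` has no word-level hit, but its trigrams are all contained in `walker-gopher`, so the fallback still recovers the chunk.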
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c9c011e2f0
  let sql = """
      SELECT e.id, e.name, e.entity_type, e.description,
-            COALESCE(CAST(json_extract(e.metadata, '$.importance') AS REAL), 5.0) AS importance
+            COALESCE(e.importance, CAST(json_extract(e.metadata, '$.importance') AS REAL), 5.0) AS importance
Preserve metadata importance in KG entity ranking
fetchKGEntities now prioritizes e.importance before metadata importance, but this commit also adds importance REAL DEFAULT 0.5 and insertEntity still does not write the column. That makes e.importance non-null for virtually all rows, so metadata-derived importance is effectively ignored, which flattens ranking and can hide most nodes behind the atlas default filter (minimumImportance = 3). This is a behavior regression for existing and newly inserted entities that only carry importance in metadata.
Useful? React with 👍 / 👎.
let regions: [Region] = orderedEntityTypes.compactMap { entityType in
    guard let values = grouped[entityType], !values.isEmpty else { return nil }
Keep unknown entity types visible in atlas regions
The atlas snapshot only builds regions from orderedEntityTypes, then derives visibleNodes from those regions, so any node whose entityType is not in that hardcoded list is dropped entirely (and its edges disappear) even if it passes the importance filter. In datasets with additional KG types, this silently removes valid graph data from the UI and makes those entities unselectable.
Useful? React with 👍 / 👎.
let visibleIDs = Set(visibleNodes.map(\.id))
let visibleEdges = edges.filter { visibleIDs.contains($0.sourceId) && visibleIDs.contains($0.targetId) }
🟢 Low KnowledgeGraph/KGAtlasPresentation.swift:29
When a node has an entityType not in orderedEntityTypes, it is excluded from regions (line 33-35) and therefore from snapshot.visibleNodes (line 53). However, visibleEdges (line 30) still includes edges referencing that node because it was filtered only by minimumImportance, not by entity type. This causes the snapshot to report edges whose endpoints are missing from visibleNodes, making the edge count misleading and potentially causing rendering issues when the UI tries to draw edges to non-existent nodes.
- let visibleIDs = Set(visibleNodes.map(\.id))
- let visibleEdges = edges.filter { visibleIDs.contains($0.sourceId) && visibleIDs.contains($0.targetId) }
+ let regionVisibleIDs = Set(regions.flatMap { $0.nodes }.map(\.id))
+ let visibleEdges = edges.filter { regionVisibleIDs.contains($0.sourceId) && regionVisibleIDs.contains($0.targetId) }

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift around lines 29-30:
When a node has an `entityType` not in `orderedEntityTypes`, it is excluded from `regions` (line 33-35) and therefore from `snapshot.visibleNodes` (line 53). However, `visibleEdges` (line 30) still includes edges referencing that node because it was filtered only by `minimumImportance`, not by entity type. This causes the snapshot to report edges whose endpoints are missing from `visibleNodes`, making the edge count misleading and potentially causing rendering issues when the UI tries to draw edges to non-existent nodes.
Evidence trail:
brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift lines 27-54 (visibleNodes filtering, visibleEdges filtering, regions built from orderedEntityTypes, Snapshot construction); brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasPresentation.swift lines 77-85 (orderedEntityTypes list); brain-bar/Sources/BrainBar/KnowledgeGraph/KGNode.swift line 6 (entityType is free-form String), lines 18-29 (default color case); brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift line 80 (nodeIndex built from snapshot.visibleNodes), line 87 (guard+continue on missing node), line 158 (visibleEdges.count displayed in UI)
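The proposed fix amounts to deriving edge visibility from region membership rather than from the importance filter alone. A minimal Python sketch of that invariant (plain tuples stand in for the Swift `KGNode`/edge types):

```python
# Keep only edges whose BOTH endpoints are among the node ids that actually
# made it into regions, so no edge references a node the canvas never draws.
def visible_edges(edges, region_node_ids):
    """edges: (source_id, target_id) pairs; region_node_ids: ids kept in regions."""
    ids = set(region_node_ids)
    return [e for e in edges if e[0] in ids and e[1] in ids]
```

With this filtering, the displayed edge count can never exceed what the drawn node set supports, which is the property the review says the importance-only filter breaks.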
        return "Just now"
    }
    if secondsAgo < 3600 {
        let minutesAgo = Int((secondsAgo / 60).rounded(.up))
🟠 High Dashboard/DashboardMetricFormatter.swift:73
In lastEventString, the ceiling rounding .rounded(.up) causes off-by-one errors in time-ago display: at 61 seconds it returns "2m ago" instead of "1m ago", and at 3601 seconds it returns "2h ago" instead of "1h ago". Consider using truncation (Int(secondsAgo / 60)) instead so the minute/hour thresholds align with intuitive time display.
- let minutesAgo = Int((secondsAgo / 60).rounded(.up))
+ let minutesAgo = Int(secondsAgo / 60)

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift around line 73:
In `lastEventString`, the ceiling rounding `.rounded(.up)` causes off-by-one errors in time-ago display: at 61 seconds it returns "2m ago" instead of "1m ago", and at 3601 seconds it returns "2h ago" instead of "1h ago". Consider using truncation (`Int(secondsAgo / 60)`) instead so the minute/hour thresholds align with intuitive time display.
Evidence trail:
brain-bar/Sources/BrainBar/Dashboard/DashboardMetricFormatter.swift lines 68-79 at REVIEWED_COMMIT: line 73 `let minutesAgo = Int((secondsAgo / 60).rounded(.up))` and line 77 `let hoursAgo = Int((secondsAgo / 3600).rounded(.up))`. Verified: 61/60=1.0167 rounds up to 2 ("2m ago"); 3601/3600=1.00028 rounds up to 2 ("2h ago").
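The difference between the two roundings is easy to pin down in a few lines. This is a Python sketch of the display logic, not the Swift formatter; it reproduces the fixed (truncating) behavior and shows where the ceiling version goes wrong.

```python
import math


# Truncating "time ago" formatter: 61s is still "1m ago", 3601s still "1h ago".
def time_ago(seconds_ago):
    if seconds_ago < 60:
        return "Just now"
    if seconds_ago < 3600:
        return f"{int(seconds_ago // 60)}m ago"   # truncate, not math.ceil
    return f"{int(seconds_ago // 3600)}h ago"
```

With ceiling rounding, `61 / 60` rounds up to 2, which is the "2m ago" off-by-one the review flags.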
        hasLoadedGraph = true
    }
}
.onChange(of: geo.size) { _, newSize in
🟡 Medium KnowledgeGraph/KGCanvasView.swift:63
When the sidebar is visible, onChange(of: geo.size) passes the full view size to KGAtlasLayout.seededNodes(), including the sidebar width. This causes nodes to be positioned outside the visible canvas area since the layout assumes more horizontal space than actually available. Consider using the canvas size from the inner GeometryReader (lines 109–114) instead of the outer GeometryReader's size.
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift around line 63:
When the sidebar is visible, `onChange(of: geo.size)` passes the full view size to `KGAtlasLayout.seededNodes()`, including the sidebar width. This causes nodes to be positioned outside the visible canvas area since the layout assumes more horizontal space than actually available. Consider using the canvas size from the inner `GeometryReader` (lines 109–114) instead of the outer `GeometryReader`'s size.
Evidence trail:
KGCanvasView.swift lines 26-67: outer GeometryReader wraps the full HStack (canvas + sidebar), and onChange(of: geo.size) passes newSize to KGAtlasLayout.seededNodes(). Inner GeometryReader at lines 109-114 captures actual canvas size but is not used for seededNodes call. KGAtlasLayout.swift lines 37-50: makeRegionCenters positions nodes at fractional multiples of canvasSize.width (up to 0.78), meaning nodes would be placed at positions that include sidebar width.
private func nodeAt(point: CGPoint, visibleNodes: [KGNode]) -> KGNode? {
    for node in visibleNodes {
        let dx = point.x - node.position.x
        let dy = point.y - node.position.y
        let dist = sqrt(dx * dx + dy * dy)
        if dist <= node.radius + 4 {
            return node
        }
    }
    return nil
}
🟡 Medium KnowledgeGraph/KGCanvasView.swift:329
When nodes overlap, nodeAt returns the first matching node in iteration order, but graphCanvas draws nodes in the same order so later nodes render on top. Tapping an overlapping area selects the visually-hidden node underneath instead of the topmost visible node. Consider iterating visibleNodes in reverse order so the topmost (last-drawn) node is selected first.
private func nodeAt(point: CGPoint, visibleNodes: [KGNode]) -> KGNode? {
-   for node in visibleNodes {
+   for node in visibleNodes.reversed() {
        let dx = point.x - node.position.x
        let dy = point.y - node.position.y
        let dist = sqrt(dx * dx + dy * dy)
        if dist <= node.radius + 4 {
            return node
        }
    }
    return nil
}

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift around lines 329-339:
When nodes overlap, `nodeAt` returns the first matching node in iteration order, but `graphCanvas` draws nodes in the same order so later nodes render on top. Tapping an overlapping area selects the visually-hidden node underneath instead of the topmost visible node. Consider iterating `visibleNodes` in reverse order so the topmost (last-drawn) node is selected first.
Evidence trail:
brain-bar/Sources/BrainBar/KnowledgeGraph/KGCanvasView.swift lines 101-107 (draw loop iterates forward, later nodes on top), lines 329-337 (nodeAt iterates forward, returns first match = bottom-most node), line 289 (tapGesture calls nodeAt with snapshot.visibleNodes)
let ring = Int(ceil((sqrt(Double(offset + 1)) - 1) / 2))
let indexInRing = offset - (ring * ring)
🟡 Medium KnowledgeGraph/KGAtlasLayout.swift:57
The position(for:total:center:) formula incorrectly keeps offset=7 and offset=8 in ring 1, causing nodes to overlap. For offset=7, ring = ceil((sqrt(8)-1)/2) = 1 and indexInRing = 7 - 1*1 = 6 with countInRing = 6, yielding angle = 2π — identical to offset=1's angle of 0. Ring 1 should only hold offsets 1-6, but the quadratic indexing calculation breaks down beyond that.
- let ring = Int(ceil((sqrt(Double(offset + 1)) - 1) / 2))
- let indexInRing = offset - (ring * ring)

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift around lines 57-58:
The `position(for:total:center:)` formula incorrectly keeps `offset=7` and `offset=8` in ring 1, causing nodes to overlap. For `offset=7`, `ring = ceil((sqrt(8)-1)/2) = 1` and `indexInRing = 7 - 1*1 = 6` with `countInRing = 6`, yielding `angle = 2π` — identical to `offset=1`'s angle of `0`. Ring 1 should only hold offsets 1-6, but the quadratic indexing calculation breaks down beyond that.
Evidence trail:
brain-bar/Sources/BrainBar/KnowledgeGraph/KGAtlasLayout.swift lines 57-61 (REVIEWED_COMMIT): ring calculation `Int(ceil((sqrt(Double(offset + 1)) - 1) / 2))`, indexInRing = `offset - (ring * ring)`, countInRing = `max(ring * 6, 1)`, angle = `(Double(indexInRing) / Double(countInRing)) * 2π`. For offset=7: ring=1, indexInRing=6, countInRing=6, angle=2π (same as offset=1's angle=0). For offset=8: ring=1, indexInRing=7, countInRing=6, angle=7π/3 (same as offset=2's angle=π/3).
let enrichmentStatus: DashboardFlowLaneStatus
if enrichmentsLive {
    enrichmentStatus = .live
} else if backlogCount > 0 {
    enrichmentStatus = stats.recentEnrichmentCount > 0 ? .recent : .queued
🟡 Medium Dashboard/PipelineState.swift:252
enrichmentStatus is assigned .live, .recent, .queued, or .idle, but never .draining. However, line 288 checks enrichmentStatus == .draining and line 306-307 returns text for .draining — this case is unreachable dead code. If .draining is meant to represent enrichment outpacing writes while backlog exists, the assignment logic at lines 252-261 needs to detect this (e.g., enrichmentsLive && backlogCount > 0 && !writesLive). If .draining shouldn't exist, remove the dead switch case and condition check.
let enrichmentStatus: DashboardFlowLaneStatus
if enrichmentsLive {
-   enrichmentStatus = .live
+   if backlogCount > 0 && !writesLive {
+       enrichmentStatus = .draining
+   } else {
+       enrichmentStatus = .live
+   }
} else if backlogCount > 0 {

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift around lines 252-256:
`enrichmentStatus` is assigned `.live`, `.recent`, `.queued`, or `.idle`, but never `.draining`. However, line 288 checks `enrichmentStatus == .draining` and line 306-307 returns text for `.draining` — this case is unreachable dead code. If `.draining` is meant to represent enrichment outpacing writes while backlog exists, the assignment logic at lines 252-261 needs to detect this (e.g., `enrichmentsLive && backlogCount > 0 && !writesLive`). If `.draining` shouldn't exist, remove the dead switch case and condition check.
Evidence trail:
brain-bar/Sources/BrainBar/Dashboard/PipelineState.swift lines 95-103 (enum DashboardFlowLaneStatus with .draining case), lines 252-261 (enrichmentStatus assignment logic — never assigns .draining), lines 287-289 (enrichmentStatus == .draining always false in condition), lines 304-307 (.draining switch case unreachable with text 'Recent enrichments are draining backlog'). All at REVIEWED_COMMIT.
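A Python analogue of the proposed derivation makes the state table explicit, with the `draining` case reachable. The names mirror the Swift snippet but this is a sketch of the suggested logic, not the shipped code:

```python
# Lane-status derivation with a reachable "draining" case: enrichments are
# live while writes are idle and backlog remains.
def enrichment_status(enrichments_live, writes_live, backlog_count, recent_enrichments):
    if enrichments_live:
        if backlog_count > 0 and not writes_live:
            return "draining"
        return "live"
    if backlog_count > 0:
        return "recent" if recent_enrichments > 0 else "queued"
    return "idle"
```

Enumerating the inputs this way is also how a table-driven unit test would pin down that every enum case is reachable, which is the regression the review caught.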
Actionable comments posted: 5
♻️ Duplicate comments (2)
src/brainlayer/pipeline/session_enrichment.py (1)
361-369: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Broadened source intake is ahead of reconstruction/prompt parity.

Lines 361–369 now admit Codex/Cursor/Gemini sessions, but downstream handling still drops `build_log`/`dir_listing` content and frames the prompt as a Claude-only conversation. This will enrich new-source sessions with incomplete/misframed context.

As per coding guidelines, "Classification must preserve `ai_code`, `stack_trace`, and `user_message` verbatim; skip `noise` entries entirely and summarize `build_log` and `dir_listing` entries (structure only)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/brainlayer/pipeline/session_enrichment.py` around lines 361 - 369, The new source inclusion around T1_T2_SOURCES via source_query/cursor.execute broadens accepted session sources (Codex/Cursor/Gemini) but downstream enrichment still assumes Claude-only framing and drops or mishandles build_log/dir_listing; update the session enrichment/classification and prompt construction code so that when processing chunks for these sources you: preserve ai_code, stack_trace, and user_message verbatim (do not alter or summarize them), skip any chunk labeled noise entirely, and replace build_log and dir_listing content with a structural summary only (no verbatim logs); also make the prompt construction logic (the routine that assembles the conversation for model calls) respect the source type instead of forcing Claude-style roles so non-Claude sessions keep correct role labels and included summarized build/dir structure.brain-bar/Sources/BrainBar/BrainDatabase.swift (1)
49-50: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

The dashboard default window is still 30 minutes.

Both defaults still use `30`, so callers that omit `activityWindowMinutes` will not get the intended 60-minute dashboard window. Please update both call sites together.

Also applies to: 1105-1105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 49 - 50, The default for the dashboard window is still set to 30 minutes: update the default parameter value for activityWindowMinutes from 30 to 60 wherever it’s declared/used so callers that omit activityWindowMinutes get a 60-minute window; ensure you change both call sites together (look for the activityWindowMinutes parameter in the BrainDatabase initializer/signature and the other occurrence mentioned) and leave bucketCount unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift`:
- Around line 18-19: initialFrame() currently centers a 1_348×1_078 frame on
NSScreen.main without clamping to visible bounds or choosing the screen under
the mouse, so on small displays or multi-monitor setups the panel can land
off-screen or on the wrong display; update initialFrame() to build the desired
rect from BrainBarDashboardPanelController.defaultSize/minSize, pick the screen
using BrainBarApp.orderedScreensByMouseLocation() (or the same mouse-aware
ordering used elsewhere), then pass the candidate rect through
BrainBarWindowPlacement.clamp() (or resolvedFrame()) to constrain it to the
screen's visibleFrame before returning; do the same adjustment for centerPanel()
(lines noted) so both creation and recenters use the mouse-aware screen
selection and clamped frame.
In `@brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift`:
- Around line 990-1001: The expensive SparklineRenderer.render call is happening
inside the animated SwiftUI view body and re-rasterizes on every frame; change
this so the sparkline image is computed once and reused by memoizing by (values,
renderSize, accentColor) or moving the render into a non-animating layer.
Concretely, compute/cache the rendered NSImage outside of body (e.g., in a
`@State`/`@StateObject` or a computed cached property keyed on values, renderSize
and accentColor) and reference that cached image in body while keeping ringScale
and ringOpacity animations as a separate overlay so only the ring animates.
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 497-542: The unread-only delivery frontier is advanced incorrectly
across multiple expansion passes because maxRowID is taken from each pass (and
uses max(limit, 1)) rather than from the final rowid-ordered union of returned
rows; update the logic in the loop that calls
expandedSearchQueries/runFTSSearch/sanitizeTrigramFTS5Query and appendDeduped so
that when unreadOnly is true you (a) do not force max(limit, 1) on runFTSSearch
calls, (b) collect all returned rows from expansion + trigram passes, then
compute the watermark by globally sorting the merged results by c.rowid ASC and
taking the last row's rowid as maxRowID (or nil if no rows), and only then call
markDelivered(agentID: subscriberID, seq: maxRowID); this preserves c.rowid ASC
frontier semantics and prevents advancing the cursor past unread results the
caller never received.
In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift`:
- Around line 224-340: Add a legacy-migration test that simulates an upgraded DB
missing the trigram FTS by using db.exec to remove or rename chunks_fts_trigram
and its triggers, then call ensureMigrations() (or the migration entrypoint) to
trigger repair, insert a chunk with id "trigram-hit" via db.insertChunk and
verify db.search(query: "alker-go", limit: 10) returns that chunk; reference the
ensureMigrations() call, db.exec, db.insertChunk(id:), and
db.search(query:limit:) to locate where to implement this new test case.
- Around line 239-260: The test testSearchExactChunkIDShortCircuitsFTS currently
only asserts the top-ranked result, so add an assertion that the search results
length is exactly one to prove the fast-path short-circuited the FTS;
specifically, after calling let results = try db.search(query:
"brainbar-5ac50aa1-ed5", limit: 10) add XCTAssertEqual(results.count, 1) (then
keep the existing XCTAssertEqual on results.first?["chunk_id"]) to ensure the
decoy did not get returned or appended.
---
Duplicate comments:
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 49-50: The default for the dashboard window is still set to 30
minutes: update the default parameter value for activityWindowMinutes from 30 to
60 wherever it’s declared/used so callers that omit activityWindowMinutes get a
60-minute window; ensure you change both call sites together (look for the
activityWindowMinutes parameter in the BrainDatabase initializer/signature and
the other occurrence mentioned) and leave bucketCount unchanged.
In `@src/brainlayer/pipeline/session_enrichment.py`:
- Around line 361-369: The new source inclusion around T1_T2_SOURCES via
source_query/cursor.execute broadens accepted session sources
(Codex/Cursor/Gemini) but downstream enrichment still assumes Claude-only
framing and drops or mishandles build_log/dir_listing; update the session
enrichment/classification and prompt construction code so that when processing
chunks for these sources you: preserve ai_code, stack_trace, and user_message
verbatim (do not alter or summarize them), skip any chunk labeled noise
entirely, and replace build_log and dir_listing content with a structural
summary only (no verbatim logs); also make the prompt construction logic (the
routine that assembles the conversation for model calls) respect the source type
instead of forcing Claude-style roles so non-Claude sessions keep correct role
labels and included summarized build/dir structure.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: a50114ed-725e-49db-ad9e-4db2d90f9147
📒 Files selected for processing (7)
brain-bar/Sources/BrainBar/BrainBarApp.swift
brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift
brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
brain-bar/Sources/BrainBar/BrainDatabase.swift
brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift
brain-bar/Tests/BrainBarTests/DatabaseTests.swift
src/brainlayer/pipeline/session_enrichment.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Cursor Bugbot
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: Macroscope - Correctness Check
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Flag risky DB or concurrency changes explicitly and do not hand-wave lock behavior
Enforce one-write-at-a-time concurrency constraint; reads are safe but brain_digest is write-heavy and must not run in parallel with other MCP work
Run pytest before claiming behavior changed safely; current test suite has 929 tests
**/*.py: Use `paths.py:get_db_path()` for all database path resolution; all scripts and CLI must use this function rather than hardcoding paths
When performing bulk database operations: stop enrichment workers first, checkpoint WAL before and after, drop FTS triggers before bulk deletes, batch deletes in 5-10K chunks, and checkpoint every 3 batches
Files:
src/brainlayer/pipeline/session_enrichment.py
src/brainlayer/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
src/brainlayer/**/*.py: Use retry logic on `SQLITE_BUSY` errors; each worker must use its own database connection to handle concurrency safely
Classification must preserve `ai_code`, `stack_trace`, and `user_message` verbatim; skip `noise` entries entirely and summarize `build_log` and `dir_listing` entries (structure only)
Use AST-aware chunking via tree-sitter; never split stack traces; mask large tool output
For enrichment backend selection: use Groq as primary backend (cloud, configured in launchd plist), Gemini as fallback via `enrichment_controller.py`, and Ollama as offline last-resort; allow override via `BRAINLAYER_ENRICH_BACKEND` env var
Configure enrichment rate via `BRAINLAYER_ENRICH_RATE` environment variable (default 0.2 = 12 RPM)
Implement chunk lifecycle columns: `superseded_by`, `aggregated_into`, `archived_at` on chunks table; exclude lifecycle-managed chunks from default search; allow `include_archived=True` to show history
Implement `brain_supersede` with safety gate for personal data (journals, notes, health/finance); use soft-delete for `brain_archive` with timestamp
Add `supersedes` parameter to `brain_store` for atomic store-and-replace operations
Run linting and formatting with: `ruff check src/ && ruff format src/`
Run tests with `pytest`
Use `PRAGMA wal_checkpoint(FULL)` before and after bulk database operations to prevent WAL bloat
Files:
src/brainlayer/pipeline/session_enrichment.py
🧠 Learnings (12)
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/**/*.py : Classification must preserve `ai_code`, `stack_trace`, and `user_message` verbatim; skip `noise` entries entirely and summarize `build_log` and `dir_listing` entries (structure only)
Applied to files:
src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Pipeline architecture: Extract → Classify → Chunk → Embed → Index, with post-processing for enrichment, brain graph, and Obsidian export
Applied to files:
src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-04-04T15:16:13.883Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-04T15:16:13.883Z
Learning: Applies to src/brainlayer/hooks/dedup_coordination.py : Session dedup coordination: SessionStart hook writes injected chunk_ids to `/tmp/brainlayer_session_{id}.json`; UserPromptSubmit hook skips already-injected chunks; skip auto-search on handoff prompts
Applied to files:
src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-04-13T15:04:00.631Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-13T15:04:00.631Z
Learning: Applies to src/brainlayer/*classify*.py : Classification must preserve verbatim: `ai_code`, `stack_trace`, `user_message` content
Applied to files:
src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/hooks/dedup_coordination.py : Use `/tmp/brainlayer_session_{id}.json` for SessionStart and UserPromptSubmit hook coordination; SessionStart writes injected chunk_ids, UserPromptSubmit skips already-injected; detect handoffs with 'handoff' or 'session-handoff' keywords
Applied to files:
src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-03-18T00:12:08.774Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 87
File: brain-bar/Sources/BrainBar/BrainBarServer.swift:118-129
Timestamp: 2026-03-18T00:12:08.774Z
Learning: In Swift files under brain-bar/Sources/BrainBar, enforce that when a critical dependency like the database is nil due to startup ordering (socket before DB), any tool handler that accesses the database must throw an explicit error (e.g., ToolError.noDatabase) instead of returning a default/empty value. Do not allow silent defaults (e.g., guard let db else { return ... }). Flag patterns that silently return defaults when db is nil, as this masks startup timing issues. This guidance applies broadly to similar Swift files in the BrainBar module, not just this one location.
Applied to files:
brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swiftbrain-bar/Sources/BrainBar/BrainBarApp.swiftbrain-bar/Sources/BrainBar/BrainBarWindowRootView.swiftbrain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-03-29T18:45:40.988Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:40.988Z
Learning: In the BrainBar module’s Swift database layer (notably BrainDatabase.swift), ensure that the `search()` function’s `unreadOnly=true` path orders results by the delivery frontier cursor so the watermark `maxRowID` stays contiguous. Specifically, when `unreadOnly` is enabled, the query must include `ORDER BY c.rowid ASC` (e.g., via `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`). Do not replace the unread-only ordering with relevance-based sorting (e.g., `f.rank`) unconditionally or for the unread-only path, as it can introduce gaps in the watermark and incorrectly mark unseen rows as delivered. Flag any future change to the `ORDER BY` clause in this function that makes relevance sorting apply to the unread-only case.
Applied to files:
brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swiftbrain-bar/Sources/BrainBar/BrainBarApp.swiftbrain-bar/Sources/BrainBar/BrainBarWindowRootView.swiftbrain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-03-29T18:45:42.532Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:42.532Z
Learning: In `brain-bar/Sources/BrainBar/BrainDatabase.swift` (Swift, BrainBar module), the `search()` function uses `maxRowID` as a delivery watermark cursor in the `unreadOnly` path. The result set **must** be ordered by `c.rowid ASC` when `unreadOnly=true`, so that `maxRowID` correctly represents the contiguous delivery frontier. Using a relevance-based ordering (e.g., `ORDER BY f.rank`) in the `unreadOnly` path would cause gaps in the watermark and mark unseen rows as delivered. The correct pattern is: `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`. Flag any future change to the `ORDER BY` clause in this function that applies relevance sorting unconditionally.
Applied to files:
brain-bar/Tests/BrainBarTests/DatabaseTests.swift
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to src/brainlayer/**/*.py : Implement chunk lifecycle columns: `superseded_by`, `aggregated_into`, `archived_at` on chunks table; exclude lifecycle-managed chunks from default search; allow `include_archived=True` to show history
Applied to files:
brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-04-04T23:24:03.159Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-04T23:24:03.159Z
Learning: Applies to src/brainlayer/{vector_store,search}*.py : Chunk lifecycle: implement columns `superseded_by`, `aggregated_into`, `archived_at` on chunks table; exclude lifecycle-managed chunks from default search
Applied to files:
brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-04-03T11:43:08.915Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-03T11:43:08.915Z
Learning: Applies to src/brainlayer/*bulk*.py : Before bulk database operations: stop enrichment workers, checkpoint WAL with `PRAGMA wal_checkpoint(FULL)`, drop FTS triggers before bulk deletes
Applied to files:
brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-04-06T08:40:13.531Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-06T08:40:13.531Z
Learning: Applies to **/*.py : When performing bulk database operations: stop enrichment workers first, checkpoint WAL before and after, drop FTS triggers before bulk deletes, batch deletes in 5-10K chunks, and checkpoint every 3 batches
Applied to files:
brain-bar/Sources/BrainBar/BrainDatabase.swift
🪛 SwiftLint (0.63.2)
brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift
[Warning] 6-6: Classes should have an explicit deinit method
(required_deinit)
brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift
[Warning] 4-4: Classes should have an explicit deinit method
(required_deinit)
[Warning] 16-16: Classes should have an explicit deinit method
(required_deinit)
brain-bar/Sources/BrainBar/BrainDatabase.swift
[Warning] 1691-1691: Prefer empty collection over optional collection
(discouraged_optional_collection)
🔇 Additional comments (1)
src/brainlayer/pipeline/session_enrichment.py (1)
27-27: Good use of centralized source policy. Importing `T1_T2_SOURCES` here keeps session discovery aligned with tier classification rules.
static let defaultSize = NSSize(width: 1_348, height: 1_078)
static let minSize = NSSize(width: 760, height: 560)
🧩 Analysis chain
🏁 Script executed:
cat -n brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift
Repository: EtanHey/brainlayer
Length of output: 3883
🏁 Script executed:
rg -i "screen.*placement\|window.*placement\|BrainBarWindowPlacement" --type swift -A 3 -B 1
Repository: EtanHey/brainlayer
Length of output: 44
🏁 Script executed:
rg "NSScreen" --type swift -B 2 -A 2
Repository: EtanHey/brainlayer
Length of output: 9090
🏁 Script executed:
sed -n '1,100p' brain-bar/Sources/BrainBar/BrainBarWindowState.swift
Repository: EtanHey/brainlayer
Length of output: 2673
🏁 Script executed:
sed -n '100,250p' brain-bar/Sources/BrainBar/BrainBarWindowState.swift
Repository: EtanHey/brainlayer
Length of output: 5282
🏁 Script executed:
sed -n '230,300p' brain-bar/Sources/BrainBar/BrainBarWindowState.swift
Repository: EtanHey/brainlayer
Length of output: 2465
🏁 Script executed:
sed -n '280,350p' brain-bar/Sources/BrainBar/BrainBarWindowState.swift
Repository: EtanHey/brainlayer
Length of output: 2516
🏁 Script executed:
sed -n '1,50p' brain-bar/Sources/BrainBar/SearchPanelController.swift
Repository: EtanHey/brainlayer
Length of output: 1477
🏁 Script executed:
sed -n '40,90p' brain-bar/Sources/BrainBar/SearchPanelController.swift
Repository: EtanHey/brainlayer
Length of output: 1792
🏁 Script executed:
sed -n '80,110p' brain-bar/Sources/BrainBar/SearchPanelController.swift
Repository: EtanHey/brainlayer
Length of output: 1071
Clamp the initial panel frame to the visible screen bounds and use mouse-aware screen selection.
The initialFrame() method centers a 1348×1078 frame on NSScreen.main without constraining to the visible bounds. On displays smaller than that size, the panel will open partly off-screen. On multi-monitor setups, the panel may appear on a different screen than where the user clicked the menu bar.
Use BrainBarWindowPlacement.clamp() or resolvedFrame() to constrain the frame to available screen space, and apply BrainBarApp's orderedScreensByMouseLocation() pattern to select the appropriate display context.
Also applies to: lines 98–100 (centerPanel())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/BrainBarDashboardPanelController.swift` around
lines 18 - 19, initialFrame() currently centers a 1_348×1_078 frame on
NSScreen.main without clamping to visible bounds or choosing the screen under
the mouse, so on small displays or multi-monitor setups the panel can land
off-screen or on the wrong display; update initialFrame() to build the desired
rect from BrainBarDashboardPanelController.defaultSize/minSize, pick the screen
using BrainBarApp.orderedScreensByMouseLocation() (or the same mouse-aware
ordering used elsewhere), then pass the candidate rect through
BrainBarWindowPlacement.clamp() (or resolvedFrame()) to constrain it to the
screen's visibleFrame before returning; do the same adjustment for centerPanel()
(lines noted) so both creation and recenters use the mouse-aware screen
selection and clamped frame.
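A minimal sketch of the clamping step the comment asks for, written against plain AppKit. The helper names (`clampedInitialFrame`) and the fallback behavior are illustrative; the real `BrainBarWindowPlacement.clamp()` / `orderedScreensByMouseLocation()` APIs in the repo may have different signatures.

```swift
import AppKit

// Illustrative sketch: clamp a desired panel rect to the visible frame of the
// screen under the mouse, so the panel never opens off-screen or on the wrong
// display. Names are hypothetical; the real BrainBarWindowPlacement API may differ.
func clampedInitialFrame(desiredSize: NSSize, minSize: NSSize) -> NSRect {
    // Prefer the screen containing the mouse, falling back to the main screen.
    let mouse = NSEvent.mouseLocation
    let screen = NSScreen.screens.first { NSMouseInRect(mouse, $0.frame, false) }
        ?? NSScreen.main
    guard let visible = screen?.visibleFrame else {
        return NSRect(origin: .zero, size: desiredSize)
    }

    // Shrink to fit the visible bounds, but never below the minimum size.
    let width = max(min(desiredSize.width, visible.width), minSize.width)
    let height = max(min(desiredSize.height, visible.height), minSize.height)

    // Center within the visible frame, then clamp the origin so the panel
    // stays fully on-screen even when centering would overflow an edge.
    var origin = NSPoint(x: visible.midX - width / 2, y: visible.midY - height / 2)
    origin.x = min(max(origin.x, visible.minX), visible.maxX - width)
    origin.y = min(max(origin.y, visible.minY), visible.maxY - height)
    return NSRect(origin: origin, size: NSSize(width: width, height: height))
}
```

The same routine can back both `initialFrame()` and `centerPanel()` so creation and recentering share one mouse-aware, clamped code path.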
var body: some View {
    GeometryReader { proxy in
        let renderSize = NSSize(
            width: max(proxy.size.width.rounded(.up), 1),
            height: max(proxy.size.height.rounded(.up), 1)
        )
        let image = SparklineRenderer.render(
            state: .idle,
            values: values,
            size: renderSize,
            accentColor: accentColor
        )
🧩 Analysis chain
🏁 Script executed:
# Check the file and examine the context around the mentioned lines
wc -l brain-bar/Sources/BrainBar/BrainBarWindowRootView.swiftRepository: EtanHey/brainlayer
Length of output: 123
🏁 Script executed:
# Read the context around lines 990-1001 and 1034-1039
sed -n '980,1010p' brain-bar/Sources/BrainBar/BrainBarWindowRootView.swiftRepository: EtanHey/brainlayer
Length of output: 1009
🏁 Script executed:
# Read the context around lines 1030-1045
sed -n '1030,1045p' brain-bar/Sources/BrainBar/BrainBarWindowRootView.swiftRepository: EtanHey/brainlayer
Length of output: 427
🏁 Script executed:
# Look for `@State` declarations and animation-related code to understand the animation mechanism
rg -n "@State|\.animation\(|withAnimation" brain-bar/Sources/BrainBar/BrainBarWindowRootView.swiftRepository: EtanHey/brainlayer
Length of output: 987
🏁 Script executed:
# Search for SparklineRenderer to understand all usages
rg -n "SparklineRenderer" brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift -A 3 -B 1Repository: EtanHey/brainlayer
Length of output: 418
🏁 Script executed:
# Read more context of the body implementation to see the full structure
sed -n '988,1050p' brain-bar/Sources/BrainBar/BrainBarWindowRootView.swiftRepository: EtanHey/brainlayer
Length of output: 2090
🏁 Script executed:
# Also check what SparklineRenderer.render does to understand the cost
fd SparklineRenderer -t fRepository: EtanHey/brainlayer
Length of output: 123
🏁 Script executed:
# Look for the implementation of SparklineRenderer
rg -l "class SparklineRenderer|struct SparklineRenderer" -t swiftRepository: EtanHey/brainlayer
Length of output: 44
🏁 Script executed:
# Examine the SparklineRenderer implementation to understand rendering cost
cat brain-bar/Sources/BrainBar/Dashboard/SparklineRenderer.swiftRepository: EtanHey/brainlayer
Length of output: 5028
Cache the sparkline image outside the animated body.
SparklineRenderer.render(...) performs expensive rasterization (CGContext setup, path drawing, antialiasing). Calling it at line 996 inside body causes it to re-execute on every animation frame during the 0.75-second pulse. The sparkline content itself never changes—only the ring overlay's ringScale and ringOpacity animate—yet the image regenerates ~45 times per pulse across multiple cards. Memoize by (values, renderSize, accentColor) to eliminate redundant rendering, or refactor the ring into a separate animated layer so the sparkline rasterizes once.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift` around lines 990 -
1001, The expensive SparklineRenderer.render call is happening inside the
animated SwiftUI view body and re-rasterizes on every frame; change this so the
sparkline image is computed once and reused by memoizing by (values, renderSize,
accentColor) or moving the render into a non-animating layer. Concretely,
compute/cache the rendered NSImage outside of body (e.g., in a
`@State/`@StateObject or a computed cached property keyed on values, renderSize
and accentColor) and reference that cached image in body while keeping ringScale
and ringOpacity animations as a separate overlay so only the ring animates.
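One shape the memoization could take, sketched here under assumptions: `CachedSparkline`, its `Key` struct, and the deferred state write are illustrative, and `SparklineRenderer.render(state:values:size:accentColor:)` is taken from the snippet above rather than verified against the full API.

```swift
import SwiftUI
import AppKit

// Illustrative sketch: memoize the rasterized sparkline keyed on
// (values, renderSize, accentColor) so the ring pulse animation does not
// force a re-rasterization every frame. Names are hypothetical.
struct CachedSparkline: View {
    let values: [Double]
    let accentColor: NSColor

    // Cache key capturing everything the bitmap depends on.
    private struct Key: Equatable {
        let values: [Double]
        let size: NSSize
        let accent: NSColor
    }

    @State private var cachedKey: Key?
    @State private var cachedImage: NSImage?

    var body: some View {
        GeometryReader { proxy in
            let size = NSSize(
                width: max(proxy.size.width.rounded(.up), 1),
                height: max(proxy.size.height.rounded(.up), 1)
            )
            let key = Key(values: values, size: size, accent: accentColor)
            // Only the static bitmap lives in this subtree; the animated ring
            // should be a separate overlay so this image never re-renders.
            Image(nsImage: image(for: key))
        }
    }

    private func image(for key: Key) -> NSImage {
        if key == cachedKey, let cachedImage { return cachedImage }
        let rendered = SparklineRenderer.render(
            state: .idle, values: key.values, size: key.size, accentColor: key.accent
        )
        // Defer the state write so body evaluation stays side-effect free.
        DispatchQueue.main.async {
            cachedKey = key
            cachedImage = rendered
        }
        return rendered
    }
}
```

With this split, `ringScale`/`ringOpacity` animate a lightweight overlay while the bitmap regenerates only when the data, size, or accent color actually change.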
let expandedQueries = expandedSearchQueries(trimmedQuery)
var results: [[String: Any]] = []
var seenChunkIDs = Set<String>()
var maxRowID: Int64 = 0

for expandedQuery in expandedQueries {
    let searchResult = try runFTSSearch(
        tableName: "chunks_fts",
        matchQuery: sanitizeFTS5Query(expandedQuery),
        limit: max(limit, 1),
        project: project,
        tag: tag,
        importanceMin: importanceMin,
        subscribedTags: subscribedTags,
        ackFloor: ackFloor,
        unreadOnly: unreadOnly
    )
    maxRowID = max(maxRowID, searchResult.maxRowID)
    appendDeduped(searchResult.rows, to: &results, seenChunkIDs: &seenChunkIDs, limit: limit)
    if results.count >= limit { break }
}

if results.count < limit {
    for expandedQuery in expandedQueries {
        let trigramQuery = sanitizeTrigramFTS5Query(expandedQuery)
        guard !trigramQuery.isEmpty else { continue }
        let searchResult = try runFTSSearch(
            tableName: "chunks_fts_trigram",
            matchQuery: trigramQuery,
            limit: max(limit, 1),
            project: project,
            tag: tag,
            importanceMin: importanceMin,
            subscribedTags: subscribedTags,
            ackFloor: ackFloor,
            unreadOnly: unreadOnly
        )
        maxRowID = max(maxRowID, searchResult.maxRowID)
        appendDeduped(searchResult.rows, to: &results, seenChunkIDs: &seenChunkIDs, limit: limit)
        if results.count >= limit { break }
    }
}

if unreadOnly, let subscriberID, maxRowID > 0 {
    try markDelivered(agentID: subscriberID, seq: maxRowID)
}
Preserve the unread delivery frontier across expansion passes.
In unreadOnly mode, this now runs multiple expanded/trigram searches, applies limit per pass, and advances markDelivered to the highest rowid seen. Those rows are not globally unioned and re-sorted by c.rowid ASC before the watermark is derived, so a later alias/trigram pass can move the cursor past unread matches the caller never received. The forced max(limit, 1) also means limit <= 0 can still advance the cursor. Either keep unread delivery on a single rowid-ordered query or compute the watermark from the final returned row after a global rowid merge. Based on learnings, the unread-only search() path uses maxRowID as a contiguous delivery watermark and must preserve c.rowid ASC frontier semantics.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 497 - 542, The
unread-only delivery frontier is advanced incorrectly across multiple expansion
passes because maxRowID is taken from each pass (and uses max(limit, 1)) rather
than from the final rowid-ordered union of returned rows; update the logic in
the loop that calls expandedSearchQueries/runFTSSearch/sanitizeTrigramFTS5Query
and appendDeduped so that when unreadOnly is true you (a) do not force
max(limit, 1) on runFTSSearch calls, (b) collect all returned rows from
expansion + trigram passes, then compute the watermark by globally sorting the
merged results by c.rowid ASC and taking the last row's rowid as maxRowID (or
nil if no rows), and only then call markDelivered(agentID: subscriberID, seq:
maxRowID); this preserves c.rowid ASC frontier semantics and prevents advancing
the cursor past unread results the caller never received.
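A sketch of the watermark derivation the fix describes, computed from the merged rows rather than the per-pass maximum. The `"rowid"` key and the free function shape are assumptions for illustration; the real result rows may carry the rowid under a different key.

```swift
// Illustrative sketch: derive the unread-delivery watermark from the rows the
// caller will actually receive, after globally ordering the merged
// expansion + trigram results by rowid ascending. Row shape is assumed.
func unreadWatermark(from mergedRows: [[String: Any]]) -> Int64? {
    // The frontier must be contiguous: the watermark is the highest rowid
    // among returned rows, never a rowid from a row that was dropped by a
    // per-pass limit.
    let rowIDs = mergedRows.compactMap { $0["rowid"] as? Int64 }.sorted()
    return rowIDs.last
}

// Usage sketch inside search():
// if unreadOnly, let subscriberID, let watermark = unreadWatermark(from: results) {
//     try markDelivered(agentID: subscriberID, seq: watermark)
// }
```

Because the watermark comes from the final result set, a `limit <= 0` call returns no rows and advances nothing, closing the `max(limit, 1)` hole as well.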
func testSearchReturnsEmptyForBlankQuery() throws {
    try db.insertChunk(
        id: "blank-query-decoy",
        content: "Blank searches should not hit arbitrary content.",
        sessionId: "session-blank",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 5
    )

    let results = try db.search(query: " ", limit: 10)

    XCTAssertTrue(results.isEmpty)
}

func testSearchExactChunkIDShortCircuitsFTS() throws {
    try db.insertChunk(
        id: "brainbar-5ac50aa1-ed5",
        content: "Exact chunk lookup should not require the identifier to appear in content.",
        sessionId: "session-exact",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 9
    )
    try db.insertChunk(
        id: "brainbar-d01",
        content: "Decoy chunk mentions brainbar-5ac50aa1-ed5 but must not outrank the exact chunk id.",
        sessionId: "session-decoy",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 9
    )

    let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10)

    XCTAssertEqual(results.first?["chunk_id"] as? String, "brainbar-5ac50aa1-ed5")
}

func testSearchExactChunkIDRespectsProjectScope() throws {
    try db.insertChunk(
        id: "brainbar-scoped-001",
        content: "Project-scoped exact id lookup should return only matching project chunks.",
        sessionId: "session-scope",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 8
    )

    let scoped = try db.search(query: "brainbar-scoped-001", limit: 10, project: "brainlayer")
    let wrongScope = try db.search(query: "brainbar-scoped-001", limit: 10, project: "voicelayer")

    XCTAssertEqual(scoped.first?["chunk_id"] as? String, "brainbar-scoped-001")
    XCTAssertTrue(wrongScope.isEmpty)
}

func testSearchUsesTrigramFTSForIdentifierSubstrings() throws {
    try db.insertChunk(
        id: "trigram-hit",
        content: "stalker-golem queue note",
        sessionId: "session-trigram",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 7
    )

    let results = try db.search(query: "alker-go", limit: 10)

    XCTAssertEqual(results.first?["chunk_id"] as? String, "trigram-hit")
}

func testSearchAliasExpansionPreservesMultiwordSemantics() throws {
    try db.insertChunk(
        id: "alias-good",
        content: "Hershkovits reviewed the release plan yesterday.",
        sessionId: "session-alias-good",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 8
    )
    try db.insertChunk(
        id: "alias-bad",
        content: "Met with Hershkovits yesterday.",
        sessionId: "session-alias-bad",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 8
    )

    let results = try db.search(query: "Hershkovitz release plan", limit: 10)
    let resultIDs = results.compactMap { $0["chunk_id"] as? String }

    XCTAssertTrue(resultIDs.contains("alias-good"))
    XCTAssertFalse(resultIDs.contains("alias-bad"))
}

func testSearchAliasExpansionNormalizesDotsConsistently() throws {
    db.exec("""
    INSERT INTO kg_entities (id, entity_type, name)
    VALUES ('entity-openai', 'org', 'Open.AI');
    """)
    db.exec("""
    INSERT INTO kg_entity_aliases (alias, entity_id)
    VALUES ('OpenAI', 'entity-openai');
    """)
    try db.insertChunk(
        id: "alias-dot",
        content: "Open.AI roadmap review notes.",
        sessionId: "session-alias-dot",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 8
    )

    let results = try db.search(query: "OpenAI roadmap", limit: 10)

    XCTAssertEqual(results.first?["chunk_id"] as? String, "alias-dot")
}
🧹 Nitpick | 🔵 Trivial | ⚡ Quick win
Please add a legacy-database migration case for trigram FTS.
These new search cases only exercise fresh databases from setUp(), but the risky rollout path is opening an existing chunks DB and letting ensureMigrations() repair chunks_fts_trigram plus its maintenance triggers. A regression there would slip past this suite and only fail for upgraded users.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift` around lines 224 - 340,
Add a legacy-migration test that simulates an upgraded DB missing the trigram
FTS by using db.exec to remove or rename chunks_fts_trigram and its triggers,
then call ensureMigrations() (or the migration entrypoint) to trigger repair,
insert a chunk with id "trigram-hit" via db.insertChunk and verify
db.search(query: "alker-go", limit: 10) returns that chunk; reference the
ensureMigrations() call, db.exec, db.insertChunk(id:), and
db.search(query:limit:) to locate where to implement this new test case.
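A sketch of what that migration test could look like. The trigger names dropped here (`chunks_fts_trigram_ai/_ad/_au`) are assumptions about the maintenance-trigger naming, and `ensureMigrations()` is referenced only as the prompt describes it; both may need adjusting to match the actual schema.

```swift
// Illustrative sketch of the suggested legacy-migration test. Trigger names
// and the migration entrypoint are assumptions and may differ in the codebase.
func testMigrationRepairsMissingTrigramFTS() throws {
    // Simulate a legacy database that predates the trigram index.
    db.exec("DROP TABLE IF EXISTS chunks_fts_trigram;")
    db.exec("DROP TRIGGER IF EXISTS chunks_fts_trigram_ai;")
    db.exec("DROP TRIGGER IF EXISTS chunks_fts_trigram_ad;")
    db.exec("DROP TRIGGER IF EXISTS chunks_fts_trigram_au;")

    // Re-running migrations should recreate the trigram table and triggers.
    try db.ensureMigrations()

    try db.insertChunk(
        id: "trigram-hit",
        content: "stalker-golem queue note",
        sessionId: "session-trigram-migration",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 7
    )

    // Substring search only succeeds if the repaired trigram FTS is indexing
    // new inserts through its maintenance triggers.
    let results = try db.search(query: "alker-go", limit: 10)
    XCTAssertEqual(results.first?["chunk_id"] as? String, "trigram-hit")
}
```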
| func testSearchExactChunkIDShortCircuitsFTS() throws { | ||
| try db.insertChunk( | ||
| id: "brainbar-5ac50aa1-ed5", | ||
| content: "Exact chunk lookup should not require the identifier to appear in content.", | ||
| sessionId: "session-exact", | ||
| project: "brainlayer", | ||
| contentType: "assistant_text", | ||
| importance: 9 | ||
| ) | ||
| try db.insertChunk( | ||
| id: "brainbar-d01", | ||
| content: "Decoy chunk mentions brainbar-5ac50aa1-ed5 but must not outrank the exact chunk id.", | ||
| sessionId: "session-decoy", | ||
| project: "brainlayer", | ||
| contentType: "assistant_text", | ||
| importance: 9 | ||
| ) | ||
|
|
||
| let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10) | ||
|
|
||
| XCTAssertEqual(results.first?["chunk_id"] as? String, "brainbar-5ac50aa1-ed5") | ||
| } |
🧹 Nitpick | 🔵 Trivial | ⚡ Quick win
Assert the short-circuit, not just the top rank.
This still passes if the exact-ID fast path falls through to FTS and the decoy is appended after the exact hit. Add a count assertion so the test proves the fast path actually skipped the broader search.
Suggested assertion
  let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10)
+ XCTAssertEqual(results.count, 1)
  XCTAssertEqual(results.first?["chunk_id"] as? String, "brainbar-5ac50aa1-ed5")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func testSearchExactChunkIDShortCircuitsFTS() throws {
    try db.insertChunk(
        id: "brainbar-5ac50aa1-ed5",
        content: "Exact chunk lookup should not require the identifier to appear in content.",
        sessionId: "session-exact",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 9
    )
    try db.insertChunk(
        id: "brainbar-d01",
        content: "Decoy chunk mentions brainbar-5ac50aa1-ed5 but must not outrank the exact chunk id.",
        sessionId: "session-decoy",
        project: "brainlayer",
        contentType: "assistant_text",
        importance: 9
    )
    let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10)
    XCTAssertEqual(results.count, 1)
    XCTAssertEqual(results.first?["chunk_id"] as? String, "brainbar-5ac50aa1-ed5")
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift` around lines 239 - 260,
The test testSearchExactChunkIDShortCircuitsFTS currently only asserts the
top-ranked result, so add an assertion that the search results length is exactly
one to prove the fast-path short-circuited the FTS; specifically, after calling
let results = try db.search(query: "brainbar-5ac50aa1-ed5", limit: 10) add
XCTAssertEqual(results.count, 1) (then keep the existing XCTAssertEqual on
results.first?["chunk_id"]) to ensure the decoy did not get returned or
appended.
BrainBar could block opening /tmp/brainbar.sock while synchronously rebuilding chunks_fts_trigram on a live ~360K-chunk database. That startup rebuild cascaded into lock pressure for enrichment and MCP clients because the canonical Unix socket was not ready while the DB was busy. Add a testable startup repair decision and cap synchronous trigram backfill to small databases with synchronousTrigramBackfillChunkLimit = 10_000. Large desynced trigram indexes now keep schema/triggers current and defer the heavy backfill to explicit maintenance. Verification: RED->GREEN testLargeTrigramDesyncDoesNotForceSynchronousStartupRebuild; swift test --filter DatabaseTests; bash scripts/run_tests.sh; swift test. Follow-ups tracked in docs.local/plans/2026-05-02-brainlayer-reliability-hardening/: PR B explicit trigram maintenance command, PR D boot health contract.
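The startup decision described above can be sketched as a pure function. A hedged sketch follows: the names `trigramStartupRepairDecision` and `synchronousTrigramBackfillChunkLimit` mirror the commit message, but the production signature is an assumption, and unlike the code under review this sketch also applies the chunk limit when the table is missing entirely, so a first migration on a huge database cannot block startup either.

```swift
// Hedged sketch of the trigram startup repair decision; not the production API.
enum TrigramStartupRepair {
    case rebuildSynchronously  // safe only for small databases
    case skipBackfill          // keep schema/triggers current, defer heavy backfill
    case noRepair
}

let synchronousTrigramBackfillChunkLimit = 10_000

func trigramStartupRepairDecision(
    tableExists: Bool,
    schemaIsValid: Bool,
    indexInSync: Bool,
    chunkCount: Int
) -> TrigramStartupRepair {
    // Any structural or sync problem needs repair, but the synchronous
    // backfill is only allowed under the chunk limit.
    if !tableExists || !schemaIsValid || !indexInSync {
        return chunkCount <= synchronousTrigramBackfillChunkLimit
            ? .rebuildSynchronously
            : .skipBackfill
    }
    return .noRepair
}
```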
```swift
let columns = Array(
    repeating: GridItem(.flexible(minimum: 130), spacing: layout.gridSpacing, alignment: .leading),
    count: 2
)
```
Layout column count hardcoded instead of using computed value
Medium Severity
The overviewMetaRow function hardcodes count: 2 for the grid columns, ignoring the layout.overviewMetricColumns property that's computed to be 4 on wide layouts (≥980px). This defeats the responsive design intent — the overview stats section will always display in a 2-column grid regardless of available width.
Additional Locations (1)
Reviewed by Cursor Bugbot for commit 59b4025.
```swift
    height: max(proxy.size.height.rounded(.up), 1)
)
let image = SparklineRenderer.render(
    state: .idle,
```
Sparkline always renders with idle state ignoring actual pipeline
Low Severity
BrainBarHeroSparkline always passes .idle as the state parameter to SparklineRenderer.render, regardless of the actual pipeline state. The lane-level sparklines will never show the active/indexing visual treatment even when the pipeline is actively processing.
Actionable comments posted: 1
♻️ Duplicate comments (3)
brain-bar/Tests/BrainBarTests/DatabaseTests.swift (2)
268-270: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win — Assert the short-circuit, not just the top hit.
This still passes if the exact-ID lookup falls through to FTS and the decoy is appended after the exact match. Add `XCTAssertEqual(results.count, 1)` before checking `first`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift` around lines 268 - 270, The test currently only asserts the top hit but not that the exact-ID short-circuit prevented FTS results from being appended; update the test around the db.search call and the results variable by inserting an assertion that results.count == 1 before checking results.first?["chunk_id"], so that the test fails if any additional FTS matches were included (referencing db.search(query:limit:), results, and the "chunk_id" check).
189-198: 🧹 Nitpick | 🔵 Trivial | ⚡ Quick win — Add a legacy-schema repair test, not just the decision helper.
This only proves `.skipBackfill` for one large desync case. The risky rollout path is opening an existing database with a stale `chunks_fts_trigram` definition and letting the migration entrypoint repair it. Without a fixture that creates the old trigram table/triggers and verifies substring search after migration, a broken repair path will still ship unnoticed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@brain-bar/Tests/BrainBarTests/DatabaseTests.swift` around lines 189 - 198, Add an integration-style test that actually exercises the migration entrypoint rather than only the helper decision: create a test (e.g., alongside testLargeTrigramDesyncDoesNotForceSynchronousStartupRebuild) that constructs a database using the legacy/stale trigram FTS schema (old chunks_fts_trigram table and associated triggers), then call the real migration path that BrainDatabase uses on open (the same entrypoint that would run startup repairs), and finally verify substring search works against the migrated DB and that the trigram table/triggers have been replaced or updated as expected; use the BrainDatabase open/migrate functions and existing query helpers to locate where to hook the test.brain-bar/Sources/BrainBar/BrainDatabase.swift (1)
509-549: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win — Preserve the unread delivery frontier across expansion passes.
`unreadOnly` still runs multiple expansion/trigram searches with `limit: max(limit, 1)`, updates `maxRowID` from each pass, and then marks that maximum as delivered. Because the rows are deduped and capped incrementally instead of being globally merged and re-sorted by `c.rowid ASC` first, a later pass can advance `last_delivered_seq` past rows that were never returned. `limit <= 0` can also advance the cursor without returning anything.
Based on learnings: when `unreadOnly=true`, the `search()` path must preserve contiguous `c.rowid ASC` frontier semantics and derive the watermark from the final returned set only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 509 - 549, The current unread-only flow updates maxRowID during each expansion/trigram pass (using runFTSSearch and appendDeduped) and then marks delivered, which can advance the unread frontier past rows that were never returned; instead, when unreadOnly is true, collect rows from all passes (expandedQueries and trigram pass) without updating maxRowID per pass, perform global dedup/sort by c.rowid ASC and enforce the final limit, then compute maxRowID from the final returned set only (e.g., highest rowid in results) and call markDelivered(subscriberID, seq: maxRowID) only if that computed maxRowID > 0; keep runFTSSearch and sanitizeTrigramFTS5Query calls but remove per-pass maxRowID updates and avoid using max(limit, 1) to advance cursor when no rows are returned.
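The frontier rule this comment and the stored learning describe can be sketched in isolation: merge every pass, dedupe, re-sort by rowid, truncate to the limit, and take the watermark from the returned rows only. The `Row` shape and function name below are illustrative, not the production API.

```swift
struct Row { let rowid: Int64; let chunkID: String }

// Merge all expansion/trigram passes, dedupe by chunk id, re-sort by rowid so
// the unread frontier stays contiguous, cap to `limit`, and derive the
// delivery watermark from the final returned set only.
func mergeUnread(passes: [[Row]], limit: Int) -> (rows: [Row], watermark: Int64?) {
    guard limit > 0 else { return ([], nil) }  // limit <= 0 must not advance the cursor
    var seen = Set<String>()
    let merged = passes.flatMap { $0 }
        .filter { seen.insert($0.chunkID).inserted }
        .sorted { $0.rowid < $1.rowid }
    let returned = Array(merged.prefix(limit))
    return (returned, returned.last?.rowid)
}
```

With this shape, a later pass can never mark a row delivered unless it actually appears in the rows handed back to the caller.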
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4309d710-18a7-44c0-899d-cddbe5a67457
📒 Files selected for processing (2)
- brain-bar/Sources/BrainBar/BrainDatabase.swift
- brain-bar/Tests/BrainBarTests/DatabaseTests.swift
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Cursor Bugbot
- GitHub Check: test (3.13)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: Macroscope - Correctness Check
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-03-18T00:12:08.774Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 87
File: brain-bar/Sources/BrainBar/BrainBarServer.swift:118-129
Timestamp: 2026-03-18T00:12:08.774Z
Learning: In Swift files under brain-bar/Sources/BrainBar, enforce that when a critical dependency like the database is nil due to startup ordering (socket before DB), any tool handler that accesses the database must throw an explicit error (e.g., ToolError.noDatabase) instead of returning a default/empty value. Do not allow silent defaults (e.g., guard let db else { return ... }). Flag patterns that silently return defaults when db is nil, as this masks startup timing issues. This guidance applies broadly to similar Swift files in the BrainBar module, not just this one location.
Applied to files:
brain-bar/Sources/BrainBar/BrainDatabase.swift
📚 Learning: 2026-03-29T18:45:40.988Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:40.988Z
Learning: In the BrainBar module’s Swift database layer (notably BrainDatabase.swift), ensure that the `search()` function’s `unreadOnly=true` path orders results by the delivery frontier cursor so the watermark `maxRowID` stays contiguous. Specifically, when `unreadOnly` is enabled, the query must include `ORDER BY c.rowid ASC` (e.g., via `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`). Do not replace the unread-only ordering with relevance-based sorting (e.g., `f.rank`) unconditionally or for the unread-only path, as it can introduce gaps in the watermark and incorrectly mark unseen rows as delivered. Flag any future change to the `ORDER BY` clause in this function that makes relevance sorting apply to the unread-only case.
Applied to files:
brain-bar/Sources/BrainBar/BrainDatabase.swift
🪛 SwiftLint (0.63.2)
brain-bar/Sources/BrainBar/BrainDatabase.swift
[Warning] 1698-1698: Prefer empty collection over optional collection
(discouraged_optional_collection)
```swift
private func rebuildTrigramFTSTable() throws {
    try ensureTrigramFTSSchemaAndTriggers()
    try execute("DELETE FROM chunks_fts_trigram")
    try backfillTrigramFTSTable()
}
```
Drop the stale trigram table before running the “repair” path.
When schemaIsValid is false, .rebuildSynchronously currently calls rebuildTrigramFTSTable(), but that method only does CREATE VIRTUAL TABLE IF NOT EXISTS. If an upgraded DB already has an old chunks_fts_trigram, the old schema/tokenizer survives and the backfill either keeps the broken layout or fails when inserting summary / resolved_query. That makes the legacy repair path unable to repair some existing databases and can fail startup entirely.
Suggested fix
```diff
 private func rebuildTrigramFTSTable() throws {
+    try execute("DROP TRIGGER IF EXISTS chunks_fts_trigram_insert")
+    try execute("DROP TRIGGER IF EXISTS chunks_fts_trigram_delete")
+    try execute("DROP TRIGGER IF EXISTS chunks_fts_trigram_update")
+    try execute("DROP TABLE IF EXISTS chunks_fts_trigram")
     try ensureTrigramFTSSchemaAndTriggers()
-    try execute("DELETE FROM chunks_fts_trigram")
     try backfillTrigramFTSTable()
 }
```

Also applies to: 1954-1968
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 1908 - 1912,
rebuildTrigramFTSTable currently only calls ensureTrigramFTSSchemaAndTriggers()
which uses CREATE VIRTUAL TABLE IF NOT EXISTS, leaving an old chunks_fts_trigram
with stale schema/tokenizer; modify rebuildTrigramFTSTable (and the same repair
path invoked by rebuildSynchronously) to DROP TABLE IF EXISTS chunks_fts_trigram
before calling ensureTrigramFTSSchemaAndTriggers() so the virtual table is
recreated cleanly and backfillTrigramFTSTable() inserts match the new schema.
Ensure you reference rebuildTrigramFTSTable(),
ensureTrigramFTSSchemaAndTriggers(), backfillTrigramFTSTable(), and the repair
path via rebuildSynchronously when applying the change.
…atabase Fix the SwiftUI lifecycle gap where BrainBarRuntime.database could be installed while the NSPanel/root view was hidden, leaving the command bar's local `@State` view model nil and rendering “Warming memory” until a later activation refreshed the view.

Replace the subscription-dependent local `@State` assignment with a stable provider that derives the QuickCaptureViewModel from the current runtime database, so already-set database values are handled on first render as well as later readiness changes.

Audit reference: docs.local/plans/2026-05-02-brainlayer-reliability-hardening/phase-1/findings.md § WARMING-MEMORY AUDIT.
New RED→GREEN coverage: BrainBarDashboardPanelControllerTests/testCommandBarBecomesReadyWhenDatabaseWasInstalledWhilePanelWasHidden.
Search input latency and graph tab stalls stay deferred to polish-pass Phase 1 / reliability-hardening Phase 2.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 5 total unresolved issues (including 4 from previous reviews).
Reviewed by Cursor Bugbot for commit b8588f4.
```swift
        return .rebuildSynchronously
    }
    return .skipBackfill
}
```
Trigram backfill ignores chunk limit on first migration
High Severity
trigramStartupRepairDecision returns .rebuildSynchronously whenever tableExists is false, regardless of chunkCount. This means the first time an existing large database (e.g., 360K chunks) gets this migration via ensureMigrations, the synchronousTrigramBackfillChunkLimit guard is bypassed and a full synchronous backfill occurs — the exact startup-blocking scenario the stability addendum describes fixing.
Additional Locations (1)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b8588f4f2f
```swift
    ackFloor: ackFloor,
    unreadOnly: unreadOnly
)
maxRowID = max(maxRowID, searchResult.maxRowID)
```
Advance unread watermark only for delivered rows
In unread mode, maxRowID is updated from every runFTSSearch result set before dedupe/limit truncation, so alias-expanded queries can mark rows as delivered that were never returned to the caller. A concrete case is results.count == limit - 1: the next expanded query fetches up to limit rows, appendDeduped adds only one row, but markDelivered still advances to that query’s highest rowid, skipping unseen unread chunks on subsequent polls.
```swift
if !tableExists || !schemaIsValid {
    return .rebuildSynchronously
```
Avoid synchronous trigram rebuild on first large migration
When chunks_fts_trigram is missing (tableExists == false), startup always chooses .rebuildSynchronously regardless of database size. On large existing installs upgrading to this build, that forces a full trigram backfill during app startup, which can hold long write locks and delay availability (the same class of lock-pressure issue this patch is trying to mitigate for desynced tables).
Actionable comments posted: 2
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 0f7cec59-61fb-49ff-a81c-8dfef1e401e4
📒 Files selected for processing (2)
- brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
- brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
- GitHub Check: Cursor Bugbot
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-03-18T00:12:08.774Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 87
File: brain-bar/Sources/BrainBar/BrainBarServer.swift:118-129
Timestamp: 2026-03-18T00:12:08.774Z
Learning: In Swift files under brain-bar/Sources/BrainBar, enforce that when a critical dependency like the database is nil due to startup ordering (socket before DB), any tool handler that accesses the database must throw an explicit error (e.g., ToolError.noDatabase) instead of returning a default/empty value. Do not allow silent defaults (e.g., guard let db else { return ... }). Flag patterns that silently return defaults when db is nil, as this masks startup timing issues. This guidance applies broadly to similar Swift files in the BrainBar module, not just this one location.
Applied to files:
brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
📚 Learning: 2026-03-29T18:45:40.988Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 133
File: brain-bar/Sources/BrainBar/BrainDatabase.swift:0-0
Timestamp: 2026-03-29T18:45:40.988Z
Learning: In the BrainBar module’s Swift database layer (notably BrainDatabase.swift), ensure that the `search()` function’s `unreadOnly=true` path orders results by the delivery frontier cursor so the watermark `maxRowID` stays contiguous. Specifically, when `unreadOnly` is enabled, the query must include `ORDER BY c.rowid ASC` (e.g., via `let orderByClause = unreadOnly ? "c.rowid ASC" : "f.rank"`). Do not replace the unread-only ordering with relevance-based sorting (e.g., `f.rank`) unconditionally or for the unread-only path, as it can introduce gaps in the watermark and incorrectly mark unseen rows as delivered. Flag any future change to the `ORDER BY` clause in this function that makes relevance sorting apply to the unread-only case.
Applied to files:
brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
🪛 SwiftLint (0.63.2)
brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift
[Warning] 6-6: Classes should have an explicit deinit method
(required_deinit)
brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift
[Warning] 156-156: Classes should have an explicit deinit method
(required_deinit)
```swift
private var commandBarViewModel: QuickCaptureViewModel? {
    commandBarProvider.viewModel(database: runtime.database)
```
Recreate the command-bar view model when the database instance changes.
This provider only initializes QuickCaptureViewModel once, then keeps returning that same instance even if runtime.database is later replaced or cleared. That leaves the command bar bound to a stale database handle after a reopen/swap.
Suggested fix
```diff
 @MainActor
 private final class BrainBarCommandBarViewModelProvider {
     private let panelState = QuickCapturePanelState()
+    private var currentDatabaseID: ObjectIdentifier?
     private var currentViewModel: QuickCaptureViewModel?

     func viewModel(database: BrainDatabase?) -> QuickCaptureViewModel? {
-        guard let database else { return currentViewModel }
-        if currentViewModel == nil {
+        guard let database else {
+            currentDatabaseID = nil
+            currentViewModel = nil
+            return nil
+        }
+
+        let databaseID = ObjectIdentifier(database)
+        if currentDatabaseID != databaseID {
+            currentDatabaseID = databaseID
             currentViewModel = QuickCaptureViewModel(db: database, panelState: panelState)
         }
         return currentViewModel
     }
 }
```

Based on learnings: when a critical dependency like the database is nil due to startup ordering, do not silently return defaults that mask the lifecycle problem.
Also applies to: 160-165
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Sources/BrainBar/BrainBarWindowRootView.swift` around lines 129 -
130, The current implementation keeps returning a cached QuickCaptureViewModel
from commandBarProvider.viewModel(database: runtime.database) even after
runtime.database changes; update the code so the commandBarViewModel is
recreated when runtime.database is replaced or cleared — either make
commandBarViewModel a purely computed property that calls
commandBarProvider.viewModel(database: runtime.database) each access, or add
explicit invalidation logic that compares the cached QuickCaptureViewModel's
database to runtime.database and discards it when they differ; update the same
pattern referenced around the other occurrence (lines ~160-165) so no stale view
model is returned after a database reopen/swap.
```swift
func testCommandBarBecomesReadyWhenDatabaseWasInstalledWhilePanelWasHidden() {
    let runtime = BrainBarRuntime()
    let controller = BrainBarDashboardPanelController(runtime: runtime)
    let tempDBPath = NSTemporaryDirectory() + "brainbar-commandbar-ready-\(UUID().uuidString).db"
    let db = BrainDatabase(path: tempDBPath)
    let collector = StatsCollector(
        dbPath: tempDBPath,
        daemonMonitor: DaemonHealthMonitor(targetPID: getpid())
    )
    defer {
        collector.stop()
        db.close()
        controller.dismiss()
        try? FileManager.default.removeItem(atPath: tempDBPath)
        try? FileManager.default.removeItem(atPath: tempDBPath + "-wal")
        try? FileManager.default.removeItem(atPath: tempDBPath + "-shm")
    }

    // Match launch order: AppDelegate creates the hidden NSPanel before the
    // async database install lands, then the user opens BrainBar later.
    _ = controller.panelForTesting.contentViewController?.view
    runMainRunLoop()

    runtime.install(collector: collector, injectionStore: nil, database: db)
    runMainRunLoop()

    controller.show()
    runMainRunLoop()

    let field = controller.panelForTesting.contentView.flatMap {
        findSubview(ofType: KeyHandlingCommandBarField.self, in: $0)
    }
    XCTAssertNotNil(
        field,
        "Command bar should create its ready text field when runtime.database was installed before the panel became visible."
    )
}

private func runMainRunLoop() {
    RunLoop.main.run(until: Date().addingTimeInterval(0.05))
}
```
Replace the fixed 50 ms run-loop sleeps with a timeout-based wait.
This test currently assumes AppKit/SwiftUI finishes panel and field setup within three 50 ms spins. That is likely to flake under slower CI scheduling. Poll for KeyHandlingCommandBarField until a reasonable timeout instead of relying on fixed delays.
Suggested fix
```diff
-        runMainRunLoop()
+        waitUntil {
+            controller.panelForTesting.contentView.flatMap {
+                findSubview(ofType: KeyHandlingCommandBarField.self, in: $0)
+            } != nil
+        }
         let field = controller.panelForTesting.contentView.flatMap {
             findSubview(ofType: KeyHandlingCommandBarField.self, in: $0)
         }
         XCTAssertNotNil(
@@
-    private func runMainRunLoop() {
-        RunLoop.main.run(until: Date().addingTimeInterval(0.05))
+    private func waitUntil(
+        timeout: TimeInterval = 1.0,
+        condition: () -> Bool
+    ) {
+        let deadline = Date().addingTimeInterval(timeout)
+        while !condition(), Date() < deadline {
+            RunLoop.main.run(until: Date().addingTimeInterval(0.01))
+        }
     }
```
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@brain-bar/Tests/BrainBarTests/BrainBarDashboardPanelControllerTests.swift`
around lines 44 - 84, The test
testCommandBarBecomesReadyWhenDatabaseWasInstalledWhilePanelWasHidden should
stop using fixed 50ms spins (runMainRunLoop) and instead poll until the
KeyHandlingCommandBarField appears or a timeout elapses; add a helper like
waitForCommandBarField(timeout:) that repeatedly runs the main run loop a short
interval and checks controller.panelForTesting.contentView (using
findSubview(ofType: KeyHandlingCommandBarField.self, in:)) and return the field
when found (or nil after timeout), then replace the explicit runMainRunLoop
calls and the final direct lookup with a call to this helper and XCTAssertNotNil
on its result so the test becomes robust under slower CI scheduling.
- Recent Hardening section traces each claim to a merged PR
- BrainBar build-script guards (#264, #265) called out at the install step
- Phase B preventive infra block (orchestrator#58, #60) connects deploy registry to the BrainBar build-stamp + canonical-build refuse layer
- In-flight PR #251 entry documents NSPanel revival + trigram FTS5 startup-safety guard (10K-chunk threshold) + preserved /tmp/brainbar.sock pub/sub plane

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

…267) Same commit message merged with Co-authored-by: Test User <test@example.com> and Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
`main` `CLAUDE.md` post-merge deploy-check docs commit dropped because the deploy-check workflow now lives in orchestrator via “feat: compact search default + brain_expand tool + entity dedup” #60

Verification
- `git diff --check origin/main...HEAD`
- `fix/brainbar-dashboard-flow`: 1811 passed, 9 skipped, 75 deselected, 1 xfailed
- MCP registration: 3 passed
- isolated eval/hook routing: 32 passed
- bun: 1 pass
- `test_fts5_determinism.sh`: PASS

Notes
`com.brainlayer.enrichment` / `com.brainlayer.watch`; exact failed tests passed after pausing those write-heavy launch agents (3 passed in 21.60s), then the full pre-push hook passed.

Summary by CodeRabbit
New Features
UI / UX
Tests
Documentation
Note
Restore BrainBar dashboard flow UI with knowledge graph atlas, injection feed, and agent presence
- `NSPanel` managed by `BrainBarDashboardPanelController`, which remembers size/position, appears on all spaces, and hosts the root dashboard view
- `BrainBarDashboardView` refactored into a multi-card layout with separate write/enrichment pulse lanes, a queue rail, agent presence strip, sparkline cards, and a collapsible diagnostics section driven by a new `DashboardFlowSummary` model
- `InjectionFeedView` refactored into a dashboard with burst grouping, activity ribbon, side-rail session/token metrics, and an overlay conversation viewer via the new `InjectionPresentation` snapshot pipeline
- `BrainDatabase` extended with schema migration, trigger-based sync, conditional startup rebuild/backfill, KG alias query expansion, and exact chunk-id short-circuit lookup
- `T1_T2_SOURCES` in `enrichment_tiers.py` expanded to include `codex_cli`, `cursor_cli`, and `gemini_cli`, updating session discovery SQL and tier source filters accordingly
- `AgentActivityMonitor` added to sample live agent CLI processes and publish per-family presence counts to the dashboard via `StatsCollector`
- `KGAtlasLayout` and a redesigned sidebar; dark/light mode support added throughout
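The exact chunk-id short-circuit listed above can be sketched as follows. This is a minimal illustration, not the production method: `lookupChunk` and `ftsSearch` are hypothetical stand-ins for the primary-key SELECT and FTS query that `BrainDatabase.search` actually performs.

```swift
import Foundation

// If the raw query is itself a known chunk id, return that row alone and skip
// FTS entirely, so decoy chunks that merely mention the id cannot outrank it.
func search(
    query: String,
    limit: Int,
    lookupChunk: (String) -> [String: Any]?,
    ftsSearch: (String, Int) -> [[String: Any]]
) -> [[String: Any]] {
    let trimmed = query.trimmingCharacters(in: .whitespacesAndNewlines)
    if let exact = lookupChunk(trimmed) {
        return [exact]  // exact-id fast path short-circuits FTS
    }
    return ftsSearch(trimmed, limit)
}
```

This mirrors the `testSearchExactChunkIDShortCircuitsFTS` expectation quoted earlier: the exact match returns exactly one row, and the decoy never appears.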
Note
Medium Risk
Moderate risk because it changes BrainBar’s primary windowing behavior and adds new SQLite schema/migration + FTS search paths (including conditional trigram index rebuild/backfill) that could affect startup time and query correctness on existing databases.
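The startup-time risk called out here hinges on the conditional rebuild/backfill decision. A minimal sketch of that decision, under stated assumptions: `plan_startup_repair`, its return labels, and the simple row-count desync check are all hypothetical; only the 10,000-chunk limit mirrors the PR's `synchronousTrigramBackfillChunkLimit`.

```python
# Mirrors the PR's synchronousTrigramBackfillChunkLimit (a Swift constant);
# everything else here is illustrative, not the actual BrainDatabase API.
SYNCHRONOUS_TRIGRAM_BACKFILL_CHUNK_LIMIT = 10_000

def plan_startup_repair(chunk_count: int, fts_row_count: int,
                        limit: int = SYNCHRONOUS_TRIGRAM_BACKFILL_CHUNK_LIMIT) -> str:
    """Decide what to do with a desynced trigram FTS index at startup."""
    if fts_row_count >= chunk_count:
        return "in_sync"                  # nothing to repair
    if chunk_count > limit:
        # Large desynced DB: keep schema/triggers current so new writes
        # are indexed, but defer the heavy backfill off the startup path.
        return "defer_backfill"
    return "synchronous_backfill"         # small DB: rebuild inline

print(plan_startup_repair(360_000, 0))    # -> defer_backfill
print(plan_startup_repair(5_000, 4_000))  # -> synchronous_backfill
```

The point of the threshold is that correctness degrades gracefully: a deferred backfill means some old chunks temporarily miss trigram matches, while startup never blocks behind a full-table rebuild.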
Overview
BrainBar’s menu-bar surface is reworked: the `MenuBarExtra` switches from a window-style surface to a menu with explicit actions (open dashboard/search/capture), and the main UI now opens in a floating `NSPanel` via `BrainBarDashboardPanelController` (with autosaved sizing/position and escape-to-dismiss).

Dashboard & tabs are redesigned for “flow” visibility: `BrainBarWindowRootView` refactors tab rendering/lifecycle (lazy activation + shared command bar VM), and `BrainBarDashboardView` becomes a scrollable, card-based layout with separate write/enrichment lane charts, queue/diagnostics sections, and a new agent-presence strip fed by `AgentActivityMonitor`.

Search and schema/migrations expand: `BrainDatabase` adds a trigram FTS table (`chunks_fts_trigram`) with triggers, analysis, and startup repair logic that may skip synchronous backfill on large desynced DBs; search now short-circuits exact chunk-id matches, expands queries via lexical/KG alias replacements (new `kg_entity_aliases` table + `kg_entities.importance` column), and falls back to trigram matches with deduping.

Graph and injection UIs are overhauled: `KGCanvasView` gains an atlas-style presentation (importance filtering, region backdrops, deterministic seeding via `KGAtlasLayout`, dark/light rendering tweaks, and updated sidebar styling), and `InjectionFeedView` becomes a richer, snapshot-driven feed (burst grouping, activity ribbon, side-rail metrics, and overlay-based conversation drilldown).

Reviewed by Cursor Bugbot for commit b8588f4.
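The trigram search path described above can be sketched self-containedly with SQLite’s FTS5 trigram tokenizer (available in SQLite 3.34+). The `chunks` / `chunks_fts_trigram` schema and trigger names below are guesses at the general shape, not the actual BrainBar migration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE chunks(id INTEGER PRIMARY KEY, body TEXT);

-- External-content FTS5 index over chunks, tokenized into trigrams
-- so substring-style queries match (requires SQLite >= 3.34).
CREATE VIRTUAL TABLE chunks_fts_trigram USING fts5(
    body, content='chunks', content_rowid='id', tokenize='trigram'
);

-- Triggers keep the FTS index in sync with base-table writes.
CREATE TRIGGER chunks_ai AFTER INSERT ON chunks BEGIN
    INSERT INTO chunks_fts_trigram(rowid, body) VALUES (new.id, new.body);
END;
CREATE TRIGGER chunks_ad AFTER DELETE ON chunks BEGIN
    INSERT INTO chunks_fts_trigram(chunks_fts_trigram, rowid, body)
    VALUES ('delete', old.id, old.body);
END;
""")

conn.execute("INSERT INTO chunks VALUES (1, 'BrainBarDashboardPanelController autosaves frame')")
conn.execute("INSERT INTO chunks VALUES (2, 'unrelated note about enrichment tiers')")

# A word-boundary tokenizer would miss this mid-identifier substring;
# the trigram index matches it.
rows = conn.execute(
    "SELECT rowid FROM chunks_fts_trigram WHERE chunks_fts_trigram MATCH ?",
    ("DashboardPanel",),
).fetchall()
print(rows)  # -> [(1,)]
```

This is why trigram works as a fallback lane: queries that fail word-level FTS (camel-case identifiers, partial tokens) still hit, at the cost of a larger index — which is exactly what makes the startup backfill expensive on a ~360K-chunk database.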
Stability addendum
- `59b4025d30bc59ff9917973b4537a2f365df67f1`: fix(brainbar): skip synchronous trigram backfill on large desynced DBs.
- Problem: synchronous rebuild/backfill of `chunks_fts_trigram` on the live ~360K-chunk database ran before `/tmp/brainbar.sock` opened, cascading lock pressure into enrichment and MCP clients.
- Fix: `synchronousTrigramBackfillChunkLimit = 10_000`; large desynced trigram tables now keep schema/triggers current and defer heavy backfill instead of blocking startup.
- Tests: `testLargeTrigramDesyncDoesNotForceSynchronousStartupRebuild`; `swift test --filter DatabaseTests`; `bash scripts/run_tests.sh`; `swift test`.
- Follow-ups in `docs.local/plans/2026-05-02-brainlayer-reliability-hardening/`: PR B explicit trigram maintenance command, PR D boot health contract.

Command-bar readiness fix for dashboard NSPanel surface
- `b8588f4f2f31f2650a6a663a25c50789c25f0717`: fix(brainbar): restore command bar readiness on already-set runtime database.
- Problem: `runtime.database` could be installed while the NSPanel/root view was hidden, leaving the command bar’s local view-model state nil and rendering “Warming memory…” until a later activation refreshed SwiftUI.
- Fix: readiness is now derived from `runtime.database` itself, covering already-set values and later readiness changes without relying only on a missed subscription edge.
- Tests: `testCommandBarBecomesReadyWhenDatabaseWasInstalledWhilePanelWasHidden`; `swift test --filter BrainBarDashboardPanelControllerTests`; full `swift test` (316/316); `bash scripts/run_tests.sh` pre-push gate PASS.
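The bug pattern behind this fix is general, not SwiftUI-specific: a consumer that only subscribes to change notifications never sees a value installed before the subscription existed. A hypothetical, non-Swift sketch of the fix (read the current value on attach, then subscribe); the `Runtime` and `CommandBarViewModel` names are illustrative stand-ins:

```python
from typing import Callable, Optional

class Runtime:
    """Minimal stand-in for an observable runtime.database slot."""
    def __init__(self) -> None:
        self.database: Optional[str] = None
        self._subscribers: list[Callable[[Optional[str]], None]] = []

    def set_database(self, db: str) -> None:
        self.database = db
        for callback in self._subscribers:
            callback(db)

    def subscribe(self, callback: Callable[[Optional[str]], None]) -> None:
        self._subscribers.append(callback)

class CommandBarViewModel:
    def __init__(self) -> None:
        self.ready = False   # renders a "warming" placeholder while False

    def attach(self, runtime: Runtime) -> None:
        # Key change: consult the CURRENT value first. If the database was
        # installed while the panel was hidden, there is no future
        # subscription edge to wait for.
        self.ready = runtime.database is not None
        runtime.subscribe(lambda db: setattr(self, "ready", db is not None))

runtime = Runtime()
runtime.set_database("brain.db")   # installed while the panel is hidden

vm = CommandBarViewModel()
vm.attach(runtime)                 # attaches later, still becomes ready
print(vm.ready)  # -> True
```

The same view model stays correct in the original ordering too: attaching before the database exists leaves it not-ready until the subscription fires.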