Skip to content

feat: BrainBar Swift daemon — MCP over Unix socket#84

Merged
EtanHey merged 6 commits intomainfrom
feat/phase-3-brainbar-daemon
Mar 17, 2026
Merged

feat: BrainBar Swift daemon — MCP over Unix socket#84
EtanHey merged 6 commits intomainfrom
feat/phase-3-brainbar-daemon

Conversation

@EtanHey
Copy link
Copy Markdown
Owner

@EtanHey EtanHey commented Mar 17, 2026

Summary

  • New Swift menu bar daemon (brain-bar/) that owns the BrainLayer SQLite database and serves 8 MCP tools over /tmp/brainbar.sock
  • Eliminates 10 Python brainlayer-mcp processes (931 MB RSS measured) → 1 native daemon (~40 MB est., 209 KB binary)
  • Zero external dependencies: direct SQLite3 C API, POSIX sockets, FTS5, WAL mode
  • Single-writer architecture eliminates all SQLITE_BUSY lock contention by design

Components (28 tests, all GREEN)

Module Tests What
MCPFraming 7 Content-Length parser/encoder (partial, multi-message, split header)
MCPRouter 7 JSON-RPC dispatch: initialize, tools/list (8 tools), tools/call, notifications
BrainDatabase 10 SQLite3 + FTS5 + WAL + PRAGMAs (busy_timeout, cache_size, synchronous)
SocketIntegration 4 E2E: connect → initialize → tools/list → brain_search over Unix socket

Benchmark

Metric Before (Python) After (BrainBar) Savings
Processes 10 1 9 eliminated
RAM (RSS) 931 MB ~40 MB est. ~891 MB
Binary Python 3.13 (~50MB) 209 KB native 99.6% smaller
Lock contention 10 writers 1 single-writer Zero SQLITE_BUSY

Files

  • brain-bar/ — Full Swift Package (Package.swift, Sources/, Tests/)
  • brain-bar/build-app.sh — Build and install as /Applications/BrainBar.app
  • brain-bar/bundle/ — Info.plist + LaunchAgent plist
  • .mcp.json.example — Updated with socat config (socat STDIO UNIX-CONNECT:/tmp/brainbar.sock)

Test plan

  • 28 Swift tests pass (cd brain-bar && swift test)
  • 846 Python tests pass (zero regressions)
  • Socat smoke test: full round-trip MCP Content-Length framing
  • Build .app bundle and verify menu bar appearance
  • Verify real DB access (312K chunks, 6.6GB)
  • Compare latency: Python MCP vs BrainBar for brain_search

Part of Three-Daemon Sprint (Phase 3). See also Phase 2 (VoiceBar MCP) and Phase 4 (cmux MCP).

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Adds BrainBar macOS menu‑bar app with a local UNIX‑socket MCP interface, integrated SQLite brain store, MCP framing and routing, and a set of MCP tools (search, store, recall, entity, digest, update, expand, tags).
  • Tests

    • Adds unit and integration suites covering the database, framing, router, and end‑to‑end socket flows.
  • Chores

    • Adds macOS app bundling and launch config, .gitignore updates, Swift package manifest, and example MCP config entries for legacy/fallback and multiplexing servers.

New Swift menu bar daemon that owns the BrainLayer SQLite database
and serves MCP tools over /tmp/brainbar.sock. Eliminates 10 Python
processes (931MB RAM) with a single native daemon (~40MB est.).

Components (28 tests, all passing):
- MCPFraming: Content-Length parser/encoder (7 tests)
- MCPRouter: JSON-RPC dispatch for 8 BrainLayer tools (7 tests)
- BrainDatabase: SQLite3 C API with FTS5, WAL, PRAGMAs (10 tests)
- BrainBarServer: POSIX Unix socket + E2E integration (4 tests)

Architecture: direct SQLite3 C bindings (zero external deps),
single-writer (eliminates all SQLITE_BUSY errors), 209KB binary.

Includes: build-app.sh, Info.plist, LaunchAgent plist,
.mcp.json.example updated with socat config.

Part of Three-Daemon Sprint (Phase 3).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copy link
Copy Markdown

@greptile-apps greptile-apps Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Your free trial has ended. If you'd like to continue receiving code reviews, you can add a payment method here.

@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Mar 17, 2026

📝 Walkthrough

Walkthrough

Adds BrainBar: a macOS menu‑bar app that starts a UNIX‑domain MCP JSON‑RPC server backed by a SQLite FTS database, plus MCP framing/router, build/bundle scripts, tests, and updates to .mcp.json.example adding alternate server entries.

Changes

Cohort / File(s) Summary
MCP config example
.mcp.json.example
Added top-level _comment2; updated brainlayer to use socat with args for /tmp/brainbar.sock; added brainlayer-legacy (brainlayer-mcp) and cmux (node pointing at local cmux-mcp script).
Bundle & build
brain-bar/.gitignore, brain-bar/build-app.sh, brain-bar/bundle/Info.plist, brain-bar/bundle/com.brainlayer.brainbar.plist
Added ignore rules, app bundling script, app Info.plist, and a launchd plist for autostart/logging.
Swift package & UI entry
brain-bar/Package.swift, brain-bar/Sources/BrainBar/BrainBarApp.swift
New SwiftPM manifest and macOS MenuBarExtra SwiftUI app with AppDelegate that manages BrainBarServer lifecycle.
Server & framing
brain-bar/Sources/BrainBar/BrainBarServer.swift, brain-bar/Sources/BrainBar/MCPFraming.swift
New BrainBarServer: UNIX socket accept loop, non-blocking per-client I/O, MCP framing integration and lifecycle; MCPFraming: Content-Length stream parser/encoder for MCP messages.
Routing & tools
brain-bar/Sources/BrainBar/MCPRouter.swift
New MCPRouter with JSON‑RPC handling (initialize, tools/list, tools/call), tool dispatch for eight brain_* tools, DB injection via setDatabase.
Database layer
brain-bar/Sources/BrainBar/BrainDatabase.swift
New BrainDatabase: sqlite3 wrapper with WAL, FTS5, pragmas, schema (chunks + FTS index), store/insert/search APIs and error handling.
Tests: unit & integration
brain-bar/Tests/BrainBarTests/DatabaseTests.swift, .../MCPFramingTests.swift, .../MCPRouterTests.swift, .../SocketIntegrationTests.swift
Added tests covering DB pragmas/schema/search/store/concurrency, MCP framing edge cases/encoding, router behavior (initialize, tools/list, tools/call, errors, notifications), and end‑to‑end socket integration over Unix socket.
App metadata & packaging
brain-bar/Package.swift, brain-bar/bundle/*
Added package and bundle metadata (targets, version, LSMinimumSystemVersion, LSUIElement) for bundling.
Minor Python formatting
src/brainlayer/ingest/codex.py, tests/test_ingest_codex.py
Non-functional reflows and small import cleanups; style/formatting changes only.

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant Server as BrainBarServer
    participant Framing as MCPFraming
    participant Router as MCPRouter
    participant DB as BrainDatabase

    Client->>Server: connect /tmp/brainbar.sock
    Server-->>Client: accept

    Client->>Server: send framed initialize request
    Server->>Framing: append(data)
    Framing-->>Server: extractMessages -> request
    Server->>Router: handle(request)
    Router-->>Server: initialize response
    Server->>Framing: encode(response)
    Framing-->>Server: framed Data
    Server-->>Client: send response

    Client->>Server: send framed tools/call (brain_search)
    Server->>Framing: append(data)
    Framing-->>Server: extractMessages -> request
    Server->>Router: handle(request)
    Router->>DB: search(query)
    DB-->>Router: results
    Router-->>Server: search response
    Server->>Framing: encode(response)
    Framing-->>Server: framed Data
    Server-->>Client: send response
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐰 I stitched a socket, soft and bright,
I nibbled bytes through velvet night,
Framed RPCs and search alight,
Menu whispers, DB hums just right,
A hopping rabbit cheers: "Goodnight!"

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 46.15% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and concisely summarizes the main change: a new Swift daemon serving MCP tools via Unix socket, which is the primary objective of the PR.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feat/phase-3-brainbar-daemon
📝 Coding Plan
  • Generate coding plan for human review comments

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

EtanHey and others added 3 commits March 17, 2026 02:56
Pre-CodeRabbit fixes for 5 MAJOR + 4 MEDIUM issues:

MAJOR fixes:
- SQL injection in pragma(): whitelist allowed pragma names
- JSON injection in tool handlers: use JSONSerialization, not string interpolation
- Socket path buffer overflow: add length check before memcpy
- Content-Length DoS: cap at 10MB max payload
- FULLMUTEX retained: needed for WAL concurrent reads + close() race safety

MEDIUM fixes:
- FTS5 sanitization: strip * and handle empty queries
- Listen backlog: 5 → 16 for connection burst handling
- sendResponse: handle EAGAIN on non-blocking sockets
- Database exposes isOpen for server startup validation

28/28 tests still GREEN.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove unused `import sys` and fix import sort order.
Pre-existing on main, not from BrainBar changes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copy link
Copy Markdown

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 14

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.mcp.json.example:
- Around line 16-21: The example file contains a hardcoded absolute user path in
the "cmux" entry (keys "command" and "args") which should be replaced with a
generic placeholder or environment variable; update the "args" value to
something like a reusable placeholder (e.g.
"/path/to/orchestrator/tools/cmux-mcp/dist/index.js" or reference an env var) so
the example is portable and remove any user-specific paths.

In `@brain-bar/build-app.sh`:
- Around line 43-45: The script ends without a trailing newline which violates
POSIX conventions; update the end of the file (after the final echo line that
prints "[build-app] DB: ~/.local/share/brainlayer/brainlayer.db") to include a
single newline character so the file terminates with a newline byte (i.e., add
one empty line at the end of the script).
- Line 13: The script hardcodes APP_DIR="/Applications/BrainBar.app" which
requires elevated privileges; update build-app.sh to allow an environment
override (use APP_DIR if set, otherwise default to a user-writable path like
"$HOME/Applications/BrainBar.app"), add a permission check on the chosen APP_DIR
(writable or parent dir writable) and if not writable either prompt to re-run
with sudo or attempt to escalate (or fall back to ~/Applications), and ensure
all references to APP_DIR in the script use this variable so installs can honor
the override and avoid permission errors.

In `@brain-bar/bundle/com.brainlayer.brainbar.plist`:
- Around line 18-21: The launchd plist currently writes unconstrained logs to
StandardOutPath and StandardErrorPath (/tmp/brainbar.stdout.log and
/tmp/brainbar.stderr.log); update the plist to avoid unbounded growth by either
switching to system logging (os_log/ASL) or moving logs to a managed location
and adding a log rotation strategy (e.g., configure newsyslog for the target
files), and add the launchd ThrottleInterval key to reduce rapid restart
logging; specifically, edit the plist keys StandardOutPath and StandardErrorPath
(or replace them by enabling system logging) and add ThrottleInterval to the
same plist to mitigate restart log storms.

In `@brain-bar/Sources/BrainBar/BrainBarServer.swift`:
- Around line 36-39: The startup currently runs asynchronously in start() via
queue.async and swallows failures inside startOnQueue() (including the database
open and the other logged-but-return paths), so change the API to surface
errors: make start() accept a completion handler (or make it
throwing/synchronous) and have startOnQueue() propagate thrown errors or call
the completion with a failure; stop swallowing the result of openDatabase (or
the databaseOpen function) and propagate its error upward instead of only
logging, and on the main thread call the supplied completion (or rethrow) so the
menu bar app can handle/terminate if the daemon failed to become ready; update
any call sites that relied on the old fire-and-forget start() to wait for the
completion or handle the thrown error.
- Around line 115-116: The socket is set nonblocking via fcntl(clientFD,
F_SETFL, flags | O_NONBLOCK) but sendResponse exits on a single failed write (in
sendResponse where write() result checked), which drops partial data when write
returns -1 with errno==EAGAIN; fix by changing sendResponse to handle partial
writes and retry logic: loop to advance a byte-offset until all bytes are
written, handling short writes and EINTR, and if write would block
(EAGAIN/EWOULDBLOCK) enqueue the remaining bytes in a per-connection send buffer
and register a write-ready event (e.g. a DispatchSource or equivalent) tied to
clientFD to resume sending when writable; ensure the writable handler drains the
queue, removes the source when empty, and closes on fatal errors.
- Around line 63-72: The copy into sockaddr_un.sun_path must validate the
socketPath length to avoid overflowing sun_path: compute the capacity via
MemoryLayout.size(ofValue: addr.sun_path) (or 108) and if
socketPath.utf8CString.count > capacity, bail out (return/throw/log) with a
clear error; otherwise copy only up to capacity bytes and ensure
null-termination (or use a bounded copy like strncpy semantics). Update the code
around addr, socketPath, and the withUnsafeMutablePointer block in
BrainBarServer.swift to perform this length check and bounded copy before
calling bind().

In `@brain-bar/Sources/BrainBar/MCPFraming.swift`:
- Around line 15-17: The append(_ data: Data) method appends unbounded data into
buffer allowing a client to exhaust memory; introduce caps by defining constants
like MAX_BUFFER_SIZE and MAX_HEADER_SIZE, and modify mutating func append(_
data: Data) to first check and enforce buffer.count + data.count <=
MAX_BUFFER_SIZE (reject/return error or drop the message if exceeded) and also
ensure header parsing checks that the header separator index does not exceed
MAX_HEADER_SIZE before accepting headers; additionally, when parsing
Content-Length elsewhere, enforce a MAX_CONTENT_LENGTH and reject bodies
claiming more than that. Ensure you reference and update buffer handling in
append(_:) and the header/content-length parsing logic to perform these bounds
checks and fail fast.

In `@brain-bar/Sources/BrainBar/MCPRouter.swift`:
- Around line 142-150: The brain_search handler (handleBrainSearch) currently
only reads query and num_results and ignores advertised filters (project, tag,
importance_min, detail) and does not clamp num_results; update handleBrainSearch
to parse optional parameters project, tag, importance_min, and detail from args,
validate/convert them, and pass them into the search call (or construct a
SearchOptions/Filter object used by db.search) so the database enforces those
filters; also clamp num_results to a maximum of 100 before using it and ensure
db.search or the constructed filter uses importance_min and detail to control
result scoring and verbosity (refer to handleBrainSearch and db.search /
SearchOptions to locate the relevant code to modify).
- Around line 168-201: The handlers (handleBrainRecall, handleBrainEntity,
handleBrainDigest, handleBrainUpdate, handleBrainExpand, handleBrainTags)
currently return hard-coded "success" JSON while doing no work against
BrainDatabase; instead either implement each handler to read/mutate
BrainDatabase according to the tool schema (use BrainDatabase APIs to query
entities, create chunks for digest, validate and use chunk_id for update/expand,
and return real tags/context), or if not yet implemented return a clear
non-success error (e.g., throw a ToolError.notImplemented or return an explicit
error JSON) so clients won't treat placeholders as successful; update tools/list
registration if you choose to leave unimplemented so these endpoints aren't
advertised. Ensure you reference and use the BrainDatabase instance and validate
required parameters (query, content, action, chunk_id) in the respective handler
functions.

In `@brain-bar/Tests/BrainBarTests/MCPFramingTests.swift`:
- Around line 108-109: Replace the force unwrap of String(data:framed, encoding:
.utf8)! with an XCTUnwrap call so test failures produce diagnostics instead of
crashes: call let str = try XCTUnwrap(String(data: framed, encoding: .utf8)) (or
use XCTAssertNotNil + unwrap) in the test around MCPFraming.encode(response) so
that the decoding result is validated safely; ensure the surrounding test
function imports XCTest and propagates the thrown error with try where needed.

In `@brain-bar/Tests/BrainBarTests/MCPRouterTests.swift`:
- Around line 150-153: The test assertion in MCPRouterTests.swift is ambiguous
because `response["id"] == nil` compares an Optional<Any> to nil and can be
unclear; update the XCTAssertTrue in the test that follows `let response =
router.handle(request)` to explicitly assert notification behavior—either assert
`response.isEmpty` (preferred: `XCTAssertTrue(response.isEmpty, "Notifications
should not produce a response")`) or cast the lookup explicitly (e.g., check
`response["id"] as Any? == nil`) so the intent is clear; modify the assertion
near router.handle(request) accordingly.

In `@brain-bar/Tests/BrainBarTests/SocketIntegrationTests.swift`:
- Around line 17-19: Replace the fixed Thread.sleep(forTimeInterval: 0.2) used
after server.start() with a readiness probe that polls/attempts to connect until
a deadline: repeatedly try to open a socket/connection to the server bind
address/port (or call a server.isReady() API if available) with short sleeps
between attempts and fail the test if the deadline is exceeded; update the
SocketIntegrationTests setup to remove Thread.sleep(...) and use this
connect-or-poll loop tied to server.start() so tests wait deterministically for
the socket to be bound.
- Around line 124-126: The test opens a blocking socket (fd) and later calls
read(), which can hang; after creating the socket (the socket(...) call that
sets fd) set the socket into non-blocking mode via fcntl(fd, F_SETFL,
O_NONBLOCK) or, alternatively, perform a poll()/select() on fd with a 5-second
timeout immediately before the read loop to ensure read() will not block
indefinitely; update the code around the socket creation/defer block and the
read() call to use the chosen approach (reference fd and the read()
call/poll/select) so the test respects the 5-second deadline.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 83506470-421b-49ba-8410-4b126b98d65a

📥 Commits

Reviewing files that changed from the base of the PR and between 31e669c and b8296a3.

📒 Files selected for processing (15)
  • .mcp.json.example
  • brain-bar/.gitignore
  • brain-bar/Package.swift
  • brain-bar/Sources/BrainBar/BrainBarApp.swift
  • brain-bar/Sources/BrainBar/BrainBarServer.swift
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
  • brain-bar/Sources/BrainBar/MCPFraming.swift
  • brain-bar/Sources/BrainBar/MCPRouter.swift
  • brain-bar/Tests/BrainBarTests/DatabaseTests.swift
  • brain-bar/Tests/BrainBarTests/MCPFramingTests.swift
  • brain-bar/Tests/BrainBarTests/MCPRouterTests.swift
  • brain-bar/Tests/BrainBarTests/SocketIntegrationTests.swift
  • brain-bar/build-app.sh
  • brain-bar/bundle/Info.plist
  • brain-bar/bundle/com.brainlayer.brainbar.plist
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2026-03-12T14:22:54.809Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Applies to src/brainlayer/mcp/**/*.py : MCP tools must implement: `brain_search`, `brain_store`, `brain_recall`, `brain_entity`, `brain_expand`, `brain_update`, `brain_digest`, `brain_get_person` with legacy `brainlayer_*` aliases for backward compatibility

Applied to files:

  • .mcp.json.example
📚 Learning: 2026-03-12T14:22:54.809Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Applies to setup.py,pyproject.toml,setup.cfg : MCP server must be executable via `brainlayer-mcp` entrypoint

Applied to files:

  • .mcp.json.example
📚 Learning: 2026-03-12T14:22:54.809Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Use `/tmp/brainlayer.sock` as the daemon socket and `/tmp/brainlayer-enrichment.lock` as the enrichment worker lock file

Applied to files:

  • .mcp.json.example
🪛 SwiftLint (0.63.2)
brain-bar/Sources/BrainBar/BrainBarServer.swift

[Warning] 11-11: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Tests/BrainBarTests/MCPRouterTests.swift

[Warning] 11-11: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Tests/BrainBarTests/MCPFramingTests.swift

[Warning] 15-15: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Tests/BrainBarTests/SocketIntegrationTests.swift

[Warning] 9-9: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Sources/BrainBar/MCPRouter.swift

[Warning] 12-12: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Sources/BrainBar/BrainBarApp.swift

[Warning] 11-11: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Tests/BrainBarTests/DatabaseTests.swift

[Warning] 14-14: Classes should have an explicit deinit method

(required_deinit)

🔇 Additional comments (9)
brain-bar/.gitignore (1)

1-4: LGTM!

Standard Swift package ignore patterns covering build artifacts, SwiftPM metadata, Xcode projects, and derived data.

brain-bar/bundle/com.brainlayer.brainbar.plist (1)

1-25: Launchd plist configuration is appropriate.

The KeepAlive with SuccessfulExit: false ensures the daemon restarts on crashes while allowing clean shutdowns. The ProcessType: Interactive is suitable for a menu bar app. The binary path correctly matches the build script output at /Applications/BrainBar.app/Contents/MacOS/BrainBar.

brain-bar/bundle/Info.plist (1)

1-26: LGTM!

Info.plist is correctly configured for a menu bar daemon:

  • LSUIElement: true hides the dock icon
  • LSMinimumSystemVersion: 14.0 matches Package.swift's .macOS(.v14)
  • Bundle identifier com.brainlayer.brainbar matches the launchd plist label
brain-bar/Tests/BrainBarTests/MCPRouterTests.swift (2)

11-154: Test coverage is comprehensive.

Good coverage of MCP router functionality including initialize handshake, tools listing, tool dispatch, error handling for unknown tools/methods, and notification semantics. The static analysis warning about required_deinit is a false positive for XCTestCase subclasses.


58-62: Test correctly reflects the MCPRouter implementation.

The test expects brain_tags, which matches the actual tools implemented in MCPRouter.swift (lines 133 and 323). However, the documented Python MCP tools specification includes brain_get_person, not brain_tags. Verify whether the Swift implementation intentionally uses brain_tags as an alternative tool or if brain_get_person should be implemented instead.

brain-bar/Tests/BrainBarTests/MCPFramingTests.swift (1)

15-124: Excellent test coverage for MCP framing edge cases.

The tests comprehensively cover critical framing scenarios:

  • Single/multiple message parsing
  • Partial message buffering and completion
  • Split headers across reads
  • Response encoding
  • Zero-length frame rejection

This coverage is essential for a streaming protocol parser. The static analysis warning about required_deinit is a false positive for XCTestCase subclasses.

.mcp.json.example (1)

6-7: Update socket path to match documented convention.

Change /tmp/brainbar.sock to /tmp/brainlayer.sock. The main brainlayer daemon and all documentation use /tmp/brainlayer.sock as the socket path. The /tmp/brainbar.sock path is specific to the BrainBar service (a separate component in brain-bar/), and should not be used in the default MCP example configuration.

brain-bar/Package.swift (1)

1-23: Package manifest is well-structured and consistent with deployment configuration.

The package correctly defines an executable target linking the system SQLite3 library and a test target with dependency on the main module. The macOS 14 minimum specified in Package.swift matches the Info.plist LSMinimumSystemVersion (14.0). Swift 6.0 tools version is current and stable.

brain-bar/Sources/BrainBar/BrainDatabase.swift (1)

58-91: Verify that legacy rows are searchable after chunks_fts is created.

Lines 58-91 create the FTS table and future sync triggers, but they never backfill existing chunks rows. If this daemon is opening an older brainlayer.db that does not already have a populated chunks_fts, brain_search will stay empty until you run a one-time rebuild/backfill.

🛠️ One way to backfill after creation
+        let hadFTSTable = (try? tableExists("chunks_fts")) ?? false
         exec("""
             CREATE VIRTUAL TABLE IF NOT EXISTS chunks_fts USING fts5(
                 content,
                 summary,
                 tags,
                 content='chunks',
                 content_rowid='rowid'
             )
         """)
+        if !hadFTSTable {
+            exec("INSERT INTO chunks_fts(chunks_fts) VALUES('rebuild')")
+        }

Comment thread .mcp.json.example
Comment on lines +16 to 21
"cmux": {
"command": "node",
"args": [
"/Users/etanheyman/Gits/orchestrator/tools/cmux-mcp/dist/index.js"
]
}
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Remove hardcoded user-specific path from example file.

The cmux entry contains a hardcoded absolute path (/Users/etanheyman/...) that won't work for other developers. Example files should use placeholder paths or environment variables.

💡 Suggested fix
     "cmux": {
       "command": "node",
       "args": [
-        "/Users/etanheyman/Gits/orchestrator/tools/cmux-mcp/dist/index.js"
+        "${ORCHESTRATOR_PATH}/tools/cmux-mcp/dist/index.js"
       ]
     }

Or use a comment placeholder like "/path/to/orchestrator/tools/cmux-mcp/dist/index.js".

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"cmux": {
"command": "node",
"args": [
"/Users/etanheyman/Gits/orchestrator/tools/cmux-mcp/dist/index.js"
]
}
"cmux": {
"command": "node",
"args": [
"${ORCHESTRATOR_PATH}/tools/cmux-mcp/dist/index.js"
]
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.mcp.json.example around lines 16 - 21, The example file contains a
hardcoded absolute user path in the "cmux" entry (keys "command" and "args")
which should be replaced with a generic placeholder or environment variable;
update the "args" value to something like a reusable placeholder (e.g.
"/path/to/orchestrator/tools/cmux-mcp/dist/index.js" or reference an env var) so
the example is portable and remove any user-specific paths.

Comment thread brain-bar/build-app.sh Outdated
Comment thread brain-bar/build-app.sh
Comment on lines +43 to +45
echo "[build-app] Done: $APP_DIR"
echo "[build-app] Socket: /tmp/brainbar.sock"
echo "[build-app] DB: ~/.local/share/brainlayer/brainlayer.db"
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick | 🔵 Trivial

Add trailing newline to script.

POSIX convention requires files to end with a newline. Some tools may warn or behave unexpectedly without it.

💡 Fix
 echo "[build-app] Done: $APP_DIR"
 echo "[build-app] Socket: /tmp/brainbar.sock"
 echo "[build-app] DB: ~/.local/share/brainlayer/brainlayer.db"
+
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
echo "[build-app] Done: $APP_DIR"
echo "[build-app] Socket: /tmp/brainbar.sock"
echo "[build-app] DB: ~/.local/share/brainlayer/brainlayer.db"
echo "[build-app] Done: $APP_DIR"
echo "[build-app] Socket: /tmp/brainbar.sock"
echo "[build-app] DB: ~/.local/share/brainlayer/brainlayer.db"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/build-app.sh` around lines 43 - 45, The script ends without a
trailing newline which violates POSIX conventions; update the end of the file
(after the final echo line that prints "[build-app] DB:
~/.local/share/brainlayer/brainlayer.db") to include a single newline character
so the file terminates with a newline byte (i.e., add one empty line at the end
of the script).

Comment thread brain-bar/bundle/com.brainlayer.brainbar.plist
Comment on lines +36 to +39
func start() {
queue.async { [weak self] in
self?.startOnQueue()
}
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Don't let startup fail silently.

Lines 36-39 detach startup from launch, and the failure paths in Lines 58-60 and 79-92 only log and return. Line 50 also cannot surface a database-open failure back to the caller, so the menu bar app can finish launching even when the daemon never became ready.

Also applies to: 48-52, 58-60, 79-92

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainBarServer.swift` around lines 36 - 39, The
startup currently runs asynchronously in start() via queue.async and swallows
failures inside startOnQueue() (including the database open and the other
logged-but-return paths), so change the API to surface errors: make start()
accept a completion handler (or make it throwing/synchronous) and have
startOnQueue() propagate thrown errors or call the completion with a failure;
stop swallowing the result of openDatabase (or the databaseOpen function) and
propagate its error upward instead of only logging, and on the main thread call
the supplied completion (or rethrow) so the menu bar app can handle/terminate if
the daemon failed to become ready; update any call sites that relied on the old
fire-and-forget start() to wait for the completion or handle the thrown error.

Comment thread brain-bar/Sources/BrainBar/MCPRouter.swift Outdated
Comment thread brain-bar/Tests/BrainBarTests/MCPFramingTests.swift Outdated
Comment on lines +150 to +153
let response = router.handle(request)
// Notifications should return empty/nil response (no id to respond to)
XCTAssertTrue(response.isEmpty || response["id"] == nil)
}
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick | 🔵 Trivial

Notification response assertion may not behave as expected.

The condition response["id"] == nil compares Any? to nil, which checks if the key exists and its value is not nil. If the key is absent, this returns nil (the Any?), and comparing Optional<Any>.none == nil is true. However, the intent might be clearer with explicit handling.

Consider clarifying:

XCTAssertTrue(response.isEmpty || response["id"] as Any? == nil)

Or simply check the response is empty for notifications:

XCTAssertTrue(response.isEmpty, "Notifications should not produce a response")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Tests/BrainBarTests/MCPRouterTests.swift` around lines 150 - 153,
The test assertion in MCPRouterTests.swift is ambiguous because `response["id"]
== nil` compares an Optional<Any> to nil and can be unclear; update the
XCTAssertTrue in the test that follows `let response = router.handle(request)`
to explicitly assert notification behavior—either assert `response.isEmpty`
(preferred: `XCTAssertTrue(response.isEmpty, "Notifications should not produce a
response")`) or cast the lookup explicitly (e.g., check `response["id"] as Any?
== nil`) so the intent is clear; modify the assertion near
router.handle(request) accordingly.

Comment on lines +17 to +19
server.start()
// Give server time to bind
Thread.sleep(forTimeInterval: 0.2)
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Replace the fixed sleep with a readiness probe.

Lines 17-19 assume the server binds within 200ms. That will flake on slower CI hosts and turns real startup regressions into intermittent test failures. Poll for socket readiness or connect until a deadline instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Tests/BrainBarTests/SocketIntegrationTests.swift` around lines 17 -
19, Replace the fixed Thread.sleep(forTimeInterval: 0.2) used after
server.start() with a readiness probe that polls/attempts to connect until a
deadline: repeatedly try to open a socket/connection to the server bind
address/port (or call a server.isReady() API if available) with short sleeps
between attempts and fail the test if the deadline is exceeded; update the
SocketIntegrationTests setup to remove Thread.sleep(...) and use this
connect-or-poll loop tied to server.start() so tests wait deterministically for
the socket to be bound.

Comment on lines +124 to +126
let fd = socket(AF_UNIX, SOCK_STREAM, 0)
guard fd >= 0 else { throw NSError(domain: "test", code: 1, userInfo: [NSLocalizedDescriptionKey: "socket() failed"]) }
defer { close(fd) }
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Get the file and view the relevant section with context
git ls-files | grep -i "SocketIntegrationTests.swift"

Repository: EtanHey/brainlayer

Length of output: 120


🏁 Script executed:

# Read the file to see the full context
wc -l brain-bar/Tests/BrainBarTests/SocketIntegrationTests.swift

Repository: EtanHey/brainlayer

Length of output: 124


🏁 Script executed:

# Read the full file with line numbers
cat -n brain-bar/Tests/BrainBarTests/SocketIntegrationTests.swift

Repository: EtanHey/brainlayer

Length of output: 8534


Set the socket to non-blocking mode or use poll/select to enforce the 5-second timeout.

Line 166 calls read() on a blocking socket, so if the server stalls, the test hangs indefinitely and never reaches the deadline check. The socket is created without the non-blocking flag, and no fcntl(fd, F_SETFL, O_NONBLOCK) is called. Either set the socket non-blocking after creation, or use poll() or select() with the 5-second timeout before the read loop.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Tests/BrainBarTests/SocketIntegrationTests.swift` around lines 124
- 126, The test opens a blocking socket (fd) and later calls read(), which can
hang; after creating the socket (the socket(...) call that sets fd) set the
socket into non-blocking mode via fcntl(fd, F_SETFL, O_NONBLOCK) or,
alternatively, perform a poll()/select() on fd with a 5-second timeout
immediately before the read loop to ensure read() will not block indefinitely;
update the code around the socket creation/defer block and the read() call to
use the chosen approach (reference fd and the read() call/poll/select) so the
test respects the 5-second deadline.

@EtanHey
Copy link
Copy Markdown
Owner Author

EtanHey commented Mar 17, 2026

@coderabbitai review

@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Mar 17, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

MAJOR fixes:
- FTS5 backfill: rebuild index when opening existing DB (critical for 312K chunk DB)
- Stub handlers: return notImplemented error instead of fake success
- brain_search: clamp num_results to max 100

MEDIUM fixes:
- MCPFraming: add 16MB total buffer limit (DoS protection)
- build-app.sh: allow BRAINBAR_APP_DIR env override (no sudo needed)
- Test: replace force unwrap with XCTUnwrap
- LaunchAgent: add ThrottleInterval for restart storms

Design decisions (commented on PR):
- /tmp/brainbar.sock intentional (not brainlayer.sock) — avoids conflict during migration
- brain_tags replaces brain_get_person — per Phase B spec

28/28 tests GREEN.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@EtanHey
Copy link
Copy Markdown
Owner Author

EtanHey commented Mar 17, 2026

@coderabbitai The socket path /tmp/brainbar.sock is intentional and should NOT be changed to /tmp/brainlayer.sock. BrainBar is a NEW daemon that will coexist with the existing Python brainlayer-mcp during the migration period. Using a different socket path avoids conflicts and allows A/B testing. Once BrainBar is proven stable, the Python server will be retired and the .mcp.json will point to /tmp/brainbar.sock via socat. Please learn this for future reviews.

@EtanHey
Copy link
Copy Markdown
Owner Author

EtanHey commented Mar 17, 2026

@coderabbitai brain_tags is intentionally used instead of brain_get_person. The Phase B spec (merged in PR #72) added brain_tags as the 8th MCP tool, replacing brain_get_person in the tool set. The Python MCP server already has this tool. Please learn this for future reviews.

@EtanHey
Copy link
Copy Markdown
Owner Author

EtanHey commented Mar 17, 2026

@coderabbitai review

@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Mar 17, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Mar 17, 2026

Tip

For best results, initiate chat on the files or code changes.

@EtanHey: Understood! I've noted the intent behind the /tmp/brainbar.sock path — it's the new BrainBar Swift daemon's socket, intentionally separate from the Python brainlayer-mcp socket (/tmp/brainlayer.sock) to allow safe coexistence, conflict-free A/B testing, and a gradual migration path. I'll make sure not to flag this in future reviews.


✏️ Learnings added
Learnt from: EtanHey
URL: https://github.com/EtanHey/brainlayer/pull/84

Timestamp: 2026-03-17T01:04:11.732Z
Learning: The socket path `/tmp/brainbar.sock` is intentional for the BrainBar Swift daemon (brain-bar/) and must NOT be changed to `/tmp/brainlayer.sock`. BrainBar is a new daemon that coexists with the existing Python `brainlayer-mcp` (which uses `/tmp/brainlayer.sock`) during the migration period. The different paths avoid conflicts and allow A/B testing. Once BrainBar is proven stable, the Python server will be retired and `.mcp.json` will point to `/tmp/brainbar.sock` via socat.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-14T02:20:54.656Z
Learning: Request codex review, cursor review, and bugbot review for BrainLayer PRs

Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-14T02:20:54.656Z
Learning: Treat retrieval correctness, write safety, and MCP stability as critical-path concerns in BrainLayer reviews

Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Applies to src/brainlayer/mcp/**/*.py : MCP tools must implement: `brain_search`, `brain_store`, `brain_recall`, `brain_entity`, `brain_expand`, `brain_update`, `brain_digest`, `brain_get_person` with legacy `brainlayer_*` aliases for backward compatibility

Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Use `/tmp/brainlayer.sock` as the daemon socket and `/tmp/brainlayer-enrichment.lock` as the enrichment worker lock file

Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Applies to src/brainlayer/**/*.py : Python package structure should follow the layout: `src/brainlayer/` for package code, with separate modules for `vector_store.py`, `embeddings.py`, `daemon.py`, `dashboard/`, and `mcp/` for different concerns

Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Applies to src/brainlayer/**/*.py : Prioritize MLX (`Qwen2.5-Coder-14B-Instruct-4bit`) on Apple Silicon (port 8080) as the enrichment backend; fall back to Ollama (`glm-4.7-flash` on port 11434) after 3 consecutive MLX failures; support backend override via `BRAINLAYER_ENRICH_BACKEND` environment variable

@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Mar 17, 2026

Tip

For best results, initiate chat on the files or code changes.

@EtanHey, understood! I've noted that brain_tags is the correct 8th MCP tool, replacing brain_get_person per the Phase B spec from PR #72. I'll also remove the outdated learning that listed brain_get_person.

(´• ω •`) 🐇✨ Got it, I'll remember both clarifications for future reviews!


✏️ Learnings added
Learnt from: EtanHey
URL: https://github.com/EtanHey/brainlayer/pull/84

Timestamp: 2026-03-17T01:04:22.473Z
Learning: Applies to src/brainlayer/mcp/**/*.py and brain-bar/Sources/BrainBar/MCPRouter.swift: The 8 required MCP tools are `brain_search`, `brain_store`, `brain_recall`, `brain_entity`, `brain_expand`, `brain_update`, `brain_digest`, `brain_tags`. `brain_tags` is the 8th tool, replacing `brain_get_person`, as defined in the Phase B spec merged in PR `#72`. The Python MCP server already implements `brain_tags`. Legacy `brainlayer_*` aliases must be maintained for backward compatibility.

Learnt from: EtanHey
URL: https://github.com/EtanHey/brainlayer/pull/84

Timestamp: 2026-03-17T01:04:22.473Z
Learning: In BrainLayer, the BrainBar daemon uses the socket path `/tmp/brainbar.sock` (NOT `/tmp/brainlayer.sock`). BrainBar is a new native Swift daemon designed to coexist with the existing Python `brainlayer-mcp` server during the migration period. Different socket paths avoid conflicts and enable A/B testing. Once BrainBar is proven stable, the Python server will be retired.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


⛔ Learnings removed
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Applies to src/brainlayer/mcp/**/*.py : MCP tools must implement: `brain_search`, `brain_store`, `brain_recall`, `brain_entity`, `brain_expand`, `brain_update`, `brain_digest`, `brain_get_person` with legacy `brainlayer_*` aliases for backward compatibility

🧠 Learnings used
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-14T02:20:54.656Z
Learning: Request codex review, cursor review, and bugbot review for BrainLayer PRs

Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-14T02:20:54.656Z
Learning: Treat retrieval correctness, write safety, and MCP stability as critical-path concerns in BrainLayer reviews

Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Applies to src/brainlayer/mcp/**/*.py : MCP tools must implement: `brain_search`, `brain_store`, `brain_recall`, `brain_entity`, `brain_expand`, `brain_update`, `brain_digest`, `brain_get_person` with legacy `brainlayer_*` aliases for backward compatibility

Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Use `/tmp/brainlayer.sock` as the daemon socket and `/tmp/brainlayer-enrichment.lock` as the enrichment worker lock file

@EtanHey
Copy link
Copy Markdown
Owner Author

EtanHey commented Mar 17, 2026

@coderabbitai review

@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Mar 17, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Copy link
Copy Markdown

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 8

♻️ Duplicate comments (2)
brain-bar/Sources/BrainBar/BrainBarServer.swift (1)

36-39: ⚠️ Potential issue | 🟠 Major

Surface startup readiness/failure instead of fire-and-forget logging.

start() returns immediately, and the failure paths in startOnQueue() only NSLog + return. That lets the menu bar app finish launching while the daemon never became reachable. Please add a readiness/error callback, or make startup synchronous/throwing on the server queue, so launch can fail fast.

Also applies to: 48-60, 85-98

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainBarServer.swift` around lines 36 - 39, The
start() method currently dispatches startOnQueue() fire-and-forget on queue
which logs failures via NSLog and returns, so startup readiness/failure is never
propagated; change start() and related callers (e.g., startOnQueue(), any launch
entrypoints) to surface results by either (A) adding a completion handler or
Result/async callback parameter to start() that is invoked on completion/failure
from startOnQueue() with detailed Error, or (B) perform the work synchronously
on the server queue and make start() throwing so callers can catch errors;
ensure all failure paths inside startOnQueue() and the other startup helpers
return or throw the Error (not just NSLog) and that the chosen
completion/throwing path reports detailed failure reasons to the launcher so it
can fail fast.
brain-bar/Sources/BrainBar/MCPRouter.swift (1)

142-152: ⚠️ Potential issue | 🟠 Major

brain_search still advertises parameters it never honors.

toolDefinitions publishes project, tag, importance_min, and detail, but handleBrainSearch only uses query and num_results. Line 146 also only clamps the upper bound, so zero/negative values still reach the database layer. Either plumb these parameters into BrainDatabase.search and validate num_results, or remove them from the schema until they work.

🛠️ Minimum guard for the limit path
-        let limit = min(args["num_results"] as? Int ?? 5, 100)
+        let rawLimit = args["num_results"] as? Int ?? 5
+        let limit = min(max(rawLimit, 1), 100)

Based on learnings: Treat retrieval correctness, write safety, and MCP stability as critical-path concerns in BrainLayer reviews.

Also applies to: 245-260

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/MCPRouter.swift` around lines 142 - 152,
handleBrainSearch currently only uses query and num_results (and only clamps an
upper bound), while toolDefinitions advertises project, tag, importance_min, and
detail; update handleBrainSearch to validate num_results (ensure it's a positive
Int, apply a min of 1 and max of 100) and pass the additional parameters into
BrainDatabase.search (or remove those keys from toolDefinitions if they are not
supported). Locate handleBrainSearch and ensure you extract and validate
args["num_results"] as a positive Int, extract args["project"], args["tag"],
args["importance_min"], and args["detail"] with safe types, then call
BrainDatabase.search with a signature that accepts these filters (or add an
overload/mapping function) so the search honors the advertised parameters; if
you choose to remove them from the schema, update toolDefinitions to stop
advertising project/tag/importance_min/detail instead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@brain-bar/build-app.sh`:
- Around line 4-6: Update the script header comment so the documented default
output path matches the actual default used in the script (change "Output:
/Applications/BrainBar.app" to "Output: $HOME/Applications/BrainBar.app" or
similar), ensuring the header mirrors the script's default install directory
referenced in the file (the default install variable assigned around the top of
build-app.sh).
- Around line 36-37: The script blindly copies "$BUNDLE_DIR/Info.plist" and
"$BINARY" into "$APP_DIR/Contents/", so add an explicit existence check for
"$BUNDLE_DIR/Info.plist" before the cp (and similarly consider checking
"$BINARY"), and if the file is missing call an explanatory echo to stderr and
exit non‑zero; update the block surrounding the cp lines (the cp
"$BUNDLE_DIR/Info.plist" "$APP_DIR/Contents/" and cp "$BINARY"
"$APP_DIR/Contents/MacOS/BrainBar") to perform test -f checks for the source
files and fail fast with a clear error message mentioning the missing path and
the variables BUNDLE_DIR or BINARY.

In `@brain-bar/bundle/com.brainlayer.brainbar.plist`:
- Around line 7-10: The plist hardcodes /Applications/BrainBar.app while
build-app.sh uses
APP_DIR="${BRAINBAR_APP_DIR:-$HOME/Applications/BrainBar.app}", causing a path
mismatch; update the plist to use a placeholder (e.g., __APP_DIR__) in the
ProgramArguments array and update the install flow to substitute that
placeholder with the runtime APP_DIR value (as set by build-app.sh) — mirror the
pattern used by scripts/launchd/install.sh (which replaces __HOME__) so the
final installed bundle points to the actual APP_DIR rather than a hardcoded
path.
- Around line 1-27: The launchd plist com.brainlayer.brainbar.plist is never
installed; add an automated install path by adding a "brainbar" case to
scripts/launchd/install.sh that follows the existing pattern (copy
brain-bar/bundle/com.brainlayer.brainbar.plist into ~/Library/LaunchAgents and
run the appropriate launchctl bootstrap/unbootstrap commands for the current GUI
user), or alternatively modify brain-bar/build-app.sh to perform the copy and
launchctl bootstrap after building; update the script to log success/failure and
ensure idempotence (check for existing plist) so users no longer must install
the plist manually.

In `@brain-bar/Sources/BrainBar/BrainBarServer.swift`:
- Around line 167-183: The sendResponse(fd: Int32, response: [String: Any]) is
currently busy-waiting on write with repeated usleep calls on the server serial
queue; change this to a non-blocking, bounded strategy: attempt one immediate
non-blocking write pass (using write as currently done), and if not fully
written, stop retrying on the server queue and either (a) schedule the remaining
bytes to be written by attaching a DispatchSource.makeWriteSource for the client
fd that resumes when writable and completes the tail write, or (b) if you prefer
a simpler fix, enforce a small retry budget (e.g., N attempts) and then
close(fd) and log an error; implement this using the existing sendResponse,
framed buffer, and fd variables and avoid any infinite loop on the server queue.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 21-40: openAndConfigure() currently marks the DB open immediately
and only logs failures from exec(_:) and createSchema(), so startup can falsely
succeed; change exec(_:) and createSchema() to return Bool or throw on error,
call sqlite3_open_v2(...) and if it succeeds then run all PRAGMA/DDL and only
set isOpen = true and allow the instance to be published to
MCPRouter/BrainBarServer after all exec/createSchema calls succeed; on any
failure, log the error, close and nil out db (sqlite3_close(db); db = nil),
leave isOpen false, and propagate the failure (return Bool/throw) so callers
know initialization failed. Ensure callers that currently assume success check
the new return/throw value before registering the DB.

In `@brain-bar/Sources/BrainBar/MCPRouter.swift`:
- Around line 79-93: handleToolsCall currently rejects legacy brainlayer_* tool
names before validation/dispatch; add a normalization step that maps legacy
aliases (e.g., brainlayer_search => brain_search, brainlayer_store =>
brain_store, etc.) to the canonical brain_* names and use the normalized name
for both the toolDefinitions existence check and the dispatchTool call;
implement a single reusable alias map (used wherever you validate/dispatch
tools, including the other tools-handling block) so legacy clients are accepted
and dispatchTool always receives canonical names.

In `@src/brainlayer/ingest/codex.py`:
- Around line 378-382: Replace direct uses of DEFAULT_DB_PATH in the codex
ingest fallback branches with a call to the standard resolver get_db_path():
import get_db_path from paths (e.g., from ..paths import get_db_path) and set
db_path = get_db_path() wherever the code currently does db_path =
DEFAULT_DB_PATH (both the block handling db_path is None and the similar branch
around lines 454-458); this ensures environment overrides and canonical path
resolution are respected.

---

Duplicate comments:
In `@brain-bar/Sources/BrainBar/BrainBarServer.swift`:
- Around line 36-39: The start() method currently dispatches startOnQueue()
fire-and-forget on queue which logs failures via NSLog and returns, so startup
readiness/failure is never propagated; change start() and related callers (e.g.,
startOnQueue(), any launch entrypoints) to surface results by either (A) adding
a completion handler or Result/async callback parameter to start() that is
invoked on completion/failure from startOnQueue() with detailed Error, or (B)
perform the work synchronously on the server queue and make start() throwing so
callers can catch errors; ensure all failure paths inside startOnQueue() and the
other startup helpers return or throw the Error (not just NSLog) and that the
chosen completion/throwing path reports detailed failure reasons to the launcher
so it can fail fast.

In `@brain-bar/Sources/BrainBar/MCPRouter.swift`:
- Around line 142-152: handleBrainSearch currently only uses query and
num_results (and only clamps an upper bound), while toolDefinitions advertises
project, tag, importance_min, and detail; update handleBrainSearch to validate
num_results (ensure it's a positive Int, apply a min of 1 and max of 100) and
pass the additional parameters into BrainDatabase.search (or remove those keys
from toolDefinitions if they are not supported). Locate handleBrainSearch and
ensure you extract and validate args["num_results"] as a positive Int, extract
args["project"], args["tag"], args["importance_min"], and args["detail"] with
safe types, then call BrainDatabase.search with a signature that accepts these
filters (or add an overload/mapping function) so the search honors the
advertised parameters; if you choose to remove them from the schema, update
toolDefinitions to stop advertising project/tag/importance_min/detail instead.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 5b2c578c-1cd8-49e4-ba63-12b3651471cc

📥 Commits

Reviewing files that changed from the base of the PR and between b8296a3 and ff33abc.

📒 Files selected for processing (9)
  • brain-bar/Sources/BrainBar/BrainBarServer.swift
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
  • brain-bar/Sources/BrainBar/MCPFraming.swift
  • brain-bar/Sources/BrainBar/MCPRouter.swift
  • brain-bar/Tests/BrainBarTests/MCPFramingTests.swift
  • brain-bar/build-app.sh
  • brain-bar/bundle/com.brainlayer.brainbar.plist
  • src/brainlayer/ingest/codex.py
  • tests/test_ingest_codex.py
📜 Review details
🧰 Additional context used
📓 Path-based instructions (2)
src/brainlayer/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

src/brainlayer/**/*.py: Python package structure should follow the layout: src/brainlayer/ for package code, with separate modules for vector_store.py, embeddings.py, daemon.py, dashboard/, and mcp/ for different concerns
Use paths.py:get_db_path() for all database path resolution instead of hardcoding paths; support environment variable overrides and canonical path fallback (~/.local/share/brainlayer/brainlayer.db)
Lint and format Python code using ruff check src/ and ruff format src/
Preserve verbatim content for ai_code, stack_trace, and user_message message types during classification and chunking; skip noise content entirely; summarize build_log content; extract structure-only for dir_listing
Use AST-aware chunking with tree-sitter; never split stack traces; mask large tool output during chunking
Handle SQLite concurrency by implementing retry logic on SQLITE_BUSY errors; ensure each worker uses its own database connection
Prioritize MLX (Qwen2.5-Coder-14B-Instruct-4bit) on Apple Silicon (port 8080) as the enrichment backend; fall back to Ollama (glm-4.7-flash on port 11434) after 3 consecutive MLX failures; support backend override via BRAINLAYER_ENRICH_BACKEND environment variable
Brain graph API must expose endpoints: /brain/graph, /brain/node/{node_id} (FastAPI)
Backlog API must support endpoints: /backlog/items with GET, POST, PATCH, DELETE operations (FastAPI)
Provide brainlayer brain-export command to export brain graph as JSON for dashboard consumption
Provide brainlayer export-obsidian command to export as Markdown vault with backlinks and tags
For bulk database operations: stop enrichment workers first, checkpoint WAL before and after operations, drop FTS triggers before bulk deletes, batch deletes in 5-10K chunks with checkpoint every 3 batches, never delete from chunks while FTS trigger is active

Files:

  • src/brainlayer/ingest/codex.py
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Flag risky DB or concurrency changes explicitly and do not hand-wave lock behavior
Enforce one-write-at-a-time concurrency constraint; reads are safe but brain_digest is write-heavy and must not run in parallel with other MCP work
Run pytest before claiming behavior changed safely; current test suite has 929 tests

Files:

  • src/brainlayer/ingest/codex.py
  • tests/test_ingest_codex.py
🧠 Learnings (6)
📓 Common learnings
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-03-17T01:04:11.732Z
Learning: The socket path `/tmp/brainbar.sock` is intentional for the BrainBar Swift daemon (brain-bar/) and must NOT be changed to `/tmp/brainlayer.sock`. BrainBar is a new daemon that coexists with the existing Python `brainlayer-mcp` (which uses `/tmp/brainlayer.sock`) during the migration period. The different paths avoid conflicts and allow A/B testing. Once BrainBar is proven stable, the Python server will be retired and `.mcp.json` will point to `/tmp/brainbar.sock` via socat.
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-03-17T01:04:22.473Z
Learning: In BrainLayer, the BrainBar daemon uses the socket path `/tmp/brainbar.sock` (NOT `/tmp/brainlayer.sock`). BrainBar is a new native Swift daemon designed to coexist with the existing Python `brainlayer-mcp` server during the migration period. Different socket paths avoid conflicts and enable A/B testing. Once BrainBar is proven stable, the Python server will be retired.
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-03-17T01:04:22.473Z
Learning: Applies to src/brainlayer/mcp/**/*.py and brain-bar/Sources/BrainBar/MCPRouter.swift: The 8 required MCP tools are `brain_search`, `brain_store`, `brain_recall`, `brain_entity`, `brain_expand`, `brain_update`, `brain_digest`, `brain_tags`. `brain_tags` is the 8th tool, replacing `brain_get_person`, as defined in the Phase B spec merged in PR `#72`. The Python MCP server already implements `brain_tags`. Legacy `brainlayer_*` aliases must be maintained for backward compatibility.
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-14T02:20:54.656Z
Learning: Request codex review, cursor review, and bugbot review for BrainLayer PRs
📚 Learning: 2026-03-17T01:04:11.732Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-03-17T01:04:11.732Z
Learning: The socket path `/tmp/brainbar.sock` is intentional for the BrainBar Swift daemon (brain-bar/) and must NOT be changed to `/tmp/brainlayer.sock`. BrainBar is a new daemon that coexists with the existing Python `brainlayer-mcp` (which uses `/tmp/brainlayer.sock`) during the migration period. The different paths avoid conflicts and allow A/B testing. Once BrainBar is proven stable, the Python server will be retired and `.mcp.json` will point to `/tmp/brainbar.sock` via socat.

Applied to files:

  • brain-bar/Sources/BrainBar/BrainBarServer.swift
  • brain-bar/bundle/com.brainlayer.brainbar.plist
📚 Learning: 2026-03-17T01:04:22.473Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-03-17T01:04:22.473Z
Learning: In BrainLayer, the BrainBar daemon uses the socket path `/tmp/brainbar.sock` (NOT `/tmp/brainlayer.sock`). BrainBar is a new native Swift daemon designed to coexist with the existing Python `brainlayer-mcp` server during the migration period. Different socket paths avoid conflicts and enable A/B testing. Once BrainBar is proven stable, the Python server will be retired.

Applied to files:

  • brain-bar/Sources/BrainBar/BrainBarServer.swift
  • brain-bar/bundle/com.brainlayer.brainbar.plist
📚 Learning: 2026-03-17T01:04:22.473Z
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-03-17T01:04:22.473Z
Learning: Applies to src/brainlayer/mcp/**/*.py and brain-bar/Sources/BrainBar/MCPRouter.swift: The 8 required MCP tools are `brain_search`, `brain_store`, `brain_recall`, `brain_entity`, `brain_expand`, `brain_update`, `brain_digest`, `brain_tags`. `brain_tags` is the 8th tool, replacing `brain_get_person`, as defined in the Phase B spec merged in PR `#72`. The Python MCP server already implements `brain_tags`. Legacy `brainlayer_*` aliases must be maintained for backward compatibility.

Applied to files:

  • brain-bar/Sources/BrainBar/BrainBarServer.swift
  • brain-bar/Sources/BrainBar/MCPRouter.swift
📚 Learning: 2026-03-14T02:20:54.656Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-14T02:20:54.656Z
Learning: Treat retrieval correctness, write safety, and MCP stability as critical-path concerns in BrainLayer reviews

Applied to files:

  • brain-bar/Sources/BrainBar/MCPRouter.swift
📚 Learning: 2026-03-12T14:22:54.809Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Applies to src/brainlayer/**/*.py : Preserve verbatim content for `ai_code`, `stack_trace`, and `user_message` message types during classification and chunking; skip `noise` content entirely; summarize `build_log` content; extract structure-only for `dir_listing`

Applied to files:

  • src/brainlayer/ingest/codex.py
🪛 SwiftLint (0.63.2)
brain-bar/Tests/BrainBarTests/MCPFramingTests.swift

[Warning] 15-15: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Sources/BrainBar/BrainBarServer.swift

[Warning] 11-11: Classes should have an explicit deinit method

(required_deinit)

brain-bar/Sources/BrainBar/MCPRouter.swift

[Warning] 12-12: Classes should have an explicit deinit method

(required_deinit)

🔇 Additional comments (6)
tests/test_ingest_codex.py (1)

226-232: Formatting-only test updates look good.

These call-site reflows are non-functional and keep test intent/readability intact.

Also applies to: 242-248, 254-260, 266-272, 278-287, 295-301, 309-315, 324-330, 337-343, 349-355, 361-367, 373-379, 394-400, 408-425

src/brainlayer/ingest/codex.py (1)

447-447: These refactors are clean and behavior-preserving.

The comprehension/query/argparse reflow remains clear and functionally equivalent.

Also applies to: 466-466, 504-510

brain-bar/build-app.sh (4)

8-14: LGTM — configuration and env override look solid.

set -euo pipefail ensures early failure on errors. The BRAINBAR_APP_DIR override addresses the earlier feedback about parameterizing the install location, and defaulting to $HOME/Applications avoids permission issues.


15-24: Build and binary verification are correct.

The script invokes swift build twice (lines 16 and 19), but the second call is just to retrieve --show-bin-path. Swift's incremental build cache makes this effectively a no-op the second time. The binary existence check provides a clear error message on failure.


44-45: Socket and DB paths are correct and consistent.

The echoed paths (/tmp/brainbar.sock and ~/.local/share/brainlayer/brainlayer.db) match the defaults in BrainBarServer.swift. Based on learnings, the socket path /tmp/brainbar.sock is intentional for coexistence with the Python MCP server during migration.


45-45: Add trailing newline to script.

POSIX convention requires files to end with a newline. This was flagged in a previous review and does not appear to be addressed yet.

Comment thread brain-bar/build-app.sh
Comment on lines +4 to +6
# Usage: bash brain-bar/build-app.sh
#
# Output: /Applications/BrainBar.app
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Update comment to reflect new default install path.

Line 6 states Output: /Applications/BrainBar.app, but the actual default (line 13) is now $HOME/Applications/BrainBar.app. This inconsistency may confuse users reading the header.

📝 Suggested fix
 # Usage: bash brain-bar/build-app.sh
 #
-# Output: /Applications/BrainBar.app
+# Output: ~/Applications/BrainBar.app (override with BRAINBAR_APP_DIR)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Usage: bash brain-bar/build-app.sh
#
# Output: /Applications/BrainBar.app
# Usage: bash brain-bar/build-app.sh
#
# Output: ~/Applications/BrainBar.app (override with BRAINBAR_APP_DIR)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/build-app.sh` around lines 4 - 6, Update the script header comment
so the documented default output path matches the actual default used in the
script (change "Output: /Applications/BrainBar.app" to "Output:
$HOME/Applications/BrainBar.app" or similar), ensuring the header mirrors the
script's default install directory referenced in the file (the default install
variable assigned around the top of build-app.sh).

Comment thread brain-bar/build-app.sh
Comment on lines +36 to +37
cp "$BUNDLE_DIR/Info.plist" "$APP_DIR/Contents/"
cp "$BINARY" "$APP_DIR/Contents/MacOS/BrainBar"
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick | 🔵 Trivial

Consider verifying Info.plist exists before copying.

If $BUNDLE_DIR/Info.plist is missing, the script fails with a generic cp error. An explicit check would improve debuggability.

🛡️ Optional improvement
+if [ ! -f "$BUNDLE_DIR/Info.plist" ]; then
+    echo "[build-app] ERROR: Info.plist not found at $BUNDLE_DIR/Info.plist"
+    exit 1
+fi
 cp "$BUNDLE_DIR/Info.plist" "$APP_DIR/Contents/"
 cp "$BINARY" "$APP_DIR/Contents/MacOS/BrainBar"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
cp "$BUNDLE_DIR/Info.plist" "$APP_DIR/Contents/"
cp "$BINARY" "$APP_DIR/Contents/MacOS/BrainBar"
if [ ! -f "$BUNDLE_DIR/Info.plist" ]; then
echo "[build-app] ERROR: Info.plist not found at $BUNDLE_DIR/Info.plist"
exit 1
fi
cp "$BUNDLE_DIR/Info.plist" "$APP_DIR/Contents/"
cp "$BINARY" "$APP_DIR/Contents/MacOS/BrainBar"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/build-app.sh` around lines 36 - 37, The script blindly copies
"$BUNDLE_DIR/Info.plist" and "$BINARY" into "$APP_DIR/Contents/", so add an
explicit existence check for "$BUNDLE_DIR/Info.plist" before the cp (and
similarly consider checking "$BINARY"), and if the file is missing call an
explanatory echo to stderr and exit non‑zero; update the block surrounding the
cp lines (the cp "$BUNDLE_DIR/Info.plist" "$APP_DIR/Contents/" and cp "$BINARY"
"$APP_DIR/Contents/MacOS/BrainBar") to perform test -f checks for the source
files and fail fast with a clear error message mentioning the missing path and
the variables BUNDLE_DIR or BINARY.

Comment on lines +1 to +27
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.brainlayer.brainbar</string>
<key>ProgramArguments</key>
<array>
<string>/Applications/BrainBar.app/Contents/MacOS/BrainBar</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<dict>
<key>SuccessfulExit</key>
<false/>
</dict>
<key>StandardOutPath</key>
<string>/tmp/brainbar.stdout.log</string>
<key>StandardErrorPath</key>
<string>/tmp/brainbar.stderr.log</string>
<key>ProcessType</key>
<string>Interactive</string>
<key>ThrottleInterval</key>
<integer>10</integer>
</dict>
</plist>
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick | 🔵 Trivial

No installation mechanism for this plist.

The plist is defined but never installed. Looking at the codebase:

  • scripts/launchd/install.sh handles index, enrich, and wal-checkpoint but has no case for brainbar
  • brain-bar/build-app.sh builds the app bundle but doesn't copy the plist to ~/Library/LaunchAgents/ or run launchctl bootstrap

Users would need to manually install:

cp brain-bar/bundle/com.brainlayer.brainbar.plist ~/Library/LaunchAgents/
launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.brainlayer.brainbar.plist

Consider either:

  1. Adding a brainbar case to scripts/launchd/install.sh
  2. Adding plist installation to brain-bar/build-app.sh
  3. Documenting manual installation steps in README or build script output
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/bundle/com.brainlayer.brainbar.plist` around lines 1 - 27, The
launchd plist com.brainlayer.brainbar.plist is never installed; add an automated
install path by adding a "brainbar" case to scripts/launchd/install.sh that
follows the existing pattern (copy
brain-bar/bundle/com.brainlayer.brainbar.plist into ~/Library/LaunchAgents and
run the appropriate launchctl bootstrap/unbootstrap commands for the current GUI
user), or alternatively modify brain-bar/build-app.sh to perform the copy and
launchctl bootstrap after building; update the script to log success/failure and
ensure idempotence (check for existing plist) so users no longer must install
the plist manually.

Comment on lines +7 to +10
<key>ProgramArguments</key>
<array>
<string>/Applications/BrainBar.app/Contents/MacOS/BrainBar</string>
</array>
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Path mismatch with build script defaults.

The plist hardcodes /Applications/BrainBar.app/... but build-app.sh defaults to $HOME/Applications/BrainBar.app:

APP_DIR="${BRAINBAR_APP_DIR:-$HOME/Applications/BrainBar.app}"

If a user builds with defaults and installs this plist, launchd will fail to find the binary because it's looking in the wrong directory.

Options to fix:

  1. Change the default in build-app.sh to /Applications/BrainBar.app (requires admin privileges)
  2. Use a placeholder in the plist and have an install script substitute the actual path (like scripts/launchd/install.sh does with __HOME__)
  3. Document that BRAINBAR_APP_DIR=/Applications/BrainBar.app must be set
🛠️ Suggested fix: Add placeholder substitution
     <key>ProgramArguments</key>
     <array>
-        <string>/Applications/BrainBar.app/Contents/MacOS/BrainBar</string>
+        <string>__APP_DIR__/Contents/MacOS/BrainBar</string>
     </array>

Then in an install script:

sed -e "s|__APP_DIR__|$APP_DIR|g" "$PLIST_SRC" > "$PLIST_DST"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/bundle/com.brainlayer.brainbar.plist` around lines 7 - 10, The
plist hardcodes /Applications/BrainBar.app while build-app.sh uses
APP_DIR="${BRAINBAR_APP_DIR:-$HOME/Applications/BrainBar.app}", causing a path
mismatch; update the plist to use a placeholder (e.g., __APP_DIR__) in the
ProgramArguments array and update the install flow to substitute that
placeholder with the runtime APP_DIR value (as set by build-app.sh) — mirror the
pattern used by scripts/launchd/install.sh (which replaces __HOME__) so the
final installed bundle points to the actual APP_DIR rather than a hardcoded
path.

Comment on lines +167 to +183
private func sendResponse(fd: Int32, response: [String: Any]) {
guard let framed = try? MCPFraming.encode(response) else { return }
framed.withUnsafeBytes { ptr in
var totalWritten = 0
while totalWritten < framed.count {
let n = write(fd, ptr.baseAddress!.advanced(by: totalWritten), framed.count - totalWritten)
if n < 0 {
if errno == EAGAIN || errno == EWOULDBLOCK {
// Kernel buffer full — brief retry
usleep(1000) // 1ms
continue
}
break // Real error
}
if n == 0 { break } // EOF
totalWritten += n
}
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Don't spin on a blocked client socket on the server queue.

Lines 174-177 retry forever with usleep(1000) on the same serial queue that accepts clients and processes requests. One slow client can pin the queue here indefinitely and stall the entire daemon. Queue the unwritten tail behind a writable source, or close the client after a bounded retry budget.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainBarServer.swift` around lines 167 - 183, The
sendResponse(fd: Int32, response: [String: Any]) is currently busy-waiting on
write with repeated usleep calls on the server serial queue; change this to a
non-blocking, bounded strategy: attempt one immediate non-blocking write pass
(using write as currently done), and if not fully written, stop retrying on the
server queue and either (a) schedule the remaining bytes to be written by
attaching a DispatchSource.makeWriteSource for the client fd that resumes when
writable and completes the tail write, or (b) if you prefer a simpler fix,
enforce a small retry budget (e.g., N attempts) and then close(fd) and log an
error; implement this using the existing sendResponse, framed buffer, and fd
variables and avoid any infinite loop on the server queue.

Comment on lines +21 to +40
private func openAndConfigure() {
// FULLMUTEX: SQLite serializes C-level access. Needed because WAL concurrent
// reads come from GCD threads, and close() could race with in-flight queries.
let flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_FULLMUTEX
let rc = sqlite3_open_v2(path, &db, flags, nil)
guard rc == SQLITE_OK else {
NSLog("[BrainBar] Failed to open database at %@: %d", path, rc)
return
}
isOpen = true

// Configure PRAGMAs
exec("PRAGMA journal_mode = WAL")
exec("PRAGMA busy_timeout = 5000")
exec("PRAGMA cache_size = -64000")
exec("PRAGMA synchronous = NORMAL")

// Create schema
createSchema()
}
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Fail database initialization when required setup does not complete.

Line 30 marks the handle open before the WAL/FTS/triggers setup runs, and every exec(_:) failure path only logs. If any of that DDL fails, BrainBarServer still hands this instance to MCPRouter as if startup succeeded. Make openAndConfigure()/createSchema() fail fast and only publish an open database after the required setup completes.

Also applies to: 42-101, 253-261

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 21 - 40,
openAndConfigure() currently marks the DB open immediately and only logs
failures from exec(_:) and createSchema(), so startup can falsely succeed;
change exec(_:) and createSchema() to return Bool or throw on error, call
sqlite3_open_v2(...) and if it succeeds then run all PRAGMA/DDL and only set
isOpen = true and allow the instance to be published to MCPRouter/BrainBarServer
after all exec/createSchema calls succeed; on any failure, log the error, close
and nil out db (sqlite3_close(db); db = nil), leave isOpen false, and propagate
the failure (return Bool/throw) so callers know initialization failed. Ensure
callers that currently assume success check the new return/throw value before
registering the DB.

Comment on lines +79 to +93
private func handleToolsCall(id: Any, params: [String: Any]) -> [String: Any] {
guard let toolName = params["name"] as? String else {
return jsonRPCError(id: id, code: -32602, message: "Missing tool name")
}

let arguments = params["arguments"] as? [String: Any] ?? [:]

// Check tool exists
guard Self.toolDefinitions.contains(where: { ($0["name"] as? String) == toolName }) else {
return jsonRPCError(id: id, code: -32601, message: "Unknown tool: \(toolName)")
}

// Dispatch to handler
do {
let result = try dispatchTool(name: toolName, arguments: arguments)
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Normalize legacy brainlayer_* names before tool validation/dispatch.

handleToolsCall rejects anything not present in toolDefinitions, and dispatchTool only accepts the canonical brain_* names. Older clients using brainlayer_search, brainlayer_store, etc. now get Unknown tool instead of the compatibility path this migration needs.

🔁 Minimal normalization
+        let canonicalToolName = toolName.replacingOccurrences(
+            of: "brainlayer_",
+            with: "brain_",
+            options: [.anchored]
+        )
-        guard Self.toolDefinitions.contains(where: { ($0["name"] as? String) == toolName }) else {
+        guard Self.toolDefinitions.contains(where: { ($0["name"] as? String) == canonicalToolName }) else {
             return jsonRPCError(id: id, code: -32601, message: "Unknown tool: \(toolName)")
         }
 ...
-            let result = try dispatchTool(name: toolName, arguments: arguments)
+            let result = try dispatchTool(name: canonicalToolName, arguments: arguments)

Based on learnings: Legacy brainlayer_* aliases must be maintained for backward compatibility.

Also applies to: 117-137

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/MCPRouter.swift` around lines 79 - 93,
handleToolsCall currently rejects legacy brainlayer_* tool names before
validation/dispatch; add a normalization step that maps legacy aliases (e.g.,
brainlayer_search => brain_search, brainlayer_store => brain_store, etc.) to the
canonical brain_* names and use the normalized name for both the toolDefinitions
existence check and the dispatchTool call; implement a single reusable alias map
(used wherever you validate/dispatch tools, including the other tools-handling
block) so legacy clients are accepted and dispatchTool always receives canonical
names.

Comment on lines 378 to 382
if db_path is None:
from ..paths import DEFAULT_DB_PATH

db_path = DEFAULT_DB_PATH

Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🛠️ Refactor suggestion | 🟠 Major

Use get_db_path() instead of DEFAULT_DB_PATH in both fallback branches.

These two fallback blocks bypass the repository-standard DB path resolver.

Suggested patch
-    if db_path is None:
-        from ..paths import DEFAULT_DB_PATH
-
-        db_path = DEFAULT_DB_PATH
+    if db_path is None:
+        from ..paths import get_db_path
+
+        db_path = get_db_path()
...
-    if not dry_run and db_path is None:
-        from ..paths import DEFAULT_DB_PATH
-
-        db_path = DEFAULT_DB_PATH
+    if not dry_run and db_path is None:
+        from ..paths import get_db_path
+
+        db_path = get_db_path()

As per coding guidelines: "Use paths.py:get_db_path() for all database path resolution instead of hardcoding paths; support environment variable overrides and canonical path fallback (~/.local/share/brainlayer/brainlayer.db)."

Also applies to: 454-458

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/brainlayer/ingest/codex.py` around lines 378 - 382, Replace direct uses
of DEFAULT_DB_PATH in the codex ingest fallback branches with a call to the
standard resolver get_db_path(): import get_db_path from paths (e.g., from
..paths import get_db_path) and set db_path = get_db_path() wherever the code
currently does db_path = DEFAULT_DB_PATH (both the block handling db_path is
None and the similar branch around lines 454-458); this ensures environment
overrides and canonical path resolution are respected.

Root cause of "SQLite prepare failed: 1":
- BrainBar used `chunk_id` — production uses `id`
- BrainBar used `session_id` — production uses `conversation_id`
- BrainBar created its own FTS5 schema — production uses
  (content, summary, tags, resolved_query, chunk_id UNINDEXED)
- Production requires source_file NOT NULL, metadata NOT NULL

Fix:
- All SQL now uses production column names (id, conversation_id, etc.)
- ensureSchema() skips creation on existing DB (production already has schema)
- New/test DBs get production-compatible schema
- FTS5 triggers match production (include resolved_query, chunk_id)

Tested: brain_search via socat returns real results from 312K chunk DB.
28/28 Swift tests GREEN.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copy link
Copy Markdown

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

♻️ Duplicate comments (1)
brain-bar/Sources/BrainBar/BrainDatabase.swift (1)

25-45: ⚠️ Potential issue | 🔴 Critical

Fail DB initialization if PRAGMA/DDL setup fails (startup currently reports false success).

Line 34 sets isOpen = true before required setup, and exec(_:)/ensureSchema() only log errors. This can publish a partially initialized DB to the server/router.

Proposed fail-fast patch
-    init(path: String) {
+    init(path: String) throws {
         self.path = path
-        openAndConfigure()
+        try openAndConfigure()
     }

-    private func openAndConfigure() {
+    private func openAndConfigure() throws {
         let flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_FULLMUTEX
         let rc = sqlite3_open_v2(path, &db, flags, nil)
         guard rc == SQLITE_OK else {
             NSLog("[BrainBar] Failed to open database at %@: %d", path, rc)
-            return
+            throw DBError.open(rc)
         }
-        isOpen = true

-        exec("PRAGMA journal_mode = WAL")
-        exec("PRAGMA busy_timeout = 5000")
-        exec("PRAGMA cache_size = -64000")
-        exec("PRAGMA synchronous = NORMAL")
+        try exec("PRAGMA journal_mode = WAL")
+        try exec("PRAGMA busy_timeout = 5000")
+        try exec("PRAGMA cache_size = -64000")
+        try exec("PRAGMA synchronous = NORMAL")

-        ensureSchema()
+        try ensureSchema()
+        isOpen = true
     }

-    private func ensureSchema() {
+    private func ensureSchema() throws {
         if (try? tableExists("chunks")) == true { return }
-        exec("...")
+        try exec("...")
     }

-    private func exec(_ sql: String) {
-        guard let db else { return }
+    private func exec(_ sql: String) throws {
+        guard let db else { throw DBError.notOpen }
         var errMsg: UnsafeMutablePointer<CChar>?
         let rc = sqlite3_exec(db, sql, nil, nil, &errMsg)
         if rc != SQLITE_OK {
             let msg = errMsg.map { String(cString: $0) } ?? "unknown error"
             NSLog("[BrainBar] SQL error: %@ (code: %d)", msg, rc)
             sqlite3_free(errMsg)
+            throw DBError.exec(rc, msg)
         }
     }

Also applies to: 47-110, 265-274

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 25 - 45, The
database is marked open too early in openAndConfigure(): move setting isOpen =
true until after PRAGMA and DDL setup succeed; change exec(_:) and
ensureSchema() to return Bool (or surface errors) and in openAndConfigure()
check their results, and on any failure call close() (or sqlite3_close_v2(db))
to clean up, log the failure with details, leave isOpen false, and return
without publishing the DB; keep the sqlite3_open_v2(...) success check as-is and
only flip isOpen after all exec("PRAGMA ...") calls and ensureSchema() complete
successfully.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 158-164: The INSERT in
insertChunk(id:content:sessionId:project:contentType:importance:) uses "INSERT
OR REPLACE" which breaks FTS5 rowid sync; replace it with an "INSERT ... ON
CONFLICT(id) DO UPDATE SET ..." statement that updates the columns (content,
metadata, source_file, project, content_type, importance, conversation_id,
char_count) using the excluded values so the rowid is preserved and the FTS
update trigger will run as an UPDATE; update the sql variable in insertChunk and
keep parameter order compatible with the prepared statement and bind calls so
existing bindings for id, content, project, contentType, importance, sessionId
and char_count still match.
- Around line 14-18: The BrainDatabase class mutates shared state (db and
isOpen) without synchronization and its close() is incomplete: it doesn't set
isOpen = false and calls sqlite3_close() ignoring the result. Fix by adding
explicit synchronization around all accesses/mutations of db and isOpen (e.g.,
serial DispatchQueue or a lock) used in open(), close(), and any methods
touching db; update the close() implementation inside BrainDatabase to call
sqlite3_close_v2(db), check and handle its return code, set db = nil and isOpen
= false under the same synchronization primitive, and ensure any early returns
still run the synchronized cleanup to avoid stale state.

---

Duplicate comments:
In `@brain-bar/Sources/BrainBar/BrainDatabase.swift`:
- Around line 25-45: The database is marked open too early in
openAndConfigure(): move setting isOpen = true until after PRAGMA and DDL setup
succeed; change exec(_:) and ensureSchema() to return Bool (or surface errors)
and in openAndConfigure() check their results, and on any failure call close()
(or sqlite3_close_v2(db)) to clean up, log the failure with details, leave
isOpen false, and return without publishing the DB; keep the
sqlite3_open_v2(...) success check as-is and only flip isOpen after all
exec("PRAGMA ...") calls and ensureSchema() complete successfully.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: e3856e46-8130-45cd-a154-1b3d5292919a

📥 Commits

Reviewing files that changed from the base of the PR and between ff33abc and e2fd0cf.

📒 Files selected for processing (1)
  • brain-bar/Sources/BrainBar/BrainDatabase.swift
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-03-17T01:04:11.732Z
Learning: The socket path `/tmp/brainbar.sock` is intentional for the BrainBar Swift daemon (brain-bar/) and must NOT be changed to `/tmp/brainlayer.sock`. BrainBar is a new daemon that coexists with the existing Python `brainlayer-mcp` (which uses `/tmp/brainlayer.sock`) during the migration period. The different paths avoid conflicts and allow A/B testing. Once BrainBar is proven stable, the Python server will be retired and `.mcp.json` will point to `/tmp/brainbar.sock` via socat.
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-03-17T01:04:22.473Z
Learning: In BrainLayer, the BrainBar daemon uses the socket path `/tmp/brainbar.sock` (NOT `/tmp/brainlayer.sock`). BrainBar is a new native Swift daemon designed to coexist with the existing Python `brainlayer-mcp` server during the migration period. Different socket paths avoid conflicts and enable A/B testing. Once BrainBar is proven stable, the Python server will be retired.
Learnt from: EtanHey
Repo: EtanHey/brainlayer PR: 0
File: :0-0
Timestamp: 2026-03-17T01:04:22.473Z
Learning: Applies to src/brainlayer/mcp/**/*.py and brain-bar/Sources/BrainBar/MCPRouter.swift: The 8 required MCP tools are `brain_search`, `brain_store`, `brain_recall`, `brain_entity`, `brain_expand`, `brain_update`, `brain_digest`, `brain_tags`. `brain_tags` is the 8th tool, replacing `brain_get_person`, as defined in the Phase B spec merged in PR `#72`. The Python MCP server already implements `brain_tags`. Legacy `brainlayer_*` aliases must be maintained for backward compatibility.
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-14T02:20:54.656Z
Learning: Request codex review, cursor review, and bugbot review for BrainLayer PRs
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-12T14:22:54.809Z
Learning: Applies to setup.py,pyproject.toml,setup.cfg : MCP server must be executable via `brainlayer-mcp` entrypoint

Comment on lines +14 to +18
final class BrainDatabase: @unchecked Sendable {
private var db: OpaquePointer?
private let path: String
/// Whether the database opened successfully.
private(set) var isOpen: Bool = false
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "BrainDatabase.swift" -type f

Repository: EtanHey/brainlayer

Length of output: 110


🏁 Script executed:

cat -n ./brain-bar/Sources/BrainBar/BrainDatabase.swift

Repository: EtanHey/brainlayer

Length of output: 14740


🏁 Script executed:

grep -n "@MainActor\|actor\|nonisolated\|Sendable\|Lock\|DispatchQueue\|SerialQueue" ./brain-bar/Sources/BrainBar/BrainDatabase.swift

Repository: EtanHey/brainlayer

Length of output: 113


🏁 Script executed:

grep -r "BrainDatabase" ./brain-bar/Sources --include="*.swift" -n | head -30

Repository: EtanHey/brainlayer

Length of output: 649


🏁 Script executed:

cat -n ./brain-bar/Sources/BrainBar/BrainBarServer.swift | head -80

Repository: EtanHey/brainlayer

Length of output: 3189


🏁 Script executed:

grep -A5 "database\." ./brain-bar/Sources/BrainBar/BrainBarServer.swift | head -40

Repository: EtanHey/brainlayer

Length of output: 255


🏁 Script executed:

grep -n "database\." ./brain-bar/Sources/BrainBar/*.swift

Repository: EtanHey/brainlayer

Length of output: 189


🏁 Script executed:

grep -n "\.search\|\.store\|\.pragma\|\.insertChunk\|\.close()" ./brain-bar/Sources/BrainBar/*.swift

Repository: EtanHey/brainlayer

Length of output: 402


🏁 Script executed:

cat -n ./brain-bar/Sources/BrainBar/MCPRouter.swift | sed -n '1,50p'

Repository: EtanHey/brainlayer

Length of output: 2139


🏁 Script executed:

cat -n ./brain-bar/Sources/BrainBar/MCPRouter.swift | sed -n '140,170p'

Repository: EtanHey/brainlayer

Length of output: 1552


🏁 Script executed:

cat -n ./brain-bar/Sources/BrainBar/BrainBarServer.swift | sed -n '180,210p'

Repository: EtanHey/brainlayer

Length of output: 669


🏁 Script executed:

grep -B5 -A5 "router\.handle\|MCPRouter" ./brain-bar/Sources/BrainBar/BrainBarServer.swift | head -60

Repository: EtanHey/brainlayer

Length of output: 1118


🏁 Script executed:

grep -r "BrainDatabase\|database\." ./brain-bar/Sources --include="*.swift" -B2 -A2 | grep -E "async|await|Task|DispatchQueue|thread|Thread"

Repository: EtanHey/brainlayer

Length of output: 44


🏁 Script executed:

cat -n ./brain-bar/Sources/BrainBar/BrainBarServer.swift | sed -n '80,130p'

Repository: EtanHey/brainlayer

Length of output: 2086


Add explicit synchronization to db lifecycle and fix incomplete close() implementation.

The class uses @unchecked Sendable with mutable db and isOpen properties. While SQLITE_OPEN_FULLMUTEX protects SQLite's C API calls, it does not make Swift property access race-free. More critically, the close() method has two bugs: (1) it does not set isOpen = false, creating stale state, and (2) it uses sqlite3_close() instead of the safer sqlite3_close_v2() and ignores the return code.

Recommended fix
 final class BrainDatabase: `@unchecked` Sendable {
+    private let stateLock = NSLock()
     private var db: OpaquePointer?
     ...
     func close() {
-        if let db {
-            sqlite3_close(db)
-            self.db = nil
-        }
+        stateLock.lock(); defer { stateLock.unlock() }
+        guard let db else { isOpen = false; return }
+        let rc = sqlite3_close_v2(db)
+        guard rc == SQLITE_OK else {
+            NSLog("[BrainBar] sqlite3_close_v2 failed: %d", rc)
+            return
+        }
+        self.db = nil
+        isOpen = false
     }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 14 - 18, The
BrainDatabase class mutates shared state (db and isOpen) without synchronization
and its close() is incomplete: it doesn't set isOpen = false and calls
sqlite3_close() ignoring the result. Fix by adding explicit synchronization
around all accesses/mutations of db and isOpen (e.g., serial DispatchQueue or a
lock) used in open(), close(), and any methods touching db; update the close()
implementation inside BrainDatabase to call sqlite3_close_v2(db), check and
handle its return code, set db = nil and isOpen = false under the same
synchronization primitive, and ensure any early returns still run the
synchronized cleanup to avoid stale state.

Comment on lines +158 to +164
func insertChunk(id: String, content: String, sessionId: String, project: String, contentType: String, importance: Int) throws {
guard let db else { throw DBError.notOpen }
var stmt: OpaquePointer?
let sql = """
INSERT OR REPLACE INTO chunks (id, content, metadata, source_file, project, content_type, importance, conversation_id, char_count)
VALUES (?, ?, '{}', 'brainbar', ?, ?, ?, ?, ?)
"""
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "*.swift" | grep -i database | head -20

Repository: EtanHey/brainlayer

Length of output: 162


🏁 Script executed:

fd BrainDatabase

Repository: EtanHey/brainlayer

Length of output: 108


🏁 Script executed:

git ls-files | grep -i brain

Repository: EtanHey/brainlayer

Length of output: 3457


🏁 Script executed:

wc -l brain-bar/Sources/BrainBar/BrainDatabase.swift

Repository: EtanHey/brainlayer

Length of output: 112


🏁 Script executed:

cat -n brain-bar/Sources/BrainBar/BrainDatabase.swift | head -250

Repository: EtanHey/brainlayer

Length of output: 12012


🏁 Script executed:

cat -n brain-bar/Sources/BrainBar/BrainDatabase.swift | tail -100

Repository: EtanHey/brainlayer

Length of output: 4327


🏁 Script executed:

cat -n brain-bar/Tests/BrainBarTests/DatabaseTests.swift

Repository: EtanHey/brainlayer

Length of output: 5132


Use ON CONFLICT DO UPDATE for FTS5 consistency with rowid-synced indexes.

INSERT OR REPLACE on a row with FTS sync triggers fires separate DELETE and INSERT triggers, which can desynchronize the FTS index. With FTS5 external-content tables, ON CONFLICT(id) DO UPDATE ... maintains rowid and fires the UPDATE trigger, ensuring atomic synchronization:

Proposed SQL change
-        let sql = """
-            INSERT OR REPLACE INTO chunks (id, content, metadata, source_file, project, content_type, importance, conversation_id, char_count)
-            VALUES (?, ?, '{}', 'brainbar', ?, ?, ?, ?, ?)
-        """
+        let sql = """
+            INSERT INTO chunks (id, content, metadata, source_file, project, content_type, importance, conversation_id, char_count)
+            VALUES (?, ?, '{}', 'brainbar', ?, ?, ?, ?, ?)
+            ON CONFLICT(id) DO UPDATE SET
+                content = excluded.content,
+                metadata = excluded.metadata,
+                source_file = excluded.source_file,
+                project = excluded.project,
+                content_type = excluded.content_type,
+                importance = excluded.importance,
+                conversation_id = excluded.conversation_id,
+                char_count = excluded.char_count
+        """
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func insertChunk(id: String, content: String, sessionId: String, project: String, contentType: String, importance: Int) throws {
guard let db else { throw DBError.notOpen }
var stmt: OpaquePointer?
let sql = """
INSERT OR REPLACE INTO chunks (id, content, metadata, source_file, project, content_type, importance, conversation_id, char_count)
VALUES (?, ?, '{}', 'brainbar', ?, ?, ?, ?, ?)
"""
func insertChunk(id: String, content: String, sessionId: String, project: String, contentType: String, importance: Int) throws {
guard let db else { throw DBError.notOpen }
var stmt: OpaquePointer?
let sql = """
INSERT INTO chunks (id, content, metadata, source_file, project, content_type, importance, conversation_id, char_count)
VALUES (?, ?, '{}', 'brainbar', ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
content = excluded.content,
metadata = excluded.metadata,
source_file = excluded.source_file,
project = excluded.project,
content_type = excluded.content_type,
importance = excluded.importance,
conversation_id = excluded.conversation_id,
char_count = excluded.char_count
"""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@brain-bar/Sources/BrainBar/BrainDatabase.swift` around lines 158 - 164, The
INSERT in insertChunk(id:content:sessionId:project:contentType:importance:) uses
"INSERT OR REPLACE" which breaks FTS5 rowid sync; replace it with an "INSERT ...
ON CONFLICT(id) DO UPDATE SET ..." statement that updates the columns (content,
metadata, source_file, project, content_type, importance, conversation_id,
char_count) using the excluded values so the rowid is preserved and the FTS
update trigger will run as an UPDATE; update the sql variable in insertChunk and
keep parameter order compatible with the prepared statement and bind calls so
existing bindings for id, content, project, contentType, importance, sessionId
and char_count still match.

@EtanHey EtanHey merged commit 6fc6fea into main Mar 17, 2026
6 checks passed
@EtanHey EtanHey deleted the feat/phase-3-brainbar-daemon branch March 17, 2026 10:46
@coderabbitai coderabbitai Bot mentioned this pull request Mar 31, 2026
4 tasks
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant