
fix: Phase 8b code quality — dead code, MCP hardening, doc cleanup#17

Merged
EtanHey merged 3 commits into main from feature/phase-8b-bl-code-quality
Feb 21, 2026

Conversation


EtanHey (Owner) commented Feb 21, 2026

Summary

Phase 8b of the Layers v2 Launch plan — code quality fixes across 5 audit reports.

CRITICAL

  • Delete fix_projects CLI — imported non-existent pipeline/index.py, would crash on any invocation
  • Delete index-md CLI — deprecated, referenced removed functionality

HIGH

  • MCP tool descriptions rewritten — all 14 tools now have Returns:, Use when:, Not for: sections
  • Schema constraints — minimum/maximum on all numeric params (num_results 1-100, importance 1-10, etc.)
  • Error signaling — all error paths use _error_result() with isError=True
  • Recall validation — returns error if neither file_path nor topic provided
  • print() → logging — dashboard/search.py, dashboard/views.py, client.py
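A rough sketch of the error-signaling and validation pattern these bullets describe. The dict shapes and helper behavior below are illustrative stand-ins: the real server builds mcp.types.CallToolResult objects with isError=True, not plain dicts.

```python
def _error_result(message: str) -> dict:
    # Stand-in for the MCP CallToolResult shape; the real code returns
    # an mcp.types.CallToolResult with isError=True so clients can
    # distinguish tool failures from successful text output.
    return {"isError": True, "content": [{"type": "text", "text": message}]}

def handle_recall(arguments: dict) -> dict:
    # Validation described above: recall needs at least one selector,
    # which JSON Schema alone cannot express for this tool.
    if not arguments.get("file_path") and not arguments.get("topic"):
        return _error_result("Provide file_path or topic")
    # Defense-in-depth clamping mirroring the schema bounds (1-100).
    num_results = max(1, min(int(arguments.get("num_results", 10)), 100))
    return {"isError": False, "num_results": num_results}
```

Clamping at the call site complements the schema constraints: the schema documents the contract for well-behaved clients, while the clamp guards against clients that ignore it.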

MEDIUM

  • Docs generalized — removed internal project references from data-locations.md, enrichment-runbook.md
  • Daemon health URLs — env-var driven (BRAINLAYER_OLLAMA_URL, BRAINLAYER_MLX_URL) instead of hardcoded
  • Exception narrowing — except Exception → except (apsw.Error, RuntimeError) in store.py _find_related
  • Store output schema — id → chunk_id for clarity
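The exception-narrowing bullet can be sketched as follows. RuntimeError stands in here to keep the example free of the apsw dependency; the real handler catches (apsw.Error, RuntimeError), and the function name is illustrative.

```python
import logging

logger = logging.getLogger(__name__)

# In the real store.py this tuple is (apsw.Error, RuntimeError).
NARROW_ERRORS = (RuntimeError,)

def find_related(search_fn):
    """Return related chunks, or [] when the vector search itself fails.

    Narrowing from `except Exception` means programming errors
    (KeyError, TypeError, ...) still surface instead of being
    silently swallowed as "no related memories".
    """
    try:
        return search_fn()
    except NARROW_ERRORS as e:
        logger.warning("Related memory search failed: %s", e)
        return []
```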

LOW

  • Private function — normalize_project_name → _normalize_project_name
  • pyproject.toml — added Documentation URL
  • .gitignore — added docs/plan/, claude.scratchpad.md
  • Clustering docstring — explained intentional serialize_f32 duplication

Stats

  • 15 files changed, 189 insertions, 344 deletions (net code reduction!)
  • 266 tests pass, 2 skipped, 0 failures
  • ruff clean

Test plan

  • All non-integration tests pass (266 passed, 2 skipped)
  • ruff check clean
  • Import of _normalize_project_name works in both test files
  • No references to deleted CLI commands remain
  • CodeRabbit review

🤖 Generated with Claude Code


Note

Medium Risk
Touches MCP tool contracts and error signaling, which can break integrations expecting prior schemas/fields despite being largely validation/logging and cleanup changes.

Overview
Removes deprecated and broken CLI commands (including index-md and legacy fix_projects) and tightens project-name handling by making MCP normalization helper private and updating tests accordingly.

Hardens the MCP server by rewriting tool descriptions, adding numeric schema bounds, standardizing error responses via _error_result(isError=True), validating brainlayer_recall input, clamping importance, and renaming store output from id to chunk_id for clarity.

Replaces ad-hoc print() error output with logging in the daemon client and dashboard search/views, makes daemon service health checks derive base URLs from BRAINLAYER_OLLAMA_URL/BRAINLAYER_MLX_URL, and updates docs/.gitignore/pyproject.toml with generalized guidance and minor metadata tweaks.
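The env-var-driven health-check derivation can be sketched like this. The helper name is illustrative; the trimming matches the daemon's rsplit-based approach, which accepts either a full API endpoint or a bare base URL.

```python
import os

def resolve_health_base(env_var: str, default: str) -> str:
    # Read the configured URL (e.g. BRAINLAYER_MLX_URL) and strip any
    # /v1/... API path suffix or trailing slash, leaving the base URL
    # that health probes append their own paths to.
    url = os.environ.get(env_var, default)
    return url.rsplit("/v1/", 1)[0] if "/v1/" in url else url.rstrip("/")
```

The same pattern works for the Ollama URL with "/api/" in place of "/v1/".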

Written by Cursor Bugbot for commit ac25898. This will update automatically on new commits. Configure here.

Summary by CodeRabbit

  • New Features

    • Service health-check URLs configurable via environment variables (sensible defaults).
  • Documentation

    • Data locations and enrichment runbook updated with new project naming, generalized/manual backup guidance (SQLite VACUUM INTO), and revised archive examples.
    • Added project documentation URL to package metadata.
  • Refactor

    • Removed deprecated CLI commands; standardized output field naming and strengthened validation.
    • Replaced print-based messages with structured logging for clearer error reporting.
  • Chores

    • Added ignores for internal planning scratch files.
  • Tests

    • Updated tests to reference renamed normalization helper.

…eanup

Delete broken fix_projects CLI (CRITICAL — imported non-existent pipeline/index.py),
delete deprecated index-md command, rewrite all 14 MCP tool descriptions with
Returns/Use-when guidance, add schema constraints on all numeric params, fix all
error paths to use _error_result(isError=True), replace print() with logging in
dashboard and client, generalize docs to remove internal project references,
make daemon health check URLs env-var driven, narrow broad exception handlers.

15 files changed, net code reduction (-344/+189 lines).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

coderabbitai Bot commented Feb 21, 2026

Warning

Rate limit exceeded

@EtanHey has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 16 minutes and 19 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between d1b4ec3 and ac25898.

📒 Files selected for processing (6)
  • docs/data-locations.md
  • docs/enrichment-runbook.md
  • src/brainlayer/daemon.py
  • src/brainlayer/dashboard/views.py
  • src/brainlayer/mcp/__init__.py
  • src/brainlayer/store.py
📝 Walkthrough

Walkthrough

Renames an internal helper, standardizes logging across modules, removes deprecated CLI commands, updates daemon health-check URLs to use environment variables, adjusts MCP schemas and error shapes, and updates documentation and backup procedures to use local SQLite VACUUM INTO backups.

Changes

Cohort / File(s) Summary
Documentation & Config
/.gitignore, docs/data-locations.md, docs/enrichment-runbook.md, pyproject.toml
Add ignores for docs/plan/ and claude.scratchpad.md; update archive/service names and examples to my-project/com.brainlayer.session-archiver; switch backup/restore docs to SQLite VACUUM INTO workflow; add Documentation URL metadata.
CLI Cleanup
src/brainlayer/cli/__init__.py
Remove deprecated CLI commands index-md and two fix_projects implementations; minor docstring example update.
Logging Additions
src/brainlayer/client.py, src/brainlayer/dashboard/search.py, src/brainlayer/dashboard/views.py, src/brainlayer/store.py
Introduce module-level loggers and replace prints with logger.info/logger.warning for error/info reporting.
Daemon Health Checks
src/brainlayer/daemon.py
Health-checks now derive Ollama and ML model base URLs from environment variables (with defaults) and trim API path suffixes before calling endpoints.
MCP Refactor & Schema
src/brainlayer/mcp/__init__.py
Rename normalize_project_name → _normalize_project_name; change _STORE_OUTPUT_SCHEMA field id → chunk_id and required fields; unify error-return shapes via _error_result; strengthen input validation for recall/fetch flows.
Tests Updated
tests/test_normalize_project.py, tests/test_phase3_qa.py
Tests now import _normalize_project_name as normalize_project_name to match the renamed helper.
Minor Docstring/Notes
src/brainlayer/clustering.py, src/brainlayer/pipeline/session_enrichment.py
Expanded docstring on serialize_f32 duplication note; update CLI example project name in session enrichment docstring.
Other
requirements.txt
Small manifest/test-related tweak referenced in test manifest block.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as "CLI / User"
    participant Daemon as "brainlayer daemon"
    participant Env as "Environment"
    participant Ollama as "Ollama service"
    participant ML as "ML model service"
    CLI->>Daemon: start / health-check
    Daemon->>Env: read OLLAMA_BASE / ML_BASE (with defaults)
    Daemon->>Ollama: GET {OLLAMA_BASE trimmed}/ready
    Daemon->>ML: GET {ML_BASE trimmed}/v1/ready
    Ollama-->>Daemon: 200/health status
    ML-->>Daemon: 200/health status
    Daemon-->>CLI: aggregated health result
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

Possibly related PRs

Poem

🐰 I hopped through logs and schema lines,
I nudged old commands into decline.
Env vars whisper where services hide,
VACUUM INTO keeps backups snide.
A little hop, a tidy crate—hooray, new state!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately summarizes the main changes: dead code removal (CLI commands), MCP schema hardening, and documentation cleanup — all verified in the changeset. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 97.06% which is sufficient. The required threshold is 80.00%. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/phase-8b-bl-code-quality

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

Comment thread src/brainlayer/store.py
# "-Users-janedev-Gits-golems" → "golems"
# "-Users-janedev-Desktop-Gits-rudy-monorepo" → "rudy-monorepo"
# "-Users-username-Gits-myproject" → "myproject"
# "-Users-username-Desktop-Gits-my-monorepo" → "my-monorepo"


Removed public normalize_project_name API

Low Severity

Renaming normalize_project_name to _normalize_project_name removes a previously importable symbol from brainlayer.mcp, which can break external callers that relied on the function even though in-repo references were updated.



coderabbitai Bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/brainlayer/mcp/__init__.py (1)

1016-1016: 🧹 Nitpick | 🔵 Trivial

Return type annotations are inconsistent after adding _error_result() paths.

Seven helper functions are annotated -> list[TextContent] but now return a CallToolResult on error (via _error_result()). This is a new mismatch introduced by this PR:

| Function | Annotation | Error return |
| --- | --- | --- |
| _list_projects (line 1016) | -> list[TextContent] | CallToolResult (line 1032) |
| _context (line 1035) | -> list[TextContent] | CallToolResult (lines 1042–1044, 1063) |
| _file_timeline (line 1066) | -> list[TextContent] | CallToolResult (line 1093) |
| _operations (line 1096) | -> list[TextContent] | CallToolResult (line 1132) |
| _regression (line 1135) | -> list[TextContent] | CallToolResult (line 1182) |
| _plan_links (line 1185) | -> list[TextContent] | CallToolResult (line 1248) |
| _sessions (line 1356) | -> list[TextContent] | CallToolResult (line 1378) |

The MCP SDK is duck-typed at runtime so this won't crash, but it will produce type-checker errors and can mislead readers about the contract.

♻️ Proposed annotation fix (apply to each affected function)
-async def _list_projects() -> list[TextContent]:
+async def _list_projects() -> list[TextContent] | CallToolResult:

Apply the same pattern to the other six functions.

Also applies to: 1035-1035, 1066-1070, 1096-1098, 1135-1138, 1185-1189, 1356-1360

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/brainlayer/mcp/__init__.py` at line 1016, The annotated return types for
helper functions like _list_projects, _context, _file_timeline, _operations,
_regression, _plan_links, and _sessions are incorrect because they sometimes
return a CallToolResult via _error_result(); update each function's return
annotation to reflect both success and error shapes (e.g., change ->
list[TextContent] to a union type that includes CallToolResult such as
list[TextContent] | CallToolResult or typing.Union[list[TextContent],
CallToolResult] depending on project typing style) so the type checker and
readers see the correct contract.
📜 Review details

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 042bd84 and 40bc688.

📒 Files selected for processing (15)
  • .gitignore
  • docs/data-locations.md
  • docs/enrichment-runbook.md
  • pyproject.toml
  • src/brainlayer/cli/__init__.py
  • src/brainlayer/client.py
  • src/brainlayer/clustering.py
  • src/brainlayer/daemon.py
  • src/brainlayer/dashboard/search.py
  • src/brainlayer/dashboard/views.py
  • src/brainlayer/mcp/__init__.py
  • src/brainlayer/pipeline/session_enrichment.py
  • src/brainlayer/store.py
  • tests/test_normalize_project.py
  • tests/test_phase3_qa.py
🧰 Additional context used
📓 Path-based instructions (9)
src/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use ruff for linting and formatting Python code in src/ directory

Files:

  • src/brainlayer/dashboard/views.py
  • src/brainlayer/clustering.py
  • src/brainlayer/dashboard/search.py
  • src/brainlayer/client.py
  • src/brainlayer/daemon.py
  • src/brainlayer/pipeline/session_enrichment.py
  • src/brainlayer/cli/__init__.py
  • src/brainlayer/store.py
  • src/brainlayer/mcp/__init__.py
**/dashboard/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use textual framework for building interactive TUI dashboard components

Files:

  • src/brainlayer/dashboard/views.py
  • src/brainlayer/dashboard/search.py
**/clustering.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use HDBSCAN + UMAP for topic clustering in brain graph generation

Files:

  • src/brainlayer/clustering.py
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Run pytest for all test execution and maintain test coverage

Files:

  • tests/test_phase3_qa.py
  • tests/test_normalize_project.py
**/daemon.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/daemon.py: FastAPI daemon should expose a REST API on configurable port (default 8787) for dashboard communication
Structure search results to support filtering by source (claude_code, whatsapp, youtube, all), content_type, intent, importance_min, and tag

Files:

  • src/brainlayer/daemon.py
**/pipeline/session_enrichment.py

📄 CodeRabbit inference engine (CLAUDE.md)

Perform session-level LLM analysis to extract decisions, corrections, learnings, mistakes, patterns, outcome, and quality scores

Files:

  • src/brainlayer/pipeline/session_enrichment.py
**/cli/__init__.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use Typer framework for CLI command implementation in cli/ module

Files:

  • src/brainlayer/cli/__init__.py
**/{setup.py,pyproject.toml,requirements*.txt,Pipfile,*.lock}

📄 CodeRabbit inference engine (CLAUDE.md)

Install and use uv for Python dependency management instead of pip

Files:

  • pyproject.toml
**/mcp/__init__.py

📄 CodeRabbit inference engine (CLAUDE.md)

Implement MCP server with 14 tools for Claude integration (search, stats, context, file_timeline, operations, regression, plan_links, session_summary, etc.)

Files:

  • src/brainlayer/mcp/__init__.py
🧠 Learnings (7)
📚 Learning: 2026-02-20T18:33:59.833Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T18:33:59.833Z
Learning: Maintain separation between read-only source conversations in ~/.claude/projects/ and mutable indexed database in ~/.local/share/brainlayer/

Applied to files:

  • .gitignore
  • docs/data-locations.md
📚 Learning: 2026-02-20T18:33:59.833Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T18:33:59.833Z
Learning: Expose daemon via Unix socket by default (/tmp/brainlayer.sock) and HTTP port (8787) for optional use by dashboards

Applied to files:

  • src/brainlayer/client.py
📚 Learning: 2026-02-20T18:33:59.833Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T18:33:59.833Z
Learning: Applies to **/pipeline/enrichment.py : Use environment variable BRAINLAYER_ENRICH_BACKEND to switch between Ollama and MLX LLM backends

Applied to files:

  • src/brainlayer/daemon.py
  • src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-02-20T18:33:59.833Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T18:33:59.833Z
Learning: Applies to **/pipeline/session_enrichment.py : Perform session-level LLM analysis to extract decisions, corrections, learnings, mistakes, patterns, outcome, and quality scores

Applied to files:

  • src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-02-20T18:33:59.833Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T18:33:59.833Z
Learning: Applies to **/pipeline/enrichment.py : Enrich chunks with 10 structured metadata fields: summary, tags, importance (1-10), intent, primary_symbols, resolved_query, epistemic_level, version_scope, debt_impact, external_deps

Applied to files:

  • src/brainlayer/pipeline/session_enrichment.py
📚 Learning: 2026-02-20T18:33:59.833Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T18:33:59.833Z
Learning: Applies to **/vector_store.py : Use sqlite-vec with APSW backend for vector storage with WAL mode enabled

Applied to files:

  • src/brainlayer/store.py
📚 Learning: 2026-02-20T18:33:59.833Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T18:33:59.833Z
Learning: Applies to **/mcp/__init__.py : Implement MCP server with 14 tools for Claude integration (search, stats, context, file_timeline, operations, regression, plan_links, session_summary, etc.)

Applied to files:

  • src/brainlayer/mcp/__init__.py
🧬 Code graph analysis (4)
src/brainlayer/dashboard/search.py (1)
src/brainlayer/embeddings.py (1)
  • EmbeddingModel (29-102)
tests/test_phase3_qa.py (2)
src/brainlayer/cli/__init__.py (1)
  • _normalize_project_name (132-154)
src/brainlayer/mcp/__init__.py (1)
  • _normalize_project_name (29-83)
src/brainlayer/mcp/__init__.py (1)
src/brainlayer/cli/__init__.py (1)
  • _normalize_project_name (132-154)
tests/test_normalize_project.py (2)
src/brainlayer/cli/__init__.py (1)
  • _normalize_project_name (132-154)
src/brainlayer/mcp/__init__.py (1)
  • _normalize_project_name (29-83)
🪛 markdownlint-cli2 (0.21.0)
docs/data-locations.md

[warning] 86-86: Fenced code blocks should be surrounded by blank lines

(MD031, blanks-around-fences)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
🔇 Additional comments (15)
.gitignore (1)

38-39: docs/plan/ ignore entry looks correct.

The mid-path slash anchors the pattern to the repository root and the trailing slash ensures only a directory is matched — no issues here.

src/brainlayer/pipeline/session_enrichment.py (1)

19-19: LGTM — removing the internal project reference.

The docstring example swap from the internal project name to the generic my-project is a clean documentation improvement with no logic impact.

pyproject.toml (1)

77-77: LGTM.

Straightforward metadata addition.

src/brainlayer/clustering.py (1)

54-59: LGTM — the rationale for the intentional duplication is clear and well-documented.

src/brainlayer/dashboard/views.py (1)

228-228: LGTM — correct logging style.

Using logger.warning("Search error: %s", e) with lazy %s formatting is the right approach here.

src/brainlayer/daemon.py (1)

275-289: Health-check URL derivation logic is correct for standard URL shapes.

The rsplit("/api/", 1)[0] / rsplit("/v1/", 1)[0] approach correctly strips the API-path suffix regardless of whether the env var is set to the full API endpoint or just the base URL, and the f-string construction of the actual probe URLs (/api/tags, /v1/models) is consistent with Ollama and OpenAI-compatible API conventions.

src/brainlayer/client.py (2)

3-12: LGTM — clean logging setup and correct placement after all imports.


63-63: LGTM — correct severity and lazy-format style.

docs/enrichment-runbook.md (1)

160-171: LGTM — restore steps are correct and safe.

Stopping the daemon before copying and clearing the lock file are the right pre-conditions for a restore.

src/brainlayer/dashboard/search.py (1)

3-11: LGTM — logging migration is clean.

%s lazy-format specifiers are used correctly throughout, the fallback behaviors in fit_store, search, and _semantic_search_only are preserved, and the module-level getLogger(__name__) placement follows standard practice.

Also applies to: 127-128, 207-208, 228-229

src/brainlayer/cli/__init__.py (1)

231-232: LGTM — docstring example generalized correctly.

src/brainlayer/mcp/__init__.py (1)

810-818: LGTM — recall input validation and unknown-tool guard are correctly implemented.

The brainlayer_recall validation (lines 811–812) correctly catches the empty-input case that the inputSchema alone can't express via JSON Schema constraints. The _error_result(f"Unknown tool: {name}") fallback (line 845) is a good defensive addition.

Also applies to: 843-846

tests/test_phase3_qa.py (1)

17-17: LGTM — aliasing _normalize_project_name as normalize_project_name cleanly preserves all test names.

tests/test_normalize_project.py (1)

5-5: LGTM — import alias is the right approach to maintain test coverage of the now-private function.

docs/data-locations.md (1)

88-91: ⚠️ Potential issue | 🟡 Minor

Backup command path won't match most existing users' actual database location.

The active-data table (line 9) shows the primary database at ~/.local/share/zikaron/zikaron.db for all existing installs. The backup command hard-codes the canonical path (brainlayer/brainlayer.db), which does not exist on those machines. Running this command as written would silently do nothing (or error on older SQLite) for the majority of users.

Suggest adding a note to resolve the path first:

````diff
-Before any bulk operation, back up the database:
+Before any bulk operation, check your actual database path (`brainlayer stats` shows it) and back it up:
 ```bash
 # WAL-safe copy
-sqlite3 ~/.local/share/brainlayer/brainlayer.db "VACUUM INTO '/path/to/backup/brainlayer-$(date +%Y%m%d).db'"
+# Canonical path (fresh installs):
+sqlite3 ~/.local/share/brainlayer/brainlayer.db "VACUUM INTO '/path/to/backup/brainlayer-$(date +%Y%m%d).db'"
+# Legacy path (existing installs — check if this file exists first):
+# sqlite3 ~/.local/share/zikaron/zikaron.db "VACUUM INTO '/path/to/backup/zikaron-$(date +%Y%m%d).db'"
````

Based on learnings: "Maintain separation between read-only source conversations in `~/.claude/projects/` and mutable indexed database in `~/.local/share/brainlayer/`" — the mutable database path resolution covers both the legacy and canonical location, so backup docs should account for both.
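The WAL-safe backup discussed above can also be scripted. This is a sketch, not project code: the helper name and paths are illustrative, and it assumes the destination file does not already exist (VACUUM INTO refuses to overwrite).

```python
import datetime
import os
import sqlite3

def backup_db(src: str, dest_dir: str) -> str:
    # VACUUM INTO writes a compacted, transactionally consistent copy
    # even while the source database is in WAL mode. It fails if the
    # destination directory is missing, so create it first.
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, f"brainlayer-{datetime.date.today():%Y%m%d}.db")
    con = sqlite3.connect(src)
    try:
        con.execute("VACUUM INTO ?", (dest,))
    finally:
        con.close()
    return dest
```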




⛔ Skipped due to learnings

Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T18:33:59.833Z
Learning: Maintain separation between read-only source conversations in ~/.claude/projects/ and mutable indexed database in ~/.local/share/brainlayer/

🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.gitignore:

  • Around line 45-46: Update the .gitignore entry for claude.scratchpad.md so it
    only matches the repo root file: replace the unanchored pattern
    "claude.scratchpad.md" with an anchored pattern (add a leading slash) so the
    ignore only applies to "/claude.scratchpad.md"; this avoids accidentally
    ignoring files with the same name in subdirectories.

In @docs/data-locations.md:

  • Around line 85-89: Add a blank line between the preceding prose sentence and
    the fenced code block to satisfy markdownlint rule MD031; specifically edit the
    docs/data-locations.md content so the line before the bash fenced code block is an empty line (i.e., insert one blank line immediately before the bash
    block shown in the diff) so the code fence no longer immediately follows the
    paragraph.

In @docs/enrichment-runbook.md:

  • Around line 152-157: The VACUUM INTO snippet will fail silently if the target
    backups directory doesn't exist; update the runbook guidance around the "VACUUM
    INTO" command to first ensure the destination directory
    (~/.local/share/brainlayer/backups) exists (e.g., instruct to create it with a
    directory-create step such as mkdir -p) before running sqlite3 "VACUUM INTO ..."
    and mention validating the command's exit status or showing how to surface
    errors; reference the "VACUUM INTO" command and the backups path in the text so
    readers know where to create the directory.

In @src/brainlayer/daemon.py:

  • Around line 278-279: The current fallback from BRAINLAYER_MLX_URL to the
    generic MLX_URL can silently pick up unrelated MLX endpoints; update the
    mlx_base resolution (variable mlx_base) to detect when MLX_URL is used as a
    fallback and emit a visible deprecation/warning log (using the module logger or
    a provided logger) that clearly states MLX_URL is deprecated for this project
    and suggests setting BRAINLAYER_MLX_URL instead, while keeping the existing
    behavior of trimming "/v1/" or trailing slashes.

In @src/brainlayer/dashboard/views.py:

  • Around line 7-9: The file currently places the statement logger =
    logging.getLogger(name) between import statements (splitting the rich
    imports and causing isort/ruff issues); move the logger assignment so it comes
    after all import lines (i.e., after the imports that include from rich import
    box and from rich.align import Align) to restore a single contiguous import
    block and avoid ruff/isort reordering.

In @src/brainlayer/mcp/__init__.py:

  • Around line 597-610: The JSON Schema constraints for days, limit, and
    importance are not enforced server-side—mirror the defense-in-depth used for
    num_results by clamping these values in the call_tool code path before calling
    the store/session dispatch; specifically, in the call_tool function (the block
    that prepares arguments for brainlayer_sessions and brainlayer_store), compute
    days = max(1, min(arguments.get("days", 7), 365)), limit = max(1,
    min(arguments.get("limit", 20), 100)), and normalize importance to None or
    max(1, min(importance, 10)) before passing them into the store/session methods
    so the server enforces the schema bounds.
  • Around line 408-419: The schema for the "before" and "after" fields lacks a
    "minimum" constraint allowing negatives to reach store.get_context; add
    "minimum": 0 to both schema entries and mirror the existing num_results clamping
    in the call_tool handler by clamping inputs to the [0,50] range (e.g., use
    max(0, min(arguments.get("before", 3), 50)) and same for "after") before passing
    to store.get_context; update references in init.py where "before"/"after"
    schema and call_tool argument handling occur to ensure both schema-level and
    runtime validation prevent negative values.

In @src/brainlayer/store.py:

  • Around line 154-156: The _find_related helper currently only catches
    apsw.Error and RuntimeError around the whole try block, leaving result parsing
    (accessing results["documents"][0], results["metadatas"][0], and the zip)
    unguarded; refactor by splitting the try: first wrap the call to
    store.search(...) in a try/except that catches apsw.Error and RuntimeError and
    logs the existing warning, then separately wrap the parsing/iteration logic
    (accessing results, indexing, zip) in a second try/except that catches KeyError,
    IndexError, TypeError, and AttributeError (or a small tuple of these) and logs
    the same "Related memory search failed: %s" warning and returns an empty list;
    reference _find_related and store.search to locate where to apply the two
    guarded blocks.

Outside diff comments:
In @src/brainlayer/mcp/__init__.py:

  • Line 1016: The annotated return types for helper functions like
    _list_projects, _context, _file_timeline, _operations, _regression, _plan_links,
    and _sessions are incorrect because they sometimes return a CallToolResult via
    _error_result(); update each function's return annotation to reflect both
    success and error shapes (e.g., change -> list[TextContent] to a union type that
    includes CallToolResult such as list[TextContent] | CallToolResult or
    typing.Union[list[TextContent], CallToolResult] depending on project typing
    style) so the type checker and readers see the correct contract.

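The server-side clamping described in the prompts above can be sketched as a standalone helper. Argument names follow the tool schemas; the function itself is illustrative, not the project's actual dispatch code.

```python
def clamp_args(arguments: dict) -> dict:
    # Mirror the JSON Schema bounds at runtime so out-of-range values
    # from non-conforming clients never reach the store/session layer.
    days = max(1, min(int(arguments.get("days", 7)), 365))
    limit = max(1, min(int(arguments.get("limit", 20)), 100))
    importance = arguments.get("importance")
    if importance is not None:
        importance = max(1, min(int(importance), 10))  # schema: 1-10 or absent
    return {"days": days, "limit": limit, "importance": importance}
```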


Comment thread .gitignore
Comment on lines +45 to +46
# Claude scratchpad
claude.scratchpad.md


🧹 Nitpick | 🔵 Trivial

Consider anchoring claude.scratchpad.md to the repo root.

Without a leading /, gitignore will suppress any file named claude.scratchpad.md found anywhere in the directory tree, not just at the root. If the scratchpad lives exclusively at the repo root (which the comment implies), use a leading slash so the intent is explicit and no accidental matches occur in subdirectories.

✨ Proposed fix
 # Claude scratchpad
-claude.scratchpad.md
+/claude.scratchpad.md
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```diff
 # Claude scratchpad
-claude.scratchpad.md
+/claude.scratchpad.md
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.gitignore around lines 45 - 46, Update the .gitignore entry for
claude.scratchpad.md so it only matches the repo root file: replace the
unanchored pattern "claude.scratchpad.md" with an anchored pattern (add a
leading slash) so the ignore only applies to "/claude.scratchpad.md"; this
avoids accidentally ignoring files with the same name in subdirectories.

Comment thread docs/data-locations.md
Comment thread docs/enrichment-runbook.md
Comment thread src/brainlayer/daemon.py Outdated
Comment on lines +278 to +279
mlx_base = os.environ.get("BRAINLAYER_MLX_URL", os.environ.get("MLX_URL", "http://127.0.0.1:8080/v1/chat/completions"))
mlx_base = mlx_base.rsplit("/v1/", 1)[0] if "/v1/" in mlx_base else mlx_base.rstrip("/")


⚠️ Potential issue | 🟡 Minor

MLX_URL fallback uses a generic, non-namespaced env var — could silently collide.

The Apple MLX framework itself and various ML ops tools use MLX_URL as a common variable name. If any such tool is active in the same environment, the health check would redirect to an unintended endpoint without any warning.

🔧 Suggested fix
-    mlx_base = os.environ.get("BRAINLAYER_MLX_URL", os.environ.get("MLX_URL", "http://127.0.0.1:8080/v1/chat/completions"))
+    mlx_base = os.environ.get("BRAINLAYER_MLX_URL", "http://127.0.0.1:8080/v1/chat/completions")

If backward compatibility with MLX_URL is needed, add an explicit deprecation log:

-    mlx_base = os.environ.get("BRAINLAYER_MLX_URL", os.environ.get("MLX_URL", "http://127.0.0.1:8080/v1/chat/completions"))
+    mlx_base = os.environ.get("BRAINLAYER_MLX_URL") or os.environ.get("MLX_URL") or "http://127.0.0.1:8080/v1/chat/completions"
+    if not os.environ.get("BRAINLAYER_MLX_URL") and os.environ.get("MLX_URL"):
+        logger.warning("MLX_URL is deprecated; use BRAINLAYER_MLX_URL instead")
📝 Committable suggestion


Suggested change
-mlx_base = os.environ.get("BRAINLAYER_MLX_URL", os.environ.get("MLX_URL", "http://127.0.0.1:8080/v1/chat/completions"))
+mlx_base = os.environ.get("BRAINLAYER_MLX_URL", "http://127.0.0.1:8080/v1/chat/completions")
 mlx_base = mlx_base.rsplit("/v1/", 1)[0] if "/v1/" in mlx_base else mlx_base.rstrip("/")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/brainlayer/daemon.py` around lines 278-279, the current fallback from
BRAINLAYER_MLX_URL to the generic MLX_URL can silently pick up unrelated MLX
endpoints; update the mlx_base resolution (variable mlx_base) to detect when
MLX_URL is used as a fallback and emit a visible deprecation/warning log (using
the module logger or a provided logger) that clearly states MLX_URL is
deprecated for this project and suggests setting BRAINLAYER_MLX_URL instead,
while keeping the existing behavior of trimming "/v1/" or trailing slashes.
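The deprecation-aware resolution described above can be sketched as a small helper (`resolve_mlx_base` is a hypothetical name; daemon.py inlines this logic):

```python
import logging
import os

logger = logging.getLogger(__name__)

def resolve_mlx_base() -> str:
    """Resolve the MLX health-check base URL, preferring the namespaced var."""
    namespaced = os.environ.get("BRAINLAYER_MLX_URL")
    legacy = os.environ.get("MLX_URL")
    url = namespaced or legacy or "http://127.0.0.1:8080/v1/chat/completions"
    if not namespaced and legacy:
        # Visible signal when the generic variable is silently picked up
        logger.warning("MLX_URL is deprecated; use BRAINLAYER_MLX_URL instead")
    # Keep the existing trimming: strip the /v1/... suffix or a trailing slash
    return url.rsplit("/v1/", 1)[0] if "/v1/" in url else url.rstrip("/")
```

Resolving the two variables up front keeps the deprecation check from re-reading the environment, and the trim step is unchanged from the current code.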

Comment thread src/brainlayer/dashboard/views.py Outdated
Comment thread src/brainlayer/mcp/__init__.py
Comment thread src/brainlayer/mcp/__init__.py
Comment thread src/brainlayer/store.py Outdated
Comment on lines +154 to +156
except (apsw.Error, RuntimeError) as e:
# Don't let related search failure block the store
logger.warning("Related memory search failed: %s", e)

⚠️ Potential issue | 🟡 Minor

Narrowed except leaves post-search result-parsing unguarded.

The entire try body — including results["documents"][0], results["metadatas"][0], and the zip iteration — falls under the same handler. If store.search() returns an unexpected structure (e.g., missing keys, non-list values), the resulting KeyError, IndexError, or AttributeError would propagate out of _find_related and surface as an unhandled exception in store_memory(), breaking the write for a situation that the docstring explicitly calls non-fatal.

The simplest fix that preserves the PR's intent to avoid a bare except Exception while still covering all failure modes in this helper is to split the guard:

🔧 Suggested fix
-    try:
-        results = store.search(
-            query_embedding=embedding,
-            n_results=limit,
-            project_filter=project,
-        )
-        related = []
-        for doc, meta in zip(results["documents"][0], results["metadatas"][0]):
-            item: Dict[str, Any] = {"content": doc[:300]}
-            if meta.get("summary"):
-                item["summary"] = meta["summary"]
-            if meta.get("project"):
-                item["project"] = meta["project"]
-            if meta.get("content_type"):
-                item["type"] = meta["content_type"]
-            if meta.get("created_at"):
-                item["date"] = meta["created_at"][:10]
-            related.append(item)
-        return related
-    except (apsw.Error, RuntimeError) as e:
-        # Don't let related search failure block the store
-        logger.warning("Related memory search failed: %s", e)
-        return []
+    try:
+        results = store.search(
+            query_embedding=embedding,
+            n_results=limit,
+            project_filter=project,
+        )
+    except (apsw.Error, RuntimeError) as e:
+        logger.warning("Related memory search failed: %s", e)
+        return []
+
+    try:
+        related = []
+        for doc, meta in zip(results["documents"][0], results["metadatas"][0]):
+            item: Dict[str, Any] = {"content": doc[:300]}
+            if meta.get("summary"):
+                item["summary"] = meta["summary"]
+            if meta.get("project"):
+                item["project"] = meta["project"]
+            if meta.get("content_type"):
+                item["type"] = meta["content_type"]
+            if meta.get("created_at"):
+                item["date"] = meta["created_at"][:10]
+            related.append(item)
+        return related
+    except (KeyError, IndexError, TypeError, AttributeError) as e:
+        logger.warning("Related memory result parsing failed: %s", e)
+        return []
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/brainlayer/store.py` around lines 154-156, the _find_related helper
currently only catches apsw.Error and RuntimeError around the whole try block,
leaving result parsing (accessing results["documents"][0],
results["metadatas"][0], and the zip) unguarded; refactor by splitting the try:
first wrap the call to store.search(...) in a try/except that catches apsw.Error
and RuntimeError and logs the existing warning, then separately wrap the
parsing/iteration logic (accessing results, indexing, zip) in a second
try/except that catches KeyError, IndexError, TypeError, and AttributeError (or
a small tuple of these) and logs the same "Related memory search failed: %s"
warning and returns an empty list; reference _find_related and store.search to
locate where to apply the two guarded blocks.
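Stripped of the store-specific details, the split-guard shape looks like this (a simplified sketch: `search` is an injected callable, `RuntimeError` stands in for the DB-level errors since the real helper also catches `apsw.Error`, and only one metadata field is mapped):

```python
import logging

logger = logging.getLogger(__name__)

def find_related(search, embedding, limit=5):
    # Guard 1: the search call itself -- backend failures are non-fatal
    try:
        results = search(query_embedding=embedding, n_results=limit)
    except RuntimeError as e:
        logger.warning("Related memory search failed: %s", e)
        return []
    # Guard 2: result parsing -- a malformed structure is also non-fatal,
    # but logged distinctly so it can be told apart from a search failure
    try:
        related = []
        for doc, meta in zip(results["documents"][0], results["metadatas"][0]):
            item = {"content": doc[:300]}
            if meta.get("project"):
                item["project"] = meta["project"]
            related.append(item)
        return related
    except (KeyError, IndexError, TypeError, AttributeError) as e:
        logger.warning("Related memory result parsing failed: %s", e)
        return []
```

The two distinct log messages are the payoff: a DB outage and a schema drift in the search results become separately greppable instead of sharing one warning.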

EtanHey and others added 2 commits February 21, 2026 02:32
Cursor Bugbot correctly flagged that narrowing to (apsw.Error, RuntimeError)
could let unexpected exceptions (ValueError, TypeError) propagate and abort
store_memory(). The intent is that related-search failures never block the
store, so the broad catch is correct here. Added logging and explanatory comment.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add minimum: 0 to before/after schema + clamp in call_tool handler
- Add server-side clamping for days/limit/importance params
- Drop generic MLX_URL fallback (collision risk), use BRAINLAYER_MLX_URL only
- Add mkdir -p before VACUUM INTO in enrichment-runbook.md
- Add blank line before code block in data-locations.md (MD031)
- Move logger after all imports in views.py (isort compliance)
- Fix import sorting in store.py after apsw removal

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@EtanHey EtanHey merged commit 0f0db16 into main Feb 21, 2026
4 of 5 checks passed
@EtanHey EtanHey deleted the feature/phase-8b-bl-code-quality branch February 21, 2026 00:35
@coderabbitai coderabbitai Bot left a comment


📜 Review details

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 40bc688 and d1b4ec3.

📒 Files selected for processing (1)
  • src/brainlayer/store.py
🧰 Additional context used
📓 Path-based instructions (1)
src/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use ruff for linting and formatting Python code in src/ directory

Files:

  • src/brainlayer/store.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Cursor Bugbot
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
🔇 Additional comments (1)
src/brainlayer/store.py (1)

21-21: Logging setup is clean and idiomatic.

Module-level logging.getLogger(__name__) is the correct pattern here.

Also applies to: 29-30

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/brainlayer/store.py`:
- Around line 134-156: Split the broad try/except around the search+parse into
two focused guards: wrap the call to store.search(...) in its own try that
catches the expected DB-level errors (e.g., apsw.Error, RuntimeError) and logs a
clear message via logger.warning that the search failed; then separate the
parsing/iteration logic that builds related from results (the for doc, meta in
zip(results["documents"][0], results["metadatas"][0]) block and accesses
results["documents"], results["metadatas"], meta.get(...)) into a second try
that only catches structural/parse errors (e.g., KeyError, TypeError,
IndexError, ValueError) and logs a different warning for parse failures before
returning []; leave other exceptions to propagate so programming errors surface.

@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

-return [TextContent(type="text", text=f"Context error: {result['error']}")]
+return _error_result(
+    f"Unknown chunk_id '{chunk_id[:20]}...'. Use chunk_id from brainlayer_search results."
+)

Context error message masks distinct error conditions

Medium Severity

The _context function now replaces all get_context errors with a hardcoded "Unknown chunk_id" message, but get_context returns two distinct error conditions: "Chunk not found" (chunk_id doesn't exist) and "Chunk has no conversation context" (chunk exists but lacks conversation_id/position). When a valid chunk_id has no conversation context, the user now gets a misleading "Unknown chunk_id" error instead of the accurate explanation. The previous code correctly forwarded result['error'] which preserved the distinction.
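A minimal sketch of the forwarding behavior the previous code had (the handler name and dict result shape are hypothetical simplifications of the MCP tool plumbing):

```python
# Forward get_context's own error string so distinct conditions,
# "Chunk not found" vs "Chunk has no conversation context", stay visible.
def context_tool(get_context, chunk_id):
    result = get_context(chunk_id)
    if "error" in result:
        return {"isError": True, "text": f"Context error: {result['error']}"}
    return {"isError": False, "text": result["context"]}
```

Forwarding the upstream message costs nothing and keeps the two failure modes distinguishable to the caller.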

