feat: MLX auto-detection, stall detection, current_context fix #14
Conversation
… fix Phase 2 of Layers v2 plan:

- Config-driven backend: auto-detect MLX on arm64 Mac, Ollama fallback
- Enrichment stall detection: log warnings when chunks exceed timeout
- Heartbeat logging: periodic progress updates for launchd observability
- Runtime backend fallback: if MLX unavailable, fall back to Ollama
- Fix current_context: proper hours→days conversion, chunks table fallback for projects/files when session_context is sparse
- Launchd plists: auto-indexing (30min) and enrichment (1hr) templates
- 18 new tests covering all changes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Introduces macOS launchd automation for background indexing and enrichment processes, enhances …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Run as run_enrichment()
    participant Backend as Backend Detection
    participant MLX as MLX Backend
    participant Ollama as Ollama Backend
    participant Enrich as enrich_batch()
    participant VectorStore as VectorStore
    Run->>Backend: Detect backend (platform/env)
    alt MLX configured on arm64 Darwin
        Backend-->>Run: ENRICH_BACKEND = mlx
        Run->>MLX: Start MLX
        alt MLX available
            MLX-->>Run: Ready
            Run->>Enrich: Process batches (backend=mlx)
        else MLX unavailable
            MLX-->>Run: Error
            Run->>Ollama: Fallback to Ollama
            Ollama-->>Run: Ready
            Run->>Enrich: Process batches (backend=ollama)
        end
    else Default/Linux
        Backend-->>Run: ENRICH_BACKEND = ollama
        Run->>Ollama: Start Ollama
        Ollama-->>Run: Ready
        Run->>Enrich: Process batches (backend=ollama)
    end
    Enrich->>VectorStore: Fetch chunk batch
    VectorStore-->>Enrich: Chunks
    loop Per-chunk with heartbeat
        Enrich->>Backend: call_llm(prompt, backend override)
        Backend-->>Enrich: Enriched output
        Note over Enrich: Log heartbeat & stall check
    end
    Enrich-->>Run: Batch results
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/brainlayer/engine.py (1)
394-398: ⚠️ Potential issue | 🟠 Major

`format()` discards chunk-derived data when `recent_sessions` is empty.

The `current_context()` function now populates `active_projects` and `recent_files` from the `chunks` table (lines 33–97) even when `recent_sessions` has no rows. However, `format()` at line 7 returns "No recent session context available." immediately when `recent_sessions` is empty, preventing display of the projects and files gathered from chunks. This undermines the fallback logic added in this PR.

The fix is straightforward—only return early if all context sources are empty:
Proposed fix

```diff
 def format(self) -> str:
     """Format as concise markdown — designed for voice/quick context."""
-    if not self.recent_sessions:
+    if not self.recent_sessions and not self.active_projects and not self.recent_files:
         return "No recent session context available."
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/brainlayer/engine.py` around lines 394 - 398, The format() method currently returns early whenever recent_sessions is empty, which hides chunk-derived context populated by current_context() (active_projects and recent_files); change the early-return in format() to only return "No recent session context available." when recent_sessions, active_projects, and recent_files are all empty (i.e., check self.recent_sessions, self.active_projects, and self.recent_files together), so that projects/files gathered from the chunks table are included when recent_sessions is empty; update any related conditional logic in format() to build output from active_projects/recent_files when present.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
- scripts/launchd/com.brainlayer.enrich.plist
- scripts/launchd/com.brainlayer.index.plist
- scripts/launchd/install.sh
- src/brainlayer/engine.py
- src/brainlayer/pipeline/enrichment.py
- tests/test_phase2.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{py,js,ts,tsx,jsx,java,cpp,c,go,rust,rb,php}
📄 CodeRabbit inference engine (CLAUDE.md)
Use AST-aware chunking with tree-sitter for code files, targeting ~500 tokens per chunk
Files:
- tests/test_phase2.py
- src/brainlayer/pipeline/enrichment.py
- src/brainlayer/engine.py
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Run linting and formatting with ruff:
ruff check src/ && ruff format src/
Files:
- tests/test_phase2.py
- src/brainlayer/pipeline/enrichment.py
- src/brainlayer/engine.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Run tests with pytest
Files:
tests/test_phase2.py
src/brainlayer/pipeline/enrichment.py
📄 CodeRabbit inference engine (CLAUDE.md)
Set 'think': false in GLM-4.7 API calls during enrichment to avoid unnecessary thinking mode overhead (350+ tokens, 20s delay)
Files:
src/brainlayer/pipeline/enrichment.py
🧠 Learnings (3)
📚 Learning: 2026-02-19T18:41:31.203Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-19T18:41:31.203Z
Learning: Set BRAINLAYER_ENRICH_BACKEND environment variable to select between 'ollama' or 'mlx' backends for local LLM enrichment
Applied to files:
src/brainlayer/pipeline/enrichment.py
📚 Learning: 2026-02-19T18:41:31.203Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-19T18:41:31.203Z
Learning: Applies to src/brainlayer/pipeline/enrichment.py : Set 'think': false in GLM-4.7 API calls during enrichment to avoid unnecessary thinking mode overhead (350+ tokens, 20s delay)
Applied to files:
src/brainlayer/pipeline/enrichment.py
📚 Learning: 2026-02-19T18:41:31.203Z
Learnt from: CR
Repo: EtanHey/brainlayer PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-19T18:41:31.203Z
Learning: Applies to src/brainlayer/vector_store.py : Use thread-local VectorStore connections in parallel enrichment mode to avoid database connection conflicts
Applied to files:
src/brainlayer/pipeline/enrichment.py
🧬 Code graph analysis (3)
tests/test_phase2.py (2)
src/brainlayer/pipeline/enrichment.py (4)
- _detect_default_backend (61-75)
- call_llm (414-423)
- _enrich_one (505-562)
- parse_enrichment (426-502)

src/brainlayer/vector_store.py (2)
- update_enrichment (875-939)
- VectorStore (72-1546)
src/brainlayer/pipeline/enrichment.py (1)
src/brainlayer/storage.py (1)
- store (37-43)
src/brainlayer/engine.py (2)
tests/test_engine.py (2)
- store (334-340)
- store (379-385)

tests/test_think_recall_integration.py (1)
- store (19-23)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Cursor Bugbot
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
🔇 Additional comments (3)
scripts/launchd/com.brainlayer.enrich.plist (1)
1-45: LGTM! Good configuration choices:

`RunAtLoad` is `false` (avoids immediate enrichment on login), `Nice` is 15 (lower priority than the indexing job), and `ProcessType` is `Background`. The `BRAINLAYER_STALL_TIMEOUT` environment variable aligns with the enrichment pipeline's stall detection. Same log rotation concern from the index plist applies here.

src/brainlayer/pipeline/enrichment.py (2)
55-78: LGTM — clean auto-detection with env override precedence.

The detection logic is straightforward: explicit env var → platform check → Ollama fallback. Module-level `ENRICH_BACKEND` is set once at import, and `run_enrichment` correctly uses a local `_run_backend` to avoid mutating global state when falling back.
533-544: LGTM — stall detection adds useful observability.

Per-chunk timing with a configurable threshold is a good pattern for long-running batch jobs. The stall warning goes to stderr, keeping it separate from regular output.
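A minimal sketch of this per-chunk timing pattern, assuming the 300s `BRAINLAYER_STALL_TIMEOUT` default from the PR description; the helper name and loop shape are illustrative, not the actual code:

```python
import os
import sys
import time

# 300s default matches the PR description; overridable via env var
STALL_TIMEOUT = float(os.environ.get("BRAINLAYER_STALL_TIMEOUT", "300"))

def process_with_stall_check(chunks, enrich_fn, timeout=STALL_TIMEOUT):
    """Enrich chunks one by one, warning on stderr when a chunk exceeds the timeout."""
    results = []
    for chunk in chunks:
        start = time.monotonic()
        results.append(enrich_fn(chunk))
        elapsed = time.monotonic() - start
        if elapsed > timeout:
            # Warnings go to stderr so stdout stays clean for regular output
            print(f"WARNING: chunk took {elapsed:.1f}s (stall timeout {timeout}s)",
                  file=sys.stderr)
    return results
```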
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/launchd/com.brainlayer.index.plist`:
- Around line 17-20: The plist currently writes to StandardOutPath and
StandardErrorPath (__HOME__/.local/share/brainlayer/logs/index.log and
index.err) with no rotation; update the system so logs are rotated: either
implement rotation in the brainlayer CLI (e.g., use Python's
logging.handlers.RotatingFileHandler or TimedRotatingFileHandler for the logger
used by the index process) or add a periodic maintenance launchd job (or a
cleanup step in install.sh) that rotates/truncates/archives those files and
keeps a bounded history; ensure any new launchd job references the same paths
(StandardOutPath/StandardErrorPath) and that rotation preserves file
ownership/permissions.
In `@scripts/launchd/install.sh`:
- Around line 29-32: The sed substitutions in scripts/launchd/install.sh using
'|' as delimiter can fail if $HOME or $BRAINLAYER_BIN contain '|' characters;
update the sed invocation in the install.sh script (the sed command that writes
from "$src" to "$dst") to use a safer delimiter (e.g., '@' or ':' ) or escape
the variables before interpolation so any '|' in $HOME/$BRAINLAYER_BIN won't
break the pattern; change the -e "s|__HOME__|$HOME|g" and -e
"s|__BRAINLAYER_BIN__|$BRAINLAYER_BIN|g" occurrences accordingly.
- Line 14: Ensure BRAINLAYER_BIN actually points to an existing executable in
install.sh: after the BRAINLAYER_BIN="${BRAINLAYER_BIN:-...}" assignment, test
that the resolved path is executable (e.g., [ -x "$BRAINLAYER_BIN" ]), and if
not, print a clear error to stderr and exit non‑zero so the plist isn’t
installed; reference the BRAINLAYER_BIN variable and the install.sh script when
making this change.
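The suggested guard might look like this in install.sh; the default binary path is a placeholder, not taken from the repository:

```shell
# Abort install.sh early if the resolved brainlayer binary is not executable.
# The default path below is illustrative, not confirmed from the repo.
BRAINLAYER_BIN="${BRAINLAYER_BIN:-$HOME/.local/bin/brainlayer}"

check_brainlayer_bin() {
    if [ ! -x "$1" ]; then
        echo "error: brainlayer binary not found or not executable: $1" >&2
        return 1
    fi
}

# In install.sh this guard would run before any plist is written:
# check_brainlayer_bin "$BRAINLAYER_BIN" || exit 1
```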
In `@src/brainlayer/engine.py`:
- Around line 458-471: The SQL using SELECT DISTINCT project ... ORDER BY
created_at is non-deterministic because created_at isn't selected; update the
query used in cursor.execute (which populates chunk_projects) to aggregate per
project and sort by the latest timestamp instead—e.g., use GROUP BY project and
ORDER BY MAX(created_at) DESC (or equivalent) so the result order is
deterministic while still returning distinct projects from the chunks table.
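The suggested rewrite can be illustrated against a toy two-column table (the real `chunks` table has more columns than this):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chunks (project TEXT, created_at TEXT)")
conn.executemany(
    "INSERT INTO chunks VALUES (?, ?)",
    [
        ("alpha", "2026-01-01T00:00:00"),
        ("beta", "2026-01-03T00:00:00"),
        ("alpha", "2026-01-02T00:00:00"),
    ],
)

# One row per project, ordered by that project's most recent chunk:
rows = conn.execute(
    "SELECT project FROM chunks GROUP BY project ORDER BY MAX(created_at) DESC"
).fetchall()
projects = [r[0] for r in rows]
```

Unlike `SELECT DISTINCT project ... ORDER BY created_at`, the aggregate makes the sort key explicit, so the order no longer depends on which duplicate row the engine happens to keep.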
In `@src/brainlayer/pipeline/enrichment.py`:
- Around line 610-613: The heartbeat code in enrichment.py can raise
ZeroDivisionError when BRAINLAYER_HEARTBEAT_INTERVAL is 0; update both heartbeat
checks (the block using HEARTBEAT_INTERVAL at the shown lines and the second
block around 627-634) to guard against zero by only using the modulo expression
when HEARTBEAT_INTERVAL > 0 (e.g., check HEARTBEAT_INTERVAL > 0 && done %
HEARTBEAT_INTERVAL == 0), otherwise rely solely on the time-based condition (now
- last_heartbeat > 60); reference the HEARTBEAT_INTERVAL variable and the
surrounding heartbeat print logic to locate and modify the two occurrences.
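A sketch of the suggested guard, pulled out into a helper for clarity; the helper name and the 60s fallback value follow this prompt's description, not the actual code:

```python
def should_heartbeat(done, heartbeat_interval, now, last_heartbeat):
    """True when a heartbeat log line is due.

    Guards the modulo so BRAINLAYER_HEARTBEAT_INTERVAL=0 disables the
    count-based trigger instead of raising ZeroDivisionError; the
    time-based fallback (>60s since last heartbeat) still applies.
    """
    count_due = heartbeat_interval > 0 and done % heartbeat_interval == 0
    time_due = now - last_heartbeat > 60
    return count_due or time_due
```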
- Around line 660-673: The Ollama base URL is being constructed brittlely with
OLLAMA_URL.replace("/api/generate", "") inside the MLX fallback block; instead
derive a canonical OLLAMA_BASE_URL once (at module init alongside
MLX_BASE_URL/MLX_BASE_URL construction) that normalizes/removes any trailing
path or slash, then replace the inline call in the fallback health check (where
OLLAMA_URL.replace(...) is used) with OLLAMA_BASE_URL and use that constant for
all Ollama requests (e.g., in the requests.get call and any future health
checks) so the URL logic is centralized and robust.
In `@tests/test_phase2.py`:
- Around line 178-186: The test test_hours_to_days_precision should exercise the
actual implementation in engine.py rather than re-implementing the formula: call
current_context with specific hours values and assert that sessions() is invoked
with the computed days (1,2,2,1) by mocking or patching the sessions function
(e.g., using unittest.mock.patch or pytest monkeypatch) to capture its
arguments; update the assertions to check the mocked sessions call arguments for
the expected days for hours=4,25,48,1, and remove the standalone formula
assertions so the test fails if current_context's conversion changes.
- Around line 199-238: Tests test_chunks_provide_project_fallback and
test_source_files_fallback create a VectorStore and write directly to the chunks
table without guaranteeing schema initialization and they call store.close()
inline which can leak on failure; update the tests to ensure VectorStore's
schema is initialized before raw inserts (e.g., create or call an init method on
the VectorStore instance) and ensure deterministic cleanup by using a pytest
fixture or a try/finally around the VectorStore lifecycle (referencing
VectorStore and current_context) so the DB connection is always closed on test
failure.
---
Outside diff comments:
In `@src/brainlayer/engine.py`:
- Around line 394-398: The format() method currently returns early whenever
recent_sessions is empty, which hides chunk-derived context populated by
current_context() (active_projects and recent_files); change the early-return in
format() to only return "No recent session context available." when
recent_sessions, active_projects, and recent_files are all empty (i.e., check
self.recent_sessions, self.active_projects, and self.recent_files together), so
that projects/files gathered from the chunks table are included when
recent_sessions is empty; update any related conditional logic in format() to
build output from active_projects/recent_files when present.
```xml
<key>StandardOutPath</key>
<string>__HOME__/.local/share/brainlayer/logs/index.log</string>
<key>StandardErrorPath</key>
<string>__HOME__/.local/share/brainlayer/logs/index.err</string>
```
🧹 Nitpick | 🔵 Trivial
Log files will grow unbounded — consider rotation.
StandardOutPath and StandardErrorPath are append-only with no built-in rotation in launchd. Over time, index.log and index.err will grow without limit. Consider either rotating logs within the brainlayer CLI itself (e.g., logging.handlers.RotatingFileHandler) or adding a periodic cleanup step in install.sh / a separate launchd job.
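A sketch of the RotatingFileHandler option mentioned above; the 5 MB size and backup count of 3 are illustrative choices, not project settings:

```python
import logging
import logging.handlers
import tempfile
from pathlib import Path

def make_rotating_logger(log_path, name="brainlayer.index"):
    """Build a logger whose file rotates at 5 MB, keeping 3 old copies."""
    handler = logging.handlers.RotatingFileHandler(
        log_path, maxBytes=5 * 1024 * 1024, backupCount=3
    )
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

With this inside the CLI, the launchd `StandardOutPath`/`StandardErrorPath` files stay small because routine progress goes through the bounded logger instead.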
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/launchd/com.brainlayer.index.plist` around lines 17 - 20, The plist
currently writes to StandardOutPath and StandardErrorPath
(__HOME__/.local/share/brainlayer/logs/index.log and index.err) with no
rotation; update the system so logs are rotated: either implement rotation in
the brainlayer CLI (e.g., use Python's logging.handlers.RotatingFileHandler or
TimedRotatingFileHandler for the logger used by the index process) or add a
periodic maintenance launchd job (or a cleanup step in install.sh) that
rotates/truncates/archives those files and keeps a bounded history; ensure any
new launchd job references the same paths (StandardOutPath/StandardErrorPath)
and that rotation preserves file ownership/permissions.
```shell
sed \
  -e "s|__HOME__|$HOME|g" \
  -e "s|__BRAINLAYER_BIN__|$BRAINLAYER_BIN|g" \
  "$src" > "$dst"
```
🧹 Nitpick | 🔵 Trivial
Sed delimiter collision if paths contain |.
The sed substitution uses | as the delimiter. If $HOME or $BRAINLAYER_BIN contain a literal |, the substitution will break. This is unlikely for typical paths but a latent fragility. Consider using a less common delimiter or escaping.
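One possible hardening, combining a safer delimiter with escaping of sed replacement metacharacters; the template line and fallback path are illustrative stand-ins for install.sh's `$src`/`$dst`:

```shell
# Hypothetical stand-ins for install.sh's "$src" template and "$dst" output
src=$(mktemp)
dst=$(mktemp)
printf '<string>__HOME__/.local/share/brainlayer</string>\n' > "$src"

# Escape characters that are special in a sed replacement: &, \, and the '@' delimiter
escape_sed() {
    printf '%s' "$1" | sed -e 's/[&\\@]/\\&/g'
}

HOME_ESC=$(escape_sed "$HOME")
BIN_ESC=$(escape_sed "${BRAINLAYER_BIN:-/usr/local/bin/brainlayer}")

# '@' as the delimiter cannot collide with '|' in paths; escaping guards the rest
sed \
    -e "s@__HOME__@${HOME_ESC}@g" \
    -e "s@__BRAINLAYER_BIN__@${BIN_ESC}@g" \
    "$src" > "$dst"
```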
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/launchd/install.sh` around lines 29 - 32, The sed substitutions in
scripts/launchd/install.sh using '|' as delimiter can fail if $HOME or
$BRAINLAYER_BIN contain '|' characters; update the sed invocation in the
install.sh script (the sed command that writes from "$src" to "$dst") to use a
safer delimiter (e.g., '@' or ':' ) or escape the variables before interpolation
so any '|' in $HOME/$BRAINLAYER_BIN won't break the pattern; change the -e
"s|__HOME__|$HOME|g" and -e "s|__BRAINLAYER_BIN__|$BRAINLAYER_BIN|g" occurrences
accordingly.
```python
def test_hours_to_days_precision(self):
    """Hours to days conversion uses ceiling division."""
    # hours=4 should produce days=1 (not 0)
    # hours=25 should produce days=2 (not 1)
    # hours=48 should produce days=2
    assert max(1, -(-4 // 24)) == 1
    assert max(1, -(-25 // 24)) == 2
    assert max(1, -(-48 // 24)) == 2
    assert max(1, -(-1 // 24)) == 1
```
🧹 Nitpick | 🔵 Trivial
Test validates the formula in isolation, not the actual current_context implementation.
These assertions replicate the ceiling-division formula rather than exercising the code path in engine.py. If someone changes the formula in current_context but not here, both would still pass independently. Consider invoking current_context with specific hours values and asserting the resulting days parameter was passed to sessions() (e.g., by mocking sessions).
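The suggested test shape can be sketched with stand-in functions; the real `current_context` and `sessions` live in engine.py and their exact signatures are assumptions here:

```python
from unittest import mock

# Stand-ins for brainlayer.engine.current_context and the sessions() it calls;
# the real signatures in src/brainlayer/engine.py may differ.
def sessions(store, days):
    return []

def current_context(store, hours=24):
    days = max(1, -(-hours // 24))  # ceiling division with a floor of 1 day
    return sessions(store, days=days)

# Patch sessions() to capture the days argument instead of re-deriving the formula
captured = []
with mock.patch(__name__ + ".sessions",
                side_effect=lambda store, days: captured.append(days)):
    for hours in (4, 25, 48, 1):
        current_context(store=None, hours=hours)
```

Because the assertion now observes what `current_context` actually passed to `sessions()`, a change to the conversion formula breaks the test instead of passing silently.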
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_phase2.py` around lines 178 - 186, The test
test_hours_to_days_precision should exercise the actual implementation in
engine.py rather than re-implementing the formula: call current_context with
specific hours values and assert that sessions() is invoked with the computed
days (1,2,2,1) by mocking or patching the sessions function (e.g., using
unittest.mock.patch or pytest monkeypatch) to capture its arguments; update the
assertions to check the mocked sessions call arguments for the expected days for
hours=4,25,48,1, and remove the standalone formula assertions so the test fails
if current_context's conversion changes.
```python
def test_chunks_provide_project_fallback(self, tmp_path):
    """Projects are found from chunks table even without session_context entries."""
    from brainlayer.engine import current_context
    from brainlayer.vector_store import VectorStore

    db_path = tmp_path / "test.db"
    store = VectorStore(db_path)

    # Insert a chunk with a recent created_at and project
    now = datetime.now().isoformat()
    cursor = store.conn.cursor()
    cursor.execute(
        """INSERT INTO chunks (id, content, metadata, source_file, project, content_type, char_count, created_at)
           VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
        ("chunk-1", "test content", "{}", "test.py", "my-project", "user_message", 100, now),
    )

    result = current_context(store, hours=24)
    assert "my-project" in result.active_projects
    store.close()

def test_source_files_fallback(self, tmp_path):
    """Recent files come from chunks.source_file when file_interactions is empty."""
    from brainlayer.engine import current_context
    from brainlayer.vector_store import VectorStore

    db_path = tmp_path / "test.db"
    store = VectorStore(db_path)

    now = datetime.now().isoformat()
    cursor = store.conn.cursor()
    cursor.execute(
        """INSERT INTO chunks (id, content, metadata, source_file, project, content_type, char_count, created_at)
           VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
        ("chunk-1", "test content", "{}", "src/auth.py", "my-project", "user_message", 100, now),
    )

    result = current_context(store, hours=24)
    assert "src/auth.py" in result.recent_files
    store.close()
```
🧹 Nitpick | 🔵 Trivial
Tests create VectorStore but rely on implicit schema initialization.
These integration-style tests insert directly into the chunks table via raw SQL. If the VectorStore schema ever changes (e.g., column rename or new NOT NULL column), these tests will break with a cryptic SQL error rather than a clear assertion failure. This is acceptable for now but worth noting.
Also: store.close() in each test should ideally be in a finally block or a fixture to avoid leaking connections on test failure.
Example using a fixture:

```python
import pytest

@pytest.fixture
def store(self, tmp_path):
    from brainlayer.vector_store import VectorStore
    db_path = tmp_path / "test.db"
    s = VectorStore(db_path)
    yield s
    s.close()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_phase2.py` around lines 199 - 238, Tests
test_chunks_provide_project_fallback and test_source_files_fallback create a
VectorStore and write directly to the chunks table without guaranteeing schema
initialization and they call store.close() inline which can leak on failure;
update the tests to ensure VectorStore's schema is initialized before raw
inserts (e.g., create or call an init method on the VectorStore instance) and
ensure deterministic cleanup by using a pytest fixture or a try/finally around
the VectorStore lifecycle (referencing VectorStore and current_context) so the
DB connection is always closed on test failure.
…, binary validation

- CRITICAL: HEARTBEAT_INTERVAL uses max(1, ...) to prevent ZeroDivisionError
- MAJOR: format() no longer returns early when recent_sessions is empty but active_projects/recent_files are populated from chunks fallback
- MINOR: install.sh validates brainlayer binary exists before installing plists
- Derive OLLAMA_BASE_URL once at module level instead of ad-hoc replace()
- Use GROUP BY + MAX(created_at) for deterministic project ordering
- Add test: projects_without_sessions_still_shown

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…om full conversations

Upgrade enrichment from chunk-level to session-level analysis:

- Conversation reconstruction from ordered chunks with role prefixes
- Single-pass LLM analysis extracting 12 structured fields (summary, intent, outcome, decisions, corrections, learnings, mistakes, patterns, tags, etc.)
- Hybrid schema: flat columns for filterable fields + JSON for variable arrays
- FTS5 full-text search on session narratives
- CLI: `brainlayer enrich-sessions` with Rich progress bar and --stats mode
- MCP: `brainlayer_session_summary` tool (#14) + session context in search results
- 33 TDD tests covering schema, CRUD, reconstruction, parsing, and full pipeline

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: session-level enrichment — decisions, corrections, learnings from full conversations

  Upgrade enrichment from chunk-level to session-level analysis:
  - Conversation reconstruction from ordered chunks with role prefixes
  - Single-pass LLM analysis extracting 12 structured fields (summary, intent, outcome, decisions, corrections, learnings, mistakes, patterns, tags, etc.)
  - Hybrid schema: flat columns for filterable fields + JSON for variable arrays
  - FTS5 full-text search on session narratives
  - CLI: `brainlayer enrich-sessions` with Rich progress bar and --stats mode
  - MCP: `brainlayer_session_summary` tool (#14) + session context in search results
  - 33 TDD tests covering schema, CRUD, reconstruction, parsing, and full pipeline

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: address Bugbot review findings — since filter, dead code, mutation
  - Apply `since` filter to session_context discovery path (Method 1)
  - Remove unused `projects` variable in enrich_session
  - Copy dict before JSON serialization to avoid mutating caller's input
  - Remove unused `batch_size` CLI parameter

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
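The conversation-reconstruction step this commit describes can be sketched like so; the chunk field names (`content_type`, `content`) are assumptions for illustration, not the actual schema:

```python
def reconstruct_conversation(chunks):
    """Join ordered chunks into a single transcript with role prefixes.

    Each chunk is assumed to be a dict with 'content_type' and 'content';
    the real column names in brainlayer's chunks table may differ.
    """
    role_labels = {"user_message": "User", "assistant_message": "Assistant"}
    lines = []
    for chunk in chunks:
        role = role_labels.get(chunk["content_type"], chunk["content_type"])
        lines.append(f"{role}: {chunk['content']}")
    return "\n\n".join(lines)
```

The resulting transcript is what a single-pass LLM analysis would consume to extract session-level fields.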
Summary
Details
Backend auto-detection (`enrichment.py`)

- `_detect_default_backend()` checks platform: arm64 Darwin → mlx, else → ollama
- `BRAINLAYER_ENRICH_BACKEND` env var still overrides
- `call_llm()` accepts `backend` parameter for runtime override

Stall detection & heartbeat
- `BRAINLAYER_STALL_TIMEOUT` (300s)
- `BRAINLAYER_HEARTBEAT_INTERVAL` chunks (25) or 60s

current_context fix (`engine.py`)

Launchd templates (`scripts/launchd/`)

- `com.brainlayer.index.plist` — index every 30min
- `com.brainlayer.enrich.plist` — enrich every 1hr (max 500 chunks/run)
- `install.sh` — placeholder substitution + launchctl bootstrap

Test plan

- `test_phase2.py` (all passing)

🤖 Generated with Claude Code
Note
Medium Risk
Touches enrichment execution flow (backend selection/fallback and logging) and introduces background job templates, which can impact runtime behavior and resource usage if misconfigured.
Overview
Adds macOS `launchd` templates plus an `install.sh` helper to run `brainlayer index` every 30 minutes and `brainlayer enrich` hourly, with placeholder substitution for `__HOME__`/`__BRAINLAYER_BIN__` and log routing.

Improves enrichment runtime behavior by auto-detecting the default backend (preferring MLX on arm64 macOS), supporting `BRAINLAYER_OLLAMA_URL`, falling back from MLX to Ollama when MLX is unavailable, and adding per-chunk stall warnings plus periodic heartbeat progress logs.

Fixes `current_context` to compute `hours`→`days` correctly and to fall back to the `chunks` table for recent projects and files when `session_context`/`file_interactions` are empty; updates and adds tests covering these behaviors.

Written by Cursor Bugbot for commit 5c79125. This will update automatically on new commits.