
# Braito — Operational context for codebases

Braito analyzes TypeScript/JavaScript repos and generates structured knowledge sidecars per file — powered by static analysis, git intelligence, and optional LLM synthesis.


Documentation  ·  Leia em Português


## What it does

Braito scans your codebase and generates a `.ai-notes/` directory with one `.json` + `.md` sidecar per file. Each note contains:

| Field | Description |
| --- | --- |
| `purpose` | What the file does |
| `invariants` | Contracts and assumptions that must hold |
| `sensitiveDependencies` | Risky imports, env vars, external APIs |
| `importantDecisions` | Non-obvious architectural choices |
| `knownPitfalls` | Common failure modes |
| `impactValidation` | Where to verify before shipping — including real coverage data |
| `criticalityScore` | 0–1 heuristic — drives LLM prioritization |

Every field separates observed (static analysis, git, tests) from inferred (LLM synthesis). No hallucination hiding in the facts.
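As a sketch, a sidecar can be thought of as the following TypeScript shape. The field names come from the table above; the types and the example values are illustrative assumptions, not Braito's actual schema:

```ts
// Illustrative shape of a .ai-notes sidecar. Field names match the table
// above; everything else (types, example values) is assumed.
interface AiNote {
  purpose: string;
  invariants: string[];
  sensitiveDependencies: string[];
  importantDecisions: string[];
  knownPitfalls: string[];
  impactValidation: string[];
  criticalityScore: number; // 0–1 heuristic
}

const note: AiNote = {
  purpose: "Discovers source files matching the configured include globs",
  invariants: ["returned paths are repo-relative"],
  sensitiveDependencies: [],
  importantDecisions: [],
  knownPitfalls: [],
  impactValidation: ["src/core/scanner/discoverFiles.test.ts"],
  criticalityScore: 0.62,
};

// A score outside [0, 1] would indicate a scoring bug.
console.log(note.criticalityScore >= 0 && note.criticalityScore <= 1); // true
```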


## Pipeline

```
repo → scanner → AST analyzer → graph engine → git intelligence
     → [cache check] → static note → [LLM synthesis] → .ai-notes/
```

Key constraint: the LLM is invoked only when `criticalityScore >= llmThreshold` (default 0.4). The rest of the pipeline is fully deterministic and auditable.


## Quickstart

```sh
bun install
bun run scan              # discover files
bun run generate          # full pipeline → .ai-notes/
bun run generate:force    # bypass cache
bun run generate:dry      # preview without writing
bun run generate:v        # verbose — per-file signals + phase timers
bun run watch             # regenerate on file change
bun run ui                # web UI at http://localhost:7842
bun run mcp               # MCP server (Cursor / Claude Code)
bun run init:agent        # generate .claude/commands/ slash commands
bun test
```

## Configuration

Create an `ai-notes.config.ts` at the root of your project:

```ts
// Ollama — local, no API key needed
export default {
  llm: { provider: 'ollama', model: 'llama3', llmThreshold: 0.4, temperature: 0.2 },
  language: 'en',
}
```

```ts
// Anthropic — set ANTHROPIC_API_KEY env var
export default {
  llm: { provider: 'anthropic', model: 'claude-sonnet-4-6', llmThreshold: 0.4 },
  language: 'pt-BR',
}
```

```ts
// OpenAI — set OPENAI_API_KEY env var
export default {
  llm: { provider: 'openai', model: 'gpt-4o', llmThreshold: 0.4 },
}
```

```ts
// Claude CLI — uses your logged-in Claude Code session (no API key needed)
// Requires the `claude` binary on PATH — see https://docs.claude.com/en/docs/claude-code
export default {
  llm: { provider: 'claude-cli', model: 'claude-sonnet-4-6', llmThreshold: 0.4 },
}
```

```ts
// Tiered models — cheap default, premium model for the top-criticality files only
export default {
  llm: {
    provider: 'claude-cli',
    model: 'claude-sonnet-4-6',      // default: score >= llmThreshold and < highThreshold
    highModel: 'claude-opus-4-6',    // premium: score >= highThreshold
    highThreshold: 0.7,              // default 0.7 when highModel is set
    llmThreshold: 0.4,               // below this, no LLM at all
  },
}
```
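Under tiered settings, model selection follows the thresholds described above. A hedged sketch of that rule (the function name and signature are hypothetical, not Braito's actual code):

```ts
// Sketch of tiered model selection. Thresholds and defaults match the
// config comments above; the implementation itself is an assumption.
interface TierConfig {
  model: string;
  highModel?: string;
  llmThreshold: number;   // below this: no LLM call at all
  highThreshold?: number; // defaults to 0.7 when highModel is set
}

function pickModel(score: number, cfg: TierConfig): string | null {
  if (score < cfg.llmThreshold) return null; // fully deterministic path
  if (cfg.highModel && score >= (cfg.highThreshold ?? 0.7)) return cfg.highModel;
  return cfg.model;
}

const cfg: TierConfig = {
  model: "claude-sonnet-4-6",
  highModel: "claude-opus-4-6",
  llmThreshold: 0.4,
  highThreshold: 0.7,
};

console.log(pickModel(0.3, cfg)); // null — no LLM at all
console.log(pickModel(0.5, cfg)); // "claude-sonnet-4-6"
console.log(pickModel(0.8, cfg)); // "claude-opus-4-6"
```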

Security: API keys must be set via environment variables only. Never put them in `ai-notes.config.ts`. The `claude-cli` provider skips the API-key path entirely — it authenticates via your local Claude Code session.

Multi-language output — LLM-synthesized content (inferred fields) is generated in the configured language. The `--language` CLI flag overrides the config:

```sh
bun src/cli/index.ts generate --root ./ --language pt-BR
bun src/cli/index.ts generate --root ./ --language es
```

Supported: any BCP 47 language tag (`en`, `pt-BR`, `es`, `fr`, `de`, etc.).

Stale note detection:

```ts
export default { staleThresholdDays: 14 }
```
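The check this option implies can be sketched as follows (a hypothetical illustration of the threshold, not Braito's actual implementation):

```ts
// Hypothetical staleness check: a note is considered stale once it is
// older than staleThresholdDays (14 here, matching the config above).
const staleThresholdDays = 14;
const DAY_MS = 86_400_000;

function isStale(noteGeneratedAtMs: number, nowMs: number = Date.now()): boolean {
  return (nowMs - noteGeneratedAtMs) / DAY_MS > staleThresholdDays;
}

const now = Date.now();
console.log(isStale(now - 20 * DAY_MS, now)); // true  — 20 days old
console.log(isStale(now - 3 * DAY_MS, now));  // false — still fresh
```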

Multi-language source support — Python and Go are opt-in:

```ts
import { MULTI_LANGUAGE_INCLUDE } from './src/core/config/defaults.ts'
export default { include: MULTI_LANGUAGE_INCLUDE }
```

Teach Braito about your internal SDKs — the `analysis` block merges with the built-in defaults (observability, queues, schedulers, realtime, caches, feature flags, etc.). Useful when a private package name drives side effects or a custom HTTP client should count as an API call:

```ts
export default {
  analysis: {
    sideEffectPackages: ['my-corp-tracing', 'internal-queue-client'],
    apiCallPatterns: [
      "myHttpClient\\.(?:get|post|put|delete)\\s*\\(\\s*['\"]([^'\"]+)['\"]",
    ],
  },
}
```
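To sanity-check a custom pattern before committing it, you can run it against a sample call. The client name matches the hypothetical `myHttpClient` example above; note that the config stores the pattern as a string, so backslashes there are doubled:

```ts
// The same pattern as in the config above, written as a native RegExp.
const pattern = /myHttpClient\.(?:get|post|put|delete)\s*\(\s*['"]([^'"]+)['"]/;

const source = `await myHttpClient.post('/v1/orders', payload)`;
const match = source.match(pattern);
console.log(match?.[1]); // "/v1/orders"
```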

## Generated output

```
.ai-notes/
  src/
    core/
      scanner/discoverFiles.ts.json
      scanner/discoverFiles.ts.md
  index.json
  index.md

cache/
  hashes.json
```
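The cache keys each file by a SHA-1 content hash, so unchanged files are skipped on the next run. Conceptually (a sketch using Node's `crypto` module; the real `hashes.json` layout may differ):

```ts
import { createHash } from "node:crypto";

// Per-file SHA-1 content hash used for cache invalidation: if the hash for
// a path is unchanged since the last run, its note is not regenerated.
function sha1(content: string): string {
  return createHash("sha1").update(content).digest("hex");
}

// Hypothetical hashes.json shape: repo-relative path → content hash.
const hashes: Record<string, string> = {
  "src/index.ts": sha1("export {}\n"),
};

console.log(sha1("").length); // 40 — SHA-1 hex digest length
```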

## MCP server

```sh
bun src/cli/index.ts mcp --root ./

# Multi-repo mode — serve multiple projects in one MCP server
bun src/cli/index.ts mcp --roots "api=/path/to/api,web=/path/to/web"
```

In multi-repo mode, each tool call accepts a `repo` argument (use `list_repos` to enumerate aliases). With a single repo registered, `repo` is optional.
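For example, a tool-call payload in multi-repo mode might look like the following sketch (`repo` references an alias from `--roots`; the `path` argument name is an assumption):

```json
{
  "name": "get_file_note",
  "arguments": {
    "repo": "api",
    "path": "src/core/scanner/discoverFiles.ts"
  }
}
```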

| Tool | Description |
| --- | --- |
| `list_repos` | List repositories registered with this MCP server (multi-repo mode) |
| `get_file_note` | Get the full note for a specific file |
| `get_index` | Get the full ranked index |
| `get_impact` | Blast radius of a file — transitive dependents with optional notes |
| `search` | BM25-ranked full-text search across all note fields (fuzzy + prefix) |
| `get_domain` | All files in a domain, sorted by criticality |
| `search_by_criticality` | List files above a criticality threshold |
| `get_architecture_context` | Synthesized architectural overview — top files, domain breakdown, stats |
| `get_business_rules` | Extract business rules, domain constraints, and policy enforcement patterns from a source file |
| `get_governance_context` | Detected governance docs (Docs/, Workflows/, Quality/), style, domain mappings, and constraints |
| `get_divergences` | Structural mismatches between governance docs and the codebase — missing files, forbidden deps, undeclared domains, undocumented hotspots |

Add to your MCP client config (e.g. `~/.cursor/mcp.json` or `~/.claude/config.json`):

```json
{
  "mcpServers": {
    "braito": {
      "command": "bun",
      "args": ["src/cli/index.ts", "mcp", "--root", "/path/to/your/project"]
    }
  }
}
```

## Web UI

```sh
bun src/cli/index.ts ui --root ./
# → http://localhost:7842
```

Browse notes grouped by domain with four tabs per file: Note (purpose, invariants, pitfalls, decisions), Debug (score breakdown, evidence trail), Tests (coverage, related tests), and Graph (interactive D3.js force-directed dependency visualization with zoom, drag, neighbor highlight, and score filter).


## VS Code Extension

The `vscode-extension/` directory contains a native VS Code extension:

- File decorations on high-criticality files and on stale notes
- Hover provider — hovering over an import shows the purpose and criticality of the imported file
- Command: `braito: Show Note for Current File`

## Architecture

| Layer | Path | Responsibility |
| --- | --- | --- |
| CLI | `src/cli/` | Command orchestration — scan, generate, watch, mcp, ui |
| Scanner | `src/core/scanner/` | File discovery via `Bun.Glob` |
| AST | `src/core/ast/` | ts-morph for TS/JS; Python and Go analyzers |
| Graph | `src/core/graph/` | Dependency graphs; bundler alias resolution; cycle detection |
| Git | `src/core/git/` | Churn score, recent commits, co-changed files |
| Tests | `src/core/tests/` | Test discovery; lcov/c8 coverage integration |
| Cache | `src/core/cache/` | SHA-1 per file, stale detection |
| LLM | `src/core/llm/` | Provider abstraction, retry/timeout, prompt builder, Zod validation |
| Output | `src/core/output/` | JSON/Markdown serialization, domain-grouped index, BM25 search index |
| Governance | `src/core/governance/` | Detect project docs (Docs/, Workflows/, Quality/); inject doc evidence |

## CI integration

`.github/workflows/ai-notes.yml` triggers on push to `main`/`master` when source files change. Requires `fetch-depth: 0` for accurate git signals.
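A minimal sketch of such a workflow — the triggers and `fetch-depth: 0` come from the text above, while the job steps, paths filter, and action versions are assumptions to adapt to your setup:

```yaml
# Sketch of .github/workflows/ai-notes.yml — illustrative, not the shipped file.
name: ai-notes
on:
  push:
    branches: [main, master]
    paths: ['src/**']   # assumed source filter
jobs:
  notes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history — required for accurate git signals
      - uses: oven-sh/setup-bun@v2
      - run: bun install
      - run: bun run generate
```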


## Principles

  1. Static analysis first — LLM enriches, not replaces.
  2. Reduced context per file — never send the entire repo to the model.
  3. Observed vs inferred — always separated, always explicit.
  4. Sidecar, not inline — notes live in .ai-notes/, not as code comments.
  5. Criticality-driven — high-churn, high-consumer, hook-heavy files are prioritized.
