Code Context Control (C3)


C3 is a local code-intelligence layer for AI coding tools. The useful core is narrow: retrieve less, read less, and offload heavy analysis locally when that actually saves context.

Recommended Default

New installs should use the guided init flow with direct MCP mode:

pip install -r requirements.txt
python cli/c3.py init /path/to/project

c3 init now walks through IDE selection, optional local git init, and optional MCP installation.

If you want the same behavior without prompts:

python cli/c3.py init /path/to/project --force --git --ide codex --mcp-mode direct

direct points the IDE straight at cli/mcp_server.py.
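In practice, direct mode amounts to registering the server script as an MCP command in the IDE's config. A minimal sketch of the kind of entry it produces (the mcpServers shape follows the common MCP config convention; the exact file and keys C3 generates may differ):

```python
import json

def direct_mcp_entry(project_root: str) -> str:
    """Build an illustrative mcpServers entry pointing straight at the C3 server."""
    config = {
        "mcpServers": {
            "c3": {
                "command": "python",
                # Direct mode: the IDE launches cli/mcp_server.py itself,
                # with no proxy process in between.
                "args": [f"{project_root}/cli/mcp_server.py", project_root],
            }
        }
    }
    return json.dumps(config, indent=2)

print(direct_mcp_entry("/path/to/project"))
```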

proxy is still available, but it is now an advanced mode for teams that explicitly want to experiment with dynamic tool filtering:

python cli/c3.py install-mcp /path/to/project --mcp-mode proxy

install-mcp also accepts IDE shorthand positionally when you are already in the project directory:

python cli/c3.py install-mcp claude
python cli/c3.py install-mcp codex
python cli/c3.py install-mcp . gemini

Lean Workflow

Use these tools by default:

  • c3_recall when the topic may have prior history
  • c3_search to locate code
  • c3_file_map before larger code reads
  • c3_compress for understanding-only passes
  • c3_extract before reading .log, .txt, or .jsonl files
  • c3_delegate for heavy non-editing analysis
  • c3_session_log and c3_remember for durable decisions and conventions

What Changed

  • Direct MCP mode is the recommended install path.
  • c3 init now provides a step-wise setup menu for IDE, local Git, and MCP.
  • Proxy mode is optional and documented as advanced.
  • Savings footers, nudges, and response padding are disabled by default.
  • Generated instruction files now describe a pragmatic workflow instead of a maximal ritual.
  • c3 init and install-mcp now sync CLAUDE.md, AGENTS.md, and GEMINI.md into the project root.
  • install-mcp now creates project-local .codex/config.toml and .gemini/settings.json session configs for new projects.
  • The context-budget agent now warns before threshold crossings and automatically captures a snapshot at L2 so recovery is faster after /clear.
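The warn-before-crossing behavior in the last item can be sketched as follows. The threshold values, the warn margin, and the level names are illustrative assumptions, not C3's actual configuration:

```python
def budget_status(used_tokens: int, budget: int, warn_margin: float = 0.1):
    """Classify context usage and flag an early warning just below each threshold.

    Thresholds (50% / 75% / 90%) and the warn margin are illustrative only.
    Returns (highest crossed level or None, warn flag).
    """
    thresholds = {"L1": 0.50, "L2": 0.75, "L3": 0.90}
    ratio = used_tokens / budget
    crossed = [name for name, t in thresholds.items() if ratio >= t]
    # Warn when usage is within warn_margin of the next uncrossed threshold.
    upcoming = [t for t in thresholds.values() if ratio < t]
    warn = any(ratio >= t - warn_margin for t in upcoming)
    return (crossed[-1] if crossed else None), warn

print(budget_status(70_000, 100_000))  # ('L1', True) -- approaching L2, so warn early
```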

Tiered Local AI (Hybrid Intelligence)

C3 now features a sophisticated three-tier local intelligence system powered by Ollama:

  • Tier 1 (Nano): Ultra-fast intent classification and routing using qwen2:0.5b. Sub-100ms classification routes each task to the right tool.
  • Tier 2 (Micro): Efficient Q&A and summarization using models like deepseek-r1:1.5b. Ideal for "last-turn" context retrieval and session summaries.
  • Tier 3 (Base): Complex code analysis and technical reasoning using llama3.2:3b or larger.
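A minimal sketch of the routing idea, using the tier-to-model mapping above. The keyword heuristic is a stand-in for the real qwen2:0.5b intent classifier, which does this with a model call rather than string matching:

```python
# Model names taken from the tier list above.
TIER_MODELS = {
    "nano": "qwen2:0.5b",         # Tier 1: intent classification / routing
    "micro": "deepseek-r1:1.5b",  # Tier 2: Q&A and summarization
    "base": "llama3.2:3b",        # Tier 3: code analysis and reasoning
}

def route(task: str) -> str:
    """Pick a model for a task; crude keyword stand-in for the nano classifier."""
    text = task.lower()
    if any(k in text for k in ("analyze", "refactor", "explain this code")):
        return TIER_MODELS["base"]
    if any(k in text for k in ("summarize", "what did", "recap")):
        return TIER_MODELS["micro"]
    return TIER_MODELS["nano"]

print(route("summarize the last session"))  # deepseek-r1:1.5b
```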

Advanced Optimizations

  • Real-time Streaming: Token-by-token response delivery via SSE for an instant, responsive UI experience.
  • Semantic Caching: Persistent disk-based cache for LLM results cuts latency for repeated tasks to near zero.
  • Dynamic Context Control: Automatic num_ctx optimization right-sizes the model context window for every task type.
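The caching idea can be sketched as a content-addressed disk cache. This version keys on an exact hash of (model, prompt, num_ctx); C3's "semantic" cache may match more loosely, and the class below is an illustration, not C3's implementation:

```python
import hashlib
import json
import tempfile
from pathlib import Path

class DiskCache:
    """Persist LLM results keyed by a hash of (model, prompt, num_ctx)."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def _key(self, model: str, prompt: str, num_ctx: int) -> Path:
        digest = hashlib.sha256(
            json.dumps([model, prompt, num_ctx]).encode()
        ).hexdigest()
        return self.root / f"{digest}.json"

    def get(self, model, prompt, num_ctx):
        path = self._key(model, prompt, num_ctx)
        return json.loads(path.read_text()) if path.exists() else None

    def put(self, model, prompt, num_ctx, result):
        self._key(model, prompt, num_ctx).write_text(json.dumps(result))

cache = DiskCache(tempfile.mkdtemp())
cache.put("llama3.2:3b", "summarize foo.py", 4096, "foo defines Foo")
print(cache.get("llama3.2:3b", "summarize foo.py", 4096))  # foo defines Foo
```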

High-Value Tools

  • c3_search: narrow code retrieval
  • c3_read: surgical reading of symbols (classes, functions) or line ranges
  • c3_file_map: structural map for targeted reads
  • c3_compress: token-reduced file understanding
  • c3_extract: log/data pre-filtering
  • c3_delegate: local Ollama offload for heavy analysis
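The core idea behind surgical reads like c3_read is returning only a symbol's line range instead of the whole file. A toy sketch of the line-range half (c3_read would first resolve a class or function name to its range; this helper is hypothetical, not C3's code):

```python
import tempfile

def read_lines(path: str, start: int, end: int) -> str:
    """Return only lines start..end (1-indexed, inclusive) from a file."""
    with open(path, encoding="utf-8") as fh:
        lines = fh.readlines()
    return "".join(lines[start - 1:end])

# Demo on a throwaway file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("one\ntwo\nthree\nfour\n")
print(read_lines(tmp.name, 2, 3))  # two / three
```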

Benchmarking

python cli/c3.py benchmark /path/to/project

When local Ollama is available, c3_delegate is measured and included in the main benchmark scorecard rather than being treated as an optional side metric.

Permissions (Claude Code only)

C3 can manage Claude Code permission tiers via .claude/settings.local.json. This feature is specific to Claude Code; other IDEs (Codex, Gemini, VS Code Copilot) do not have equivalent permission systems.

python cli/c3.py permissions show          # Show current tier
python cli/c3.py permissions standard      # Apply standard tier
python cli/c3.py permissions read-only     # Apply read-only tier
python cli/c3.py permissions permissive    # Apply permissive tier

Permissions are also integrated into the setup flow:

python cli/c3.py init . --force --ide claude --permissions standard
python cli/c3.py install-mcp claude --permissions standard

During interactive c3 init, a permissions step is offered as Step 4 when the IDE is Claude Code.

Tiers

  • read-only: Safe exploration. No file writes, no git writes, no installs.
  • standard: Normal dev workflow. Edit, build, test, local git (recommended).
  • permissive: Full trust. Everything except dangerous/destructive operations.

All tiers always allow C3 MCP tools (c3_compress, c3_search, etc.) and include a deny list that blocks destructive operations (rm -rf, sudo, git push --force, etc.).
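The deny-list behavior amounts to a pattern check applied before a command runs. A sketch using the patterns named above (the real enforcement is Claude Code reading .claude/settings.local.json, not this function):

```python
# Deny patterns and the always-allow rule are taken from the paragraph above;
# the matcher itself is illustrative only.
DENY_PATTERNS = ("rm -rf", "sudo", "git push --force")
ALWAYS_ALLOW_PREFIX = "c3_"  # C3 MCP tools are allowed in every tier

def is_allowed(command: str) -> bool:
    """Return True if a command passes the deny list (any tier)."""
    if command.startswith(ALWAYS_ALLOW_PREFIX):
        return True
    return not any(pattern in command for pattern in DENY_PATTERNS)

print(is_allowed("git push --force origin main"))  # False
print(is_allowed("c3_search query"))               # True
```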

Permissions can also be set from the Web UI (Settings > Permissions) or the Project Hub. The section only appears when the project IDE is Claude Code.

Advanced / Optional

These remain available, but they are not part of the recommended default path:

  • Proxy-driven dynamic tool filtering
  • c3_route
  • c3_summarize
  • c3_raw
  • c3_why_context
  • c3_token_stats
  • c3_context_status
  • c3_notifications
  • CLAUDE.md lifecycle tools

Web UI

python cli/c3.py ui /path/to/project

The UI now treats direct MCP mode as the recommended default and labels proxy mode as advanced.

Support C3

C3 is a labor of love: as a heavy AI power user, I build the tools I wish already existed. If C3 saves you time (and context tokens), consider supporting its development:

  • 🚀 Sponsor on GitHub: github.com/sponsors/drknowhow
  • ☕ One-time Support: Every coffee helps fund the thousands of API test runs required to keep C3 stable.
  • ⭐ Star the Repo: If you find this useful, a star helps others discover C3.
  • 🛠️ Contribute: As an AI-first builder, I'm always looking for better workflows. Open an issue or a PR if you have ideas!

Sponsorship Tiers

  • $5/mo Supporter: Coffee & API Token Fund.
  • $20/mo AI Power User: Your name in SPONSORS.md + early access to experiments.
  • $75/mo Builder Tier: Priority on feature requests for your AI workflows.
  • $200/mo Partner Tier: Logo in README + monthly AI strategy call.

Notes

  • Claude Code hooks still enforce large-read and log-read guardrails when installed.
  • --git runs a local-only git init; it does not add remotes or use any hosted service.
  • Existing installs are not automatically migrated; rerun install-mcp or init --force to switch defaults.
  • Legacy SHOW_SAVINGS_SUMMARY config is still honored for compatibility.

About

🚀 Local context intelligence layer for AI coding agents. Stop token waste with surgical retrieval, structural mapping, and automated budget management. Powered by Hybrid Local AI (Ollama). Works with Claude Code, Gemini CLI, and more. 🐍 🏠
