Headless agentic engine with 46+ callable tools spanning code editing, forensic analysis, document parsing, threat intelligence, browser automation, and more. Built in Go as a single binary with zero external runtime dependencies.
cc runs an HTTP/SSE server. Clients send prompts, and the agent autonomously executes a harness loop: stream LLM responses, execute tool calls, feed results back, repeat until done.
```
Client (curl / frontend / another agent)
  |
  |  POST /prompt {"message": "..."}
  v
cc server (:3100)
  |
  |  Harness Loop:
  |    1. Send messages + tool definitions to LLM (streaming SSE)
  |    2. Accumulate text deltas and tool call chunks
  |    3. If finish = tool-calls -> execute tools in parallel, append results
  |    4. If finish = stop -> return final response
  |    5. Loop back to 1
  v
SSE events streamed back: text, tool, tool_result, done
```
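The harness loop can be sketched as follows. This is a minimal illustration with a stubbed LLM; `call_llm`, `run_tool`, and the message shapes are assumptions for the sketch, not cc's actual internals:

```python
def harness_loop(call_llm, run_tool, messages, max_steps=25):
    """Drive the agent: call the LLM, execute any requested tools,
    feed results back, and repeat until the LLM signals 'stop'."""
    for _ in range(max_steps):
        reply = call_llm(messages)  # {"finish": ..., "text": ..., "tool_calls": [...]}
        messages.append({"role": "assistant", "content": reply["text"]})
        if reply["finish"] == "stop":
            return reply["text"]
        # finish == "tool-calls": run each requested tool and append its result
        for call in reply["tool_calls"]:
            result = run_tool(call["name"], call["args"])
            messages.append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("max steps exceeded")
```

The step cap mirrors the 25-step limit described under reliability below.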
Every tool result is saved to disk ($TMPDIR/cc/tool-output/) and a summary is returned to the agent. Large outputs are truncated with a file reference so the agent can read the full content when needed.
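The save-then-truncate behavior might look like this sketch (the per-call filenames and the 4 KB cutoff are assumptions for illustration):

```python
import os
import tempfile
import uuid

def store_tool_output(output: str, limit: int = 4096) -> str:
    """Persist full tool output to disk; return either the output itself
    or a truncated preview plus a file reference the agent can read later."""
    out_dir = os.path.join(tempfile.gettempdir(), "cc", "tool-output")
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{uuid.uuid4().hex}.txt")
    with open(path, "w") as f:
        f.write(output)
    if len(output) <= limit:
        return output
    return output[:limit] + f"\n... [truncated; full output at {path}]"
```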
```sh
export OPENAI_API_KEY=sk-...
./build.sh
./cc
```

cc starts on port 3100 by default.
cc looks for cc.json in the working directory. All fields are optional.
```json
{
  "port": 3100,
  "model": "gpt-4o",
  "base_url": "",
  "system_prompt": "",
  "max_tokens": 16384,
  "context_limit": 128000,
  "log_file": "",
  "db_path": "cc.db",
  "bash_timeout": 30,
  "skill_dirs": [".cursor/skills", ".claude/skills", ".agents/skills"],
  "compaction_model": "",
  "auto_compact": true,
  "compaction_threshold": 0,
  "debug": false,
  "debug_lsp": false,
  "context_paths": [],
  "shell": {
    "path": "/bin/bash",
    "args": []
  },
  "rag": {
    "enabled": false,
    "model": "text-embedding-3-small",
    "chunk_size": 1000,
    "overlap": 200,
    "auto_index": false
  },
  "agents": {
    "default": { "model": "gpt-4o", "max_tokens": 16384 }
  },
  "providers": {
    "anthropic": { "api_key": "" },
    "gemini": { "api_key": "" }
  },
  "mcp_servers": {},
  "lsp_servers": {},
  "plugins": [],
  "formatters": {},
  "permissions": {}
}
```

| Variable | Config field | Default |
|---|---|---|
| OPENAI_API_KEY | (env only) | required |
| OPENAI_BASE_URL | base_url | https://api.openai.com |
| CC_MODEL | model | gpt-4o |
| CC_PORT | port | 3100 |
| CC_MAX_TOKENS | max_tokens | 16384 |
| CC_CONTEXT_LIMIT | context_limit | 128000 |
| CC_SKILL_DIRS | skill_dirs | .cursor/skills:.claude/skills:.agents/skills |
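Assuming environment variables take precedence over cc.json values (the usual convention; not verified against cc's source), resolution can be sketched as:

```python
import os

# Maps env vars to their cc.json fields, per the table above.
ENV_MAP = {
    "OPENAI_BASE_URL": "base_url",
    "CC_MODEL": "model",
    "CC_PORT": "port",
    "CC_MAX_TOKENS": "max_tokens",
    "CC_CONTEXT_LIMIT": "context_limit",
    "CC_SKILL_DIRS": "skill_dirs",
}

def resolve_config(file_config: dict, env=None) -> dict:
    """Start from cc.json values and overlay any env vars that are set."""
    env = os.environ if env is None else env
    config = dict(file_config)
    for var, field in ENV_MAP.items():
        if var in env:
            config[field] = env[var]
    return config
```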
| Method | Endpoint | Description |
|---|---|---|
| GET | /health | Health check |
| POST | /prompt | Send a message to the agent (SSE stream) |
| POST | /answer | Answer a pending question from the agent |
| GET | /events | Global SSE event stream (diagnostics, LSP) |
| GET | /doc | API documentation |
| Method | Endpoint | Description |
|---|---|---|
| GET | /sessions | List all sessions |
| GET | /sessions/:id | Get session details and messages |
| POST | /sessions/:id/fork | Fork a session |
| POST | /sessions/:id/revert | Revert session to earlier state |
| GET | /sessions/:id/export | Export session |
| POST | /sessions/:id/cancel | Cancel a running prompt |
| DELETE | /sessions/:id | Delete a session |
| Method | Endpoint | Description |
|---|---|---|
| GET | /tools | List all registered tools |
| GET | /skills | List available skills |
| GET | /agents | List configured agents |
| Method | Endpoint | Description |
|---|---|---|
| GET | /memories | List persistent memories |
| POST | /memories | Create a memory |
| PUT | /memories/:id | Update a memory |
| DELETE | /memories/:id | Delete a memory |
| Method | Endpoint | Description |
|---|---|---|
| GET | /mcp/resources | List MCP resources |
| GET | /mcp/prompts | List MCP prompts |
| Method | Endpoint | Description |
|---|---|---|
| POST | /project/init | Initialize project (cc.md) |
| POST | /config/validate | Validate configuration |
| GET | /commands | List custom commands |
| POST | /commands/:name/run | Run a custom command |
```sh
curl -N -X POST http://localhost:3100/prompt \
  -H "Content-Type: application/json" \
  -d '{"message": "analyze this codebase"}'
```

Fields:

- `message` (required): the prompt text
- `session_id`: continue an existing session
- `agent`: agent configuration to use
- `images`: array of `{url, base64, mime_type}` for vision
| Event | Data | Description |
|---|---|---|
| session | {"session_id": "..."} | Session created/resumed |
| text | text delta | Agent text streaming |
| tool | {"name", "call_id", "status"} | Tool execution status (running/done) |
| tool_result | output text | Tool output |
| done | {"finish", "session_id"} | Agent finished |
| error | error message | Error occurred |
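A client has to split the SSE byte stream into events before dispatching on the names in the table above. This standalone sketch handles the standard `event:`/`data:` framing (the framing is ordinary SSE, not cc-specific):

```python
def parse_sse(stream: str):
    """Yield (event, data) pairs from a raw SSE text stream.
    Events are separated by blank lines; multiple data: lines are joined."""
    event, data_lines = "message", []
    for line in stream.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            yield event, "\n".join(data_lines)
            event, data_lines = "message", []
```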
| Tool | Description |
|---|---|
| read_file | Read file contents |
| write | Write file contents |
| edit | Apply a targeted edit to a file |
| multiedit | Apply multiple edits to a file in one call |
| apply_patch | Apply a unified diff patch |
| list_files | List directory entries |
| restore_file | Restore file to a previous version |
| glob | Find files matching a glob pattern |
| grep | Search file contents with regex |
| bash | Execute a shell command |
| lsp | Language server operations (hover, goto def, format) |
| diagnostics | Get LSP diagnostics for a file |
| Tool | Description |
|---|---|
| fast_search | Platform-native fast file discovery (Spotlight/locate/walk) |
| search_in_files | Search for a term inside parsed document contents across a directory |
| enumerate_files | Walk a directory, matching by extension/MIME/keyword |
| detect_file_type | Magic-byte file type identification (400+ formats) |
| detect_file_match | Check if a file matches extension/MIME/keyword filters |
| websearch | Web search via Exa API |
| webfetch | Fetch and extract content from a URL |
| codesearch | Search code across repositories |
| sourcegraph | Sourcegraph code search |
| report_finder | Web search, download top results, convert to markdown |
| Tool | Description |
|---|---|
| doc_query | Extract text from documents (PDF, DOCX, XLSX, HTML, RTF, OLE) or URLs |
| parse_document | Full document parsing with metadata extraction |
| extract_metadata | Extract EXIF, IPTC, XMP, GPS from images/fonts/media |
| tokenize_text | NLP tokenization and frequency distribution |
| process_content | Load/normalize/defang/prune content pipeline |
| Tool | Description |
|---|---|
| extract_observables | Extract IPs, URLs, domains, hashes, emails, CIDRs from text |
| build_stix_bundle | Build STIX 2.x JSON bundles from observable evidence |
| generate_yara_rules | Generate YARA rules from STIX indicators |
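For context, a minimal STIX 2.1 bundle of the kind `build_stix_bundle` produces has this shape. The object structure follows the STIX 2.1 specification; the helper name and the indicator values are illustrative, not cc's API:

```python
import uuid
from datetime import datetime, timezone

def make_indicator_bundle(ipv4: str) -> dict:
    """Wrap a single IPv4 observable in a STIX 2.1 indicator + bundle."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    indicator = {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "pattern": f"[ipv4-addr:value = '{ipv4}']",
        "pattern_type": "stix",
        "valid_from": now,
    }
    return {"type": "bundle", "id": f"bundle--{uuid.uuid4()}", "objects": [indicator]}
```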
| Tool | Description |
|---|---|
| parse_forensic_artifact | Parse EVTX, MFT, prefetch, LNK, registry, PST, iOS backups |
| read_disk_image | Open disk images (raw, E01, VMDK, VHD, QCOW2), list partitions |
| analyze_memory_dump | Volatility-style memory forensics (pslist, netscan, modules) |
| Tool | Description |
|---|---|
| graph_store | Store/query graph nodes and edges with full-text search |
| insight_analytics | Query aggregated artifact analytics (counts by type, host, daily) |
| diff_artifacts | Diff forensic artifact snapshots between timepoints |
| git_history | Retrieve files at specific commits/timestamps, diff between commits |
| Tool | Description |
|---|---|
| memory | Store/recall persistent memories (project/user/session scoped) |
| rag_search | Semantic search over indexed content |
| rag_index | Index content for RAG retrieval |
| task | Spawn a sub-agent for complex tasks |
| question | Ask the user a question |
| batch | Execute multiple tool calls in parallel |
| todo | Manage a task list |
| skill | Look up or list available skills |
| vision | Analyze images |
| browser | Browser automation (navigate, click, type, snapshot) |
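The `batch` tool's fan-out can be sketched with a thread pool. This is purely illustrative; cc's actual scheduler is not described in this README:

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(tool_calls, run_tool, max_workers=8):
    """Execute independent tool calls in parallel, preserving input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_tool, c["name"], c["args"]) for c in tool_calls]
        return [f.result() for f in futures]
```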
Place JSON tool definitions in .cc/tools/:
```json
{
  "name": "my_tool",
  "description": "What this tool does",
  "command": ["python3", "my_script.py"],
  "parameters": {
    "type": "object",
    "properties": {
      "input": {"type": "string"}
    }
  }
}
```

Configure in cc.json:
```json
{
  "mcp_servers": {
    "my-server": {
      "command": ["node", "server.js"],
      "environment": {"API_KEY": "..."}
    }
  }
}
```

MCP tools are auto-discovered and registered. When the total tool count exceeds 128, tools are auto-partitioned into function sets.
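The 128 cap aligns with the common per-request function limit on LLM APIs. The partitioning can be sketched as simple chunking (the actual grouping strategy is an assumption):

```python
def partition_tools(tools, max_per_set=128):
    """Split a flat tool list into function sets of at most max_per_set each."""
    return [tools[i:i + max_per_set] for i in range(0, len(tools), max_per_set)]
```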
Skills are SKILL.md files with YAML frontmatter, scanned from skill_dirs:
```markdown
---
name: my-skill
description: Does something useful
---
# Instructions for the agent
```

Place markdown files in .cc/commands/:

```markdown
---
name: deploy
description: Deploy the application
---
Run the deployment pipeline...
```

Run via POST /commands/deploy/run.
External processes registered in cc.json:
```json
{
  "plugins": [
    {"name": "my-plugin", "description": "...", "command": ["./plugin"]}
  ]
}
```

- Tracks token usage after each step
- At 80% of context_limit, triggers proactive compaction via LLM summarization
- Replaces old messages with a summary + the 10 most recent messages
- Falls back to blind truncation if summarization fails
- Reactive overflow handling catches API context-length errors
- Retries on 429, 500, 502, 503, 529 with exponential backoff (up to 5 retries)
- Doom loop detection stops after 3 identical consecutive tool calls
- Max 25 steps per prompt
- Tool output saved to disk, truncated for LLM context
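The proactive-compaction policy above can be sketched as follows. Token counting and the summarizer are stubbed; only the 80% trigger, the keep-last-10 rule, and the blind-truncation fallback come from the list above:

```python
def maybe_compact(messages, count_tokens, summarize, context_limit,
                  threshold=0.8, keep=10):
    """If usage crosses threshold * context_limit, replace older messages
    with one summary message plus the `keep` most recent messages."""
    used = sum(count_tokens(m["content"]) for m in messages)
    if used < threshold * context_limit or len(messages) <= keep:
        return messages
    old, recent = messages[:-keep], messages[-keep:]
    try:
        summary = summarize(old)
    except Exception:
        return recent  # fall back to blind truncation
    return [{"role": "system",
             "content": f"Summary of earlier conversation: {summary}"}] + recent
```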
SQLite database (cc.db) stores:
- Sessions and messages
- File version snapshots
- Permission rules
- Memories (project/user/session scoped)
- RAG embeddings and indexes
- Cost tracking
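A hypothetical minimal schema for the first two of these might look like the sketch below. cc's real schema is not documented in this README; every table and column name here is invented for illustration:

```python
import sqlite3

def init_db(path=":memory:"):
    """Create illustrative sessions/messages tables and return the connection."""
    con = sqlite3.connect(path)
    con.executescript("""
        CREATE TABLE IF NOT EXISTS sessions (
            id TEXT PRIMARY KEY,
            created_at TEXT NOT NULL
        );
        CREATE TABLE IF NOT EXISTS messages (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            session_id TEXT NOT NULL REFERENCES sessions(id),
            role TEXT NOT NULL,
            content TEXT NOT NULL
        );
    """)
    return con
```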
cc includes several pure-Go libraries ported from companion projects:
| Library | Location | Capability |
|---|---|---|
| gofile | pkg/gofile/ | Magic-byte file type detection (400+ formats) |
| gometadata | pkg/gometadata/ | Metadata extraction (EXIF, IPTC, XMP) for 300+ formats |
| parsers | pkg/parsers/ | Document parsing (PDF, DOCX, XLSX, HTML, RTF, OLE, CSV) |
| tskgo | pkg/tskewf/ | Forensic disk image analysis (TSK reimplementation) |
| voltaire | pkg/voltaire/ | Memory forensics (Volatility reimplementation) |
| plago | pkg/plago/ | Forensic artifact parsers (EVTX, MFT, registry, PST, LNK) |
| gonltk | pkg/gonltk/ | NLP tokenization and frequency distribution |
| somehing | pkg/somehing/ | Everything-style indexed file search |
```sh
export OPENAI_API_KEY=sk-...
./build.sh   # produces ./cc binary
./cc         # starts server on :3100
```

Requires Go 1.25+. No CGO dependencies.
Private.