cc

Headless agentic engine with 46+ callable tools spanning code editing, forensic analysis, document parsing, threat intelligence, browser automation, and more. Built in Go as a single binary with zero external runtime dependencies.

Architecture

cc runs an HTTP/SSE server. Clients send prompts, and the agent autonomously executes a harness loop: stream LLM responses, execute tool calls, feed results back, repeat until done.

Client (curl / frontend / another agent)
  |
  POST /prompt  {"message": "..."}
  |
  v
cc server (:3100)
  |
  Harness Loop:
    1. Send messages + tool definitions to LLM (streaming SSE)
    2. Accumulate text deltas and tool call chunks
    3. If finish = tool-calls -> execute tools in parallel, append results
    4. If finish = stop -> return final response
    5. Loop back to 1
  |
  SSE events streamed back: text, tool, tool_result, done

Every tool result is saved to disk ($TMPDIR/cc/tool-output/) and a summary is returned to the agent. Large outputs are truncated with a file reference so the agent can read the full content when needed.
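To inspect a full output after a run, list the output directory directly. Only the base path comes from the text above; the file naming inside it is an assumption and may vary:

```shell
# Saved tool outputs live under $TMPDIR/cc/tool-output/ (falls back to /tmp
# when TMPDIR is unset). mkdir -p makes the listing safe before a first run.
OUT="${TMPDIR:-/tmp}/cc/tool-output"
mkdir -p "$OUT"
ls -lt "$OUT"
```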

Quick Start

export OPENAI_API_KEY=sk-...
./build.sh
./cc

cc starts on port 3100 by default.
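Once it is running, a quick smoke test against the health endpoint (port derivation shown for clarity; 3100 is the documented default):

```shell
# Smoke-test the server; /health is the health-check endpoint.
CC_URL="http://localhost:${CC_PORT:-3100}"
curl -s "$CC_URL/health"
```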

Configuration

cc looks for cc.json in the working directory. All fields are optional.

{
  "port": 3100,
  "model": "gpt-4o",
  "base_url": "",
  "system_prompt": "",
  "max_tokens": 16384,
  "context_limit": 128000,
  "log_file": "",
  "db_path": "cc.db",
  "bash_timeout": 30,
  "skill_dirs": [".cursor/skills", ".claude/skills", ".agents/skills"],
  "compaction_model": "",
  "auto_compact": true,
  "compaction_threshold": 0,
  "debug": false,
  "debug_lsp": false,
  "context_paths": [],
  "shell": {
    "path": "/bin/bash",
    "args": []
  },
  "rag": {
    "enabled": false,
    "model": "text-embedding-3-small",
    "chunk_size": 1000,
    "overlap": 200,
    "auto_index": false
  },
  "agents": {
    "default": { "model": "gpt-4o", "max_tokens": 16384 }
  },
  "providers": {
    "anthropic": { "api_key": "" },
    "gemini": { "api_key": "" }
  },
  "mcp_servers": {},
  "lsp_servers": {},
  "plugins": [],
  "formatters": {},
  "permissions": {}
}

Environment Variables

Variable          Config field    Default
OPENAI_API_KEY    (env only)      required
OPENAI_BASE_URL   base_url        https://api.openai.com
CC_MODEL          model           gpt-4o
CC_PORT           port            3100
CC_MAX_TOKENS     max_tokens      16384
CC_CONTEXT_LIMIT  context_limit   128000
CC_SKILL_DIRS     skill_dirs      .cursor/skills:.claude/skills:.agents/skills
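For example, overriding the model and port without touching cc.json (the specific values are illustrative):

```shell
# Environment variables take the place of the corresponding cc.json fields.
export OPENAI_API_KEY=sk-...   # required; no config-file equivalent
export CC_MODEL=gpt-4o-mini    # illustrative model choice
export CC_PORT=3200
# ...then start the server with the overrides in effect:
./cc
```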

API Reference

Core

Method Endpoint Description
GET /health Health check
POST /prompt Send a message to the agent (SSE stream)
POST /answer Answer a pending question from the agent
GET /events Global SSE event stream (diagnostics, LSP)
GET /doc API documentation

Sessions

Method Endpoint Description
GET /sessions List all sessions
GET /sessions/:id Get session details and messages
POST /sessions/:id/fork Fork a session
POST /sessions/:id/revert Revert session to earlier state
GET /sessions/:id/export Export session
POST /sessions/:id/cancel Cancel a running prompt
DELETE /sessions/:id Delete a session
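A typical branching workflow with these endpoints might look like the following sketch. The session id "abc123" is a placeholder; real ids come from the session SSE event:

```shell
# Fork a session, inspect it, then continue it with a new prompt.
# "abc123" is a placeholder session id.
curl -X POST http://localhost:3100/sessions/abc123/fork
curl http://localhost:3100/sessions/abc123
curl -N -X POST http://localhost:3100/prompt \
  -H "Content-Type: application/json" \
  -d '{"message": "take a different approach", "session_id": "abc123"}'
```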

Tools and Skills

Method Endpoint Description
GET /tools List all registered tools
GET /skills List available skills
GET /agents List configured agents

Memory

Method Endpoint Description
GET /memories List persistent memories
POST /memories Create a memory
PUT /memories/:id Update a memory
DELETE /memories/:id Delete a memory
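For example, creating and then listing memories. The request-body field names ("content", "scope") are assumptions; check GET /doc for the actual schema:

```shell
# Create a memory, then list all memories. Field names in the JSON body
# are assumed, not confirmed by this README.
curl -X POST http://localhost:3100/memories \
  -H "Content-Type: application/json" \
  -d '{"content": "this repo uses tabs", "scope": "project"}'
curl http://localhost:3100/memories
```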

MCP (Model Context Protocol)

Method Endpoint Description
GET /mcp/resources List MCP resources
GET /mcp/prompts List MCP prompts

Project

Method Endpoint Description
POST /project/init Initialize project (cc.md)
POST /config/validate Validate configuration
GET /commands List custom commands
POST /commands/:name/run Run a custom command

Prompt Request

curl -N -X POST http://localhost:3100/prompt \
  -H "Content-Type: application/json" \
  -d '{"message": "analyze this codebase"}'

Fields:

  • message (required): the prompt text
  • session_id: continue an existing session
  • agent: agent configuration to use
  • images: array of {url, base64, mime_type} for vision
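Combining these fields, a vision request continuing an earlier session might look like this (the image URL, session id, and agent name are placeholders; "default" matches the sample agents config above):

```shell
# Prompt with session continuation, a named agent, and an image attachment.
# All concrete values here are placeholders.
curl -N -X POST http://localhost:3100/prompt \
  -H "Content-Type: application/json" \
  -d '{
        "message": "describe this screenshot",
        "session_id": "abc123",
        "agent": "default",
        "images": [{"url": "https://example.com/shot.png", "mime_type": "image/png"}]
      }'
```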

SSE Events

Event        Data                           Description
session      {"session_id": "..."}          Session created/resumed
text         text delta                     Agent text streaming
tool         {"name", "call_id", "status"}  Tool execution status (running/done)
tool_result  output text                    Tool output
done         {"finish", "session_id"}       Agent finished
error        error message                  Error occurred
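An illustrative stream for a single prompt follows. The payloads are abridged, and the exact wire formatting is an assumption beyond the event names and data shapes listed above:

```
event: session
data: {"session_id": "abc123"}

event: text
data: Scanning the repository structure...

event: tool
data: {"name": "read_file", "call_id": "call_1", "status": "running"}

event: tool_result
data: (abridged tool output)

event: done
data: {"finish": "stop", "session_id": "abc123"}
```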

Tools (46 built-in)

Code and Files

Tool Description
read_file Read file contents
write Write file contents
edit Apply a targeted edit to a file
multiedit Apply multiple edits to a file in one call
apply_patch Apply a unified diff patch
list_files List directory entries
restore_file Restore file to a previous version
glob Find files matching a glob pattern
grep Search file contents with regex
bash Execute a shell command
lsp Language server operations (hover, goto def, format)
diagnostics Get LSP diagnostics for a file

Search and Discovery

Tool Description
fast_search Platform-native fast file discovery (Spotlight/locate/walk)
search_in_files Search for a term inside parsed document contents across a directory
enumerate_files Walk a directory, matching by extension/MIME/keyword
detect_file_type Magic-byte file type identification (400+ formats)
detect_file_match Check if a file matches extension/MIME/keyword filters
websearch Web search via Exa API
webfetch Fetch and extract content from a URL
codesearch Search code across repositories
sourcegraph Sourcegraph code search
report_finder Web search, download top results, convert to markdown

Document Processing

Tool Description
doc_query Extract text from documents (PDF, DOCX, XLSX, HTML, RTF, OLE) or URLs
parse_document Full document parsing with metadata extraction
extract_metadata Extract EXIF, IPTC, XMP, GPS from images/fonts/media
tokenize_text NLP tokenization and frequency distribution
process_content Load/normalize/defang/prune content pipeline

Threat Intelligence

Tool Description
extract_observables Extract IPs, URLs, domains, hashes, emails, CIDRs from text
build_stix_bundle Build STIX 2.x JSON bundles from observable evidence
generate_yara_rules Generate YARA rules from STIX indicators

Forensics

Tool Description
parse_forensic_artifact Parse EVTX, MFT, prefetch, LNK, registry, PST, iOS backups
read_disk_image Open disk images (raw, E01, VMDK, VHD, QCOW2), list partitions
analyze_memory_dump Volatility-style memory forensics (pslist, netscan, modules)

Data and Analytics

Tool Description
graph_store Store/query graph nodes and edges with full-text search
insight_analytics Query aggregated artifact analytics (counts by type, host, daily)
diff_artifacts Diff forensic artifact snapshots between timepoints
git_history Retrieve files at specific commits/timestamps, diff between commits

Agent and Memory

Tool Description
memory Store/recall persistent memories (project/user/session scoped)
rag_search Semantic search over indexed content
rag_index Index content for RAG retrieval
task Spawn a sub-agent for complex tasks
question Ask the user a question
batch Execute multiple tool calls in parallel
todo Manage a task list
skill Look up or list available skills
vision Analyze images
browser Browser automation (navigate, click, type, snapshot)

Extensibility

Custom Tools

Place JSON tool definitions in .cc/tools/:

{
  "name": "my_tool",
  "description": "What this tool does",
  "command": ["python3", "my_script.py"],
  "parameters": {
    "type": "object",
    "properties": {
      "input": {"type": "string"}
    }
  }
}

MCP Servers

Configure in cc.json:

{
  "mcp_servers": {
    "my-server": {
      "command": ["node", "server.js"],
      "environment": {"API_KEY": "..."}
    }
  }
}

MCP tools are auto-discovered and registered. When the total tool count exceeds 128, tools are auto-partitioned into function sets.

Skills

Skills are SKILL.md files with YAML frontmatter, scanned from skill_dirs:

---
name: my-skill
description: Does something useful
---
# Instructions for the agent

Custom Commands

Place markdown files in .cc/commands/:

---
name: deploy
description: Deploy the application
---
Run the deployment pipeline...

Run via POST /commands/deploy/run.

Plugins

External processes registered in cc.json:

{
  "plugins": [
    {"name": "my-plugin", "description": "...", "command": ["./plugin"]}
  ]
}

Context Management

  • Tracks token usage after each step
  • At 80% of context_limit, triggers proactive compaction via LLM summarization
  • Replaces old messages with a summary + the 10 most recent messages
  • Falls back to blind truncation if summarization fails
  • Reactive overflow handling catches API context-length errors

Resilience

  • Retries on 429, 500, 502, 503, 529 with exponential backoff (up to 5 retries)
  • Doom loop detection stops after 3 identical consecutive tool calls
  • Max 25 steps per prompt
  • Tool output saved to disk, truncated for LLM context

Persistence

SQLite database (cc.db) stores:

  • Sessions and messages
  • File version snapshots
  • Permission rules
  • Memories (project/user/session scoped)
  • RAG embeddings and indexes
  • Cost tracking

Bundled Libraries

cc includes several pure-Go libraries ported from companion projects:

Library Location Capability
gofile pkg/gofile/ Magic-byte file type detection (400+ formats)
gometadata pkg/gometadata/ Metadata extraction (EXIF, IPTC, XMP) for 300+ formats
parsers pkg/parsers/ Document parsing (PDF, DOCX, XLSX, HTML, RTF, OLE, CSV)
tskgo pkg/tskewf/ Forensic disk image analysis (TSK reimplementation)
voltaire pkg/voltaire/ Memory forensics (Volatility reimplementation)
plago pkg/plago/ Forensic artifact parsers (EVTX, MFT, registry, PST, LNK)
gonltk pkg/gonltk/ NLP tokenization and frequency distribution
somehing pkg/somehing/ Everything-style indexed file search

Building

export OPENAI_API_KEY=sk-...
./build.sh    # produces ./cc binary
./cc          # starts server on :3100

Requires Go 1.25+. No CGO dependencies.

License

Private.
