An LLM-powered coding agent in ~280 lines of Python. Give it a prompt, and it decides which tools to call (Read, Write, Edit, Bash, Glob, Grep), actually runs them on your filesystem, feeds the results back to the model, and keeps looping until the task is done. The tool surface mirrors the real Claude Code CLI.
Built in under 48 hours with Claude Code itself, while reaching CodeCrafters Python leaderboard rank #13.
If you searched for "build an LLM agent from scratch", "OpenAI tool calling Python", "AI coding assistant implementation", or "agent loop tutorial" — this repo is a compact, readable starting point.
```sh
export OPENROUTER_API_KEY=...
./your_program.sh -p "Summarize what app/main.py does and propose 2 refactors."
```

The agent will:
- Call `Read` on `app/main.py`
- Think about the content
- Possibly call `Grep` or `Glob` to explore the project
- Produce its final answer
Or:

```sh
./your_program.sh -p "Add a --verbose flag to app/main.py that prints each tool call."
```

And it will edit the file in place with `Edit` — because the model chose to.
| Tool | Purpose |
|---|---|
| `Read` | Return the contents of a file |
| `Write` | Write content to a file (overwrite) |
| `Edit` | Exact string replace with uniqueness / `replace_all` checks |
| `Bash` | Run a shell command, return stdout/stderr/exit code |
| `Glob` | Find files by glob pattern, sorted by mtime |
| `Grep` | Regex search across files with optional glob filter |
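To make the `Edit` row concrete, here is a minimal sketch of what an exact-string-replace tool with uniqueness and `replace_all` checks can look like. The function name and messages are illustrative, not the repo's actual code; the key idea is that failures come back as `Error: ...` strings rather than exceptions.

```python
from pathlib import Path

def edit_file(path: str, old: str, new: str, replace_all: bool = False) -> str:
    """Exact string replace; refuses ambiguous edits unless replace_all is set."""
    try:
        text = Path(path).read_text()
    except OSError as e:
        return f"Error: {e}"
    count = text.count(old)
    if count == 0:
        return f"Error: string not found in {path}"
    if count > 1 and not replace_all:
        return f"Error: string occurs {count} times; pass replace_all=True"
    Path(path).write_text(text.replace(old, new))
    return f"Replaced {count if replace_all else 1} occurrence(s) in {path}"
```

Returning errors as plain strings lets the model read the failure and retry with a more specific `old` string or `replace_all=True`.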
- OpenAI-compatible chat completions via OpenRouter (defaults to `anthropic/claude-haiku-4.5`, override with `CLAUDE_CODE_MODEL`)
- Tools advertised as JSON Schemas — same shape OpenAI's function-calling uses
- Agent loop: send messages → if the model returns tool calls, execute them all, append results as `role: "tool"` messages, loop; otherwise print the assistant message and exit
- Each tool is a small pure Python function; errors are returned to the model as `Error: ...` strings so it can correct itself
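The loop described above can be sketched in a few lines. This is a simplified version, not the repo's exact code: the client is assumed to be an OpenAI-compatible SDK object, and the tool executor is passed in as a plain callable.

```python
import json

def run_agent(client, model: str, prompt: str, tools: list, execute_tool_call) -> str:
    """Loop until the model answers without requesting any tool calls."""
    messages = [{"role": "user", "content": prompt}]
    while True:  # the whole "agentic" mechanism
        resp = client.chat.completions.create(
            model=model, messages=messages, tools=tools)
        msg = resp.choices[0].message
        if not msg.tool_calls:        # no tools requested: final answer
            return msg.content
        messages.append(msg)          # keep the assistant turn in history
        for call in msg.tool_calls:   # execute every requested tool
            result = execute_tool_call(
                call.function.name, json.loads(call.function.arguments))
            messages.append({"role": "tool",
                             "tool_call_id": call.id,
                             "content": result})
```

Note that the assistant message containing the tool calls must itself be appended before the `role: "tool"` results, or the API will reject the conversation.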
- Understandable: the whole agent is one flat file. You can read it in 10 minutes.
- Model-agnostic: any OpenAI-compatible tool-calling endpoint works (OpenRouter, OpenAI, Together, local vLLM with an OpenAI proxy…).
- Extensible: add a new tool by appending one schema entry and one branch in `execute_tool_call`.
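A hypothetical sketch of what that schema entry and dispatch branch look like, using the OpenAI function-calling shape. The schema wording and error strings are illustrative, not copied from the repo.

```python
# One schema entry per tool, in the shape OpenAI function-calling expects.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "Read",
        "description": "Return the contents of a file",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def execute_tool_call(name: str, args: dict) -> str:
    # One branch per tool; errors go back to the model as strings.
    if name == "Read":
        try:
            with open(args["path"]) as f:
                return f.read()
        except OSError as e:
            return f"Error: {e}"
    return f"Error: unknown tool {name}"
```

Adding a tool means appending one dict to `TOOLS` and one `if name == ...` branch here.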
```sh
export OPENROUTER_API_KEY=sk-or-...
./your_program.sh -p "What does this project do?"
```

Optional env vars:

- `OPENROUTER_BASE_URL` — override the API base (default: `https://openrouter.ai/api/v1`)
- `CLAUDE_CODE_MODEL` — override the model (default: `anthropic/claude-haiku-4.5`)
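Reading those variables with their documented defaults is one-liner territory; a sketch (the function name is made up, only the env var names and defaults come from this README):

```python
import os

def load_config() -> dict:
    """Env-var config with the README's defaults; the API key is required."""
    return {
        "base_url": os.environ.get("OPENROUTER_BASE_URL",
                                   "https://openrouter.ai/api/v1"),
        "model": os.environ.get("CLAUDE_CODE_MODEL",
                                "anthropic/claude-haiku-4.5"),
        "api_key": os.environ["OPENROUTER_API_KEY"],  # KeyError if unset
    }
```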
Requires uv and Python 3.14.
Every AI product claims to be "agentic" these days, but the actual mechanism is just a `while True:` loop around chat completions with tool results appended. This repo is the 280-line version of that insight — once you've seen it, agent frameworks stop feeling magical.
Part of the CodeCrafters "Build Your Own Claude Code" challenge. Inspired by Anthropic's Claude Code.