The best way to understand how agents work is to build one.
~300 lines. No frameworks. Easier than you think.
A bloomery is where raw ore becomes iron for the first time. Crude, but real. You understand it differently once you've made it yourself.
You're already inside one. That agent is going to guide you to build one from scratch. Raw HTTP calls, no SDKs, eight steps. When you write the agentic loop yourself, something clicks that no article ever gave you.
A working coding agent in ~300 lines of code. No frameworks, no SDKs. Just raw HTTP calls to your LLM of choice. By the end, your agent will have:
- A conversational interface with multi-turn memory
- A system prompt that gives it identity and purpose
- Four tools: list files, read files, run shell commands, edit files
- An agentic loop that keeps calling the API until the model stops requesting tools
To follow along, you'll need:

- A coding agent that supports the Agent Skills standard (see Install below)
- An API key for your chosen LLM provider (see below)
- Your language of choice (TypeScript, Python, Go, Ruby, or anything that can do HTTP + JSON)
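The agentic loop mentioned above is the heart of the whole thing: call the model, run any tools it asks for, feed the results back, repeat. Here's a minimal sketch in Python — `call_llm` and `run_tool` are placeholders for the functions you'll write yourself, and the response shape is illustrative, not any specific provider's:

```python
def agent_loop(messages, call_llm, run_tool):
    """Keep calling the API until the model stops requesting tools."""
    while True:
        response = call_llm(messages)          # one HTTP round trip
        messages.append(response["message"])   # remember what the model said
        tool_calls = response.get("tool_calls", [])
        if not tool_calls:                     # plain text answer: we're done
            return response["message"]["content"]
        for call in tool_calls:                # otherwise run each requested tool
            result = run_tool(call["name"], call["args"])
            messages.append({"role": "tool", "content": result})
```

That's it — the "agent" part is a while loop around an HTTP call.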
Pick whichever LLM API you want to build against:
| Provider | Model | Free tier | Get a key |
|---|---|---|---|
| Google Gemini | gemini-2.5-flash | ✅ Yes | aistudio.google.com/apikey |
| OpenAI | gpt-4o | ❌ Paid | platform.openai.com/api-keys |
| OpenAI-compatible | Any | Varies | Ollama (local/free), Together AI, Groq, LM Studio, etc. |
| Anthropic | claude-sonnet-4-6 | ❌ Paid | console.anthropic.com |
The tutorial adapts to your provider. The concepts are the same; only the wire format differs.
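To make "only the wire format differs" concrete, here is the same one-turn chat request sketched as JSON request bodies for each provider. Endpoints and field names reflect the public APIs at time of writing — verify against your provider's docs before building on them:

```python
prompt = "Say hello"

# OpenAI Chat Completions: POST https://api.openai.com/v1/chat/completions
openai_body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": prompt}],
}

# Anthropic Messages: POST https://api.anthropic.com/v1/messages
anthropic_body = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,  # required by the Anthropic API
    "messages": [{"role": "user", "content": prompt}],
}

# Gemini: POST .../v1beta/models/gemini-2.5-flash:generateContent
gemini_body = {
    "contents": [{"role": "user", "parts": [{"text": prompt}]}],
}
```

Same idea in all three — a list of conversation turns — just three slightly different shapes.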
This skill works with any coding agent that supports the Agent Skills standard. The installer will ask you which agent you use and set everything up:
```shell
npx skills add mgratzer/bloomery
```

Then open your agent and invoke the skill (usually `/bloomery` or `$bloomery`, depending on your agent).
⚠️ GitHub Copilot CLI: The Copilot CLI did not reliably follow the skill's structured instructions in our tests. It tends to improvise its own project setup instead of using the provided scaffold. If you have a Copilot subscription, use VS Code with GitHub Copilot instead, which works correctly.
Note: Different models can produce different output. We've tested with the latest models from Anthropic, OpenAI, and Google; all work well. Agents tested: Claude Code, Codex CLI, Gemini CLI, Pi, VS Code with GitHub Copilot, and OpenCode.
The skill will:
- Ask you to pick an LLM provider (Gemini, OpenAI/compatible, or Anthropic) and a language, name your agent, and choose a track (Guided, ~60-90 min, or Fast Track, ~30-45 min)
- Scaffold the starter project for you (boilerplate stdin loop, `.env` file, imports, the boring stuff)
- Walk you through 8 incremental steps, validating your code at each one
- Surface "meta moments" connecting what you're building to how the agent you're using works
| Step | What you build | Key concept |
|---|---|---|
| 1 | Basic chat REPL | HTTP POST, response parsing, stdin loop |
| 2 | Multi-turn conversation | Message accumulation, conversation history |
| 3 | System prompt | Agent identity, proactive tool use |
| 4 | Tool definition & detection | Declaring tools, detecting tool calls in responses |
| 5 | Tool execution & agentic loop | Executing tools, sending results, the agent loop |
| 6 | Read File tool | Tool dispatcher pattern |
| 7 | Bash tool | Subprocess execution, timeouts |
| 8 | Edit File tool (optional) | File creation and find-and-replace |
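As a preview of steps 6 and 7: the "tool dispatcher pattern" is just a dict mapping tool names to functions, and the Bash tool is a subprocess call with a timeout. A sketch under those assumptions — tool names and signatures here are illustrative, not prescribed by the tutorial:

```python
import os
import subprocess

def list_files(path="."):
    """Step 1 of the tool set: list a directory's contents."""
    return "\n".join(sorted(os.listdir(path)))

def run_bash(command, timeout=30):
    """Step 7: subprocess execution with a timeout so a hung
    command can't stall the agent loop."""
    try:
        proc = subprocess.run(command, shell=True, capture_output=True,
                              text=True, timeout=timeout)
        return proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return f"error: command timed out after {timeout}s"

# Step 6: the dispatcher — one dict, one lookup.
TOOLS = {"list_files": list_files, "bash": run_bash}

def dispatch(name, args):
    """Route a model-requested tool call to the matching function;
    return errors as strings so the model can read and recover from them."""
    if name not in TOOLS:
        return f"error: unknown tool {name}"
    return TOOLS[name](**args)
```

Returning errors as plain strings (rather than raising) matters: the model sees the error text in the next turn and can correct itself.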
By default, the skill coaches you. It doesn't write code for you. It uses a 4-level hint system:
- Conceptual nudge
- Structural hint
- Pseudocode
- Small snippet (last resort)
If you're stuck or just want to move on, ask the agent to implement a step for you. It'll confirm first, then do it. Some people learn by reading code too.
Based on Geoffrey Huntley's agent workshop.