Codeable is a tool that lets you wrap existing tools into a code-writing environment. Instead of relying on an LLM to make multiple sequential tool calls (which can be fragile and token-expensive), Codeable allows the agent to write a single script that orchestrates these tools to achieve a goal.
Note: This concept is inspired by Cloudflare's Code Mode, but Codeable runs locally in your environment rather than on a Cloudflare Worker.
AI agents are often better at writing code than they are at managing complex tool-calling chains.
- Standard Tool Calling: The LLM must decide to call a tool, wait for the result, feed it back into context, and decide the next step. This involves multiple round-trips and context switching.
- Codeable: The LLM writes a standard TypeScript/JavaScript function that calls the tools directly. It can use loops, variables, and logic natively. The code is then executed safely in your local environment.
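To make the contrast concrete, here is a hypothetical example of the kind of script the agent might write. The `tools.weather` shape mirrors the weather tool defined later in this README, and the mock implementation below is ours, included only so the sketch runs standalone.

```typescript
// Hypothetical agent-written script: with sequential tool calling, each
// weather lookup would be a separate LLM round-trip; here it is just a loop.
type WeatherTool = (input: { location: string }) => Promise<{ temperature: number; condition: string }>;

async function main({ tools }: { tools: { weather: WeatherTool } }) {
  const cities = ["Tokyo", "Paris", "Lima"];
  const temps: Record<string, number> = {};
  for (const city of cities) {
    temps[city] = (await tools.weather({ location: city })).temperature;
  }
  // Return the warmest city: logic the LLM never has to re-read into context.
  return Object.entries(temps).sort((a, b) => b[1] - a[1])[0][0];
}

// Mock tool so the sketch is self-contained:
const mockTools = {
  weather: async ({ location }: { location: string }) => ({
    temperature: ({ Tokyo: 75, Paris: 60, Lima: 70 } as Record<string, number>)[location] ?? 50,
    condition: "Sunny",
  }),
};

main({ tools: mockTools }).then(console.log); // → "Tokyo"
```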
```sh
npm install codeable
# or
pnpm add codeable
```

Codeable is designed to work seamlessly with the Vercel AI SDK.
First, define the tools your agent needs using the AI SDK's tool() function.
```ts
import { z } from "zod";
import { tool } from "ai";

const weatherTool = tool({
  description: "Get the weather for a location",
  inputSchema: z.object({ location: z.string() }),
  outputSchema: z.object({ temperature: z.number(), condition: z.string() }),
  execute: async ({ location }) => {
    // Mock implementation
    return { temperature: 72, condition: "Sunny" };
  },
});
```

Wrap your tools using the `codeable` helper.
```ts
import { openai } from "@ai-sdk/openai";
import { codeable } from "codeable/ai-sdk";

const myCodeable = codeable({
  systemPrompt: "You are a helpful assistant.",
  llm: openai("gpt-4o"), // Model used to write the code
  tools: {
    weather: weatherTool,
    // ... add other tools here
  },
});
```

Pass the `codeable` tool and its system prompt to your AI generation function.
```ts
import { streamText } from "ai";

const result = await streamText({
  model: openai("gpt-4o"),
  // The codeable system prompt helps the model understand available tools
  system: myCodeable.system,
  tools: {
    codeable: myCodeable.tool, // Expose the meta-tool
  },
  prompt:
    "Check the weather in Tokyo and New York, then tell me which is warmer.",
});
```

- Prompting: When you ask a question, the LLM (via `codeable`) receives a description of all available tools as TypeScript definitions.
- Code Generation: Instead of calling `weather` directly, the LLM writes a script:

  ```ts
  async function main({ tools }) {
    const tokyo = await tools.weather({ location: "Tokyo" });
    const ny = await tools.weather({ location: "New York" });
    return tokyo.temperature > ny.temperature ? "Tokyo" : "New York";
  }
  ```
- Execution: Codeable executes this script locally, handling the tool calls and returning the final result to the main agent.
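One way to picture the execution step is below. This is a simplified mental model, not Codeable's actual internals or sandboxing: the generated source is compiled into a function and handed the real tool implementations. The `runGeneratedScript` helper and the mocked tools are hypothetical names introduced for this sketch.

```typescript
// Simplified mental model of the execution step (NOT Codeable's real internals):
// the generated script defines `main`; we evaluate it and call main({ tools }).
type Tools = Record<string, (input: any) => Promise<any>>;

async function runGeneratedScript(source: string, tools: Tools): Promise<unknown> {
  const factory = new Function(`${source}; return main;`) as () => (ctx: { tools: Tools }) => Promise<unknown>;
  return factory()({ tools });
}

// Demo with a mocked weather tool:
const demoTools: Tools = {
  weather: async ({ location }) =>
    location === "Tokyo"
      ? { temperature: 75, condition: "Sunny" }
      : { temperature: 68, condition: "Cloudy" },
};

const generated = `
async function main({ tools }) {
  const tokyo = await tools.weather({ location: "Tokyo" });
  const ny = await tools.weather({ location: "New York" });
  return tokyo.temperature > ny.temperature ? "Tokyo" : "New York";
}
`;

runGeneratedScript(generated, demoTools).then(console.log); // → "Tokyo"
```

A real implementation would run the script in an isolated context rather than via `new Function`, but the data flow is the same: tool calls happen inside the script, and only the final value goes back to the agent.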