Build AI tools first. Compose agents when you need them.
Quick Start · Mental Model · Choose a Primitive · Capability Ladder · Providers · Examples · Docs
openFunctions is an MIT-licensed TypeScript framework for building AI-callable tools and exposing them through MCP, chat adapters, workflows, and agents. Its core runtime is simple:
```
ToolDefinition -> ToolRegistry -> AIAdapter
```
Everything else composes on top of that:
- **workflows** are deterministic orchestration around tools
- **agents** are LLM loops over a filtered registry
- **structured output** is a synthetic tool pattern
- **memory** and **rag** are stateful systems that can be wrapped back into tools
If you understand the tool runtime, the rest of the framework stays legible.
```
defineTool() -> registry.register() -> adapter/server executes tool
                                    -> workflows compose tools
                                    -> agents use filtered tools
                                    -> memory/rag expose more tools
```
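The core pipeline can be sketched in a few lines of plain TypeScript. This is an illustrative simplification, not the framework's actual implementation (`SimpleRegistry` and the type shapes are made up here):

```typescript
// Minimal sketch of the ToolDefinition -> ToolRegistry -> execute pipeline.
// Illustrative only -- the real framework adds schema validation, adapters, etc.
type ToolResult = { success: boolean; data?: unknown; error?: string };

interface ToolDefinition {
  name: string;
  description: string;
  handler: (input: Record<string, unknown>) => Promise<ToolResult>;
}

class SimpleRegistry {
  private tools = new Map<string, ToolDefinition>();

  register(tool: ToolDefinition): void {
    this.tools.set(tool.name, tool);
  }

  async execute(name: string, input: Record<string, unknown>): Promise<ToolResult> {
    const tool = this.tools.get(name);
    if (!tool) return { success: false, error: `Unknown tool: ${name}` };
    return tool.handler(input);
  }
}

// Usage: register a tool, then execute it directly -- no LLM required.
const registry = new SimpleRegistry();
registry.register({
  name: "echo",
  description: "Echo the input back",
  handler: async (input) => ({ success: true, data: input }),
});
```

Everything else in the framework is a consumer of this loop: adapters translate model tool calls into `execute()`, workflows call it in a fixed order, and agents call it in a model-chosen order.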
```bash
git clone https://github.com/Tom-R-Main/openFunctions.git
cd openFunctions
bash setup.sh
cp .env.example .env
npm run test-tools
```

The first thing to build is a tool, not an agent.
A tool is your business logic plus a schema the AI can read:
```typescript
import { defineTool, ok } from "../framework/index.js";

export const rollDice = defineTool({
  name: "roll_dice",
  description: "Roll a die with the given number of sides",
  inputSchema: {
    type: "object",
    properties: {
      sides: { type: "number", description: "Number of sides (default 6)" },
    },
  },
  handler: async ({ sides }) => {
    const rolled = Math.floor(Math.random() * ((sides as number) || 6)) + 1;
    return ok({ rolled });
  },
});
```

That one definition can be:
- executed directly by `registry.execute()`
- exposed to Claude Desktop over MCP
- used inside the interactive chat loop
- composed into workflows
- filtered into agent-specific registries
Read more: Architecture
| Use this | When you want | What it really is |
|---|---|---|
| `defineTool()` | callable AI-facing business logic | the core primitive |
| `pipe()` | deterministic orchestration | code-driven tool/LLM pipeline |
| `defineAgent()` | adaptive multi-step tool use | an LLM loop over a filtered registry |
| `createConversationMemory()` / `createFactMemory()` | thread/fact state | persistence plus memory tools |
| `createRAG()` | semantic document retrieval | pgvector + embeddings + tools |
| `createStore()` / `createPgStore()` | persistence | storage layer, not retrieval |
Rule of thumb:
- Start with a tool.
- Use a workflow when you know the sequence.
- Use an agent only when the model needs to choose what to do next.
- Add memory for state you control.
- Add RAG for document retrieval by meaning.
```bash
npm run create-tool expense_tracker
```

Edit `src/my-tools/expense_tracker.ts`, then run:

```bash
npm run test-tools
npm test
```

To use the tool, start the MCP server or the chat loop:

```bash
npm start
npm run chat -- gemini
```

The same registry powers both.
Workflows are the default “advanced” primitive because the control flow stays explicit:
```typescript
import { pipe, toolStep, llmStep } from "./framework/index.js";

const research = pipe(toolStep(registry, "define_word"))
  .then(async (result) => result.data?.meanings?.[0] ?? "")
  .then(llmStep(adapter, registry, "Explain this simply: {{input}}"));

await research.run({ word: "ephemeral" });
```

Agents use the same tools, but through a filtered registry and a reasoning loop:
```typescript
import { defineAgent } from "./framework/index.js";

const researcher = defineAgent({
  name: "researcher",
  role: "Research Analyst",
  goal: "Find accurate information using available tools",
  toolTags: ["search"],
});
```

Use crews when multiple specialized agents need to collaborate.
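The phrase "an LLM loop over a filtered registry" can be pictured with a self-contained sketch. A scripted decision function stands in for the model so the example runs deterministically; none of these names (`runAgent`, `decide`, `Action`) come from the framework:

```typescript
// Sketch of an agent loop: a "model" repeatedly picks a tool until it is done.
// A real agent would call an LLM; here a scripted stub stands in for it.
type ToolFn = (input: Record<string, unknown>) => Promise<unknown>;
type Action =
  | { tool: string; input: Record<string, unknown> }
  | { done: true; answer: string };

async function runAgent(
  tools: Map<string, ToolFn>,
  decide: (observations: unknown[]) => Action, // LLM stand-in
  maxSteps = 5,
): Promise<string> {
  const observations: unknown[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const action = decide(observations);
    if ("done" in action) return action.answer;
    const tool = tools.get(action.tool);
    if (!tool) throw new Error(`Agent chose unknown tool: ${action.tool}`);
    observations.push(await tool(action.input)); // feed result back into the loop
  }
  return "step budget exhausted";
}

// Usage: a one-tool "researcher" with a scripted decision function.
const tools = new Map<string, ToolFn>([
  ["search", async ({ query }) => `results for ${query}`],
]);
const answer = runAgent(tools, (obs) =>
  obs.length === 0
    ? { tool: "search", input: { query: "ephemeral" } }
    : { done: true, answer: String(obs[0]) },
);
```

The `toolTags` filter in `defineAgent()` corresponds to restricting the `tools` map here: the agent can only choose among what you hand it.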
Persistence:

```typescript
const tasks = createStore<Task>("tasks");
const tasksPg = await createPgStore<Task>("tasks");
```

Memory:

```typescript
const conversations = createConversationMemory();
const facts = createFactMemory();
registry.registerAll(createMemoryTools(conversations, facts));
```

RAG:

```typescript
const rag = await createRAG({ embeddingProvider: "gemini" });
registry.registerAll(rag.createTools());
```

RAG docs: docs/RAG.md
```bash
npm run test-tools         # Interactive CLI — test tools locally
npm run dev                # Dev mode — auto-restarts on save
npm test                   # Run tool-defined automated tests
npm run chat               # Chat with AI using your tools
npm run chat -- gemini     # Force a specific provider
npm run create-tool <name> # Scaffold a new tool
npm run docs               # Generate tool reference docs
npm run inspect            # MCP Inspector web UI
npm start                  # Start MCP server for Claude Desktop / Cursor
```

Set one API key in `.env` and the chat loop will auto-detect the provider.
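Auto-detection can be pictured as a priority scan over environment variables. The variable names below are assumptions for illustration; check `.env.example` for the ones the framework actually reads:

```typescript
// Sketch of provider auto-detection from environment variables.
// The exact variable names are assumed here -- see .env.example for the real ones.
const PROVIDER_KEYS: Array<[provider: string, envVar: string]> = [
  ["gemini", "GEMINI_API_KEY"],
  ["openai", "OPENAI_API_KEY"],
  ["anthropic", "ANTHROPIC_API_KEY"],
  ["xai", "XAI_API_KEY"],
  ["openrouter", "OPENROUTER_API_KEY"],
];

function detectProvider(env: Record<string, string | undefined>): string | null {
  for (const [provider, envVar] of PROVIDER_KEYS) {
    if (env[envVar]) return provider; // first configured key wins
  }
  return null;
}
```

Passing `env` explicitly (rather than reading `process.env` inside) keeps the function testable without touching real credentials.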
| Provider | Default Model | API |
|---|---|---|
| Gemini | `gemini-3-flash-preview` | Function calling |
| OpenAI | `gpt-5.4` | Responses API |
| Anthropic | `claude-sonnet-4-6` | Messages + tool_use |
| xAI | `grok-4.20-0309-reasoning` | Responses API |
| OpenRouter | `google/gemini-3-flash-preview` | OpenAI-compatible |
Examples:
```bash
npm run chat
npm run chat -- gemini
npm run chat -- openai gpt-5.4-pro
npm run chat -- gemini --prompt study-buddy
```

Tests live with tool definitions:
```typescript
defineTool({
  name: "create_task",
  // ...
  tests: [
    { name: "creates a task", input: { title: "Read ch5", subject: "Bio" }, expect: { success: true } },
    { name: "fails without subject", input: { title: "Read ch5" }, expect: { success: false } },
  ],
});
```

The registry validates parameters before handlers run, so schema errors are surfaced clearly enough for both humans and LLMs to recover.
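Pre-handler validation of this kind can be sketched against a tiny subset of JSON Schema. This is a simplified stand-in, not the framework's actual validator; `validate` and `safeExecute` are illustrative names:

```typescript
// Sketch: check required parameters and primitive types against a tiny
// JSON Schema subset before the handler runs, returning a structured,
// LLM-readable error instead of throwing.
type Schema = {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
};
type Result = { success: boolean; data?: unknown; error?: string };

function validate(schema: Schema, input: Record<string, unknown>): string | null {
  for (const key of schema.required ?? []) {
    if (input[key] === undefined) return `Missing required parameter: ${key}`;
  }
  for (const [key, value] of Object.entries(input)) {
    const prop = schema.properties[key];
    if (prop && typeof value !== prop.type) {
      return `Parameter ${key} should be ${prop.type}, got ${typeof value}`;
    }
  }
  return null;
}

async function safeExecute(
  schema: Schema,
  handler: (input: Record<string, unknown>) => Promise<Result>,
  input: Record<string, unknown>,
): Promise<Result> {
  const error = validate(schema, input);
  if (error) return { success: false, error }; // handler never runs on bad input
  return handler(input);
}
```

Returning the error as data rather than throwing is what lets the model read the message and retry with corrected arguments.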
| Domain | Tools | Pattern |
|---|---|---|
| Study Tracker | `create_task`, `list_tasks`, `complete_task` | CRUD + Store |
| Bookmark Manager | `save_link`, `search_links`, `tag_link` | Arrays + Search |
| Recipe Keeper | `save_recipe`, `search_recipes`, `get_random` | Nested Data + Random |
| Expense Splitter | `add_expense`, `split_bill`, `get_balances` | Math + Calculations |
| Workout Logger | `log_workout`, `get_stats`, `suggest_workout` | Date Filtering + Stats |
| Dictionary | `define_word`, `find_synonyms` | External API (no key) |
| Quiz Generator | `create_quiz`, `answer_question`, `get_score` | Stateful Game |
| AI Tools | `summarize_text`, `generate_flashcards` | Tool Calls an LLM |
| Utilities | `calculate`, `convert_units`, `format_date` | Stateless Helpers |
- Architecture: the runtime model, filtered registries, synthetic tools, and execution paths
- RAG: semantic chunking, Gemini/OpenAI embeddings, pgvector schema, HNSW search, and tool integration
```
openFunctions/
├── src/
│   ├── framework/       # Core runtime + composition layers
│   ├── examples/        # Reference tool patterns
│   ├── my-tools/        # Your tools
│   └── index.ts         # MCP entrypoint
├── docs/                # Architecture docs
├── scripts/             # chat, create-tool, docs
├── test-client/         # CLI tester + test runner
├── system-prompts/      # Prompt presets
└── package.json
```
MIT — see LICENSE