A unified TypeScript/Node.js SDK for building AI-powered applications with multiple providers, 77 built-in tools, a workflow engine, and a flexible mode system — all through a single API.
Website: https://toolpacksdk.com
- Unified API — Single interface for OpenAI, Anthropic, Google Gemini, Ollama, and custom providers
- Streaming — Real-time response streaming across all providers
- Type-Safe — Comprehensive TypeScript types throughout
- Multimodal — Text and image inputs (vision) across all providers
- Embeddings — Vector generation for RAG applications (OpenAI, Gemini, Ollama)
- Workflow Engine — AI-driven planning and step-by-step task execution with progress events
- Mode System — Built-in Agent and Chat modes, plus `createMode()` for custom modes with tool filtering
- Custom Providers — Bring your own provider by implementing the `ProviderAdapter` interface
- 77 Built-in Tools across 10 categories:
| Category | Tools | Description |
|---|---|---|
| `fs-tools` | 18 | File system operations — read, write, search, tree, glob, batch read/write, etc. |
| `coding-tools` | 12 | Code analysis — AST parsing, go to definition, find references, rename symbols, extract function |
| `git-tools` | 9 | Version control — status, diff, log, blame, branch, commit, checkout |
| `db-tools` | 7 | Database operations — query, schema, tables, count, insert, update, delete (SQLite, PostgreSQL, MySQL) |
| `exec-tools` | 6 | Command execution — run, run shell, background processes, kill, read output |
| `http-tools` | 5 | HTTP requests — GET, POST, PUT, DELETE, download |
| `web-tools` | 9 | Web interaction — fetch, search (Tavily/Brave/DuckDuckGo), scrape, extract links, map, metadata, sitemap, feed, screenshot |
| `system-tools` | 5 | System info — env vars, cwd, disk usage, system info, set env |
| `diff-tools` | 3 | Patch operations — create, apply, and preview diffs |
| `cloud-tools` | 3 | Deployments — deploy, status, list (via Netlify) |
- Node.js >= 20 is required
```bash
npm install toolpack-sdk
```

```typescript
import { Toolpack } from 'toolpack-sdk';

// Initialize with one or more providers
const sdk = await Toolpack.init({
  providers: {
    openai: {},      // Reads OPENAI_API_KEY from env
    anthropic: {},   // Reads ANTHROPIC_API_KEY from env
  },
  defaultProvider: 'openai',
  tools: true,           // Load all 77 built-in tools
  defaultMode: 'agent',  // Agent mode with workflow engine
});

// Generate a completion
const response = await sdk.generate('What is the capital of France?');
console.log(response.content);

// Stream a response
for await (const chunk of sdk.stream({
  model: 'gpt-4.1',
  messages: [{ role: 'user', content: 'Tell me a story' }],
})) {
  process.stdout.write(chunk.delta);
}

// Switch providers on the fly
const anthropicResponse = await sdk.generate({
  model: 'claude-sonnet-4-20250514',
  messages: [{ role: 'user', content: 'Hello from Anthropic!' }],
}, 'anthropic');
```

If you only need one provider, the `provider` shorthand works too:

```typescript
const sdk = await Toolpack.init({
  provider: 'openai',
  tools: true,
});
```

| Provider | Models | Notes |
|---|---|---|
| OpenAI | GPT-4.1 Mini, GPT-4.1, GPT-5.1, GPT-5.2, GPT-5.4, GPT-5.4 Pro | Full support including reasoning models |
| Anthropic | Claude Sonnet 4, Claude 3.5 Haiku, Claude 3 Opus | No embeddings support |
| Google Gemini | Gemini 2.0 Flash, Gemini 1.5 Pro, Gemini 1.5 Flash | Synthetic tool call IDs |
| Ollama | Auto-discovered from locally pulled models | Capability detection via probing |
| Capability | OpenAI | Anthropic | Gemini | Ollama |
|---|---|---|---|---|
| Chat completions | ✅ | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ | ✅ |
| Tool/function calling | ✅ | ✅ | ✅ | ✅ |
| Multi-round tool loop | ✅ | ✅ | ✅ | ✅ |
| Embeddings | ✅ | ❌ | ✅ | ✅ |
| Vision/images | ✅ | ✅ | ✅ | ✅ (model-dependent) |
| Tool name sanitization | ✅ (auto) | ✅ (auto) | ✅ (auto) | ✅ (auto) |
| Model discovery | Static list | Static list | Static list | Dynamic (/api/tags + /api/show) |
- OpenAI: Supports `reasoningTier` and `costTier` on model info for GPT-5.x reasoning models. API key read from `OPENAI_API_KEY` or `TOOLPACK_OPENAI_KEY`.
- Anthropic: Does not support embeddings. Tool results are converted to `tool_result` content blocks automatically. `tool_choice: none` is handled by omitting tools from the request. `max_tokens` defaults to `4096` if not specified. API key read from `ANTHROPIC_API_KEY` or `TOOLPACK_ANTHROPIC_KEY`.
- Gemini: Uses synthetic tool call IDs (`gemini_<timestamp>_<random>`) since the Gemini API doesn't return tool call IDs natively. Tool results are converted to `functionResponse` parts in chat history automatically. API key read from `GOOGLE_GENERATIVE_AI_KEY` or `TOOLPACK_GEMINI_KEY`.
- Ollama: Auto-discovers all locally pulled models when registered as `{ ollama: {} }`. Uses `/api/show` and tool probing to detect capabilities (tool calling, vision, embeddings) per model. Models without tool support are automatically stripped of tools and given a system instruction to prevent hallucinated tool usage. Uses synthetic tool call IDs (`ollama_<timestamp>_<random>`). Embeddings use the modern `/api/embed` batch endpoint. Legacy per-model registration (`{ 'ollama-llama3': {} }`) is also supported.
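The synthetic tool call ID schemes above (`gemini_<timestamp>_<random>`, `ollama_<timestamp>_<random>`) can be pictured as a simple helper. This is an illustrative sketch, not the SDK's actual implementation:

```typescript
// Illustrative only: one way a provider-prefixed synthetic tool call ID
// could be generated when the upstream API returns no ID of its own.
function syntheticToolCallId(provider: string): string {
  const timestamp = Date.now();
  const random = Math.floor(Math.random() * 2 ** 48).toString(36);
  return `${provider}_${timestamp}_${random}`;
}
```

Because the IDs only need to be unique within a single conversation, a timestamp plus a short random suffix is typically sufficient.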
Bring your own provider (e.g., xAI/Grok, Cohere, Mistral) by extending `ProviderAdapter`:

```typescript
import {
  Toolpack,
  ProviderAdapter,
  CompletionRequest,
  CompletionResponse,
  CompletionChunk,
  EmbeddingRequest,
  EmbeddingResponse,
  ProviderModelInfo,
} from 'toolpack-sdk';

class XAIAdapter extends ProviderAdapter {
  name = 'xai';
  getDisplayName(): string { return 'xAI'; }
  async getModels(): Promise<ProviderModelInfo[]> { return [/* ... */]; }
  async generate(req: CompletionRequest): Promise<CompletionResponse> { /* ... */ }
  async *stream(req: CompletionRequest): AsyncGenerator<CompletionChunk> { /* ... */ }
  async embed(req: EmbeddingRequest): Promise<EmbeddingResponse> { /* ... */ }
}

// Pass as array or record
const sdk = await Toolpack.init({
  providers: { openai: {} },
  customProviders: [new XAIAdapter()],
  // or: customProviders: { xai: new XAIAdapter() }
});

// Use it
const response = await sdk.generate('Hello!', 'xai');
```

```typescript
// Nested list of all providers and their models
const providers = await sdk.listProviders();
// [
//   {
//     name: 'openai',
//     displayName: 'OpenAI',
//     type: 'built-in',
//     models: [
//       {
//         id: 'gpt-4.1',
//         displayName: 'GPT-4.1',
//         capabilities: { chat: true, streaming: true, toolCalling: true, embeddings: false, vision: true },
//         contextWindow: 1047576,
//         maxOutputTokens: 32768,
//         inputModalities: ['text', 'image'],
//         outputModalities: ['text'],
//         reasoningTier: null,
//         costTier: 'medium',
//       },
//       ...
//     ]
//   },
//   { name: 'ollama', displayName: 'Ollama', type: 'built-in', models: [...] },
//   { name: 'xai', displayName: 'xAI', type: 'custom', models: [...] },
// ]

// Flat list across all providers
const allModels = await sdk.listModels();

// Filter by capability
const toolModels = allModels.filter(m => m.capabilities.toolCalling);
const visionModels = allModels.filter(m => m.capabilities.vision);
const reasoningModels = allModels.filter(m => m.reasoningTier);
```

Modes control AI behavior by setting a system prompt, filtering available tools, and configuring the workflow engine. The SDK ships with two built-in modes and supports unlimited custom modes.
| Mode | Tools | Workflow | Description |
|---|---|---|---|
| Agent | All tools | Planning + step execution + dynamic steps | Full autonomous access — read, write, execute, browse |
| Chat | Web/HTTP only | Direct execution (no planning) | Conversational assistant with web access |
```typescript
import { createMode, Toolpack } from 'toolpack-sdk';

// Read-only code reviewer
const reviewMode = createMode({
  name: 'review',
  displayName: 'Code Review',
  systemPrompt: 'You are a senior code reviewer. Read files but NEVER modify them.',
  allowedToolCategories: ['filesystem', 'coding', 'git'],
  blockedTools: ['fs.write_file', 'fs.delete_file', 'fs.append_file'],
  baseContext: {
    includeWorkingDirectory: true,
    includeToolCategories: true,
  },
  workflow: {
    planning: { enabled: true },
    steps: { enabled: true, retryOnFailure: true },
    progress: { enabled: true },
  },
});

// Pure conversation — no tools at all
const simpleChat = createMode({
  name: 'simple-chat',
  displayName: 'Simple Chat',
  systemPrompt: 'You are a helpful assistant. Provide clear and concise responses.',
  blockAllTools: true,  // Disables all tool calls
});

const sdk = await Toolpack.init({
  providers: { openai: {} },
  tools: true,
  customModes: [reviewMode, simpleChat],
  defaultMode: 'agent',
});

// Switch modes at runtime
sdk.setMode('review');
sdk.setMode('simple-chat');
sdk.cycleMode();  // Cycles through all registered modes
```

| Option | Type | Default | Description |
|---|---|---|---|
| `name` | string | required | Unique identifier |
| `displayName` | string | required | Human-readable label for UI |
| `systemPrompt` | string | required | System prompt injected into every request |
| `description` | string | `displayName` | Short tooltip description |
| `allowedToolCategories` | string[] | `[]` (all) | Tool categories to allow. Empty = all allowed |
| `blockedToolCategories` | string[] | `[]` | Tool categories to block. Overrides allowed |
| `allowedTools` | string[] | `[]` (all) | Specific tools to allow. Empty = all allowed |
| `blockedTools` | string[] | `[]` | Specific tools to block. Overrides allowed |
| `blockAllTools` | boolean | `false` | If true, disables all tools (pure conversation) |
| `baseContext` | object/false | `undefined` | Controls working directory and tool category injection |
| `workflow` | WorkflowConfig | `undefined` | Planning, step execution, and progress configuration |
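The filtering rules above (empty allow-lists mean allow everything, and blocked entries always override allowed ones) can be sketched as a small predicate. The `ModeFilter` shape and `isToolAllowed` helper are illustrative, not SDK internals:

```typescript
// Illustrative sketch of the mode tool-filtering rules: blocked entries
// win over allowed ones, and an empty allow-list means "allow all".
interface ModeFilter {
  allowedToolCategories?: string[];
  blockedToolCategories?: string[];
  allowedTools?: string[];
  blockedTools?: string[];
  blockAllTools?: boolean;
}

function isToolAllowed(tool: { name: string; category: string }, mode: ModeFilter): boolean {
  if (mode.blockAllTools) return false;                                  // pure conversation
  if (mode.blockedTools?.includes(tool.name)) return false;              // explicit block wins
  if (mode.blockedToolCategories?.includes(tool.category)) return false;
  const allowCats = mode.allowedToolCategories ?? [];
  if (allowCats.length > 0 && !allowCats.includes(tool.category)) return false;
  const allowTools = mode.allowedTools ?? [];
  if (allowTools.length > 0 && !allowTools.includes(tool.name)) return false;
  return true;
}
```

With this reading, the `review` mode example above allows `fs.read_file` (category `filesystem` is allowed) but rejects `fs.write_file` (explicitly blocked).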
The workflow engine enables AI agents to plan and execute complex tasks step-by-step, with progress tracking, retries, and dynamic step additions.
- Planning — The AI generates a structured step-by-step plan from the user's request
- Execution — Each step is executed sequentially with tool access
- Dynamic Steps — New steps can be added during execution based on results
- Retries — Failed steps are retried automatically (configurable)
- Progress — Events are emitted at each stage for UI integration
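The retry behavior can be pictured as a bounded loop around each step. The `executeWithRetry` helper below is an illustrative sketch, not the engine's actual code:

```typescript
// Illustrative sketch of "retry failed steps up to maxRetries" semantics:
// run the step, and on failure try again until the budget is exhausted.
async function executeWithRetry<T>(
  step: () => Promise<T>,
  maxRetries = 3,           // default mirrors the WorkflowConfig default
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxRetries + 1; attempt++) {
    try {
      return await step();  // success: stop retrying
    } catch (err) {
      lastError = err;      // remember the failure and try again
    }
  }
  throw lastError;          // all attempts failed
}
```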
```typescript
const sdk = await Toolpack.init({
  providers: { openai: {} },
  tools: true,
  defaultMode: 'agent',  // Agent mode has workflow enabled
});

// Complex tasks are automatically planned and executed step-by-step
const result = await sdk.generate('Build me a REST API with user authentication');

// Or stream the response
for await (const chunk of sdk.stream({
  model: 'gpt-4.1',
  messages: [{ role: 'user', content: 'Refactor this codebase' }],
})) {
  process.stdout.write(chunk.delta);
}
```

Workflow status is communicated via events (not in stream content), making it easy to build progress UIs:
```typescript
const executor = sdk.getWorkflowExecutor();

// Progress updates (ideal for status bars / shimmer text)
executor.on('workflow:progress', (progress) => {
  // progress.status: 'planning' | 'awaiting_approval' | 'executing' | 'completed' | 'failed'
  // progress.currentStep, progress.totalSteps, progress.percentage
  // progress.currentStepDescription — includes retry info if retrying
  console.log(`[${progress.percentage}%] Step ${progress.currentStep}/${progress.totalSteps}: ${progress.currentStepDescription}`);
});

// Step lifecycle
executor.on('workflow:step_start', (step, plan) => {
  console.log(`Starting: ${step.description}`);
});
executor.on('workflow:step_complete', (step, plan) => {
  console.log(`Completed: ${step.description}`);
});
executor.on('workflow:step_failed', (step, error, plan) => {
  console.log(`Failed: ${step.description} — ${error.message}`);
});
executor.on('workflow:step_retry', (step, attempt, plan) => {
  console.log(`Retrying: ${step.description} (attempt ${attempt})`);
});
executor.on('workflow:step_added', (step, plan) => {
  console.log(`Dynamic step added: ${step.description}`);
});

// Workflow completion
executor.on('workflow:completed', (plan, result) => {
  console.log(`Done! ${result.metrics.stepsCompleted} steps in ${result.metrics.totalDuration}ms`);
});
executor.on('workflow:failed', (plan, error) => {
  console.log(`Workflow failed: ${error.message}`);
});
```

```typescript
interface WorkflowConfig {
  planning?: {
    enabled: boolean;             // Enable planning phase
    requireApproval?: boolean;    // Pause for user approval before executing
    planningPrompt?: string;      // Custom system prompt for plan generation
    maxSteps?: number;            // Max steps in a plan (default: 20)
  };
  steps?: {
    enabled: boolean;             // Enable step-by-step execution
    retryOnFailure?: boolean;     // Retry failed steps (default: true)
    maxRetries?: number;          // Max retries per step (default: 3)
    allowDynamicSteps?: boolean;  // Allow adding steps during execution
    maxTotalSteps?: number;       // Max total steps including dynamic (default: 50)
  };
  progress?: {
    enabled: boolean;             // Emit progress events (default: true)
    reportPercentage?: boolean;   // Include completion percentage
  };
  onFailure?: {
    strategy: 'abort' | 'skip' | 'ask_user' | 'try_alternative';
  };
}
```

The SDK emits events for tool execution, useful for building tool activity logs:
```typescript
const client = sdk.getClient();

// Detailed log of every tool execution
client.on('tool:log', (event) => {
  console.log(`Tool: ${event.name} (${event.status}) — ${event.duration}ms`);
  console.log(`  Args: ${JSON.stringify(event.arguments)}`);
  console.log(`  Result: ${event.result.substring(0, 200)}...`);
});

// Progress events (started, completed, failed)
client.on('tool:started', (event) => { /* ... */ });
client.on('tool:completed', (event) => { /* ... */ });
client.on('tool:failed', (event) => { /* ... */ });
```

In addition to the 77 built-in tools, you can create and register your own custom tool projects using `createToolProject()`:
```typescript
import { Toolpack, createToolProject } from 'toolpack-sdk';

// Define a custom tool project
const myToolProject = createToolProject({
  key: 'my-tools',
  name: 'my-tools',
  displayName: 'My Custom Tools',
  version: '1.0.0',
  description: 'Custom tools for my application',
  category: 'custom',
  author: 'Your Name',
  tools: [
    {
      name: 'my.hello',
      displayName: 'Hello World',
      description: 'A simple hello world tool',
      category: 'custom',
      parameters: {
        type: 'object',
        properties: {
          name: { type: 'string', description: 'Name to greet' },
        },
        required: ['name'],
      },
      execute: async (args) => {
        return `Hello, ${args.name}!`;
      },
    },
  ],
});

// Register custom tools at init
const sdk = await Toolpack.init({
  provider: 'openai',
  tools: true,                   // Load built-in tools
  customTools: [myToolProject],  // Add your custom tools
});
```

| Field | Type | Required | Description |
|---|---|---|---|
| `key` | string | ✓ | Unique identifier (lowercase, hyphens only) |
| `name` | string | ✓ | Package name |
| `displayName` | string | ✓ | Human-readable name |
| `version` | string | ✓ | Semver version |
| `description` | string | ✓ | Short description |
| `category` | string | ✓ | Tool category for filtering |
| `author` | string | | Author name |
| `tools` | ToolDefinition[] | ✓ | Array of tool definitions |
| `dependencies` | Record<string, string> | | npm dependencies (validated at load) |
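The `key` format rule ("lowercase, hyphens only") suggests a validation along these lines. The exact regex, including whether digits are permitted, is an assumption rather than the SDK's documented check:

```typescript
// Illustrative check for the documented key format: lowercase letters,
// hyphens, and (assumed) digits, starting with a letter.
function isValidToolProjectKey(key: string): boolean {
  return /^[a-z][a-z0-9-]*$/.test(key);
}
```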
The SDK supports multimodal inputs (text + images) across all vision-capable providers. Images can be provided in three formats:
```typescript
import { Toolpack, ImageFilePart, ImageDataPart, ImageUrlPart } from 'toolpack-sdk';

const sdk = await Toolpack.init({ provider: 'openai' });

// 1. Local file path (auto-converted to base64)
const filePart: ImageFilePart = {
  type: 'image_file',
  image_file: { path: '/path/to/image.png', detail: 'high' }
};

// 2. Base64 data (inline)
const dataPart: ImageDataPart = {
  type: 'image_data',
  image_data: { data: 'base64...', mimeType: 'image/png', detail: 'auto' }
};

// 3. HTTP URL (passed through or downloaded depending on provider)
const urlPart: ImageUrlPart = {
  type: 'image_url',
  image_url: { url: 'https://example.com/image.png', detail: 'low' }
};

// Use in messages
const response = await sdk.generate({
  model: 'gpt-4.1',
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: 'What is in this image?' },
      filePart
    ]
  }]
});
```

| Provider | File Path | Base64 | URL |
|---|---|---|---|
| OpenAI | Converted to base64 | ✓ Native | ✓ Native |
| Anthropic | Converted to base64 | ✓ Native | Downloaded → base64 |
| Gemini | Converted to base64 | ✓ Native | Downloaded → base64 |
| Ollama | Converted to base64 | ✓ Native | Downloaded → base64 |
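For the "converted to base64" rows above, the conversion amounts to reading the file and emitting an inline data part. A simplified sketch, mirroring the `image_data` shape shown earlier (the extension-based mime lookup is an assumption, not the SDK's actual logic):

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Simplified sketch: turn a local image path into an inline base64 part.
// Mime detection here is extension-based and intentionally minimal.
function fileToImageDataPart(filePath: string) {
  const mimeByExt: Record<string, string> = {
    '.png': 'image/png',
    '.jpg': 'image/jpeg',
    '.jpeg': 'image/jpeg',
    '.webp': 'image/webp',
  };
  const mimeType = mimeByExt[path.extname(filePath).toLowerCase()] ?? 'application/octet-stream';
  const data = fs.readFileSync(filePath).toString('base64');
  return { type: 'image_data', image_data: { data, mimeType } };
}
```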
```bash
# Provider API keys (at least one required)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_GENERATIVE_AI_KEY="AIza..."

# SDK logging (override — prefer toolpack.config.json instead)
export TOOLPACK_SDK_LOG_FILE="./toolpack.log"        # Log file path (also enables logging)
export TOOLPACK_SDK_LOG_VERBOSE="true"               # Verbose logging (also enables logging)
export TOOLPACK_SDK_TOOL_RESULT_MAX_CHARS="20000"    # Max chars per tool result
```

Toolpack uses a hierarchical configuration system that separates build-time (SDK) and runtime (CLI) configurations.
- Workspace Local (Highest Priority)
  - Location: `<workspace>/.toolpack/config/toolpack.config.json`
  - Purpose: Project-specific overrides for the CLI tool.
- Global Default (CLI First Run)
  - Location: `~/.toolpack/config/toolpack.config.json`
  - Purpose: Global default settings for the CLI tool across all projects. Created automatically on first run.
- Build Time / SDK Base
  - Location: `toolpack.config.json` in project root.
  - Purpose: Static configuration used when bundling the SDK or running it directly in an app.
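The precedence can be sketched as a first-match-wins lookup over the three locations. `resolveConfigPath` is a hypothetical helper for illustration, not part of the SDK's API:

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Hypothetical sketch of the resolution order described above:
// the first candidate path that exists on disk wins.
function resolveConfigPath(workspace: string, home: string): string | null {
  const candidates = [
    path.join(workspace, '.toolpack', 'config', 'toolpack.config.json'), // workspace-local
    path.join(home, '.toolpack', 'config', 'toolpack.config.json'),      // global default
    path.join(workspace, 'toolpack.config.json'),                        // build-time / SDK base
  ];
  return candidates.find((p) => fs.existsSync(p)) ?? null;
}
```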
The CLI includes a settings screen to view the active configuration source and its location. Press Ctrl+S from the Home screen to access it.
The `toolpack.config.json` file supports several sections:
| Option | Default | Description |
|---|---|---|
| `systemPrompt` | - | Override the base system prompt |
| `baseContext` | `true` | Agent context configuration (`{ includeWorkingDirectory, includeToolCategories, custom }` or `false`) |
| `modeOverrides` | `{}` | Mode-specific system prompt and `toolSearch` overrides |
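Putting these options together, a workspace-level override might look like the following. The prompts are placeholders, and the exact shape of `modeOverrides` entries is an assumption based on the option descriptions above:

```json
{
  "systemPrompt": "You are a concise assistant for this repository.",
  "baseContext": {
    "includeWorkingDirectory": true,
    "includeToolCategories": false
  },
  "modeOverrides": {
    "agent": {
      "systemPrompt": "Prefer minimal diffs and explain each change."
    }
  }
}
```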
To configure logging, create a `toolpack.config.json` in your project root:

```json
{
  "logging": {
    "enabled": true,
    "filePath": "./toolpack.log",
    "verbose": true
  }
}
```

| Option | Default | Description |
|---|---|---|
| `enabled` | `false` | Enable file logging |
| `filePath` | `toolpack-sdk.log` | Log file path (relative to CWD) |
| `verbose` | `false` | Include message previews and tool details |
To configure tools, add a `tools` section to `toolpack.config.json`:

```json
{
  "tools": {
    "enabled": true,
    "autoExecute": true,
    "maxToolRounds": 5,
    "toolChoicePolicy": "auto",
    "enabledTools": [],
    "enabledToolCategories": [],
    "additionalConfigurations": {
      "webSearch": {
        "tavilyApiKey": "tvly-...",
        "braveApiKey": "BSA..."
      }
    },
    "toolSearch": {
      "enabled": false,
      "alwaysLoadedTools": ["fs.read_file", "fs.write_file", "fs.list_dir"],
      "alwaysLoadedCategories": [],
      "searchResultLimit": 5,
      "cacheDiscoveredTools": true
    }
  }
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `true` | Enable/disable tool system |
| `autoExecute` | boolean | `true` | Auto-execute tool calls from AI |
| `maxToolRounds` | number | `5` | Max tool execution rounds per request |
| `toolChoicePolicy` | string | `"auto"` | `"auto"`, `"required"`, or `"required_for_actions"` |
| `enabledTools` | string[] | `[]` | Whitelist specific tools (empty = all) |
| `enabledToolCategories` | string[] | `[]` | Whitelist categories (empty = all) |
The `web.search` tool supports multiple search backends with automatic fallback:

- Tavily (recommended) — set `tavilyApiKey` in config. Free tier: 1000 searches/month.
- Brave Search — set `braveApiKey` in config. Free tier: 2000 queries/month.
- DuckDuckGo Lite — built-in fallback, no API key needed (may be rate-limited).
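The fallback behavior can be pictured as trying each configured backend in priority order until one succeeds. `searchWithFallback` and the backend type below are illustrative, not the SDK's internals:

```typescript
// Illustrative fallback chain: try each search backend in priority order
// (e.g. Tavily, then Brave, then DuckDuckGo) and return the first success.
type SearchBackend = (query: string) => Promise<string[]>;

async function searchWithFallback(query: string, backends: SearchBackend[]): Promise<string[]> {
  let lastError: unknown = new Error('no search backends configured');
  for (const backend of backends) {
    try {
      return await backend(query);  // first backend that succeeds wins
    } catch (err) {
      lastError = err;              // fall through to the next backend
    }
  }
  throw lastError;                  // every backend failed
}
```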
When you have many tools (50+), enable tool search to reduce token usage. The AI discovers tools on-demand via a built-in `tool.search` meta-tool using BM25 ranking:
```json
{
  "tools": {
    "toolSearch": {
      "enabled": true,
      "alwaysLoadedTools": ["fs.read_file", "fs.write_file", "web.search"],
      "searchResultLimit": 5,
      "cacheDiscoveredTools": true
    }
  }
}
```

```typescript
import { Toolpack } from 'toolpack-sdk';

const sdk = await Toolpack.init(config: ToolpackInitConfig): Promise<Toolpack>

// Completions (routes through workflow engine if mode has workflow enabled)
await sdk.generate(request: CompletionRequest | string, provider?: string): Promise<CompletionResponse>
sdk.stream(request: CompletionRequest, provider?: string): AsyncGenerator<CompletionChunk>
await sdk.embed(request: EmbeddingRequest, provider?: string): Promise<EmbeddingResponse>

// Provider management
sdk.setProvider(name: string): void
await sdk.listProviders(): Promise<ProviderInfo[]>
await sdk.listModels(): Promise<(ProviderModelInfo & { provider: string })[]>

// Mode management
sdk.setMode(name: string): ModeConfig
sdk.getMode(): ModeConfig | null
sdk.getModes(): ModeConfig[]
sdk.cycleMode(): ModeConfig
sdk.registerMode(mode: ModeConfig): void

// Internal access
sdk.getClient(): AIClient
sdk.getWorkflowExecutor(): WorkflowExecutor
await sdk.disconnect(): Promise<void>
```

```typescript
import { AIClient } from 'toolpack-sdk';

// Direct client usage (without workflow engine)
await client.generate(request: CompletionRequest, provider?: string): Promise<CompletionResponse>
client.stream(request: CompletionRequest, provider?: string): AsyncGenerator<CompletionChunk>
await client.embed(request: EmbeddingRequest, provider?: string): Promise<EmbeddingResponse>
```

```typescript
interface CompletionRequest {
  messages: Message[];
  model: string;
  temperature?: number;
  max_tokens?: number;
  tools?: ToolCallRequest[];
  tool_choice?: 'auto' | 'none' | 'required';
}

interface CompletionResponse {
  content: string | null;
  usage?: Usage;
  finish_reason?: 'stop' | 'length' | 'tool_calls' | 'content_filter' | 'error';
  tool_calls?: ToolCallResult[];
}

interface CompletionChunk {
  delta: string;
  usage?: Usage;
  finish_reason?: 'stop' | 'length' | 'tool_calls' | 'content_filter' | 'error';
  tool_calls?: ToolCallResult[];
}

interface ProviderModelInfo {
  id: string;
  displayName: string;
  capabilities: { chat, streaming, toolCalling, embeddings, vision, reasoning? };
  contextWindow?: number;
  maxOutputTokens?: number;
  inputModalities?: string[];    // e.g., ['text', 'image']
  outputModalities?: string[];   // e.g., ['text']
  reasoningTier?: string | null; // e.g., 'standard', 'extended'
  costTier?: string;             // e.g., 'low', 'medium', 'high', 'premium'
}
```

The SDK provides typed error classes for common failure scenarios:
```typescript
import { AuthenticationError, RateLimitError, InvalidRequestError, ProviderError, ConnectionError, TimeoutError } from 'toolpack-sdk';

try {
  await sdk.generate('Hello');
} catch (err) {
  if (err instanceof AuthenticationError) { /* Invalid API key (401) */ }
  if (err instanceof RateLimitError) { /* Rate limited (429), check err.retryAfter */ }
  if (err instanceof InvalidRequestError) { /* Bad request (400) */ }
  if (err instanceof ConnectionError) { /* Provider unreachable (503) */ }
  if (err instanceof TimeoutError) { /* Request timed out (504) */ }
  if (err instanceof ProviderError) { /* Generic provider error (500) */ }
}
```

```bash
npm run build
```

```bash
npm test             # Run all tests
npm run test:watch   # Watch mode
```

```bash
npm run watch
```

```
toolpack-sdk/
├── src/
│   ├── toolpack.ts        # Toolpack class — high-level facade
│   ├── client/            # AIClient — provider routing, tool execution, mode injection
│   ├── providers/         # Provider adapter implementations
│   │   ├── base/          # ProviderAdapter abstract class
│   │   ├── openai/        # OpenAI adapter
│   │   ├── anthropic/     # Anthropic adapter
│   │   ├── gemini/        # Google Gemini adapter
│   │   └── ollama/        # Ollama adapter + provider (auto-discovery)
│   ├── modes/             # Mode system (Agent, Chat, createMode)
│   ├── workflows/         # Workflow engine (planner, step executor, progress)
│   ├── tools/             # 77 built-in tools + registry + router + BM25 search
│   │   ├── fs-tools/      # File system (18 tools)
│   │   ├── coding-tools/  # Code analysis (12 tools)
│   │   ├── git-tools/     # Git operations (9 tools)
│   │   ├── db-tools/      # Database operations (7 tools)
│   │   ├── exec-tools/    # Command execution (6 tools)
│   │   ├── http-tools/    # HTTP requests (5 tools)
│   │   ├── web-tools/     # Web interaction (9 tools)
│   │   ├── system-tools/  # System info (5 tools)
│   │   ├── diff-tools/    # Patch operations (3 tools)
│   │   ├── cloud-tools/   # Deployments (3 tools)
│   │   ├── registry.ts    # Tool registry and loading
│   │   ├── router.ts      # Tool routing and filtering
│   │   └── search/        # BM25 tool discovery engine (internal)
│   ├── types/             # Core TypeScript interfaces
│   ├── errors/            # Typed error hierarchy
│   ├── mcp/               # MCP (Model Context Protocol) utilities
│   └── utils/             # Shared utilities
└── tests/                 # 545 tests across 81 test files
```
Current Version: 0.1.0
- ✓ 4 Built-in Providers — OpenAI, Anthropic, Gemini, Ollama (+ custom provider API)
- ✓ 77 Built-in Tools — fs, exec, git, diff, web, coding, db, cloud, http, system
- ✓ Workflow Engine — AI-driven planning, step execution, retries, dynamic steps, progress events
- ✓ Mode System — Agent, Chat, and custom modes via `createMode()` with `blockAllTools` support
- ✓ Tool Search — BM25-based on-demand tool discovery for large tool libraries
- ✓ 545 Tests passing across 81 test files
Contributions welcome! Please read the contributing guide first.
Apache 2.0 © Sajeer
- 🐛 Issue Tracker (Please use our Bug Report or Feature Request templates)
- 💬 Discussions
Author: Sajeer