Open Agentic Framework

Skills-as-markdown. Pluggable LLM providers. File-based memory.
Built for Bottensor.


npm install @bottensor/forge

Why This Framework

Most agent frameworks are either too complex or too locked-in. We built this because we needed something that was simple to extend, transparent to debug, and free from vendor dependency.

  • Skills as Markdown — Extend your agent by writing a .md file. No code required. Hot-reloadable, human-readable, agent-authorable.
  • Pluggable Providers — OpenAI, Venice, Groq, Ollama, or any OpenAI-compatible API. Swap providers in a single line.
  • File-Based Memory — JSONL and Markdown files you can inspect, edit, and version control. No opaque vector databases.
  • Tool Calling — Register handlers, define tools in skill frontmatter, and hook into every stage of the lifecycle.
  • Zero Lock-In — MIT licensed. No vendor dependencies. Works fully offline with Ollama.

Design Principles

The tools agents are built with should be:

  • Open — Fully readable, forkable, and auditable. MIT licensed.
  • Composable — Small, focused primitives that work together. Skills, providers, memory, and tools are all independent and swappable.
  • Portable — No dependency on any single provider, cloud service, or platform. Your agent runs wherever you want it to.
  • Transparent — Memory stored as files you can read. No black boxes.

Quick Start

Programmatic

import { Agent } from "@bottensor/forge";

const agent = new Agent({
  name: "my-agent",
  systemPrompt: "You are a helpful assistant.",
  provider: {
    name: "venice",
    apiKey: process.env.VENICE_API_KEY!,
    baseUrl: "https://api.venice.ai/api/v1",
    defaultModel: "qwen3-4b",
  },
});

const result = await agent.chat("Hello!");
console.log(result.response);

CLI

# Initialize a new agent project
npx @bottensor/forge init

# Start interactive chat
npx @bottensor/forge chat

# One-shot prompt
npx @bottensor/forge run "Explain quantum computing in 3 sentences"

Providers

Works with any OpenAI-compatible API out of the box:

import {
  veniceProvider,
  openaiProvider,
  groqProvider,
  ollamaProvider,
  OpenAIProvider,
} from "@bottensor/forge";

// Pre-configured providers
const venice = veniceProvider("your-api-key");
const openai = openaiProvider("sk-...");
const groq = groqProvider("gsk_...");
const ollama = ollamaProvider("llama3.2"); // local, no API key needed

// Custom provider
const custom = new OpenAIProvider({
  name: "my-provider",
  apiKey: "...",
  baseUrl: "https://my-api.com/v1",
  defaultModel: "my-model",
});

Use with an agent:

const agent = new Agent({
  name: "my-agent",
  systemPrompt: "You are helpful.",
  provider: { name: "ollama", baseUrl: "http://localhost:11434/v1", defaultModel: "llama3.2" },
});

// Or swap providers at runtime
agent.useProvider(veniceProvider("your-key"));

Skills (Markdown-Driven)

Skills are .md files with YAML frontmatter. Drop them in a skills/ directory and the agent loads them automatically.

Example Skill

Create skills/weather.md:

---
name: weather
description: Get current weather for a location
tools:
  - name: get_weather
    description: Fetch current weather data
    parameters:
      type: object
      properties:
        location:
          type: string
          description: City name or coordinates
      required:
        - location
---

# Weather Skill

When the user asks about weather, use the `get_weather` tool.
Present the results in a friendly, conversational format.
Include temperature, conditions, and any relevant alerts.

Register the tool handler:

const agent = new Agent({
  name: "weather-bot",
  systemPrompt: "You help people check the weather.",
  provider: { ... },
  skillsDir: "./skills",
  toolHandlers: {
    get_weather: async ({ location }) => {
      const res = await fetch(`https://wttr.in/${location}?format=j1`);
      const data = await res.json();
      return JSON.stringify(data.current_condition[0]);
    },
  },
});

Skill Discovery

Skills are loaded from:

  1. skillsDir — auto-discovers all .md files with YAML frontmatter
  2. skills array — explicit file paths or Skill objects
  3. Subdirectories — looks for SKILL.md inside each subdirectory
skills/
├── greeting.md          # Simple skill (single file)
├── weather.md           # Another simple skill
├── web-search/
│   └── SKILL.md         # Skill in a directory (can include extra files)
└── code-exec/
    └── SKILL.md
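Under the hood, discovery amounts to separating the YAML frontmatter from the markdown body of each file. A minimal sketch of that split (illustrative only, not the library's actual parser):

```typescript
// Minimal sketch of the frontmatter split (illustrative only, not the
// library's actual parser): separate the YAML block between the first
// pair of "---" fences from the markdown body that follows it.
function splitFrontmatter(md: string): { frontmatter: string; body: string } {
  const match = md.match(/^---\r?\n([\s\S]*?)\r?\n---\r?\n?([\s\S]*)$/);
  return match
    ? { frontmatter: match[1], body: match[2] }
    : { frontmatter: "", body: md }; // no frontmatter: treat the whole file as body
}
```

The frontmatter declares the tools; the body becomes the skill's instructions to the model.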

Memory

File-based memory that's human-readable and inspectable:

const agent = new Agent({
  name: "my-agent",
  systemPrompt: "You remember previous conversations.",
  provider: { ... },
  memoryDir: "./.agent/memory",  // default
});

// Memory is automatic — conversations are stored after each chat
await agent.chat("My name is Alice");
await agent.chat("What's my name?"); // "Your name is Alice"

Memory files:

  • memory.jsonl — append-only log (one JSON object per line)
  • context.md — auto-generated markdown summary
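For illustration, here is a round-trip over the JSONL format; the MemoryEntry fields below are assumptions made for the sketch, and the library's actual schema may differ:

```typescript
// Illustrative round-trip over the JSONL format. The MemoryEntry fields
// here are assumptions for the sketch; the actual memory.jsonl schema
// may differ.
interface MemoryEntry {
  role: "user" | "assistant";
  content: string;
  timestamp: string; // ISO 8601
}

// Append-only serialization: one JSON object per line.
function toJsonl(entries: MemoryEntry[]): string {
  return entries.map((e) => JSON.stringify(e)).join("\n") + "\n";
}

// Skip blank lines so trailing newlines are tolerated.
function fromJsonl(text: string): MemoryEntry[] {
  return text.split("\n").filter(Boolean).map((line) => JSON.parse(line) as MemoryEntry);
}
```

Because each line is independent JSON, the log can be appended to without rewriting the file, and inspected with standard tools like `grep` or `jq`.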

Custom Memory Store

import { MemoryStore, MemoryEntry } from "@bottensor/forge";

class MyVectorMemory implements MemoryStore {
  async add(entry) { /* store in your vector DB */ }
  async search(query, limit) { /* semantic search */ }
  async list(limit) { /* return recent entries */ }
  async clear() { /* wipe */ }
}

const agent = new Agent({
  ...config,
  memory: new MyVectorMemory(),
});

Disable Memory

const agent = new Agent({ ...config, memory: false });

Tool Calling

Register tool handlers that the agent can invoke:

const agent = new Agent({
  name: "tool-agent",
  systemPrompt: "You can use tools to help users.",
  provider: { ... },
  toolHandlers: {
    calculate: async ({ expression }) => {
      // Caution: eval executes arbitrary code; never pass it untrusted input.
      return String(eval(expression));
    },
    fetch_url: async ({ url }) => {
      const res = await fetch(url);
      return await res.text();
    },
  },
});

// Or add tools after creation
agent.tool("greet", async ({ name }) => `Hello, ${name}!`);

Hooks

Intercept and modify behavior at every stage:

const agent = new Agent({
  ...config,
  hooks: {
    beforeRequest: async (messages, tools) => {
      console.log(`Sending ${messages.length} messages`);
      return messages; // can modify
    },
    afterResponse: async (response) => {
      console.log(`Got response: ${response.content.slice(0, 50)}...`);
      return response; // can modify
    },
    beforeToolCall: async (name, args) => {
      console.log(`Calling tool: ${name}`, args);
      return args; // can modify
    },
    afterToolCall: async (name, result) => {
      console.log(`Tool result: ${result.content.slice(0, 50)}...`);
      return result; // can modify
    },
    onError: (error) => {
      console.error("Agent error:", error);
    },
  },
});

Configuration (agent.yaml)

The CLI uses agent.yaml for configuration:

name: my-agent
systemPrompt: |
  You are a helpful AI assistant.

provider:
  name: venice
  apiKey: ${VENICE_API_KEY}
  baseUrl: https://api.venice.ai/api/v1
  defaultModel: qwen3-4b

skillsDir: ./skills
memoryDir: ./.agent/memory
temperature: 0.7
maxTokens: 4096

Environment variables are resolved with ${VAR_NAME} syntax.
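Resolution can be pictured as a simple substitution pass. This sketch (illustrative, with a hypothetical resolveEnvVars helper; not the library's implementation) substitutes each ${NAME} and leaves unknown names untouched so missing keys are easy to spot:

```typescript
// Illustrative sketch of ${VAR_NAME} resolution (not the library's actual
// implementation): substitute each ${NAME} with the matching variable and
// leave unknown names untouched.
function resolveEnvVars(
  text: string,
  env: Record<string, string | undefined> = process.env
): string {
  return text.replace(/\$\{([A-Za-z0-9_]+)\}/g, (match: string, name: string) =>
    env[name] !== undefined ? String(env[name]) : match
  );
}

console.log(resolveEnvVars("apiKey: ${VENICE_API_KEY}", { VENICE_API_KEY: "vk-123" }));
// → apiKey: vk-123
```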

Architecture

┌──────────────────────────────────────────────┐
│                    Agent                     │
│                                              │
│  ┌───────────┐  ┌──────────┐  ┌───────────┐  │
│  │  Skills   │  │  Memory  │  │   Hooks   │  │
│  │ (SKILL.md)│  │ (JSONL)  │  │           │  │
│  └─────┬─────┘  └────┬─────┘  └─────┬─────┘  │
│        │             │              │        │
│  ┌─────▼─────────────▼──────────────▼─────┐  │
│  │            Message Builder             │  │
│  │ system prompt + skills + memory + user │  │
│  └───────────────────┬────────────────────┘  │
│                      │                       │
│  ┌───────────────────▼────────────────────┐  │
│  │        Provider (OpenAI-compat)        │  │
│  │  Venice │ OpenAI │ Groq │ Ollama │ ... │  │
│  └───────────────────┬────────────────────┘  │
│                      │                       │
│  ┌───────────────────▼────────────────────┐  │
│  │             Tool Call Loop             │  │
│  │  LLM → tool calls → execute → repeat   │  │
│  └────────────────────────────────────────┘  │
└──────────────────────────────────────────────┘
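The tool call loop at the bottom of the diagram can be sketched as follows. This is assumed control flow, not the library's source; Msg, Handler, and callLLM are hypothetical names, and the real package wires this up internally from toolHandlers:

```typescript
// Sketch of the tool call loop from the diagram above (assumed control
// flow, not the library's source). Msg, Handler, and callLLM are
// hypothetical stand-ins for the package's internal types.
type Msg = { role: string; content: string; toolCall?: { name: string; args: any } };
type Handler = (args: any) => Promise<string>;

async function toolCallLoop(
  messages: Msg[],
  callLLM: (msgs: Msg[]) => Promise<Msg>,
  handlers: Record<string, Handler>
): Promise<string> {
  while (true) {
    const reply = await callLLM(messages);
    messages.push(reply);
    if (!reply.toolCall) return reply.content; // no tool requested: final answer
    const { name, args } = reply.toolCall;
    const result = await handlers[name](args); // execute the requested tool
    messages.push({ role: "tool", content: result }); // feed the result back and repeat
  }
}
```

The loop only terminates when the model replies without requesting a tool, which is why each tool result is appended to the message chain before the next LLM call.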

Built-in Skills

The package includes starter skills in the skills/ directory:

| Skill | Description |
| --- | --- |
| web-search | Search the web for current information |
| code-exec | Execute JavaScript code snippets |
| file-ops | Read, write, and list files |

Copy them to your project's skills/ directory and register the corresponding tool handlers.

API Reference

Agent

| Method | Description |
| --- | --- |
| `chat(message)` | Send a message, get a response (with history + memory) |
| `run(prompt, systemPrompt?)` | One-shot completion (no history/memory) |
| `useProvider(provider)` | Swap the LLM provider |
| `tool(name, handler)` | Register a tool handler |
| `addSkill(skill)` | Add a skill at runtime |
| `loadSkillFile(path)` | Load a skill from a file |
| `getHistory()` | Get conversation history |
| `clearHistory()` | Clear conversation history |
| `getMemory()` | Get the memory store |

RunResult

| Field | Type | Description |
| --- | --- | --- |
| `response` | `string` | The agent's response |
| `messages` | `ChatMessage[]` | Full message chain |
| `toolCalls` | `Array<{name, args, result}>` | Tool calls made |
| `usage` | `{promptTokens, completionTokens, totalTokens, llmCalls}` | Token usage |
| `duration` | `number` | Time in milliseconds |
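Transcribed into a TypeScript type for reference (field names are from the table above; the package's exported definitions may differ in detail, and ChatMessage here is a minimal stand-in):

```typescript
// RunResult transcribed from the table above. ChatMessage is a minimal
// stand-in; the package's exported types may differ in detail.
type ChatMessage = { role: "system" | "user" | "assistant" | "tool"; content: string };

interface RunResult {
  response: string;        // the agent's response
  messages: ChatMessage[]; // full message chain
  toolCalls: Array<{ name: string; args: unknown; result: string }>;
  usage: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
    llmCalls: number;
  };
  duration: number;        // time in milliseconds
}
```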

Contributing

We welcome contributions of all kinds. This project is early and there is significant room to shape its direction.

  • Write a skill — It is a markdown file. No build step, no boilerplate.
  • Add a provider — Implement the Provider interface for a new LLM backend.
  • Improve memory — Add semantic search, SQLite storage, or vector database adapters.
  • Report issues — Bug reports and feature requests help us prioritize.
  • Submit a pull request — Code contributions are reviewed promptly.

Please open an issue before starting large changes so we can align on direction.

License

MIT — Bottensor


Built for Bottensor.
