# Tamago

Personal LLM agent

## Quick Start

Tamago recommends deploying with Docker, which keeps the Agent in a secure sandbox with a complete runtime toolchain (for example curl, jq, python).

```bash
# After cloning the repository, set the required environment variables
export TAMAGO_LLM_API_KEY="sk-your-key"
export TAMAGO_TELEGRAM_TOKEN="123456..."

# Optional: specify the model (default: openai:gpt-5.3-codex)
export TAMAGO_LLM_MODEL="openai:gpt-5.3-codex"

# Start with docker-compose in the background.
# The default config mounts the current directory to /app/workspace in the container,
# which lets the Agent read and edit files in your host working directory.
docker-compose up -d

# View logs
docker-compose logs -f
```

## Environment Variables

### LLM Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `TAMAGO_LLM_MODEL` | `openai:gpt-5.3-codex` | Model identifier in `provider:model` format |
| `TAMAGO_LLM_API_KEY` | | Default API key (used for all providers) |
| `TAMAGO_LLM_API_BASE` | | Default API base URL |

### Model Identifier Format

Models are specified in `provider:model` format:

```bash
# OpenAI
TAMAGO_LLM_MODEL="openai:gpt-5.3-codex"
```
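Splitting the identifier can be sketched as follows. `parseModel` is a hypothetical helper for illustration, not part of Tamago's public API:

```go
package main

import (
	"fmt"
	"strings"
)

// parseModel splits a "provider:model" identifier into its two parts.
// Hypothetical helper; Tamago's internal parsing may differ.
func parseModel(id string) (provider, model string, err error) {
	parts := strings.SplitN(id, ":", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("invalid model identifier %q, want provider:model", id)
	}
	return parts[0], parts[1], nil
}

func main() {
	p, m, _ := parseModel("openai:gpt-5.3-codex")
	fmt.Println(p, m) // openai gpt-5.3-codex
}
```

Note that `SplitN` with a limit of 2 keeps any further colons inside the model name intact.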

### Agent Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `TAMAGO_MAX_LOOP_ITERATIONS` | `10` | Maximum iterations in one agent loop; each iteration includes one LLM call and optional tool execution |
| `TAMAGO_MAX_RETRIES` | `2` | Maximum retries for one LLM call (retry only on temporary/provider errors) |

### Channel Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `TAMAGO_TELEGRAM_TOKEN` | | Telegram bot token (the Telegram channel is disabled when empty) |

### Path Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `TAMAGO_CONFIG_BASE` | `~/.tamago` | Root config directory; skills, preferences, and the database are all relative to this path |

## Directory Structure

```text
~/.tamago/                          # TAMAGO_CONFIG_BASE
├── skills/                         # Skills directory
│   ├── core/                       # Global skills (available in all channels)
│   │   └── web-search/
│   │       └── SKILLS.md
│   ├── meta/                       # Meta skills (available in all channels)
│   │   └── skills-search/
│   │       └── SKILLS.md
│   └── context/                    # Channel-specific skills
│       └── telegram/
│           └── SKILLS.md
├── preferences/                    # Preferences directory
│   ├── PREFERENCES.md              # Global preferences (highest priority)
│   ├── PREFERENCES.default.md      # Global default preferences (used only when PREFERENCES.md is missing)
│   ├── telegram.md                 # Telegram channel preference
│   └── telegram/
│       └── chat/
│           └── PREFERENCES.md      # telegram/chat-level preference
└── tamago.db                       # SQLite database (tape + memory, auto-created)
```

## Skills System

Skills are Markdown files (`SKILLS.md`) injected into the system prompt when the Agent handles messages.

### Directory Layout

```text
skills/
├── <group>/
│   └── <skill-name>/
│       └── SKILLS.md
```

### Skill Groups

| Group | Injection Rule | Purpose |
| --- | --- | --- |
| `core` | All channels | Base capabilities (search, file operations, etc.) |
| `meta` | All channels | Meta capabilities (skill search, reflection, etc.) |
| `context` | Match channel root | Channel context (for example, `context/telegram` is injected only for Telegram) |
| `messager` | Match channel root | Channel message formatting (same matching rule as `context`) |
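The matching rule can be sketched as a small predicate. `shouldInject` is a hypothetical name; the real matching logic lives inside Tamago:

```go
package main

import (
	"fmt"
	"strings"
)

// shouldInject reports whether a skill group applies to a channel path
// such as "telegram/chat/12345": core and meta always apply, while
// context/<root> and messager/<root> apply only when <root> matches
// the first segment of the channel path. Hypothetical sketch.
func shouldInject(group, channel string) bool {
	switch {
	case group == "core" || group == "meta":
		return true // injected for all channels
	case strings.HasPrefix(group, "context/") || strings.HasPrefix(group, "messager/"):
		root := strings.SplitN(channel, "/", 2)[0]
		return group == "context/"+root || group == "messager/"+root
	default:
		return false
	}
}

func main() {
	fmt.Println(shouldInject("core", "telegram/chat/12345"))             // true
	fmt.Println(shouldInject("context/telegram", "telegram/chat/12345")) // true
	fmt.Println(shouldInject("context/telegram", "cli/session"))         // false
}
```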

### Skill File Example

```markdown
<!-- ~/.tamago/skills/core/web-search/SKILLS.md -->

## Web Search

You can use the bash tool to run curl for web search.

### How to use
1. Run `,bash curl -s "https://api.example.com/search?q=..."` to perform a search
2. Parse the JSON response
3. Summarize the relevant information for the user
```

### Skill URI

Skills are referenced with the `skills:` URI format:

```text
skills:core/web-search/SKILLS.md
skills:context/telegram/SKILLS.md
```

The URI layer has built-in path traversal protection and cannot reference files outside the skills root.


## Preferences System

Preferences are Markdown files that define the Agent's behavior and style. They are injected into the system prompt inside `<Preferences>` tags.
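The injection step can be sketched as simple string assembly. `wrapPreferences` is a hypothetical name for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// wrapPreferences joins loaded preference sections and wraps them in the
// <Preferences> tag described above. Hypothetical sketch of the injection
// step; Tamago's actual prompt assembly may differ.
func wrapPreferences(sections []string) string {
	if len(sections) == 0 {
		return "" // nothing to inject
	}
	return "<Preferences>\n" + strings.Join(sections, "\n\n") + "\n</Preferences>"
}

func main() {
	fmt.Println(wrapPreferences([]string{"- Be concise", "- Use Markdown"}))
}
```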

### Cascade Rules

Preferences are loaded in the following order, and all applicable entries are appended (not overwritten):

```text
1. preferences/PREFERENCES.md           <- global preference (if present, skip step 2)
2. preferences/PREFERENCES.default.md   <- global default (used only if step 1 is missing)
3. preferences/<prefix>.md              <- channel-prefix preference (progressive matching)
4. preferences/<prefix>/PREFERENCES.md  <- channel-subdirectory preference (progressive matching)
```

Channel prefix expansion example for `telegram/chat/12345`:

| Check Order | Path |
| --- | --- |
| 3a | `preferences/telegram.md` |
| 4a | `preferences/telegram/PREFERENCES.md` |
| 3b | `preferences/telegram/chat.md` |
| 4b | `preferences/telegram/chat/PREFERENCES.md` |
| 3c | `preferences/telegram/chat/12345.md` |
| 4c | `preferences/telegram/chat/12345/PREFERENCES.md` |
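The progressive expansion above can be sketched as follows. `preferencePaths` is a hypothetical helper covering steps 3 and 4; the global PREFERENCES.md / PREFERENCES.default.md choice (steps 1 and 2) happens separately:

```go
package main

import (
	"fmt"
	"strings"
)

// preferencePaths lists the candidate channel preference files for a
// channel path, in the progressive order shown in the table above.
// Hypothetical sketch, not Tamago's actual loader.
func preferencePaths(channel string) []string {
	var paths []string
	parts := strings.Split(channel, "/")
	for i := range parts {
		prefix := strings.Join(parts[:i+1], "/")
		paths = append(paths,
			"preferences/"+prefix+".md",            // step 3 at this depth
			"preferences/"+prefix+"/PREFERENCES.md", // step 4 at this depth
		)
	}
	return paths
}

func main() {
	for _, p := range preferencePaths("telegram/chat/12345") {
		fmt.Println(p)
	}
}
```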

### Preferences File Example

```markdown
<!-- ~/.tamago/preferences/PREFERENCES.md -->

## General Preferences

- Respond in concise, friendly language
- Provide code examples for technical questions
- Avoid excessive emoji use
```

```markdown
<!-- ~/.tamago/preferences/telegram.md -->

## Telegram Channel Preferences

- Keep replies within 500 characters (Telegram has message length constraints)
- Use Markdown formatting
```

## Memory System

Memory is automatically stored in the `memory_records` table inside `<CONFIG_BASE>/tamago.db`.

### Memory Record Structure

```json
{
  "id": "AAABjK...",
  "in_channel": "telegram/chat/12345",
  "out_channel": "",
  "parents": ["AAABjJ..."],
  "children": [],
  "input": "User message",
  "output": "Agent reply",
  "input_intents": "",
  "compacted_actions": ["Executed 2 command blocks"],
  "timestamp": "2026-02-28T14:00:00Z"
}
```

### Characteristics

- DAG structure: memories link via `parents`, supporting conversation branches
- Channel-scoped query: loads the latest 10 memories for the current channel plus 3 levels of ancestors
- Time-ordered ID: generated from the timestamp, so IDs sort naturally by time
- SQLite persistence: shares the same DB file as the tape; WAL mode ensures concurrent safety

## Command System

Tamago uses a comma (`,`) prefix for commands. Both users and the LLM can issue commands.

### Built-in Commands

| Command | Description |
| --- | --- |
| `,quit` / `,exit` | Exit |
| `,help` | Show help |
| `,bash <cmd>` | Execute a shell command |

### Command Behavior

- Success: output is returned directly without going through the LLM
- Failure: the result is wrapped into a `<command>` XML block and sent to the LLM for self-repair
- Commands in LLM output: the LLM can embed comma commands; they are auto-detected and executed
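Auto-detection of embedded commands can be sketched as a line scan for the comma prefix. `extractCommands` is a hypothetical name; the real parser may handle quoting and multi-line commands differently:

```go
package main

import (
	"fmt"
	"strings"
)

// extractCommands returns the comma-prefixed commands found in a block of
// text (such as LLM output), with the leading comma stripped.
// Hypothetical sketch of the detection step.
func extractCommands(text string) []string {
	var cmds []string
	for _, line := range strings.Split(text, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, ",") && len(line) > 1 {
			cmds = append(cmds, line[1:])
		}
	}
	return cmds
}

func main() {
	out := extractCommands("Let me check the files.\n,bash ls -la\nDone.")
	fmt.Println(out) // [bash ls -la]
}
```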

## Built-in Tools

When the Agent enters the LLM loop, the following tools are registered via OpenAI function calling:

| Tool | Parameters | Description |
| --- | --- | --- |
| `bash` | `command` (required), `cwd` (optional) | Execute a shell command |
| `read_file` | `path` (required) | Read file content |
| `write_file` | `path` (required), `content` (required) | Write a file (auto-creates parent directories) |

All tool output is limited to 16KB to avoid context-window overflow.
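The 16KB cap can be sketched as a simple byte-length truncation. `truncateOutput` and its marker text are illustrative, not Tamago's actual implementation:

```go
package main

import "fmt"

const maxToolOutput = 16 * 1024 // the 16KB cap described above

// truncateOutput caps tool output at maxToolOutput bytes, appending a
// marker when content was cut. Hypothetical sketch; note that slicing by
// bytes can split a multi-byte UTF-8 rune at the boundary.
func truncateOutput(s string) string {
	if len(s) <= maxToolOutput {
		return s
	}
	return s[:maxToolOutput] + "\n[output truncated at 16KB]"
}

func main() {
	big := make([]byte, 20*1024)
	for i := range big {
		big[i] = 'x'
	}
	fmt.Println(len(truncateOutput(string(big)))) // capped near 16KB, well under 20KB
}
```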


## Telegram Channel Configuration

### Basic Setup

Set `TAMAGO_TELEGRAM_TOKEN` to enable Telegram. Get a token from @BotFather.

### Behavior

| Feature | Default | Description |
| --- | --- | --- |
| Trigger | @mention / reply | Requires an @bot mention or a reply to a bot message |
| Debounce | 1s (mention) / 3s (follow-up) | Multiple messages are aggregated before processing |
| Active Window | 60s | Follow-up messages are answered automatically within 60s after a mention |
| Poll Timeout | 30s | Long-poll wait duration |

### Channel Path Format

```text
telegram/chat/<chat_id>
```

## Programming Interface (Go SDK)

Tamago's `agent` package can be used as a standalone SDK:

```go
package main

import (
    "context"

    goagent "github.com/diffumist/tamago/agent"
    "github.com/diffumist/tamago/agent/tools"
    "github.com/diffumist/tamago/provider"
)

func main() {
    llm, _ := goagent.New(goagent.Options{
        Model:      "openai:gpt-5.3-codex",
        APIKey:     "sk-xxx",
        MaxRetries: 2,
        Factory:    provider.OpenAIFactory(),
    })

    // Simple chat
    text, _ := llm.Chat(context.Background(), "hello", goagent.ChatOptions{
        SystemPrompt: "You are a helpful assistant.",
    })

    // Tool-enabled loop
    result := llm.RunTools(context.Background(), "list files", goagent.ToolOptions{
        Definitions: tools.AllBuiltinTools(nil),
    })

    // Conversation session (Tape)
    session := llm.Tape("my-session", nil)
    session.Chat(context.Background(), "first message", goagent.ChatOptions{})
    session.Chat(context.Background(), "follow up", goagent.ChatOptions{})

    // Structured decisions
    yes, _ := llm.If(context.Background(), "The sky is blue", "Is this true?", ...)
    label, _ := llm.Classify(context.Background(), "I love this!", []string{"positive", "negative"}, ...)
}
```

## Credits
