myaide

Token-efficient, self-healing AI agent — personal assistant and embeddable MicroSaaS worker.

myaide is a from-scratch reimagining of the AI agent pattern, built for two jobs:

  1. Personal Agent — an intelligent, persistent assistant that lives on your machine or VPS, with real memory, self-healing error recovery, and native MCP/A2A protocols.
  2. MicroSaaS Worker — a drop-in module you embed inside any app or website to handle AI-powered workflow steps, with multi-tenant session isolation and webhook support.

Why myaide?

|                   | OpenClaw                                    | myaide                                            |
| ----------------- | ------------------------------------------- | ------------------------------------------------- |
| Skill injection   | ~30,000 chars every request                 | Top 5-8 relevant skills, ≤ 12,000 chars (BM25)    |
| Context overflow  | Reactive 3-stage recovery, up to 5 min hang | Proactive at 70% usage, 30s max compact time      |
| Retries           | 160 brute-force attempts                    | Circuit breaker with semantic failure detection   |
| MCP support       | CLI shim via mcporter skill                 | Native @modelcontextprotocol/sdk                  |
| Worker/embed mode | Not supported                               | First-class WorkerAgent + TenantAgentPool         |
| Memory            | File-backed, single-node                    | SQLite + FTS5 hierarchical (core/recall/archival) |
| Token awareness   | None                                        | Budget manager checks before every API call       |
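The "circuit breaker with semantic failure detection" row can be sketched roughly as follows. This is an illustrative model only, not myaide's actual implementation; the class name, threshold, and cooldown values are assumptions:

```typescript
// Sketch: API failures count once, semantic failures (e.g. the model
// repeating an already-failed tool call) count double, per the table above.
type FailureKind = 'api' | 'semantic';

class CircuitBreaker {
  private score = 0;
  private openedAt: number | null = null;

  constructor(
    private readonly threshold = 5,       // trip when the weighted score reaches this
    private readonly cooldownMs = 30_000, // allow a retry after this long
  ) {}

  record(kind: FailureKind): void {
    this.score += kind === 'semantic' ? 2 : 1; // 2x weight for semantic failures
    if (this.score >= this.threshold) this.openedAt = Date.now();
  }

  recordSuccess(): void {
    // any success resets the breaker
    this.score = 0;
    this.openedAt = null;
  }

  allows(now = Date.now()): boolean {
    if (this.openedAt === null) return true;
    if (now - this.openedAt >= this.cooldownMs) {
      // half-open: let a single attempt through after the cooldown
      this.openedAt = null;
      this.score = this.threshold - 1;
      return true;
    }
    return false;
  }
}
```

The 2x weighting means repeated semantic failures trip the breaker faster than transient API errors, which matches the intent described in the comparison.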

Features

  • Proactive token budget management — checks context usage before every request, compacts at 70%, emergency compacts at 88%
  • Dynamic skill injection — BM25 selects only relevant skills per request (saves 15k–28k tokens vs bulk injection)
  • Hierarchical memory — core (always-on), recall (FTS5 search), archival (temporal decay)
  • Semantic circuit breaker — API failures + semantic failures (2× weight), automatic recovery
  • Parallel tool execution — all tool calls in a response run concurrently
  • Native MCP — connect to any Model Context Protocol server; expose myaide as an MCP server
  • A2A protocol — receive tasks from other agents, delegate tasks to other agents
  • Worker SDK — embed myaide in any app: stateless, per-session, or multi-tenant pool
  • OpenAI-compatible API — /v1/chat/completions SSE endpoint for drop-in compatibility
  • Prompt caching — system blocks marked cache_control: ephemeral (Anthropic 5-min KV cache)
  • Telegram channel — built-in Telegram bot integration
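As a rough illustration of the proactive budget management bullet above, the threshold logic boils down to something like this (hypothetical function and names; the real budget manager lives in packages/core):

```typescript
// Sketch of the pre-request budget check: compact at 70% context usage,
// emergency-compact at 88%, otherwise proceed with the API call.
type BudgetAction = 'proceed' | 'compact' | 'emergency-compact';

function budgetAction(usedTokens: number, contextWindow: number): BudgetAction {
  const usage = usedTokens / contextWindow;
  if (usage >= 0.88) return 'emergency-compact'; // aggressive summarization
  if (usage >= 0.70) return 'compact';           // normal compaction pass
  return 'proceed';                              // safe to call the API
}
```

Because the check runs before every request, the agent never discovers an overflow reactively mid-call; it pays a small, bounded compaction cost up front instead.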

Prerequisites

| Requirement       | Version | Notes                                       |
| ----------------- | ------- | ------------------------------------------- |
| Bun               | ≥ 1.1   | Required to run the agent (uses bun:sqlite) |
| pnpm              | ≥ 9     | Workspace package manager                   |
| Anthropic API key | —       | Get one at console.anthropic.com            |
| Node.js           | ≥ 22    | Required only for running tests             |

Quick Start

1. Install Bun

# macOS / Linux
curl -fsSL https://bun.sh/install | bash

# Windows (PowerShell)
powershell -c "irm bun.sh/install.ps1 | iex"

2. Clone and install

git clone https://github.com/cloudlinqed/myAIDE.git
cd myAIDE
pnpm install

3. Configure

cp .env.example .env

Edit .env and add your Anthropic API key:

ANTHROPIC_API_KEY=sk-ant-...

4. Run the CLI

bun run apps/cli/src/index.ts

Expected output:

  ╔═══════════════════════════════════╗
  ║         myaide  v0.1.0            ║
  ║   token-efficient AI assistant    ║
  ╚═══════════════════════════════════╝
  Session: cli-1709123456789
  Model:   claude-sonnet-4-6

You › hello!

CLI Commands

| Command         | Description                                 |
| --------------- | ------------------------------------------- |
| /help           | Show available commands                     |
| /memory         | Display current core memory blocks          |
| /skills [query] | List loaded skills (filtered by query)      |
| /compact        | Manually trigger context compaction         |
| /clear          | Clear conversation history for this session |
| /quit           | Exit the CLI                                |

CLI flags:

bun run apps/cli/src/index.ts --model claude-opus-4-6
bun run apps/cli/src/index.ts --session my-project

Standalone Server

Run myaide as an HTTP server with REST, SSE streaming, and optional A2A endpoints:

bun run apps/server/src/index.ts

Expected output:

myaide server v0.1.0
Listening on http://127.0.0.1:3000

API Endpoints

POST /v1/run — single-shot request

curl -X POST http://localhost:3000/v1/run \
  -H "Content-Type: application/json" \
  -d '{"message": "Summarize the last 3 commits in this repo", "sessionId": "dev"}'

Example response:

{
  "text": "...",
  "totalTurns": 2,
  "usage": { "inputTokens": 1234, "outputTokens": 456, "cacheReadTokens": 890 }
}

POST /v1/run/stream — SSE streaming

curl -N -X POST http://localhost:3000/v1/run/stream \
  -H "Content-Type: application/json" \
  -d '{"message": "Write a short poem", "sessionId": "test"}'

Events: text_delta, tool_start, tool_end, budget_warning, compaction, usage, done, error
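A client consuming this stream has to split standard SSE framing (event:/data: lines separated by blank lines) back into typed events. The parser below is a generic sketch, not part of the myaide SDK; only the event names come from the list above:

```typescript
// Minimal parser for one SSE chunk. Assumes each event's data is a
// single JSON-encoded `data:` line, which is the common case.
interface SseEvent {
  event: string;
  data: unknown;
}

function parseSseChunk(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  for (const block of chunk.split('\n\n')) {
    let event = 'message'; // SSE default event name
    let data = '';
    for (const line of block.split('\n')) {
      if (line.startsWith('event:')) event = line.slice(6).trim();
      else if (line.startsWith('data:')) data += line.slice(5).trim();
    }
    if (data) events.push({ event, data: JSON.parse(data) });
  }
  return events;
}
```

With `curl -N` you see the raw frames; in application code you would feed each chunk from the response body reader through a parser like this and dispatch on `event` (`text_delta`, `tool_start`, `done`, and so on).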

POST /v1/chat/completions — OpenAI-compatible

curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'

GET /v1/sessions — list active sessions

DELETE /v1/sessions/:id — delete a session

GET /v1/health — health check

A2A endpoints (set MYAIDE_ENABLE_A2A=true)

GET  /.well-known/agent.json   — agent capability card
POST /a2a/tasks                — submit a task
GET  /a2a/tasks/:id            — get task status
GET  /a2a/tasks/:id/stream     — SSE task stream

Server Authentication

Set MYAIDE_AUTH_TOKEN to require a Bearer token on all requests:

MYAIDE_AUTH_TOKEN=my-secret bun run apps/server/src/index.ts
curl -H "Authorization: Bearer my-secret" http://localhost:3000/v1/run ...
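Conceptually, the server-side check reduces to comparing the Authorization header against the configured token. The helper below is illustrative, not the actual middleware in apps/server:

```typescript
// Sketch of the Bearer check performed when MYAIDE_AUTH_TOKEN is set.
function isAuthorized(
  authHeader: string | undefined,
  expectedToken: string,
): boolean {
  if (!authHeader?.startsWith('Bearer ')) return false;
  return authHeader.slice('Bearer '.length) === expectedToken;
}
```

A production implementation would typically use a constant-time comparison to avoid leaking the token length through timing.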

Embedding as a Worker (SDK)

Install the package (once published) or use the workspace directly:

import { MyAide } from 'myaide'

// Personal agent with persistent memory
const aide = new MyAide({
  system: 'You are a helpful assistant for Acme Corp.',
  sessionId: 'user-alice',
})
await aide.init()

const result = await aide.run('What orders do I have pending?')
console.log(result.text)

Worker mode (stateless, for API/webhook use):

const aide = new MyAide({
  mode: 'worker',
  memory: 'stateless',
  model: 'claude-haiku-4-5-20251001',
})
await aide.init()

// Handle incoming webhook
app.post('/ai', async (req, res) => {
  const result = await aide.run({ message: req.body.text, sessionId: req.body.userId })
  res.json({ reply: result.text })
})

Multi-tenant pool (isolated memory per tenant):

const aide = new MyAide({ mode: 'pool', memory: 'per-session' })
await aide.init()

app.post('/chat/:tenantId', async (req, res) => {
  const result = await aide.run({ message: req.body.message, sessionId: req.params.tenantId })
  res.json({ reply: result.text })
})

See docs/sdk.md for the full SDK reference.


Channels

Telegram

pnpm add grammy  # already included in extensions/telegram
import { TelegramChannel } from '@myaide/telegram'

const bot = new TelegramChannel({
  token: process.env.TELEGRAM_BOT_TOKEN!,
  agentConfig: {
    model: 'claude-sonnet-4-6',
    system: 'You are a helpful Telegram assistant.',
  },
})

await bot.start()

Telegram configuration:

| Option         | Description                                           |
| -------------- | ----------------------------------------------------- |
| token          | Bot token from @BotFather                             |
| allowedUserIds | Optional array of Telegram user IDs to whitelist      |
| requireMention | Only respond when the bot is @mentioned in groups     |
| historyLimit   | Max messages to keep per conversation (default: 50)   |

Custom Skills

Skills are Markdown files that get injected into the system prompt when relevant. Create ~/.myaide/skills/<name>/SKILL.md:

---
name: my-skill
description: Help with my-specific-domain tasks
keywords: [keyword1, keyword2, keyword3]
userInvocable: true
version: 1.0.0
---

## My Skill

Instructions and context injected when this skill is selected...

Skills are selected automatically per-request using BM25 scoring. The agent selects up to 8 skills within a 12,000-character budget.
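For reference, a bare-bones BM25 scorer over tokenized skill descriptions looks like the sketch below. This is generic BM25, not myaide's actual selector (which additionally enforces the 8-skill / 12,000-character budget); all names are illustrative:

```typescript
// Score each tokenized document against a tokenized query with BM25.
// k1 and b are the standard free parameters.
function bm25Scores(
  query: string[],
  docs: string[][],
  k1 = 1.5,
  b = 0.75,
): number[] {
  const N = docs.length;
  const avgLen = docs.reduce((sum, d) => sum + d.length, 0) / N;

  // document frequency for each query term
  const df = new Map<string, number>();
  for (const term of query) {
    df.set(term, docs.filter((d) => d.includes(term)).length);
  }

  return docs.map((doc) => {
    let score = 0;
    for (const term of query) {
      const tf = doc.filter((t) => t === term).length;
      if (tf === 0) continue;
      const n = df.get(term)!;
      const idf = Math.log(1 + (N - n + 0.5) / (n + 0.5));
      // term frequency saturation, normalized by document length
      score +=
        (idf * tf * (k1 + 1)) /
        (tf + k1 * (1 - b + b * (doc.length / avgLen)));
    }
    return score;
  });
}
```

The selector would then sort skills by score and keep taking the highest-ranked ones until it hits the skill-count or character budget.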


Connecting MCP Servers

const aide = new MyAide({
  mcpServers: [
    {
      name: 'filesystem',
      transport: 'stdio',
      command: 'npx',
      args: ['-y', '@modelcontextprotocol/server-filesystem', '/workspace'],
    },
  ],
})
await aide.init()  // discovers and registers all MCP tools automatically

Or add them to your server config file (~/.myaide/server-config.json). See docs/configuration.md.


Configuration

Full reference: docs/configuration.md

| Variable             | Default          | Description                            |
| -------------------- | ---------------- | -------------------------------------- |
| ANTHROPIC_API_KEY    | (required)       | Anthropic API key                      |
| MYAIDE_DEFAULT_MODEL | claude-sonnet-4-6 | Default model                         |
| MYAIDE_DATA_DIR      | ~/.myaide        | Storage directory                      |
| MYAIDE_LOG_LEVEL     | info             | silent \| error \| warn \| info \| debug |
| MYAIDE_PORT          | 3000             | Server port                            |
| MYAIDE_AUTH_TOKEN    | (none)           | Bearer token for server authentication |
| MYAIDE_ENABLE_A2A    | false            | Enable A2A endpoints                   |
| MYAIDE_PUBLIC_URL    | (none)           | Public URL for A2A agent card          |

Development

Run tests

pnpm test         # 176 tests, ~900ms
pnpm test:watch   # watch mode

Type check

pnpm lint                                              # all packages
npx tsc --noEmit --project packages/core/tsconfig.json  # single package

Project structure

packages/
  core/       — agent runtime, token budget, context manager, runner
  memory/     — hierarchical memory: core, recall (FTS5), archival
  tools/      — tool registry, parallel executor, circuit breaker
  skills/     — dynamic BM25 skill injection and loader
  mcp/        — native MCP client + server
  a2a/        — Agent-to-Agent protocol client + server
  worker/     — MicroSaaS worker SDK, webhook handlers, tenant pool
  channels/   — base channel abstraction
  sdk/        — unified MyAide public facade
apps/
  cli/        — interactive REPL
  server/     — HTTP/SSE/A2A gateway (Hono)
extensions/
  telegram/   — Telegram bot channel (grammy)
skills/
  coding/     — built-in coding skill
  web-search/ — built-in web search skill
test/
  __mocks__/  — bun:sqlite → better-sqlite3 bridge for tests

Architecture

See docs/architecture.md for a deep dive into:

  • Proactive token budget management
  • Dynamic skill injection vs OpenClaw's bulk approach
  • Hierarchical memory layers
  • Semantic circuit breaker
  • Parallel tool execution
  • Worker mode and multi-tenant pool

Roadmap

  • Discord channel extension
  • Slack channel extension
  • Vector embeddings for semantic memory search (Qdrant)
  • Redis-backed distributed sessions
  • Community skill registry
  • Plugin system for channel extensions

Contributing

See CONTRIBUTING.md for setup instructions, code style, and how to add skills, channels, or packages.


License

MIT © 2026 cloudlinqed
