Intelligent routing layer that connects user needs to Skills and MCPs — no installation required.
Users express their intent in plain language. AgentOctopus automatically selects, invokes, and returns results from the best-matching Skill or MCP — with zero setup required by the end user.
User: "Translate hello to French"
│
▼
AgentOctopus ← intent routing + rating-aware selection
│
▼
Translation Skill (cloud or local)
│
▼
"Bonjour"
- Semantic routing — understands natural language intent
- Multi-hop planner — decomposes complex queries into parallel sub-tasks with dependency tracking
- Confidence scoring — normalized 0-1 confidence on every routing result
- Rating system — skills are ranked by user feedback; better ones win
- Skill marketplace — built-in marketplace to publish, browse, and install community skills
- ClaWHub integration — install skills from clawhub.ai with `octopus add`
- Web UI — chat interface with skills sidebar, dark/light mode, and marketplace browser
- Multi-channel — CLI, REST API, IM bots (Slack/Discord/Telegram), agent-to-agent
- Hybrid execution — skills run in cloud or locally
- Flexible LLM — OpenAI, Gemini, or local Ollama
- Stateful sessions — conversation context persists across turns on all channels
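The rating-aware selection above can be sketched as a blend of semantic similarity and user-feedback rating. The `SkillCandidate` shape, field names, and the 80/20 weighting are illustrative assumptions, not AgentOctopus's actual internals:

```typescript
// Hypothetical shape of a routable skill; field names are assumptions.
interface SkillCandidate {
  name: string;
  similarity: number; // cosine similarity of query vs. skill description, 0-1
  rating: number;     // average user-feedback rating, 0-5
}

// Blend semantic match with the normalized rating so better-rated skills win
// near-ties. The 80/20 split is an illustrative choice, not the real weighting.
function scoreSkill(c: SkillCandidate): number {
  const ratingNorm = c.rating / 5; // normalize to 0-1
  return 0.8 * c.similarity + 0.2 * ratingNorm;
}

function pickBest(candidates: SkillCandidate[]): SkillCandidate | undefined {
  return [...candidates].sort((a, b) => scoreSkill(b) - scoreSkill(a))[0];
}

const best = pickBest([
  { name: 'translation', similarity: 0.92, rating: 4.5 },
  { name: 'weather', similarity: 0.31, rating: 5.0 },
]);
console.log(best?.name); // → translation
```

The key property is that rating only breaks ties between skills with comparable semantic scores; a highly rated but irrelevant skill still loses.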
# All-in-one (CLI + full library)
npm install -g agentoctopus
octopus ask "translate hello to French"
octopus list

Or install individual packages if you only need a subset:
npm install @agentoctopus/gateway # IM bots + agent protocol
npm install @agentoctopus/core # router + executor + LLM client
npm install @agentoctopus/cli      # CLI only

# Install globally
npm install -g agentoctopus
# Run the interactive setup wizard
octopus onboard
# Start the gateway server
octopus start
# → Agent gateway on http://localhost:3002/agent/health

# Or use the CLI directly
octopus ask "translate hello to French"
octopus list

# Install dependencies once
pnpm install
# Run the interactive setup wizard
pnpm exec octopus onboard
# Start the gateway server
pnpm exec octopus start
# → Agent gateway on http://localhost:3002/agent/health

# Or use the CLI directly
pnpm build
pnpm exec octopus ask "translate hello to French"
pnpm exec octopus list

First-time setup is guided by an interactive wizard:
octopus onboard

The wizard walks you through:
- LLM Provider — choose OpenAI (or compatible), Google Gemini, or Ollama (local)
- Embedding Config — same provider or separate
- Execution Mode — local only, cloud only, or hybrid
- Skill Selection — enable/disable installed skills
- Review & Save — writes `.env` for you
If you run `octopus ask` or `octopus start` without a `.env` file, the wizard launches automatically.
Start the gateway and call the API:
octopus start
# Route a query
curl -X POST http://localhost:3002/agent/ask \
-H 'Content-Type: application/json' \
-d '{"query": "translate hello to French"}'
# → { "success": true, "skill": "translation", "confidence": 0.97, "response": "Bonjour" }
# Submit feedback
curl -X POST http://localhost:3000/api/feedback \
-H 'Content-Type: application/json' \
-d '{"skillName": "translation", "positive": true}'
# List installed skills
curl http://localhost:3000/api/skills
# → { "skills": [{ "name": "translation", "rating": 4.5, ... }] }
# Search the marketplace
curl http://localhost:3000/api/marketplace?q=weather
# → { "skills": [...], "total": 1 }
# Publish a skill to the marketplace
curl -X POST http://localhost:3000/api/marketplace \
-H 'Content-Type: application/json' \
-d '{"slug": "my-skill", "name": "My Skill", "description": "...", "author": "me", "skillMd": "---\nname: my-skill\n..."}'
# Install a skill from the marketplace
curl -X POST http://localhost:3000/api/marketplace/install \
-H 'Content-Type: application/json' \
-d '{"slug": "my-skill"}'

Each platform adapter bootstraps the same routing engine and maintains per-user sessions (30-minute TTL, last 50 messages).
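That session behavior, shared by the adapters below, can be sketched as a small in-memory store. The class and method names here are illustrative, not the gateway's actual API:

```typescript
interface Message { role: 'user' | 'assistant'; text: string; at: number }

const TTL_MS = 30 * 60 * 1000; // 30-minute session TTL
const MAX_MESSAGES = 50;       // keep only the last 50 messages

// Minimal sketch of a per-user session store with TTL and message cap.
class SessionStore {
  private sessions = new Map<string, Message[]>();

  append(userId: string, msg: Message): void {
    const history = (this.sessions.get(userId) ?? []).filter(
      (m) => msg.at - m.at < TTL_MS, // drop context older than the TTL
    );
    history.push(msg);
    // Trim to the most recent MAX_MESSAGES entries.
    this.sessions.set(userId, history.slice(-MAX_MESSAGES));
  }

  history(userId: string): Message[] {
    return this.sessions.get(userId) ?? [];
  }
}

const store = new SessionStore();
for (let i = 0; i < 60; i++) {
  store.append('u1', { role: 'user', text: `msg ${i}`, at: i });
}
console.log(store.history('u1').length); // → 50
```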
import { startSlackGateway } from 'agentoctopus';
await startSlackGateway({
appOptions: {
token: process.env.SLACK_BOT_TOKEN,
signingSecret: process.env.SLACK_SIGNING_SECRET,
socketMode: true,
appToken: process.env.SLACK_APP_TOKEN,
},
});
// Responds to @mentions and direct messages

import { startDiscordGateway } from 'agentoctopus';
await startDiscordGateway({ token: process.env.DISCORD_TOKEN });
// Responds to @mentions in guilds and all DMs

import { startTelegramGateway } from 'agentoctopus';
await startTelegramGateway({ token: process.env.TELEGRAM_BOT_TOKEN });
// /ask <request> or plain text messages

AgentOctopus provides an OpenClaw-compatible HTTP API for agent-to-agent communication. External agents can route queries to specialized skills, maintain sessions, and receive direct LLM answers when no skill matches.
Quick Start:
# Install and run
npx @agentoctopus/gateway
# Or install globally
npm install -g @agentoctopus/gateway
agentoctopus-gateway

Basic Usage:
# Route a query
curl -X POST http://localhost:3002/agent/ask \
-H 'Content-Type: application/json' \
-d '{"query": "translate hello to French", "agentId": "openclaw"}'

For the complete integration guide, including deployment options, API documentation, examples, and troubleshooting, see OPENCLAW_INTEGRATION.md.
For complex queries that involve multiple skills, the Planner decomposes the request into sub-tasks, runs them in parallel (or sequentially if there are dependencies), and synthesizes a single answer:
import { Planner, Router, Executor, SkillRegistry, createChatClient, createEmbedClient } from 'agentoctopus';
// ... set up registry, router, executor as usual ...
const planner = new Planner(chatClient, router, executor);
const result = await planner.run(
'translate hello to French and check the weather in Paris',
registry.getAll(),
);
console.log(result.finalAnswer);
// → "Bonjour! The weather in Paris is 22°C and sunny."
console.log(result.plan.isMultiHop); // true
console.log(result.stepResults.length); // 2
result.stepResults.forEach(s => {
console.log(`${s.skill || 'LLM'}: ${s.output} (confidence: ${s.confidence})`);
});

Steps without dependencies run in parallel. When a step depends on a prior step's output, it waits and receives the context automatically.
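That execution order — independent steps in one concurrent batch, dependent steps waiting for their inputs — can be sketched as repeated `Promise.all` over ready steps. The `PlanStep` shape is an assumption for illustration, not the Planner's real internals:

```typescript
interface PlanStep {
  id: string;
  dependsOn: string[]; // ids of steps whose output this step needs
  run: (context: Record<string, string>) => Promise<string>;
}

// Run every step whose dependencies are satisfied in parallel, then repeat
// until all steps are done. Throws if the dependency graph has a cycle.
async function runPlan(steps: PlanStep[]): Promise<Record<string, string>> {
  const done: Record<string, string> = {};
  let pending = [...steps];
  while (pending.length > 0) {
    const ready = pending.filter((s) => s.dependsOn.every((d) => d in done));
    if (ready.length === 0) throw new Error('dependency cycle in plan');
    // Independent steps execute concurrently in one batch.
    const outputs = await Promise.all(ready.map((s) => s.run(done)));
    ready.forEach((s, i) => { done[s.id] = outputs[i]; });
    pending = pending.filter((s) => !(s.id in done));
  }
  return done;
}

const results = await runPlan([
  { id: 'translate', dependsOn: [], run: async () => 'Bonjour' },
  { id: 'weather', dependsOn: [], run: async () => '22°C and sunny' },
  {
    id: 'combine',
    dependsOn: ['translate', 'weather'],
    run: async (ctx) => `${ctx.translate}! Paris: ${ctx.weather}`,
  },
]);
console.log(results.combine); // → Bonjour! Paris: 22°C and sunny
```

Here `translate` and `weather` run in the same batch; `combine` runs in a second batch once both outputs are in the context.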
The easiest way to configure AgentOctopus is with the setup wizard:
octopus onboard

Or manually copy .env.example and fill in your values:
cp .env.example .env

# LLM backend
LLM_PROVIDER=openai # openai | gemini | ollama
LLM_MODEL=gpt-4o-mini
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://your-openai-compatible-base-url/v1
# Embeddings and reranking
EMBED_PROVIDER=openai # defaults to LLM_PROVIDER
EMBED_MODEL=text-embedding-3-small
EMBED_API_KEY=
EMBED_BASE_URL=https://your-embedding-base-url/v1
RERANK_MODEL=gpt-4o-mini
# Execution mode
EXECUTION_MODE=local # local | cloud | hybrid
CLOUD_GATEWAY_URL=https://api.agentoctopus.dev
CLOUD_API_KEY=
# Optional alternate providers
GEMINI_API_KEY=
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
# Registry paths (optional, defaults to ./registry/)
REGISTRY_PATH=./registry/skills
RATINGS_PATH=./registry/ratings.json
# Security (cloud gateway)
AUTH_ENABLED=true
RATE_LIMIT_ENABLED=true
# IM bot tokens
SLACK_BOT_TOKEN=xoxb-...
SLACK_SIGNING_SECRET=...
SLACK_APP_TOKEN=xapp-...
DISCORD_TOKEN=...
TELEGRAM_BOT_TOKEN=...

General questions that do not match a registered skill fall back to the configured chat model. Skill routing uses embeddings plus an LLM reranker, so if you split providers, make sure both the chat and embedding endpoints are reachable.
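The fallback decision can be sketched as a confidence-threshold check; the 0.5 cutoff and the names here are illustrative assumptions, not the router's real defaults:

```typescript
interface RouteResult { skill?: string; confidence: number }

const CONFIDENCE_THRESHOLD = 0.5; // illustrative cutoff, not the real default

// Route to the matched skill when confident enough;
// otherwise fall back to the configured chat model.
function decide(route: RouteResult): { target: string; viaSkill: boolean } {
  if (route.skill && route.confidence >= CONFIDENCE_THRESHOLD) {
    return { target: route.skill, viaSkill: true };
  }
  return { target: 'chat-model', viaSkill: false };
}

console.log(decide({ skill: 'translation', confidence: 0.97 }));
// → { target: 'translation', viaSkill: true }
console.log(decide({ confidence: 0.2 }));
// → { target: 'chat-model', viaSkill: false }
```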
The agent gateway (/agent/* endpoints) includes built-in security for production deployment:
All authenticated endpoints require an API key:
# Register for a free API key
curl -X POST https://your-gateway/agent/register \
-H 'Content-Type: application/json' \
-d '{"email": "you@example.com"}'
# → { "apiKey": "ak_...", "tier": "free", "limits": { ... } }
# Use the key in requests
curl -X POST https://your-gateway/agent/ask \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer ak_...' \
-d '{"query": "translate hello to French"}'

Keys can also be passed via the X-API-Key header or the ?apiKey= query parameter.
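Handling all three key locations can be sketched with a small extraction helper; the precedence shown (Authorization header, then X-API-Key, then query parameter) is an assumption — the gateway may check them in a different order:

```typescript
// Extract an API key from a request, checking the three supported locations.
// Precedence here is an illustrative assumption.
function extractApiKey(
  headers: Record<string, string>,
  url: URL,
): string | undefined {
  const auth = headers['authorization'];
  if (auth?.startsWith('Bearer ')) return auth.slice('Bearer '.length);
  if (headers['x-api-key']) return headers['x-api-key'];
  return url.searchParams.get('apiKey') ?? undefined;
}

console.log(extractApiKey(
  { authorization: 'Bearer ak_123' },
  new URL('http://localhost:3002/agent/ask'),
)); // → ak_123
console.log(extractApiKey(
  {},
  new URL('http://localhost:3002/agent/ask?apiKey=ak_456'),
)); // → ak_456
```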
Tier-based sliding-window rate limiting with standard headers:
| Tier | Requests/min | Requests/day | Price |
|---|---|---|---|
| Free | 10 | 100 | $0/mo |
| Pro | 60 | 5,000 | $19/mo |
| Enterprise | 300 | 50,000 | $99/mo |
Response headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset.
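A sliding-window limiter like the one described can be sketched as a per-key list of request timestamps; this is a minimal illustration under assumed semantics, not the gateway's implementation:

```typescript
// Minimal sliding-window rate limiter: a request is allowed only if fewer
// than `limit` requests landed in the trailing `windowMs` milliseconds.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if rate-limited.
  allow(key: string, now: number): boolean {
    const recent = (this.hits.get(key) ?? []).filter(
      (t) => now - t < this.windowMs, // keep only hits inside the window
    );
    this.hits.set(key, recent);
    if (recent.length >= this.limit) return false;
    recent.push(now);
    return true;
  }

  // Value a gateway could surface as X-RateLimit-Remaining.
  remaining(key: string, now: number): number {
    const recent = (this.hits.get(key) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    return Math.max(0, this.limit - recent.length);
  }
}

// Free tier: 10 requests per minute.
const limiter = new SlidingWindowLimiter(10, 60_000);
for (let i = 0; i < 10; i++) limiter.allow('ak_free', i * 1000);
console.log(limiter.allow('ak_free', 10_000)); // → false (window is full)
console.log(limiter.allow('ak_free', 61_000)); // → true (oldest hits expired)
```

Unlike a fixed-window counter, the window slides with each request, so a burst at a minute boundary cannot double the effective limit.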
All requests are logged to logs/audit.jsonl with:
- Timestamp, HTTP method, path, IP address
- Masked API key, user ID, tier
- Status code, response time, query content
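Masking the key before writing the audit line can be done as below; the "first 6 characters plus asterisks" format and the exact field names are illustrative assumptions:

```typescript
// Mask an API key for audit logging: keep a short prefix, hide the rest.
// The "first 6 chars + asterisks" format is an illustrative assumption.
function maskApiKey(key: string): string {
  if (key.length <= 6) return '*'.repeat(key.length);
  return key.slice(0, 6) + '*'.repeat(key.length - 6);
}

// One JSONL audit entry per request; fields mirror the list above.
const entry = {
  ts: new Date(0).toISOString(),
  method: 'POST',
  path: '/agent/ask',
  ip: '203.0.113.7',
  apiKey: maskApiKey('ak_1234567890'),
  tier: 'free',
  status: 200,
  ms: 42,
};
console.log(JSON.stringify(entry)); // one line of logs/audit.jsonl
```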
| Variable | Default | Description |
|---|---|---|
| `AUTH_ENABLED` | `true` | Enable/disable API key authentication |
| `RATE_LIMIT_ENABLED` | `true` | Enable/disable rate limiting |
| `CORS_ALLOWED_ORIGINS` | `*` | Comma-separated allowed origins |
| `API_KEYS_PATH` | `./api-keys.json` | Path to API keys store |
| `AUDIT_LOG_DIR` | `./logs` | Directory for audit log files |
AgentOctopus/
├── apps/
│ ├── cli/ # CLI entry point (`octopus ask/list/add/publish/onboard`)
│ └── web/ # Next.js web UI, REST API, and marketplace
│ ├── / # Chat interface with skills sidebar
│ └── /marketplace # Skill marketplace browser
├── packages/
│ ├── agentoctopus/ # Umbrella package — re-exports everything
│ ├── core/ # Router + Executor + Planner + LLM client
│ ├── registry/ # Skill manifest loader + rating store + remote catalog
│ ├── adapters/ # HTTP, MCP stdio, subprocess adapters
│ └── gateway/ # IM bots + agent protocol + security middleware
│ ├── auth-middleware.ts # API key authentication + tier management
│ ├── rate-limiter.ts # Sliding-window rate limiting
│ └── audit-logger.ts # Structured request logging (JSONL)
└── registry/
├── skills/ # Built-in SKILL.md manifests
└── marketplace/ # Published skills + index.json
| Package | Description |
|---|---|
| `agentoctopus` | All-in-one install — includes everything below |
| `@agentoctopus/cli` | CLI (`octopus ask`, `list`, `add`, `search`, `publish`) |
| `@agentoctopus/core` | Router, Executor, LLM client |
| `@agentoctopus/gateway` | Slack/Discord/Telegram bots, agent HTTP API |
| `@agentoctopus/registry` | Skill manifest loader, rating store |
| `@agentoctopus/adapters` | HTTP, MCP, subprocess adapters |
The built-in marketplace lets you publish, browse, and install skills via the web UI or CLI:
# Browse the marketplace web UI
# Start the server, then visit http://localhost:3000/marketplace
# Publish your own skill
cd my-skill/ # folder containing SKILL.md
octopus publish --author "your-name"
# → Published to the marketplace at http://localhost:3000/marketplace
# Install from marketplace via API
curl -X POST http://localhost:3000/api/marketplace/install \
-H 'Content-Type: application/json' \
-d '{"slug": "my-skill"}'

Skill author workflow:
1. Create a folder with SKILL.md (YAML frontmatter + instructions)
2. Run `octopus publish --author "you"` to push to the marketplace
3. Users browse /marketplace, click Install, restart the server
4. The skill is now available for routing queries
Browse the ClaWHub skill registry and install with one command:
# Search for skills
octopus search "self-improving"
# Install a skill from ClaWHub
octopus add self-improving-agent
# Remove a skill
octopus remove self-improving-agent

Create a new folder under registry/skills/<skill-name>/ with a SKILL.md:
---
name: my-skill
description: What this skill does and when to use it.
tags: [tag1, tag2]
version: 1.0.0
endpoint: https://api.example.com/invoke
adapter: http
---
## Instructions
...

AgentOctopus supports two deployment modes: cloud (a centralized server for all users) and local (self-hosted, free, with skill sync from the cloud).
# Cloud deployment — gateway + web UI
docker compose --profile cloud up --build
# → Gateway on http://localhost:3002, Web UI on http://localhost:3000
# Local deployment — gateway only, syncs skills from cloud
CLOUD_URL=https://your-cloud-instance:3002 docker compose --profile local up --build
# → Gateway on http://localhost:3002

Runs the full gateway + web UI. All skills are served and available for local instances to sync from.
DEPLOY_MODE=cloud AGENT_GATEWAY_PORT=3002 agentoctopus-gateway

Runs the gateway only. Optionally syncs skills from a cloud instance on startup.
# With auto-sync from cloud
DEPLOY_MODE=local CLOUD_URL=https://cloud:3002 agentoctopus-gateway
# Manual sync via CLI
octopus sync --cloud-url https://cloud:3002
# Manual sync via API
curl -X POST http://localhost:3002/agent/sync \
-H 'Content-Type: application/json' \
-d '{"cloudUrl": "https://cloud:3002"}'

Local instances can pull skills from a cloud instance:
- On startup: set the `CLOUD_URL` env var (enabled by default; disable with `SYNC_ON_STARTUP=false`)
- On demand: `POST /agent/sync` or `octopus sync --cloud-url <url>`
- Force update: use the `--force` flag or `{"force": true}` to overwrite existing skills
The cloud instance exposes `GET /agent/skills/export`, which returns full skill data (SKILL.md + scripts) for sync.
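The merge semantics above — install new skills, overwrite existing ones only with force — can be sketched as a pure decision function; the `SkillManifest` shape and `planSync` name are illustrative assumptions:

```typescript
interface SkillManifest { slug: string; skillMd: string }

// Decide which cloud skills to write locally: new ones always,
// existing ones only when force is set (mirrors the --force flag above).
function planSync(
  local: SkillManifest[],
  cloud: SkillManifest[],
  force: boolean,
): SkillManifest[] {
  const have = new Set(local.map((s) => s.slug));
  return cloud.filter((s) => force || !have.has(s.slug));
}

const toWrite = planSync(
  [{ slug: 'translation', skillMd: '...' }],
  [
    { slug: 'translation', skillMd: '...updated...' },
    { slug: 'weather', skillMd: '...' },
  ],
  false,
);
console.log(toWrite.map((s) => s.slug)); // → [ 'weather' ]
```

With `force: true`, the same call would return both skills and overwrite the local `translation` manifest.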
| Variable | Default | Description |
|---|---|---|
| `DEPLOY_MODE` | `local` | `cloud` or `local` |
| `CLOUD_URL` | — | Cloud instance URL for skill sync |
| `SYNC_ON_STARTUP` | `true` | Auto-sync on gateway boot |
| `AGENT_GATEWAY_PORT` | `3002` | Gateway listen port |
pnpm install # install all dependencies
pnpm build # build all packages
pnpm test # run all tests (40+ tests across 6 packages)
pnpm dev       # watch mode for all packages

| Package | Tests |
|---|---|
| `packages/registry` | 15 |
| `packages/adapters` | 3 |
| `packages/core` | 14 |
| `apps/cli` | 3 |
| `apps/web` | 6 |
| `packages/gateway` | 11 |
Apache 2.0