🌍 Language: English | 中文 | Français 📖 Docs: Installation · 安装 · Installation FR · Advanced usage · API docs
An AI agent SDK for PHP — run the full agentic loop (LLM turn → tool call → tool result → next turn) in-process, with fourteen providers, real-time streaming, multi-agent orchestration, and a machine-readable wire protocol. Usable as a standalone CLI or as a Laravel library.
```bash
superagent "fix the login bug in src/Auth/"
```

```php
$agent = new SuperAgent\Agent([
    'provider' => 'openai-responses',
    'model' => 'gpt-5',
]);
$result = $agent->run('Summarise docs/ADVANCED_USAGE.md in one paragraph');
echo $result->text();
```

- Quick Start
- Providers & Authentication
- OpenAI Responses API
- Cross-provider handoff
- DeepSeek V4
- Agent Loop
- Tools & Multi-Agent
- Agent Definitions
- Skills
- MCP Integration
- Wire Protocol
- Retry, Errors & Observability
- Guardrails & Checkpoints
- Standalone CLI
- Laravel Integration
- Configuration reference
Every feature section ends with a Since line pointing at the release that introduced it. Full release notes live in CHANGELOG.md.
Install:
```bash
# As a standalone CLI:
composer global require forgeomni/superagent

# Or as a Laravel dependency:
composer require forgeomni/superagent
```

See INSTALL.md for the full matrix (system requirements, auth setup, IDE bridges, CI integration).
Smallest possible agent run:
```php
$agent = new SuperAgent\Agent(['provider' => 'anthropic']);
$result = $agent->run('what day is it?');
echo $result->text();
```

Smallest agent run with tools:

```php
$agent = (new SuperAgent\Agent(['provider' => 'openai']))
    ->loadTools(['read', 'write', 'bash']);
$result = $agent->run('inspect composer.json and tell me what PHP version this project targets');
echo $result->text();
```

One-shot via CLI:

```bash
export ANTHROPIC_API_KEY=sk-...
superagent "inspect composer.json and tell me what PHP version this project targets"
```

Fourteen registry-backed providers, with region-aware base URLs and multiple auth modes per provider. All implement the same LLMProvider contract, so swapping one for another is a one-line change.
| Registry key | Provider | Notes |
|---|---|---|
| `anthropic` | Anthropic | API key or stored Claude Code OAuth |
| `openai` | OpenAI Chat Completions (`/v1/chat/completions`) | API key, `OPENAI_ORGANIZATION` / `OPENAI_PROJECT` |
| `openai-responses` | OpenAI Responses API (`/v1/responses`) | Dedicated section below |
| `openrouter` | OpenRouter | API key |
| `gemini` | Google Gemini | API key |
| `kimi` | Moonshot Kimi | API key; regions `intl` / `cn` / `code` (OAuth) |
| `qwen` | Alibaba Qwen (OpenAI-compat default) | API key; regions `intl` / `us` / `cn` / `hk` / `code` (OAuth + PKCE) |
| `qwen-native` | Alibaba Qwen (DashScope-native body) | Kept for `parameters.thinking_budget` callers |
| `glm` | BigModel GLM | API key; regions `intl` / `cn` |
| `minimax` | MiniMax | API key; regions `intl` / `cn` |
| `deepseek` | DeepSeek V4 | API key; regions `default` / `beta` (since v0.9.6) |
| `bedrock` | AWS Bedrock | AWS SigV4 |
| `ollama` | Local Ollama daemon | No auth — `localhost:11434` by default |
| `lmstudio` | Local LM Studio server | Placeholder auth — `localhost:1234` by default (since v0.9.1) |
Auth options, by priority:
- API key from environment — `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `KIMI_API_KEY`, `QWEN_API_KEY`, `GLM_API_KEY`, `MINIMAX_API_KEY`, `DEEPSEEK_API_KEY`, `OPENROUTER_API_KEY`, `GEMINI_API_KEY`.
- Stored OAuth credentials at `~/.superagent/credentials/<name>.json`. Device-code flow — run `superagent auth login <name>`:
  - `claude-code` — reuses an existing Claude Code login
  - `codex` — reuses a Codex CLI login
  - `gemini` — reuses a Gemini CLI login
  - `kimi-code` — RFC 8628 device flow against `auth.kimi.com` (since v0.9.0)
  - `qwen-code` — device flow with PKCE S256 + per-account `resource_url` (since v0.9.0)
- Explicit config — `api_key` / `access_token` / `account_id` on the agent options.
OAuth refresh is serialised across processes via CredentialStore::withLock() — parallel queue workers sharing one credential file don't race on refresh (since v0.9.0).
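The cross-process serialisation can be pictured with a plain flock sketch. This illustrates the pattern only; it is not the SDK's CredentialStore implementation, and the function name and lock-file convention here are hypothetical:

```php
<?php
// Hypothetical sketch of cross-process refresh serialisation. The SDK's
// CredentialStore::withLock() is not reproduced here; this shows the
// flock-based pattern such a lock can be built on.
function withCredentialLock(string $credentialFile, callable $callback): mixed
{
    $lockFile = $credentialFile . '.lock';
    $fh = fopen($lockFile, 'c'); // create if missing, never truncate
    try {
        flock($fh, LOCK_EX);     // blocks until sibling processes release
        return $callback();      // e.g. re-read the file, refresh only if still expired
    } finally {
        flock($fh, LOCK_UN);
        fclose($fh);
    }
}

$token = withCredentialLock(sys_get_temp_dir() . '/demo-cred.json', fn () => 'refreshed');
echo $token, "\n";
```

The callback should re-check expiry after acquiring the lock: a sibling worker may have refreshed the token while this process was blocked.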
```php
new Agent([
    'provider' => 'openai',
    'env_http_headers' => [
        'OpenAI-Project' => 'OPENAI_PROJECT', // sent only when env set + non-empty
        'OpenAI-Organization' => 'OPENAI_ORGANIZATION',
    ],
    'http_headers' => [
        'x-app' => 'my-host-app', // static header
    ],
]);
```

Since v0.9.1
Every provider ships with model-id + pricing metadata bundled in resources/models.json. Refresh to the vendor's live /models endpoint at any time:
```bash
superagent models refresh        # every provider with env creds
superagent models refresh openai # one provider
superagent models list           # show merged catalog
superagent models status         # catalog source + age
```

Since v0.9.0
Dedicated provider at provider: 'openai-responses'. Hits /v1/responses with the full modern OpenAI shape.
Why use it over openai:
| Feature | Responses | Chat Completions |
|---|---|---|
| `previous_response_id` continuation | ✅ — server holds state; new turn skips resending context | ❌ — must re-send `messages[]` every turn |
| `reasoning.effort` (minimal / low / medium / high / xhigh) | ✅ native | ❌ requires model-id hacks for o-series |
| `reasoning.summary` | ✅ native | ❌ |
| `prompt_cache_key` (server-side cache pinning) | ✅ native | ❌ |
| `text.verbosity` (low / medium / high) | ✅ native | ❌ |
| `service_tier` (priority / default / flex / scale) | ✅ native | ❌ |
| Classified error types | ✅ via `response.failed` event codes | Pattern-matched on HTTP body |
```php
$agent = new Agent([
    'provider' => 'openai-responses',
    'model' => 'gpt-5',
]);
$result = $agent->run('analyse this codebase and propose refactors', [
    'reasoning' => ['effort' => 'high', 'summary' => 'auto'],
    'verbosity' => 'low',
    'prompt_cache_key' => 'session:42',
    'service_tier' => 'priority',
    'store' => true, // required to use previous_response_id next turn
]);

// Continue the conversation without resending history:
$provider = $agent->getProvider();
$nextAgent = new Agent([
    'provider' => 'openai-responses',
    'options' => ['previous_response_id' => $provider->lastResponseId()],
]);
$nextResult = $nextAgent->run('now go one level deeper on the auth layer');
```

Pass access_token (or set auth_mode: 'oauth') to auto-route through chatgpt.com/backend-api/codex — so Plus / Pro / Business subscribers bill against their subscription instead of getting rejected at api.openai.com.
```php
new Agent([
    'provider' => 'openai-responses',
    'access_token' => $token,
    'account_id' => $accountId, // adds chatgpt-account-id header
]);
```

Six base-URL markers auto-flip the provider into Azure mode. An api-version query string is added (default 2025-04-01-preview, overridable); the api-key header is set alongside Authorization.

```php
new Agent([
    'provider' => 'openai-responses',
    'base_url' => 'https://my-resource.openai.azure.com/openai/deployments/gpt-5',
    'api_key' => $azureKey,
    'azure_api_version' => '2024-12-01-preview', // optional override
]);
```

Inject a W3C traceparent into client_metadata so OpenAI-side logs correlate with your distributed trace:

```php
$tc = SuperAgent\Support\TraceContext::fresh(); // mint fresh
// OR: SuperAgent\Support\TraceContext::parse($headerValue); // from incoming HTTP header
$agent->run($prompt, ['trace_context' => $tc]);
// OR: $agent->run($prompt, ['traceparent' => '00-0af7-...', 'tracestate' => 'v=1']);
```

Since v0.9.1
Agent::switchProvider($name, $config, $policy) swaps the active provider mid-conversation. The message history is preserved and re-encoded into the new provider's wire format on the next request — so a tool history that ran against Claude can continue under Kimi without losing parallel tool calls or tool_use_id correlation.
```php
use SuperAgent\Conversation\HandoffPolicy;

$agent = new Agent(['provider' => 'anthropic', 'api_key' => $key, 'model' => 'claude-opus-4-7']);
$agent->run('analyse this codebase');

// Hand off to a cheaper / faster model for the next phase:
$agent->switchProvider('kimi', ['api_key' => $kimiKey, 'model' => 'kimi-k2-6'])
    ->run('write the unit tests');

// Token-window check after switching — different tokenizers count
// the same history differently (Anthropic vs GPT-4 drift 20–30%):
$status = $agent->lastHandoffTokenStatus();
if ($status !== null && ! $status['fits']) {
    // Trigger your existing IncrementalContext compression before the next call.
}
```

```php
HandoffPolicy::default()     // keep tool history, drop signed thinking, append handoff marker
HandoffPolicy::preserveAll() // keep everything — useful when the swap is temporary and you'll come back
HandoffPolicy::freshStart()  // collapse history to the latest user turn — fresh shot at a stuck conversation
```

Provider-only artifacts the new wire shape can't carry (Anthropic signed thinking, Kimi prompt_cache_key, Responses-API encrypted reasoning, Gemini cachedContent refs) get parked under AssistantMessage::$metadata['provider_artifacts'][$providerKey] — HandoffPolicy::preserveAll() keeps them around so a later swap back to the originating family can re-stitch them; default() keeps them stashed but invisible to the new provider.

switchProvider() constructs the new provider before mutating any state. If construction fails (missing api_key, unknown region, network probe rejection), the agent stays on the old provider with its history untouched.

All conversion goes through Conversation\Transcoder, which dispatches by WireFamily enum: Anthropic (also bedrock's anthropic.* invocations), OpenAIChat (OpenAI / Kimi / GLM / MiniMax / Qwen / OpenRouter / LMStudio), OpenAIResponses, Gemini (the only family that correlates tool calls by name + order, no ids), DashScope, and Ollama. Useful directly for offline transcoding:

```php
use SuperAgent\Conversation\Transcoder;
use SuperAgent\Conversation\WireFamily;

$wire = (new Transcoder())->encode($messages, WireFamily::Gemini);
```

Since v0.9.5
DeepSeek V4 (released 2026-04-24) ships two MoE models — deepseek-v4-pro (1.6T total / 49B active) and deepseek-v4-flash (284B / 13B active) — with 1M context as the default and a single-model thinking / non-thinking toggle. The same backend exposes both an OpenAI-wire and an Anthropic-wire endpoint, so the SDK supports two routes:
```php
// OpenAI-wire: native DeepSeekProvider
$agent = new Agent([
    'provider' => 'deepseek',
    'api_key' => getenv('DEEPSEEK_API_KEY'),
    'model' => 'deepseek-v4-pro', // or 'deepseek-v4-flash'
]);

// Anthropic-wire: reuse AnthropicProvider with a custom base_url
$agent = new Agent([
    'provider' => 'anthropic',
    'api_key' => getenv('DEEPSEEK_API_KEY'),
    'base_url' => 'https://api.deepseek.com/anthropic',
    'model' => 'deepseek-v4-pro',
]);
```

Reasoning channel. V4-thinking, R1, Kimi-thinking, Qwen-reasoning, and any future OpenAI-compat reasoner stream their internal monologue on delta.reasoning_content. The shared ChatCompletionsProvider SSE parser now surfaces it as a separate ContentBlock::thinking() block prepended to the assistant turn — callers render or hide it deliberately rather than mixing it into the user-facing answer.
```php
$result = $agent->run('hard reasoning prompt', ['thinking' => true]);
foreach ($result->message()->content as $block) {
    if ($block->type === 'thinking') {
        // model's reasoning chain
    } elseif ($block->type === 'text') {
        // user-facing answer
    }
}
```

Deprecation lane. deepseek-chat and deepseek-reasoner retire 2026-07-24. The catalog flags both with deprecated_until and replaced_by fields; ModelResolver emits a one-shot warning per process recommending deepseek-v4-flash / deepseek-v4-pro respectively. Set SUPERAGENT_SUPPRESS_DEPRECATION=1 to silence it.
Cache-aware billing. OpenAI-compat backends report prompt_tokens as gross (cache hits + misses). The parser now subtracts the cached portion before populating Usage::inputTokens, so the cache discount lands correctly — CostCalculator charges 10% of input price for read hits instead of effectively 110%. Affects every OpenAI-compat backend with caching (DeepSeek, Kimi, OpenAI itself).
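As a worked example of that accounting (illustrative numbers and function name, not the SDK's CostCalculator API), assuming a cache read bills at 10% of the input price as described above:

```php
<?php
// Illustrative arithmetic only. Gross prompt_tokens includes cache hits,
// so the cached portion must be subtracted before billing at full price.
function billedInputUsd(int $grossPromptTokens, int $cachedTokens, float $inputPricePerTok): float
{
    $missTokens = $grossPromptTokens - $cachedTokens;      // billed at full input price
    $hitCost    = $cachedTokens * $inputPricePerTok * 0.1; // cache-read discount
    return $missTokens * $inputPricePerTok + $hitCost;
}

// 10 000 gross prompt tokens, 8 000 served from cache, $2 per 1M input tokens:
$price = 2.0 / 1_000_000;
printf("%.6f\n", billedInputUsd(10_000, 8_000, $price));
// Charging the gross count AND the cache-read fee would overbill the cached 8 000 tokens.
```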
Beta endpoint. Set region: 'beta' to route to https://api.deepseek.com/beta for FIM / prefix completion access on the same auth.
Since v0.9.6
Agent::run($prompt, $options) drives the full turn loop until the model stops emitting tool_use blocks. Each turn's cost, usage, and messages flow into AgentResult.
```php
$result = $agent->run('...', [
    'model' => 'claude-sonnet-4-5-20250929', // per-call override
    'max_tokens' => 8192,
    'temperature' => 0.3,
    'response_format' => ['type' => 'json_schema', 'json_schema' => [...]],
    'idempotency_key' => 'job-42:turn-7', // since v0.9.1
    'system_prompt' => 'You are a precise analyst.',
]);

echo $result->text();
$result->turns();        // turn count
$result->totalUsage();   // Usage{inputTokens, outputTokens, cache*}
$result->totalCostUsd;   // float, across all turns
$result->idempotencyKey; // passthrough for usage-log dedup (since v0.9.1)
```

```php
$agent = (new Agent(['provider' => 'openai']))
    ->withMaxTurns(50)
    ->withMaxBudget(5.00); // USD — hard cap; aborts mid-loop if breached
```

```php
foreach ($agent->stream('...') as $assistantMessage) {
    echo $assistantMessage->text();
}
```

For machine-readable event streams (JSON / NDJSON for IDE / CI consumers) see the Wire Protocol section.

```php
new Agent([
    'provider' => 'anthropic',
    'auto_mode' => true, // delegates to TaskAnalyzer to pick model + tools
]);
```

```php
$result = $agent->run($prompt, ['idempotency_key' => $queueJobId . ':' . $turnNumber]);
// $result->idempotencyKey is truncated to 80 chars; it surfaces on the AgentResult
// so hosts that write ai_usage_logs can dedupe on it.
```

Since v0.9.1
Tools are subclasses of SuperAgent\Tools\Tool. Built-in tools — read / write / edit / bash / glob / grep / search / fetch — auto-load unless the caller opts out. Custom tools register via $agent->registerTool(new MyTool()).
```php
$agent = (new Agent(['provider' => 'anthropic']))
    ->loadTools(['read', 'write', 'bash'])
    ->registerTool(new MyDomainTool());
$result = $agent->run('apply the refactor plan in ./plan.md');
```

Dispatch sub-agents in parallel by emitting multiple agent tool_use blocks in one assistant message:

```php
$agent->registerTool(new AgentTool());
$result = $agent->run(<<<PROMPT
Run these three investigations in parallel:
1. Read CHANGELOG.md and summarise the last three releases
2. Read composer.json and list all runtime dependencies
3. Grep for TODO comments in src/
Collate the three reports.
PROMPT);
```

Each sub-agent runs in its own PHP process (via ProcessBackend); blocking I/O in one child doesn't block siblings. When proc_open is disabled, fibers take over.
Every AgentTool result carries hard evidence of what the child actually did — not just success: true:
```php
[
    'status' => 'completed',              // or 'completed_empty' / 'async_launched'
    'filesWritten' => ['/abs/path/a.md'], // deduped absolute paths
    'toolCallsByName' => ['Read' => 3, 'Write' => 1],
    'totalToolUseCount' => 4,             // observed, not self-reported turn count
    'productivityWarning' => null,        // or advisory string (CJK-localised — since v0.9.1)
    'outputWarnings' => [],               // since v0.9.1 — filesystem audit findings
]
```

- `completed_empty` — zero tool calls observed. Re-dispatch or pick a stronger model.
- `completed` + non-empty `productivityWarning` — the child invoked tools but wrote no files (often fine for advisory consults; check the text).
Productivity instrumentation since v0.8.9. CJK localisation + filesystem audit since v0.9.1.
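A host might triage that evidence array along these lines; the decision labels and thresholds below are this sketch's own, not part of the SDK:

```php
<?php
// Hedged sketch: host-side triage of the AgentTool evidence array shown
// above. Only the 'status' and 'productivityWarning' keys come from the
// SDK docs; the returned labels are illustrative.
function triageChildResult(array $evidence): string
{
    if ($evidence['status'] === 'completed_empty') {
        return 'redispatch'; // zero tool calls observed: retry or pick a stronger model
    }
    if ($evidence['status'] === 'completed' && $evidence['productivityWarning'] !== null) {
        return 'review';     // tools ran but no files written: read the text reply
    }
    return 'accept';
}

echo triageChildResult(['status' => 'completed_empty', 'productivityWarning' => null]), "\n"; // redispatch
echo triageChildResult(['status' => 'completed', 'productivityWarning' => 'wrote no files']), "\n"; // review
```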
Pass output_subdir to opt into both (a) a CJK-aware guard-block prepended to the child's prompt and (b) a post-exit filesystem scan:
```php
$agent->run('...', [
    'output_subdir' => '/abs/path/to/reports/analyst-1',
]);
// Audit catches:
// - non-whitelisted extensions (defaults to .md / .csv / .png)
// - consolidator-reserved filenames (summary.md / 摘要.md / mindmap.md / ...)
// - sibling-role sub-dirs (ceo / cfo / cto / marketing / ... or kebab-case role slugs)
// Configurable via the AgentOutputAuditor constructor. Never modifies disk.
```

Since v0.9.1
Any main brain can call these as regular tools — no provider switch needed.
Moonshot server-hosted builtins (execute server-side; results inlined in the assistant reply):
| Tool | Attributes | Since |
|---|---|---|
| `KimiMoonshotWebSearchTool` (`$web_search`) | network | v0.9.0 |
| `KimiMoonshotWebFetchTool` (`$web_fetch`) | network | v0.9.1 |
| `KimiMoonshotCodeInterpreterTool` (`$code_interpreter`) | network, cost, sensitive | v0.9.1 |
Other provider-native tool families:
- Kimi — `KimiFileExtractTool`, `KimiBatchTool`, `KimiSwarmTool`, `KimiMediaUploadTool`
- Qwen — `QwenLongFileTool` + the `dashscope_cache_control` feature
- GLM — `glm_web_search`, `glm_web_reader`, `glm_ocr`, `glm_asr`
- MiniMax — `minimax_tts`, `minimax_music`, `minimax_video`, `minimax_image`
Auto-loaded from ~/.superagent/agents/ (user scope) and <project>/.superagent/agents/ (project scope). Three formats: .yaml, .yml, .md. Cross-format `extend:` inheritance is supported.
```yaml
# ~/.superagent/agents/reviewer.yaml
name: reviewer
description: Code reviewer with strict style enforcement
extend: base-coder # can be .yaml / .yml / .md
system_prompt: |
  You review PRs with a focus on correctness and hidden state.
allowed_tools: [read, grep, glob]
disallowed_tools: [write, edit, bash]
model: claude-sonnet-4-5-20250929
```

```markdown
<!-- ~/.superagent/agents/analyst.md -->
---
name: analyst
extend: reviewer
model: gpt-5
---
Your job is to surface architectural risks. Write findings as Markdown.
```

Tool-list fields (allowed_tools, disallowed_tools, exclude_tools) accumulate through extend: chains. Cycle detection is depth-limited.
Since v0.9.0
Markdown-based capabilities you can register globally and pull into any agent run:
```bash
superagent skills install ./my-skill.md
superagent skills list
superagent skills show review
superagent skills remove review
superagent skills path # show install directory
```

Skill markdown supports frontmatter with name, description, allowed_tools, system_prompt. Skill runs inherit the caller's provider.
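A minimal skill file might look like the following. Only the frontmatter keys (name, description, allowed_tools, system_prompt) come from the SDK; the values here are illustrative:

```markdown
---
name: review
description: Strict PR review skill
allowed_tools: [read, grep, glob]
system_prompt: |
  Review the diff for correctness and hidden state. Output findings as Markdown.
---
Focus on the files changed in the current branch.
```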
```bash
superagent mcp list
superagent mcp add sqlite stdio uvx --arg mcp-server-sqlite
superagent mcp add brave stdio npx --arg @brave/mcp --env BRAVE_API_KEY=...
superagent mcp remove sqlite
superagent mcp status
superagent mcp path
```

Config persists atomically at ~/.superagent/mcp.json.

```bash
superagent mcp auth <name>       # run RFC 8628 device flow
superagent mcp reset-auth <name> # clear stored token
superagent mcp test <name>       # probe availability (stdio `command -v` or HTTP reachability)
```

Servers declaring an oauth: {client_id, device_endpoint, token_endpoint} block in their config use this flow. Since v0.9.0.
Drop a catalog at .mcp-servers/catalog.json (or .mcp-catalog.json) in your project root:
```json
{
  "mcpServers": {
    "sqlite": {"command": "uvx", "args": ["mcp-server-sqlite"]},
    "brave": {"command": "npx", "args": ["@brave/mcp"], "env": {"BRAVE_API_KEY": "k"}}
  },
  "domains": {
    "baseline": ["sqlite"],
    "all": ["sqlite", "brave"]
  }
}
```

Sync to a project .mcp.json:

```bash
superagent mcp sync                        # full catalog
superagent mcp sync --domain=baseline      # only the "baseline" domain
superagent mcp sync --servers=sqlite,brave # explicit subset
superagent mcp sync --dry-run              # preview, no disk writes
```

Non-destructive contract — byte-equal disk hash → unchanged; a user-edited file is kept as user-edited; first-time writes or our-last-hash matches become written. A manifest at <project>/.superagent/mcp-manifest.json tracks the sha256 of every file we've written, so stale entries clean up automatically.
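The three-way decision can be sketched as a pure function (names illustrative; the real logic lives behind superagent mcp sync):

```php
<?php
// Hedged sketch of the non-destructive sync contract described above.
// $diskHash: hash of the file currently on disk (null if absent);
// $newHash: hash of the content we want to write;
// $lastWrittenHash: manifest entry for the last version we wrote (null if none).
function syncDecision(?string $diskHash, string $newHash, ?string $lastWrittenHash): string
{
    if ($diskHash === $newHash) {
        return 'unchanged';   // byte-equal on disk already
    }
    if ($diskHash !== null && $diskHash !== $lastWrittenHash) {
        return 'user-edited'; // someone else touched it: keep theirs
    }
    return 'written';         // first-time write, or we wrote the last version
}

echo syncDecision('abc', 'abc', 'abc'), "\n"; // unchanged
echo syncDecision('old', 'new', 'old'), "\n"; // written (we own the stale copy)
echo syncDecision('edit', 'new', 'old'), "\n"; // user-edited
echo syncDecision(null, 'new', null), "\n";    // written (first-time)
```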
Since v0.9.1
v1 — line-delimited JSON (NDJSON), one event per line, self-describing via wire_version + type top-level fields. Foundation for IDE bridges, CI integrations, structured logs.
```bash
superagent --output json-stream "summarise src/"
# Emits events like:
# {"wire_version":1,"type":"turn.begin","turn_number":1}
# {"wire_version":1,"type":"text.delta","delta":"I'll start by..."}
# {"wire_version":1,"type":"tool.call","name":"read","input":{"path":"src/"}}
# {"wire_version":1,"type":"turn.end","turn_number":1,"usage":{...}}
```

Choose where the stream goes via a DSN:
| DSN | Meaning |
|---|---|
| `stdout` (default) / `stderr` | Standard streams |
| `file:///path/to/log.ndjson` | Append-mode file write |
| `tcp://host:port` | Connect to a listening TCP peer |
| `unix:///path/to/sock` | Connect to a listening unix socket |
| `listen://tcp/host:port` | Listen on TCP, accept one client |
| `listen://unix//path/to/sock` | Listen on a unix socket, accept one client |
Programmatic use:
```php
$factory = new SuperAgent\CLI\AgentFactory();
[$emitter, $transport] = $factory->makeWireEmitterForDsn('listen://unix//tmp/agent.sock');
// IDE plugin attaches, then:
$agent->run($prompt, ['wire_emitter' => $emitter]);
$transport->close();
```

Non-blocking peer socket means a dropped IDE doesn't stall the agent loop.
Wire Protocol v1 since v0.9.0. Socket / TCP / file transport since v0.9.1.
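A consumer of the NDJSON stream can be sketched as a decode-and-dispatch loop. The event shapes match the examples above; the handler bodies are hypothetical:

```php
<?php
// Hedged sketch of a wire-protocol consumer: one JSON event per line,
// dispatched on the self-describing `type` field.
$ndjson = <<<'NDJSON'
{"wire_version":1,"type":"turn.begin","turn_number":1}
{"wire_version":1,"type":"text.delta","delta":"I'll start by..."}
{"wire_version":1,"type":"turn.end","turn_number":1}
NDJSON;

foreach (explode("\n", $ndjson) as $line) {
    $event = json_decode($line, true, flags: JSON_THROW_ON_ERROR);
    echo match ($event['type']) {
        'turn.begin' => "turn {$event['turn_number']} started\n",
        'text.delta' => $event['delta'],
        'tool.call'  => "tool: {$event['name']}\n",
        default      => '', // unknown types are safe to skip; wire_version gates breaking changes
    };
}
echo "\n";
```

Reading from a socket or file instead of a string only changes the line source (fgets in a loop); the per-line decode stays the same.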
```php
new Agent([
    'provider' => 'openai',
    'request_max_retries' => 4,         // HTTP connect / 4xx / 5xx (default 3)
    'stream_max_retries' => 5,          // reserved for mid-stream resume (Responses API)
    'stream_idle_timeout_ms' => 60_000, // cURL low-speed cutoff on SSE (default 300 000)
]);
```

Jittered exponential backoff (0.9–1.1× multiplier) prevents thundering-herd retries from parallel workers. Retry-After header honoured exactly (no jitter — the server knows best).
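A sketch of that backoff scheme, assuming an illustrative 1 s base delay and 60 s cap (the SDK's actual constants are not documented here):

```php
<?php
// Hedged sketch of jittered exponential backoff with exact Retry-After
// passthrough. Base delay and cap are this example's assumptions.
function retryDelaySeconds(int $attempt, ?float $retryAfter = null): float
{
    if ($retryAfter !== null) {
        return $retryAfter;                 // server-provided value, honoured exactly
    }
    $base   = 1.0 * (2 ** $attempt);        // 1s, 2s, 4s, 8s, ...
    $jitter = 0.9 + mt_rand(0, 200) / 1000; // uniform in [0.9, 1.1]
    return min($base * $jitter, 60.0);      // cap (illustrative)
}

echo retryDelaySeconds(3, retryAfter: 12.0), "\n"; // Retry-After wins: exactly 12
echo retryDelaySeconds(2), "\n";                   // somewhere in [3.6, 4.4]
```

The jitter spreads retries from parallel workers across a 20% window, which is what breaks the thundering herd; the Retry-After path deliberately skips it.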
Since v0.9.1
Six subclasses of ProviderException emitted by OpenAIErrorClassifier against the response body's error.code / error.type / HTTP status:
```php
try {
    $agent->run($prompt);
} catch (\SuperAgent\Exceptions\Provider\ContextWindowExceededException $e) {
    // prompt was too long; compact history or swap models
} catch (\SuperAgent\Exceptions\Provider\QuotaExceededException $e) {
    // monthly cap hit; notify operator
} catch (\SuperAgent\Exceptions\Provider\UsageNotIncludedException $e) {
    // ChatGPT plan doesn't include this model; upgrade or switch to an API key
} catch (\SuperAgent\Exceptions\Provider\CyberPolicyException $e) {
    // policy rejection — don't retry
} catch (\SuperAgent\Exceptions\Provider\ServerOverloadedException $e) {
    // retryable with backoff; check $e->retryAfterSeconds
} catch (\SuperAgent\Exceptions\Provider\InvalidPromptException $e) {
    // malformed body — inspect and fix
} catch (\SuperAgent\Exceptions\ProviderException $e) {
    // catch-all base; every subclass above extends this
}
```

All subclasses extend ProviderException, so pre-existing catch (ProviderException) sites keep working unchanged.
Since v0.9.1
```bash
superagent health        # 5s cURL probe of every configured provider
superagent health --all  # include providers with no env key (useful for "what did I forget to set?")
superagent health --json # machine-readable table; exits non-zero on any failure
```

Wraps ProviderRegistry::healthCheck() — distinguishes auth rejection (401/403) from network timeout from "no API key", so an operator can fix the right thing without guessing.
Since v0.9.1
- Per-index tool-call assembly — one streamed call split across N chunks now produces one tool-use block, not N fragments.
- `finish_reason: error_finish` detection — DashScope-compat throttles raise `StreamContentError` (retryable, HTTP 429) instead of silently polluting the message body.
- Truncated tool-call JSON repair — a one-shot attempt to close unbalanced braces before falling back to an empty arg dict.
- Dual-shape cached-token reads — `usage.prompt_tokens_details.cached_tokens` (current OpenAI shape) AND `usage.cached_tokens` (legacy) both populate `Usage::cacheReadInputTokens`.
Five detectors observe the streaming event bus; first trigger is sticky:
| Detector | Signal |
|---|---|
| `TOOL_LOOP` | Same tool + same normalised args 5× in a row |
| `STAGNATION` | Same tool name 8× regardless of args |
| `FILE_READ_LOOP` | ≥ 8 of the last 15 tool calls are read-like, with a cold-start exemption |
| `CONTENT_LOOP` | Same 50-char rolling window appears 10× in streamed text |
| `THOUGHT_LOOP` | Same thinking-channel text appears 3× |
```php
new Agent([
    'provider' => 'openai',
    'loop_detection' => true, // defaults
    // OR per-detector overrides:
    // 'loop_detection' => ['TOOL_LOOP' => 10, 'STAGNATION' => 15],
]);
```

Violations fan out as loop_detected wire events — the agent keeps running, and the host decides whether to intervene.
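The TOOL_LOOP signal can be sketched as a consecutive-repeat counter over a normalised (tool, args) signature. This standalone class is an illustration, not the SDK's detector:

```php
<?php
// Hedged sketch of the TOOL_LOOP heuristic: same tool + same normalised
// args N times in a row. The real detector observes the streaming event
// bus and is sticky on first trigger; this version leaves that to the caller.
final class ToolLoopDetector
{
    private ?string $lastSignature = null;
    private int $streak = 0;

    public function __construct(private int $threshold = 5) {}

    public function observe(string $tool, array $args): bool
    {
        ksort($args); // normalise key order so reordered args hash identically
        $sig = $tool . ':' . json_encode($args);
        $this->streak = ($sig === $this->lastSignature) ? $this->streak + 1 : 1;
        $this->lastSignature = $sig;
        return $this->streak >= $this->threshold;
    }
}

$d = new ToolLoopDetector(threshold: 3);
var_dump($d->observe('read', ['path' => 'a.md'])); // false
var_dump($d->observe('read', ['path' => 'a.md'])); // false
var_dump($d->observe('read', ['path' => 'a.md'])); // true
```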
Every turn snapshots the agent state (messages, cost, usage). Attach a GitShadowStore and file-level snapshots land alongside in a separate bare git repo at ~/.superagent/history/<project-hash>/shadow.git — never touches the user's own .git.
```php
use SuperAgent\Checkpoint\CheckpointManager;
use SuperAgent\Checkpoint\GitShadowStore;

$mgr = new CheckpointManager(shadowStore: new GitShadowStore('/path/to/project'));
$mgr->createCheckpoint($agentState, label: 'after-refactor');

// Later:
$checkpoints = $mgr->list();
$mgr->restore($checkpoints[0]->id);
$mgr->restoreFiles($checkpoints[0]); // plays back the shadow commit
```

Restore reverts tracked files and leaves untracked files in place for safety. The project's own .gitignore is respected (the shadow's worktree IS the project dir).
```php
new Agent([
    'provider' => 'anthropic',
    'permission_mode' => 'ask', // or 'default' / 'plan' / 'bypassPermissions'
]);
```

ask prompts the caller's PermissionCallbackInterface before any write-class tool. Wrap it in WireProjectingPermissionCallback to surface the request as a wire event for IDE prompts.
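This README does not show the PermissionCallbackInterface signature, so the single-method interface below is hypothetical; it only illustrates how an ask-mode host might gate write-class tools:

```php
<?php
// Hedged sketch only. DemoPermissionCallback is a stand-in for the SDK's
// PermissionCallbackInterface, whose real shape is not documented here.
interface DemoPermissionCallback
{
    public function allow(string $toolName, array $input): bool;
}

final class DenyWritesOutsideProject implements DemoPermissionCallback
{
    public function __construct(private string $projectRoot) {}

    public function allow(string $toolName, array $input): bool
    {
        if (!in_array($toolName, ['write', 'edit', 'bash'], true)) {
            return true; // read-class tools pass through
        }
        $path = $input['path'] ?? '';
        return str_starts_with($path, $this->projectRoot); // confine writes to the project
    }
}

$cb = new DenyWritesOutsideProject('/home/me/project');
var_dump($cb->allow('read',  ['path' => '/etc/passwd']));           // true: read-class
var_dump($cb->allow('write', ['path' => '/etc/passwd']));           // false
var_dump($cb->allow('write', ['path' => '/home/me/project/a.md'])); // true
```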
```bash
superagent                       # interactive REPL
superagent "fix the login bug"   # one-shot
superagent init                  # initialize ~/.superagent/
superagent auth login <provider> # import OAuth login
superagent auth status           # show stored credentials
superagent models list / update / refresh / status / reset
superagent mcp list / add / remove / sync / auth / reset-auth / test / status / path
superagent skills install / list / show / remove / path
superagent swarm <prompt>        # plan + execute a swarm
superagent health [--all] [--json] [--providers=a,b,c] # provider reachability
```

Options:
```text
-m, --model <model>          Model name
-p, --provider <provider>    Provider key (openai, anthropic, openai-responses, ...)
--max-turns <n>              Maximum agent turns (default 50)
-s, --system-prompt <prompt> Custom system prompt
--project <path>             Project working directory
--json                       Output results as JSON
--output json-stream         Emit NDJSON wire events
--verbose-thinking           Show full thinking stream
--no-thinking                Hide thinking
--plain                      Disable ANSI colours
--no-rich                    Legacy minimal renderer
-V, --version                Show version
-h, --help                   Show help
```
Interactive commands (inside the REPL):
```text
/help            available commands
/model <name>    switch model
/cost            show cost tracking
/compact         force context compaction
/session         save|load|list|delete
/clear           clear conversation
/quit            exit
```
Standalone CLI since v0.8.6.
The service provider auto-registers when you composer require forgeomni/superagent:
```php
// config/superagent.php
return [
    'default_provider' => env('SUPERAGENT_PROVIDER', 'anthropic'),
    'providers' => [
        'anthropic' => ['api_key' => env('ANTHROPIC_API_KEY')],
        'openai' => ['api_key' => env('OPENAI_API_KEY')],
        'openai-responses' => ['api_key' => env('OPENAI_API_KEY'), 'model' => 'gpt-5'],
        // ...
    ],
    'agent' => [
        'max_turns' => 50,
        'max_budget_usd' => 5.00,
    ],
];
```

```php
use SuperAgent\Facades\SuperAgent;

$result = SuperAgent::agent(['provider' => 'openai'])
    ->run('summarise this week\'s commits');
```

Artisan commands mirror the CLI:

```bash
php artisan superagent:chat "fix the bug"
php artisan superagent:mcp sync
php artisan superagent:models refresh
php artisan superagent:health --json
```

See docs/LARAVEL.md for queue integration, job dispatching, and the ai_usage_logs schema.
Frameworks that embed SuperAgent — typically multi-tenant platforms that store encrypted provider credentials in a database row and spin up an agent per request — use ProviderRegistry::createForHost() instead of create(). The host passes a normalised shape and the SDK dispatches to the right constructor via per-provider adapters.
```php
use SuperAgent\Providers\ProviderRegistry;

// One call, every provider — no `match ($type)` on the host side.
$agent = ProviderRegistry::createForHost($sdkKey, [
    'api_key' => $aiProvider->decrypted_api_key,
    'base_url' => $aiProvider->base_url,
    'model' => $resolvedModel,
    'max_tokens' => $extra['max_tokens'] ?? null,
    'region' => $extra['region'] ?? null,
    'credentials' => $extra, // opaque blob; adapter picks what it needs
    'extra' => $extra,       // provider-specific passthrough (organization, reasoning, verbosity, ...)
]);
```

Every ChatCompletions-style provider (Anthropic, OpenAI, OpenAI-Responses, OpenRouter, Ollama, LM Studio, Gemini, Kimi, Qwen, Qwen-native, GLM, MiniMax) uses the default pass-through adapter. Bedrock ships a built-in adapter that splits credentials.aws_access_key_id / aws_secret_access_key / aws_region into the AWS SDK's shape.

Plugins or hosts that need to customise an adapter register their own:

```php
ProviderRegistry::registerHostConfigAdapter('my-custom-provider', function (array $host): array {
    return [
        'api_key' => $host['credentials']['my_custom_token'] ?? null,
        'model' => $host['model'] ?? 'default-model',
        // ... arbitrary transform
    ];
});
```

New SDK provider keys in future releases register their own adapter (or ride the default one), so the host-side factory code never needs to grow a new match arm per release.
Since v0.9.2
Every option accepted by the Agent constructor, grouped. Defaults in parentheses.
Provider selection
| Key | Accepts |
|---|---|
| `provider` | Registry key or an LLMProvider instance |
| `model` | Model id — overrides provider default |
| `base_url` | URL — overrides provider default; also triggers auto-detection (Azure) |
| `region` | `intl` / `cn` / `us` / `hk` / `code` (provider-specific) |
| `api_key` | Provider API key |
| `access_token` + `account_id` | OAuth (OpenAI ChatGPT / Anthropic Claude Code) |
| `auth_mode` | `'api_key'` (default) or `'oauth'` |
| `organization` | OpenAI org id (adds OpenAI-Organization header) |
Agent loop
| Key | Default |
|---|---|
| `max_turns` | 50 |
| `max_budget_usd` | 0.0 (no cap) |
| `system_prompt` | null |
| `auto_mode` | false |
| `allowed_tools` / `denied_tools` | null / [] |
| `permission_mode` | `'default'` |
| `options` | [] (per-call defaults forwarded to provider) |
Per-call options ($agent->run($prompt, $options))
| Key | Since | Notes |
|---|---|---|
| `model` / `max_tokens` / `temperature` / `tool_choice` / `response_format` | v0.1.0 | Standard Chat Completions knobs |
| `features` | v0.8.8 | `thinking` / `prompt_cache_key` / `dashscope_cache_control` / ... routed via FeatureDispatcher |
| `extra_body` | v0.9.0 | Power-user escape hatch — deep-merged into the request body |
| `loop_detection` | v0.9.0 | `true` (defaults), `false`, or threshold overrides |
| `idempotency_key` | v0.9.1 | Passthrough to `AgentResult::$idempotencyKey` |
| `reasoning` | v0.9.1 | Responses API — `{effort, summary}` |
| `verbosity` | v0.9.1 | Responses API — low / medium / high |
| `prompt_cache_key` | v0.9.0 | Cache key for Kimi + OpenAI Responses |
| `previous_response_id` | v0.9.1 | Responses API continuation |
| `store` / `include` / `service_tier` / `parallel_tool_calls` | v0.9.1 | Responses API |
| `client_metadata` | v0.9.1 | Responses API opaque key-value map |
| `trace_context` / `traceparent` / `tracestate` | v0.9.1 | W3C Trace Context injection |
| `output_subdir` | v0.9.1 | AgentTool guard-block + post-exit audit |
Retry + transport (provider-level)
| Key | Default | Since |
|---|---|---|
| `max_retries` | 3 | v0.1.0 (legacy single knob) |
| `request_max_retries` | 3 (inherits `max_retries`) | v0.9.1 |
| `stream_max_retries` | 5 | v0.9.1 |
| `stream_idle_timeout_ms` | 300_000 | v0.9.1 |
| `env_http_headers` | [] | v0.9.1 |
| `http_headers` | [] | v0.9.1 |
| `experimental_ws_transport` | false | v0.9.1 (scaffold) |
| `azure_api_version` | `'2025-04-01-preview'` | v0.9.1 (Azure only) |
- CHANGELOG — full per-release notes
- INSTALL — install + first-run setup
- Advanced usage — patterns, sample agents, debugging
- Native providers — region maps + capability matrix
- Wire protocol — v1 spec
- Features matrix — which provider supports which feature
MIT — see LICENSE.