Merged
8 changes: 6 additions & 2 deletions docs/communicate.mdx
@@ -15,9 +15,11 @@ agent = on_relay(my_agent, relay)
```

```typescript TypeScript
+import { wrapLanguageModel } from 'ai';
 import { Relay } from '@agent-relay/sdk/communicate';
-import { onRelay } from '@agent-relay/sdk/communicate/adapters/pi';
-const config = onRelay('MyAgent', piConfig, new Relay('MyAgent'));
+import { onRelay } from '@agent-relay/sdk/communicate/adapters/ai-sdk';
+const session = onRelay({ name: 'MyAgent' }, new Relay('MyAgent'));
+const model = wrapLanguageModel({ model: baseModel, middleware: session.middleware });
```
</CodeGroup>

@@ -30,6 +32,7 @@ const config = onRelay('MyAgent', piConfig, new Relay('MyAgent'));
| Claude Agent SDK | Python, TypeScript | Push (Tier 1) | Hooks: PostToolUse, Stop |
| Google ADK | Python | Push (Tier 1) | before_model_callback injection |
| Pi | TypeScript | Push (Tier 1) | session.steer / session.followUp |
| AI SDK | TypeScript | Poll (Tier 2) | Tools + middleware system injection |
| OpenAI Agents | Python | Poll (Tier 2) | Tools + instructions wrapper |
| Agno | Python | Poll (Tier 2) | Tools + instructions wrapper |
| Swarms | Python | Poll (Tier 2) | Tools + on_message callback |
@@ -73,6 +76,7 @@ await relay.close()
## Per-Framework Guides

<CardGroup cols={2}>
<Card title="AI SDK" href="/communicate/ai-sdk">TypeScript adapter for Vercel AI SDK apps</Card>
<Card title="OpenAI Agents" href="/communicate/openai-agents">Python adapter for OpenAI Agents SDK</Card>
<Card title="Claude Agent SDK" href="/communicate/claude-sdk">Python + TypeScript adapter</Card>
<Card title="Google ADK" href="/communicate/google-adk">Python adapter for Google ADK</Card>
148 changes: 148 additions & 0 deletions docs/communicate/ai-sdk.mdx
@@ -0,0 +1,148 @@
---
title: 'AI SDK'
description: 'Connect Vercel AI SDK apps to Relaycast with onRelay().'
---

Connect an [AI SDK](https://ai-sdk.dev/docs/introduction) app to Relaycast with a single `onRelay()` call.

## Quick Start

```typescript
import { streamText, wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { Relay } from '@agent-relay/sdk/communicate';
import { onRelay } from '@agent-relay/sdk/communicate/adapters/ai-sdk';

const relay = new Relay('SupportLead');
const relaySession = onRelay({ name: 'SupportLead' }, relay);

const model = wrapLanguageModel({
model: openai('gpt-4o-mini'),
middleware: relaySession.middleware,
});

const result = await streamText({
model,
system: 'You coordinate support specialists and keep the user informed.',
tools: relaySession.tools,
messages: [{ role: 'user', content: 'Triage the latest onboarding issue.' }],
});
```

## What `onRelay()` Provides

`onRelay()` returns a session object with:

- `tools` — AI SDK-compatible relay tools for `generateText()` / `streamText()`
- `middleware` — language model middleware that injects newly received relay messages into the next model call
- `cleanup()` — unsubscribes from live relay delivery and clears buffered injections

For string-style call sites, relay context is appended to `system`. For call sites that pass a `messages` array, the middleware also prepends a synthetic `system` message, so `messages`-driven flows get the same relay context without needing a separate top-level `system` string.

This fits the AI SDK model cleanly: tool calls remain explicit, while incoming relay messages show up as fresh coordination context on the next model turn.
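As an illustrative sketch of this injection step — not the adapter's actual internals, and all names here are hypothetical — merging buffered relay messages into the system prompt before the next model call might look like:

```typescript
// Hypothetical sketch: how buffered relay messages could be merged into
// the system prompt ahead of the next model call. formatRelayContext and
// injectRelayContext are illustrative names, not the adapter's real API.
type RelayMessage = { from: string; text: string };

function formatRelayContext(buffered: RelayMessage[]): string {
  return buffered.map((m) => `[relay] ${m.from}: ${m.text}`).join('\n');
}

function injectRelayContext(
  system: string | undefined,
  buffered: RelayMessage[],
): string | undefined {
  // Nothing buffered: leave the call untouched.
  if (buffered.length === 0) return system;
  const context = `New relay messages:\n${formatRelayContext(buffered)}`;
  // String-style call sites: append to the existing system prompt;
  // otherwise the relay context becomes the system prompt itself.
  return system ? `${system}\n\n${context}` : context;
}
```

The key property is that injection is a pure transformation of the call parameters, which is exactly the shape AI SDK middleware expects.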

## Tools Added

`onRelay()` exposes four tools:

- `relay_send({ to, text })`
- `relay_inbox()`
- `relay_post({ channel, text })`
- `relay_agents()`

These can be passed straight into `generateText()` or `streamText()`.

## Workflow-Friendly Pattern

For consumer-facing apps, the usual pattern is:

1. **Frontend app** uses AI SDK UI (`useChat`, streamed responses, etc.)
2. **Server route** runs `streamText()` with Relay tools attached
3. **Specialists or reviewers** participate via Relay / workflow runner
4. **Workflow runner** handles longer multi-agent execution when the chat turn needs more than one model call

### Next.js route that can escalate into a Relay workflow

```typescript
import { streamText, wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { Relay } from '@agent-relay/sdk/communicate';
import { onRelay } from '@agent-relay/sdk/communicate/adapters/ai-sdk';
import { runWorkflow } from '@agent-relay/sdk/workflows';

export async function POST(req: Request) {
const { prompt, repo, escalate } = await req.json();

const relay = new Relay('CustomerFacingLead');
const relaySession = onRelay({
name: 'CustomerFacingLead',
instructions:
'If implementation needs multiple specialists, post status to the team and summarize clearly for the end user.',
}, relay);

const model = wrapLanguageModel({
model: openai('gpt-4o-mini'),
middleware: relaySession.middleware,
});

if (escalate) {
const workflow = await runWorkflow('workflows/feature-dev.yaml', {
vars: { repo, task: prompt },
});

return Response.json({
status: workflow.status,
runId: workflow.runId,
});
}

const result = streamText({
model,
tools: relaySession.tools,
system: 'You are the point person for the user. Coordinate internally via Relay when needed.',
messages: [{ role: 'user', content: prompt }],
});

return result.toUIMessageStreamResponse({
onFinish() {
relaySession.cleanup();
void relay.close();
},
});
}
```

## Example App

A small end-to-end example lives at `examples/ai-sdk-relay-helpdesk/`.

It shows:

- a tiny Next.js UI
- an AI SDK route using `onRelay()`
- `messages`-based model calls
- a simple escalation gate into `workflows/helpdesk-escalation.yaml`

## API

### `onRelay(options, relay?)`

**Parameters**

- `options.name` — Relay agent name
- `options.instructions` — optional extra relay-specific instructions
- `options.includeDefaultInstructions` — set to `false` if you want full control over the injected relay guidance
- `relay` — optional pre-configured `Relay` client

**Returns**

- `tools`
- `middleware`
- `relay`
- `cleanup()`

## Notes

- Incoming relay messages are injected on the **next** model call, which matches AI SDK’s request/response model.
- `relay_inbox()` still drains the full buffered inbox, so your app can explicitly inspect message history when needed.
- For long-running, multi-step coordination, pair this adapter with `runWorkflow()` or YAML workflows rather than trying to keep everything inside one chat turn.
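The drain behavior in the second note can be sketched as drain-on-read buffer semantics (a hypothetical model of the behavior, not the SDK's implementation):

```typescript
// Hypothetical sketch of relay_inbox() drain semantics: each read returns
// everything buffered so far and empties the buffer, so a subsequent read
// only sees messages that arrived in between.
class InboxBuffer<T> {
  private buffered: T[] = [];

  push(message: T): void {
    this.buffered.push(message);
  }

  // Models relay_inbox(): return all pending messages and clear the buffer.
  drain(): T[] {
    const pending = this.buffered;
    this.buffered = [];
    return pending;
  }
}
```

This is why calling the tool twice in a row yields an empty second result unless new messages arrived in between.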
2 changes: 1 addition & 1 deletion docs/introduction.mdx
@@ -6,7 +6,7 @@ description: 'Spawn, coordinate, and connect AI agents from TypeScript or Python
The Agent Relay SDK has two modes:

- **Orchestrate** — Spawn and manage AI agents (Claude, Codex, Gemini, OpenCode) from code. Send messages, listen for responses, and shut them down when done.
-- **Communicate** — Put an existing framework agent "on the relay" with a single `on_relay()` call. Works with OpenAI Agents, Claude Agent SDK, Google ADK, Pi, Agno, Swarms, and CrewAI.
+- **Communicate** — Put an existing framework agent "on the relay" with a single `on_relay()` / `onRelay()` call. Works with AI SDK, OpenAI Agents, Claude Agent SDK, Google ADK, Pi, Agno, Swarms, and CrewAI.

<CodeGroup>
```bash TypeScript
8 changes: 6 additions & 2 deletions docs/markdown/communicate.md
@@ -15,9 +15,11 @@ agent = on_relay(my_agent, relay)

```typescript
// TypeScript
+import { wrapLanguageModel } from 'ai';
 import { Relay } from '@agent-relay/sdk/communicate';
-import { onRelay } from '@agent-relay/sdk/communicate/adapters/pi';
-const config = onRelay('MyAgent', piConfig, new Relay('MyAgent'));
+import { onRelay } from '@agent-relay/sdk/communicate/adapters/ai-sdk';
+const session = onRelay({ name: 'MyAgent' }, new Relay('MyAgent'));
+const model = wrapLanguageModel({ model: baseModel, middleware: session.middleware });
```

`on_relay()` auto-detects the framework and applies the right adapter. No configuration needed.
@@ -29,6 +31,7 @@ const config = onRelay('MyAgent', piConfig, new Relay('MyAgent'));
| Claude Agent SDK | Python, TypeScript | Push (Tier 1) | Hooks: PostToolUse, Stop |
| Google ADK | Python | Push (Tier 1) | before_model_callback injection |
| Pi | TypeScript | Push (Tier 1) | session.steer / session.followUp |
| AI SDK | TypeScript | Poll (Tier 2) | Tools + middleware system injection |
| OpenAI Agents | Python | Poll (Tier 2) | Tools + instructions wrapper |
| Agno | Python | Poll (Tier 2) | Tools + instructions wrapper |
| Swarms | Python | Poll (Tier 2) | Tools + on_message callback |
@@ -70,6 +73,7 @@ await relay.close()

## Per-Framework Guides

- [AI SDK](/communicate/ai-sdk) — TypeScript adapter for Vercel AI SDK apps
- [OpenAI Agents](/communicate/openai-agents) — Python adapter for OpenAI Agents SDK
- [Claude Agent SDK](/communicate/claude-sdk) — Python + TypeScript adapter
- [Google ADK](/communicate/google-adk) — Python adapter for Google ADK
91 changes: 91 additions & 0 deletions docs/markdown/communicate/ai-sdk.md
@@ -0,0 +1,91 @@
# AI SDK

Connect an [AI SDK](https://ai-sdk.dev/docs/introduction) app to Relaycast with a single `onRelay()` call.

## Quick Start

```typescript
import { streamText, wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { Relay } from '@agent-relay/sdk/communicate';
import { onRelay } from '@agent-relay/sdk/communicate/adapters/ai-sdk';

const relay = new Relay('SupportLead');
const relaySession = onRelay({ name: 'SupportLead' }, relay);

const model = wrapLanguageModel({
model: openai('gpt-4o-mini'),
middleware: relaySession.middleware,
});

const result = await streamText({
model,
system: 'You coordinate support specialists and keep the user informed.',
tools: relaySession.tools,
messages: [{ role: 'user', content: 'Triage the latest onboarding issue.' }],
});
```

## What `onRelay()` Provides

`onRelay()` returns:

- `tools` for `generateText()` / `streamText()`
- `middleware` that injects live relay messages into the next model call
- `cleanup()` to unsubscribe and clear buffered injections

For string-style call sites, relay context is appended to `system`. For message-array call sites, the middleware also prepends a synthetic `system` message so chat-style flows get the same relay context.

## Workflow-Friendly Pattern

Use AI SDK in the consumer-facing app, and Relay workflows for the longer internal coordination path:

```typescript
import { streamText, wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { Relay } from '@agent-relay/sdk/communicate';
import { onRelay } from '@agent-relay/sdk/communicate/adapters/ai-sdk';
import { runWorkflow } from '@agent-relay/sdk/workflows';

export async function POST(req: Request) {
const { prompt, repo, escalate } = await req.json();

const relay = new Relay('CustomerFacingLead');
const relaySession = onRelay({ name: 'CustomerFacingLead' }, relay);

const model = wrapLanguageModel({
model: openai('gpt-4o-mini'),
middleware: relaySession.middleware,
});

if (escalate) {
const workflow = await runWorkflow('workflows/feature-dev.yaml', {
vars: { repo, task: prompt },
});

return Response.json({ status: workflow.status, runId: workflow.runId });
}

const result = await streamText({
model,
tools: relaySession.tools,
system: 'You are the point person for the user. Coordinate internally via Relay when needed.',
messages: [{ role: 'user', content: prompt }],
});

return Response.json({ mode: 'chat', text: await result.text });
}
```

## Example App

See `examples/ai-sdk-relay-helpdesk/` for a compact Next.js example that pairs AI SDK chat with Relay workflow escalation.

## API

### `onRelay(options, relay?)`

- `options.name` — Relay agent name
- `options.instructions` — optional extra instructions
- `options.includeDefaultInstructions` — disable built-in relay guidance if needed
- `relay` — optional pre-configured `Relay`
2 changes: 1 addition & 1 deletion docs/markdown/introduction.md
@@ -5,7 +5,7 @@ Spawn, coordinate, and connect AI agents from TypeScript or Python.
The Agent Relay SDK has two modes:

- **Orchestrate** — Spawn and manage AI agents (Claude, Codex, Gemini, OpenCode) from code. Send messages, listen for responses, and shut them down when done.
-- **Communicate** — Put an existing framework agent "on the relay" with a single `on_relay()` call. Works with OpenAI Agents, Claude Agent SDK, Google ADK, Pi, Agno, Swarms, and CrewAI.
+- **Communicate** — Put an existing framework agent "on the relay" with a single `on_relay()` / `onRelay()` call. Works with AI SDK, OpenAI Agents, Claude Agent SDK, Google ADK, Pi, Agno, Swarms, and CrewAI.

## Install

39 changes: 39 additions & 0 deletions examples/ai-sdk-relay-helpdesk/README.md
@@ -0,0 +1,39 @@
# AI SDK + Relay Helpdesk Example

A small consumer-facing Next.js app that uses the AI SDK adapter as the point-person layer and escalates bigger requests into a Relay workflow.

## What it demonstrates

- `onRelay()` attached to an AI SDK model via `wrapLanguageModel()`
- normal user-facing chat turns through `streamText()`
- a simple escalation gate that kicks off `runWorkflow()` for longer multi-step work
- a workflow file that uses a lead + specialist review path

## Files

- `app/page.tsx` — tiny browser UI
- `app/api/chat/route.ts` — AI SDK route with Relay communicate middleware
- `workflows/helpdesk-escalation.yaml` — Relay workflow used for escalations

## Run

```bash
cd examples/ai-sdk-relay-helpdesk
npm install
npm run dev
```

Set the env vars your app needs first, for example:

```bash
export OPENAI_API_KEY=...
export RELAY_API_KEY=...
export RELAY_BASE_URL=http://localhost:3888
```

Then open `http://localhost:3000` and try:

- a normal question like `Summarize the latest support issue`
- an escalation like `Please escalate: coordinate a migration plan for repo X`

If the prompt begins with `Please escalate:`, the route starts the Relay workflow and returns the workflow run id instead of trying to finish everything in one chat turn.
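The gate described above can be sketched as a small pure check (the prefix matches this example app's prompts; the function name and return shape are illustrative — the real handling lives in `app/api/chat/route.ts`):

```typescript
// Illustrative sketch of the route's escalation gate: a prompt starting
// with the escalation prefix triggers the Relay workflow path, everything
// else stays a normal chat turn.
const ESCALATION_PREFIX = 'Please escalate:';

function parseEscalation(prompt: string): { escalate: boolean; task: string } {
  if (prompt.startsWith(ESCALATION_PREFIX)) {
    // Hand the remainder of the prompt to the workflow as the task.
    return { escalate: true, task: prompt.slice(ESCALATION_PREFIX.length).trim() };
  }
  return { escalate: false, task: prompt };
}
```

Keeping the gate as a pure function makes it trivial to swap in a smarter trigger (a classifier, a user toggle) without touching the streaming path.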