
Commit 4f892c7

Add documentation for new examples (25-28)
- Agent delegation example
- Custom visualizer example
- Observability with Laminar
- Ask agent functionality

Co-authored-by: openhands <openhands@all-hands.dev>
1 parent 7ed7975 commit 4f892c7


4 files changed, +218 -0 lines changed


sdk/guides/agent-delegation.mdx

Lines changed: 51 additions & 0 deletions
---
title: Agent Delegation
description: Delegate tasks to sub-agents for parallel processing
---

<Note>
This example is available on GitHub: [examples/01_standalone_sdk/25_agent_delegation.py](https://github.com/All-Hands-AI/agent-sdk/blob/main/examples/01_standalone_sdk/25_agent_delegation.py)
</Note>

Learn how to use agent delegation to have a main agent delegate tasks to specialized sub-agents for parallel processing. Each sub-agent runs independently and returns results to the main agent for consolidation.

```python icon="python" expandable examples/01_standalone_sdk/25_agent_delegation.py
```

```bash Running the Example
export LLM_API_KEY="your-api-key"
# Optional overrides
# export LLM_MODEL="anthropic/claude-sonnet-4-5-20250929"
# export LLM_BASE_URL="https://your-llm-base-url"

cd agent-sdk
uv run python examples/01_standalone_sdk/25_agent_delegation.py
```
### How It Works

- Register the `DelegateTool` and create a delegation visualizer:

```python
from openhands.tools.delegate import DelegateTool, DelegationVisualizer, register_agent

# register_tool comes from the SDK's tool registry; see the full example for the exact import.
register_tool("DelegateTool", DelegateTool)
visualizer = DelegationVisualizer()
```

- Register sub-agents with specific capabilities:

```python
register_agent(
    agent_id="analyst",
    agent=analyst_agent,
    workspace=cwd,
)
```

- The main agent can then delegate tasks using the `DelegateTool`, and sub-agents execute them independently (a minimal end-to-end sketch follows below).
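Putting the pieces together, the overall flow looks roughly like this. This is a sketch, not the example verbatim: `main_agent` and `analyst_agent` are assumed to be agents built as in the example script, the delegation prompt is illustrative, the import paths marked below should be checked against the script, and the visualizer is passed to the conversation the same way a custom visualizer is in the Custom Visualizer guide.

```python
import os

from openhands.sdk import Conversation  # assumed import path; see the example script
from openhands.sdk.tool import register_tool  # assumed import path; see the example script
from openhands.tools.delegate import DelegateTool, DelegationVisualizer, register_agent

cwd = os.getcwd()

# Make the delegate tool available and set up the delegation-aware visualizer.
register_tool("DelegateTool", DelegateTool)
visualizer = DelegationVisualizer()

# Register the sub-agents the main agent may delegate to (built elsewhere).
register_agent(agent_id="analyst", agent=analyst_agent, workspace=cwd)

# Drive the main agent as usual; it can now call DelegateTool to hand work
# to registered sub-agents, which run independently and report back.
conversation = Conversation(agent=main_agent, workspace=cwd, visualizer=visualizer)
conversation.send_message(
    "Analyze this repository and delegate a detailed code review to the analyst agent."
)
conversation.run()
```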
## Next Steps

- **[Custom Tools](/sdk/guides/custom-tools)** – Create specialized tools for your agents
- **[Multi-Agent Systems](/sdk/guides/multi-agent)** – Build complex agent hierarchies

sdk/guides/ask-agent.mdx

Lines changed: 57 additions & 0 deletions
---
title: Ask Agent
description: Get quick responses from agents without interrupting execution
---

<Note>
This example is available on GitHub: [examples/01_standalone_sdk/28_ask_agent_example.py](https://github.com/All-Hands-AI/agent-sdk/blob/main/examples/01_standalone_sdk/28_ask_agent_example.py)
</Note>

Learn how to use `ask_agent()` to get quick responses from the agent about the current conversation state without interrupting the main execution flow.

```python icon="python" expandable examples/01_standalone_sdk/28_ask_agent_example.py
```

```bash Running the Example
export LLM_API_KEY="your-api-key"
# Optional overrides
# export LLM_MODEL="anthropic/claude-sonnet-4-5-20250929"
# export LLM_BASE_URL="https://your-llm-base-url"

cd agent-sdk
uv run python examples/01_standalone_sdk/28_ask_agent_example.py
```
### How It Works

- Create a conversation and start a long-running task:

```python
conversation = Conversation(agent=agent, workspace=".")
conversation.send_message("Your main task message")
```

- While the conversation is running, use `ask_agent()` to get quick responses (see the sketch after this list):

```python
response = conversation.ask_agent("What is the current status?")
print(f"Agent response: {response}")
```

- The `ask_agent()` method:
  - Does not interrupt the main conversation flow
  - Returns a quick response based on the current conversation state
  - Can be called multiple times during execution
  - Useful for monitoring progress or getting status updates
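As a concrete, hedged sketch of that pattern, one option is to drive `conversation.run()` from a background thread and poll `ask_agent()` from the main thread. The example script shows the exact approach it uses; `agent` below is assumed to be built as in the other guides, and the prompts are illustrative.

```python
import threading
import time

conversation = Conversation(agent=agent, workspace=".")
conversation.send_message("Refactor the utils module and add unit tests.")

# Run the main task in the background so we can query the agent meanwhile.
worker = threading.Thread(target=conversation.run)
worker.start()

# Poll for a status update every few seconds while the task is running.
while worker.is_alive():
    status = conversation.ask_agent("In one sentence, what are you working on right now?")
    print(f"[status] {status}")
    time.sleep(10)

worker.join()
```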
### Use Cases

- **Progress monitoring**: Check how far along a task is
- **Status queries**: Ask about current state or next steps
- **Debug information**: Get insights into what the agent is doing
- **Interactive monitoring**: Build UIs that show real-time updates
## Next Steps

- **[Conversation Management](/sdk/guides/conversation)** – Learn more about conversation APIs
- **[Async Operations](/sdk/guides/async)** – Handle asynchronous agent interactions

sdk/guides/custom-visualizer.mdx

Lines changed: 57 additions & 0 deletions
---
title: Custom Visualizer
description: Create custom visualizers for conversation monitoring
---

<Note>
This example is available on GitHub: [examples/01_standalone_sdk/26_custom_visualizer.py](https://github.com/All-Hands-AI/agent-sdk/blob/main/examples/01_standalone_sdk/26_custom_visualizer.py)
</Note>

Learn how to create custom visualizers by subclassing `ConversationVisualizerBase` to monitor and display conversation events in your preferred format.

```python icon="python" expandable examples/01_standalone_sdk/26_custom_visualizer.py
```

```bash Running the Example
export LLM_API_KEY="your-api-key"
# Optional overrides
# export LLM_MODEL="anthropic/claude-sonnet-4-5-20250929"
# export LLM_BASE_URL="https://your-llm-base-url"

cd agent-sdk
uv run python examples/01_standalone_sdk/26_custom_visualizer.py
```
### How It Works

- Create a custom visualizer by subclassing `ConversationVisualizerBase`:

```python
from openhands.sdk.conversation.visualizer import ConversationVisualizerBase
from openhands.sdk.event import Event


class MinimalVisualizer(ConversationVisualizerBase):
    """A minimal visualizer that prints raw events."""

    def on_event(self, event: Event) -> None:
        """Handle events for visualization."""
        print(f"Event: {event}")
```

- Pass the visualizer instance to the conversation:

```python
visualizer = MinimalVisualizer()
conversation = Conversation(
    agent=agent,
    workspace=".",
    visualizer=visualizer,
)
```

- The visualizer's `on_event` method is called for each event in the conversation, so it can print, filter, or forward events however you like (see the sketch below).
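Building on the same hook, here is a sketch of a slightly richer visualizer that tallies events by type instead of printing each one. It relies only on the `on_event` method shown above plus the standard library; the class name and output format are illustrative.

```python
from collections import Counter

from openhands.sdk.conversation.visualizer import ConversationVisualizerBase
from openhands.sdk.event import Event


class EventCountingVisualizer(ConversationVisualizerBase):
    """Tallies events by class name and prints a running summary."""

    def __init__(self) -> None:
        super().__init__()
        self.counts: Counter[str] = Counter()

    def on_event(self, event: Event) -> None:
        # Count each event by its concrete class and print a compact summary line.
        self.counts[type(event).__name__] += 1
        summary = ", ".join(f"{name}: {n}" for name, n in self.counts.most_common())
        print(f"Events so far -> {summary}")
```

Pass an instance to `Conversation(..., visualizer=EventCountingVisualizer())` exactly as with `MinimalVisualizer`.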
## Next Steps

- **[Event System](/sdk/guides/events)** – Learn about conversation events
- **[Callbacks](/sdk/guides/callbacks)** – Use event callbacks for monitoring
Lines changed: 53 additions & 0 deletions
---
title: Observability with Laminar
description: Enable OpenTelemetry tracing with Laminar for observability
---

<Note>
This example is available on GitHub: [examples/01_standalone_sdk/27_observability_laminar.py](https://github.com/All-Hands-AI/agent-sdk/blob/main/examples/01_standalone_sdk/27_observability_laminar.py)
</Note>

Learn how to enable OpenTelemetry tracing with Laminar to monitor and debug your agent conversations.

```python icon="python" expandable examples/01_standalone_sdk/27_observability_laminar.py
```

```bash Running the Example
export LLM_API_KEY="your-api-key"
export LMNR_PROJECT_API_KEY="your-laminar-api-key"
# Optional overrides
# export LLM_MODEL="openhands/claude-sonnet-4-5-20250929"
# export LLM_BASE_URL="https://your-llm-base-url"

cd agent-sdk
uv run python examples/01_standalone_sdk/27_observability_laminar.py
```
### How It Works

- Set the `LMNR_PROJECT_API_KEY` environment variable before running:

```bash
export LMNR_PROJECT_API_KEY="your-laminar-api-key"
```

- The SDK automatically detects the Laminar configuration and enables OpenTelemetry tracing.

- Create and run your conversation as usual:

```python
conversation = Conversation(agent=agent, workspace=".")
conversation.send_message("List the files in the current directory.")
conversation.run()
```

- Check your Laminar dashboard to see traces (the session ID matches the conversation UUID).

### Using Other OTLP Backends

For non-Laminar OpenTelemetry backends, set `OTEL_*` environment variables instead of `LMNR_PROJECT_API_KEY`.
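For instance, the standard OpenTelemetry exporter settings can be set before the agent is created. This is a sketch only: the variable names below follow the OpenTelemetry convention and the values are placeholders, but which variables the SDK actually reads is not specified here, so verify against the SDK's configuration and your backend's docs.

```python
import os

# Conventional OTEL_* settings (illustrative values); these can equally be
# exported from your shell before launching the script.
os.environ.setdefault("OTEL_EXPORTER_OTLP_ENDPOINT", "https://otel.your-backend.example:4318")
os.environ.setdefault("OTEL_EXPORTER_OTLP_HEADERS", "x-api-key=your-token")
os.environ.setdefault("OTEL_SERVICE_NAME", "agent-sdk-example")

# ...then build the agent and conversation as usual.
```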
## Next Steps

- **[Monitoring](/sdk/guides/monitoring)** – Learn about other monitoring options
- **[Logging](/sdk/guides/logging)** – Configure logging for debugging
