docs.json (10 changes: 9 additions & 1 deletion)
@@ -176,7 +176,15 @@
{
"tab": "Agent SDK (v1)",
"pages": [
"sdk/index"
"sdk/index",
{
"group": "Language Models",
"pages": [
"sdk/llms/index",
"sdk/llms/configuration",
"sdk/llms/providers"
]
}
]
},
{
openhands/usage/llms/llms.mdx (17 changes: 13 additions & 4 deletions)
@@ -7,6 +7,12 @@ description: OpenHands can connect to any LLM supported by LiteLLM. However, it
This section is for users who want to connect OpenHands to different LLMs.
</Note>

<Info>
OpenHands now delegates all LLM orchestration to the <a href="/sdk/llms/index">Agent SDK</a>. The guidance on this
page focuses on how the OpenHands interfaces surface those capabilities. When in doubt, refer to the SDK documentation
for the canonical list of supported parameters.
</Info>

## Model Recommendations

Based on our evaluations of language models for coding tasks (using the SWE-bench dataset), we can provide some
@@ -54,16 +60,16 @@ models driving it. However, if you do find ones that work, please add them to th

## LLM Configuration

The following can be set in the OpenHands UI through the Settings:
The following can be set in the OpenHands UI through the Settings. Each option is
serialized into the schema consumed by [`LLM.load_from_env()`](/sdk/llms/configuration) before
being passed to the Agent SDK:

- `LLM Provider`
- `LLM Model`
- `API Key`
- `Base URL` (through `Advanced` settings)

There are some settings that may be necessary for some LLMs/providers that cannot be set through the UI. Instead, these
can be set through environment variables passed to the docker run command when starting the app
using `-e`:
Some providers require settings that cannot be set directly through the UI. Set them
as environment variables (or add them to your `config.toml`) so the SDK picks them up during startup:

- `LLM_API_VERSION`
- `LLM_EMBEDDING_MODEL`
@@ -86,6 +92,9 @@ We have a few guides for running OpenHands with specific model providers:
- [OpenHands](/openhands/usage/llms/openhands-llms)
- [OpenRouter](/openhands/usage/llms/openrouter)

These pages remain the authoritative provider references for both the Agent SDK
and the OpenHands interfaces.

## Model Customization

LLM providers have specific settings that can be customized to optimize their performance with OpenHands, such as:
sdk/llms/configuration.mdx (71 changes: 71 additions & 0 deletions)
@@ -0,0 +1,71 @@
---
title: Configuration
description: Configure LLM objects in the Agent SDK and reuse them across OpenHands interfaces.
---

Configure `openhands.sdk.llm.LLM` instances with the parameters described in this
section. These fields apply identically whether you launch agents through your
own code or via the OpenHands interfaces built on the SDK.

## Environment variable layout

The SDK expects environment variables prefixed with `LLM_`. The suffix after the
prefix is lowercased and mapped onto the corresponding field name when you call
`LLM.load_from_env()`.

```bash
export LLM_MODEL="anthropic/claude-sonnet-4.1"
export LLM_API_KEY="sk-ant-123"
export LLM_SERVICE_ID="primary"
export LLM_TIMEOUT="120"
export LLM_NUM_RETRIES="5"
```

Then, in Python:

```python
from openhands.sdk import LLM

llm = LLM.load_from_env()
```

The loader automatically casts values into integers, floats, booleans, JSON, or
`SecretStr` where appropriate.
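
For instance, numeric strings become numbers and the API key is wrapped in
`SecretStr`. A minimal sketch of the casting behavior, assuming the field names
shown above (the values are illustrative):

```python
import os

from openhands.sdk import LLM

# Illustrative values; in practice, set these before the process starts.
os.environ["LLM_MODEL"] = "openai/gpt-4o"
os.environ["LLM_TEMPERATURE"] = "0.2"  # string -> float
os.environ["LLM_NUM_RETRIES"] = "3"    # string -> int
os.environ["LLM_API_KEY"] = "sk-test"  # string -> SecretStr

llm = LLM.load_from_env()
assert llm.temperature == 0.2
assert llm.num_retries == 3
```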

## JSON configuration

For declarative deployments you can persist the SDK model schema to JSON:

```python
from pydantic import SecretStr

llm = LLM(
model="openai/gpt-4o",
api_key=SecretStr("sk-openai"),
temperature=0.1,
)
with open("config/llm.json", "w") as fp:
fp.write(llm.model_dump_json(exclude_none=True, indent=2))

reloaded = LLM.load_from_json("config/llm.json")
```

Serialized structures redact secrets (API keys, AWS credentials). Keep non-sensitive
fields in the JSON file and supply the secrets through environment variables when your
runtime requires human review of committed configuration.
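
One possible pattern, assuming `LLM` behaves like a standard Pydantic model
(`model_copy` is Pydantic's generic update API, not an SDK-specific helper):
keep the reviewed JSON file free of secrets and overlay the key at runtime. The
`OPENAI_API_KEY` variable name here is hypothetical.

```python
import os

from pydantic import SecretStr

from openhands.sdk import LLM

# Non-secret fields come from the committed, human-reviewed JSON file.
llm = LLM.load_from_json("config/llm.json")

# The secret is injected at runtime from the environment.
llm = llm.model_copy(update={"api_key": SecretStr(os.environ["OPENAI_API_KEY"])})
```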

## Commonly tuned parameters

- **Latency & retry controls**: `timeout`, `num_retries`, `retry_min_wait`,
`retry_max_wait`, and `retry_multiplier` govern the SDK's LLM retry behavior
across providers.
- **Prompt shaping**: `temperature`, `top_p`, `top_k`, `reasoning_effort`, and
`extended_thinking_budget` adjust sampling characteristics and Anthropic
reasoning budgets.
- **Cost reporting**: `input_cost_per_token` and `output_cost_per_token` flow
into SDK telemetry so downstream interfaces can display usage estimates.
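
A sketch combining several of these knobs on one instance; the values (and
especially the per-token costs) are placeholders, not recommendations:

```python
from openhands.sdk import LLM

llm = LLM(
    model="openai/gpt-4o",
    # Latency & retry controls
    timeout=60,
    num_retries=4,
    retry_min_wait=2,
    retry_max_wait=30,
    # Prompt shaping
    temperature=0.0,
    top_p=0.9,
    # Cost reporting (USD per token; placeholder figures)
    input_cost_per_token=2.5e-06,
    output_cost_per_token=1.0e-05,
)
```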

Refer to the docstring within
[`openhands.sdk.llm.LLM`](https://github.com/All-Hands-AI/agent-sdk/blob/main/openhands-sdk/openhands/sdk/llm/llm.py)
for the full schema. All fields can be set programmatically or via environment
variables using the naming rule `field -> LLM_FIELD`.

sdk/llms/index.mdx (93 changes: 93 additions & 0 deletions)
@@ -0,0 +1,93 @@
---
title: Overview
description: How Agent SDK language models work and how OpenHands interfaces connect to them.
---

Agent SDK handles all language model (LLM) orchestration in OpenHands **v1**. The
OpenHands repository now provides the interfaces—web app, CLI, and cloud—that
call into the SDK. Use this overview to understand the architecture and the core
APIs available when you build your own integrations or automations.

## Architecture overview

- **Agent SDK = source of truth.** The SDK defines the `LLM` model, request
pipeline, retries, telemetry, and registry.
- **Interfaces reuse the same LLM objects.** The OpenHands UI or CLI simply
hydrate an SDK `LLM` from persisted settings and pass it to an agent.
- **Consistent configuration.** Whether you launch an agent programmatically or
via the OpenHands UI, the supported parameters and defaults come from the SDK.

```mermaid
graph LR
subgraph Interfaces
UI[OpenHands UI]
CLI[OpenHands CLI]
Automation[Automations & workflows]
Custom[Your client]
end

SDK[Agent SDK]
LLMService[(External LLM providers)]

UI --> SDK
CLI --> SDK
Automation --> SDK
Custom --> SDK
SDK --> LLMService
```

## Creating LLM instances

Use the [`openhands.sdk.llm.LLM`](https://github.com/All-Hands-AI/agent-sdk/blob/main/openhands-sdk/openhands/sdk/llm/llm.py)
class to configure model access. The only required field is the `model` name;
other options (API keys, retry tuning, response tracing) are optional.

```python
from pydantic import SecretStr
from openhands.sdk import LLM

llm = LLM(
model="anthropic/claude-sonnet-4.1",
api_key=SecretStr("sk-ant-123"),
service_id="primary",
temperature=0.1,
timeout=120,
)
```

Key concepts:

- **`service_id`** identifies an LLM configuration when storing it in a registry
or persisting it between runs.
- **Retry settings** (`num_retries`, `retry_min_wait`, etc.) apply uniformly to
all providers through LiteLLM.
- **Cost metadata** (`input_cost_per_token`, `output_cost_per_token`) feeds into
SDK telemetry and logs for downstream UIs.
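
For example, `service_id` keeps two configurations distinct when they are
persisted or registered side by side; the model names, keys, and ids below are
illustrative:

```python
from pydantic import SecretStr

from openhands.sdk import LLM

# A primary coding model and a cheaper secondary model, told apart by service_id.
primary = LLM(
    model="anthropic/claude-sonnet-4.1",
    api_key=SecretStr("sk-ant-123"),
    service_id="primary",
)
secondary = LLM(
    model="openai/gpt-4o-mini",
    api_key=SecretStr("sk-openai"),
    service_id="secondary",
)
```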

## Loading configuration from environments or files

Use helper constructors when you need to rehydrate an LLM from configuration
state:

```python
llm = LLM.load_from_env(prefix="LLM_")
```

This reads environment variables such as `LLM_MODEL`, `LLM_API_KEY`, or
`LLM_TIMEOUT` and casts them into the appropriate types. Interfaces like the
OpenHands UI persist settings using this convention so that the SDK can read them
without additional glue code. For JSON based workflows the SDK also exposes
`LLM.load_from_json("config/llm.json")`.

Learn more about configuration options in [LLM configuration details](./configuration).

## Relationship with OpenHands interfaces

The OpenHands repository (UI/CLI) now consumes these SDK APIs. When you adjust
LLM settings in the interfaces they are persisted and reloaded into SDK `LLM`
objects before each run, so any customization documented here carries over to
those application experiences.

Provider-specific guidance—pricing summaries, required parameters, or proxy
setups—remains valid and lives alongside the existing OpenHands documentation.
See [LLM provider guides](./providers) for links to those pages.
sdk/llms/providers.mdx (27 changes: 27 additions & 0 deletions)
@@ -0,0 +1,27 @@
---
title: Provider Guides
description: Provider-specific notes for configuring Agent SDK and OpenHands interfaces.
---

Provider integrations remain shared between the Agent SDK and the OpenHands UI.
The pages linked below live under the historical OpenHands section but apply
verbatim to SDK applications because both layers wrap the same
`openhands.sdk.llm.LLM` interface.

| Provider / scenario | Documentation |
| --- | --- |
| OpenHands hosted models | [/openhands/usage/llms/openhands-llms](/openhands/usage/llms/openhands-llms) |
| OpenAI | [/openhands/usage/llms/openai-llms](/openhands/usage/llms/openai-llms) |
| Azure OpenAI | [/openhands/usage/llms/azure-llms](/openhands/usage/llms/azure-llms) |
| Google Gemini / Vertex | [/openhands/usage/llms/google-llms](/openhands/usage/llms/google-llms) |
| Groq | [/openhands/usage/llms/groq](/openhands/usage/llms/groq) |
| OpenRouter | [/openhands/usage/llms/openrouter](/openhands/usage/llms/openrouter) |
| Moonshot | [/openhands/usage/llms/moonshot](/openhands/usage/llms/moonshot) |
| LiteLLM proxy | [/openhands/usage/llms/litellm-proxy](/openhands/usage/llms/litellm-proxy) |
| Local LLMs (Ollama, SGLang, vLLM, LM Studio) | [/openhands/usage/llms/local-llms](/openhands/usage/llms/local-llms) |
| Custom LLM configurations | [/openhands/usage/llms/custom-llm-configs](/openhands/usage/llms/custom-llm-configs) |

When you follow any of those guides while building with the SDK, create an
`LLM` object using the documented parameters (for example, API keys, base URLs,
or custom headers) and pass it into your agent or registry. The OpenHands UI is
simply a convenience layer on top of the same configuration model.
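
For instance, a local OpenAI-compatible server might be wired up as below. This
is a sketch: the `base_url` field name is assumed from the UI's "Base URL"
setting, and the endpoint and model id depend entirely on your local setup (see
the local LLMs guide linked above for specifics).

```python
from pydantic import SecretStr

from openhands.sdk import LLM

# Hypothetical local endpoint (e.g. an Ollama-style OpenAI-compatible API).
llm = LLM(
    model="openai/qwen2.5-coder",          # LiteLLM-style provider/model id
    base_url="http://localhost:11434/v1",  # endpoint exposed by the local server
    api_key=SecretStr("local-key"),        # many local servers accept any value
)
```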