38 changes: 38 additions & 0 deletions docs/user-guide/concepts/model-providers/amazon-bedrock.md
@@ -511,6 +511,44 @@ response = agent("If a train travels at 120 km/h and needs to cover 450 km, how

> **Note**: Not all models support structured reasoning output. Check the [inference reasoning documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-reasoning.html) for details on supported models.

### Structured Output

Amazon Bedrock models support structured output through their tool calling capabilities. When you use [`Agent.structured_output()`](../../../api-reference/agent.md#strands.agent.agent.Agent.structured_output), the Strands SDK converts your Pydantic models to Bedrock's tool specification format.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models import BedrockModel
from typing import List, Optional

class ProductAnalysis(BaseModel):
"""Analyze product information from text."""
name: str = Field(description="Product name")
category: str = Field(description="Product category")
price: float = Field(description="Price in USD")
features: List[str] = Field(description="Key product features")
    rating: Optional[float] = Field(default=None, description="Customer rating 1-5", ge=1, le=5)

bedrock_model = BedrockModel()

agent = Agent(model=bedrock_model)

result = agent.structured_output(
ProductAnalysis,
"""
Analyze this product: The UltraBook Pro is a premium laptop computer
priced at $1,299. It features a 15-inch 4K display, 16GB RAM, 512GB SSD,
and 12-hour battery life. Customer reviews average 4.5 stars.
"""
)

print(f"Product: {result.name}")
print(f"Category: {result.category}")
print(f"Price: ${result.price}")
print(f"Features: {result.features}")
print(f"Rating: {result.rating}")
```

## Troubleshooting

### Model access issue
47 changes: 47 additions & 0 deletions docs/user-guide/concepts/model-providers/anthropic.md
@@ -58,6 +58,53 @@ The `model_config` configures the underlying model selected for inference. The s

If you encounter the error `ModuleNotFoundError: No module named 'anthropic'`, this means you haven't installed the `anthropic` dependency in your environment. To fix, run `pip install 'strands-agents[anthropic]'`.

## Advanced Features

### Structured Output

Anthropic's Claude models support structured output through their tool calling capabilities. When you use [`Agent.structured_output()`](../../../api-reference/agent.md#strands.agent.agent.Agent.structured_output), the Strands SDK converts your Pydantic models to Anthropic's tool specification format.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models.anthropic import AnthropicModel

class BookAnalysis(BaseModel):
"""Analyze a book's key information."""
title: str = Field(description="The book's title")
author: str = Field(description="The book's author")
genre: str = Field(description="Primary genre or category")
summary: str = Field(description="Brief summary of the book")
rating: int = Field(description="Rating from 1-10", ge=1, le=10)

model = AnthropicModel(
client_args={
"api_key": "<KEY>",
},
max_tokens=1028,
model_id="claude-3-7-sonnet-20250219",
params={
"temperature": 0.7,
}
)

agent = Agent(model=model)

result = agent.structured_output(
BookAnalysis,
"""
Analyze this book: "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
It's a science fiction comedy about Arthur Dent's adventures through space
after Earth is destroyed. It's widely considered a classic of humorous sci-fi.
"""
)

print(f"Title: {result.title}")
print(f"Author: {result.author}")
print(f"Genre: {result.genre}")
print(f"Rating: {result.rating}")
```

## References

- [API](../../../api-reference/models.md)
110 changes: 108 additions & 2 deletions docs/user-guide/concepts/model-providers/custom_model_provider.md
@@ -2,6 +2,40 @@

Strands Agents SDK provides an extensible interface for implementing custom model providers, allowing organizations to integrate their own LLM services while keeping implementation details private to their codebase.

## Model Provider Functionality

Custom model providers in Strands Agents support two primary interaction modes:

### Conversational Interaction
The standard conversational mode, in which the agent exchanges messages with the model. This is the default interaction pattern, used when you call an agent directly:

```python
agent = Agent(model=your_custom_model)
response = agent("Hello, how can you help me today?")
```

This invokes the underlying model provided to the agent.

### Structured Output
A specialized mode that returns type-safe, validated responses using [Pydantic](https://docs.pydantic.dev/latest/concepts/models/) models instead of raw text. This enables reliable data extraction and processing:

```python
from pydantic import BaseModel

class PersonInfo(BaseModel):
name: str
age: int
occupation: str

result = agent.structured_output(
PersonInfo,
"Extract info: John Smith is a 30-year-old software engineer"
)
# Returns a validated PersonInfo object
```

Both modes work through the same underlying model provider interface, with structured output using tool calling capabilities to ensure schema compliance.
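To make that conversion concrete, Pydantic can already emit the JSON schema that a tool specification wraps. The `toolSpec`-style layout below is an assumed illustration, not a guaranteed SDK output:

```python
from pydantic import BaseModel

class PersonInfo(BaseModel):
    name: str
    age: int
    occupation: str

# Pydantic emits the JSON schema that a tool specification would wrap.
schema = PersonInfo.model_json_schema()

# Hypothetical Bedrock-style tool spec built from that schema (layout assumed).
tool_spec = {
    "name": "PersonInfo",
    "description": PersonInfo.__doc__ or "Extract person information.",
    "inputSchema": {"json": schema},
}

print(schema["required"])  # → ['name', 'age', 'occupation']
```

A provider's `structured_output()` would pass a spec like this to the model, then validate the returned tool input against the same class.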

## Model Provider Architecture

Strands Agents uses an abstract `Model` class that defines the standard interface all model providers must implement:
@@ -254,9 +288,58 @@ Now that you have mapped the Strands Agents input to your model's request, use the
yield chunk
```

### 5. Structured Output Support

To support structured output in your custom model provider, implement a `structured_output()` method that invokes your model with a tool specification and returns the model's JSON output. Below is an example of what this might look like for a Bedrock model: we invoke the model with the tool spec, then check whether the response contains a `toolUse` block.

```python
from typing import Callable, Optional, Type, TypeVar

from pydantic import BaseModel
from typing_extensions import override

# `Messages`, `process_stream`, and `convert_pydantic_to_tool_spec` come from
# your provider implementation / the Strands SDK internals.

T = TypeVar('T', bound=BaseModel)

@override
def structured_output(
self, output_model: Type[T], prompt: Messages, callback_handler: Optional[Callable] = None
) -> T:
"""Get structured output using tool calling."""

# Convert Pydantic model to tool specification
tool_spec = convert_pydantic_to_tool_spec(output_model)

# Use existing converse method with tool specification
response = self.converse(messages=prompt, tool_specs=[tool_spec])

# Process streaming response
for event in process_stream(response, prompt):
if callback_handler and "callback" in event:
callback_handler(**event["callback"])
else:
stop_reason, messages, _, _ = event["stop"]

# Validate tool use response
if stop_reason != "tool_use":
raise ValueError("No valid tool use found in the model response.")

# Extract tool use output
content = messages["content"]
for block in content:
if block.get("toolUse") and block["toolUse"]["name"] == tool_spec["name"]:
return output_model(**block["toolUse"]["input"])

raise ValueError("No valid tool use input found in the response.")
```

**Implementation Suggestions:**

1. **Tool Integration**: Use your existing `converse()` method with tool specifications to invoke your model
2. **Response Validation**: Use `output_model(**data)` to validate the response
3. **Error Handling**: Provide clear error messages for parsing and validation failures
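A minimal sketch of suggestions 2 and 3 together: validate the raw tool-use input with `output_model(**data)` and convert Pydantic's `ValidationError` into a clearer error. The helper name here is illustrative, not part of the SDK:

```python
from pydantic import BaseModel, ValidationError

class PersonInfo(BaseModel):
    name: str
    age: int
    occupation: str

def validate_tool_input(output_model: type[BaseModel], data: dict) -> BaseModel:
    """Validate raw tool-use input, re-raising validation failures with context."""
    try:
        return output_model(**data)
    except ValidationError as e:
        raise ValueError(f"Model returned invalid structured output: {e}") from e

person = validate_tool_input(
    PersonInfo, {"name": "Jane Doe", "age": 34, "occupation": "chemist"}
)
print(person.name)  # → Jane Doe
```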

For detailed structured output usage patterns, see the [Structured Output documentation](../agents/structured-output.md).

### 6. Use Your Custom Model Provider

Once implemented, you can use your custom model provider in your applications for regular agent invocation:

```python
from strands import Agent
@@ -280,6 +363,29 @@ agent = Agent(model=custom_model)
response = agent("Hello, how are you today?")
```

Or you can call `structured_output()` to generate typed, validated responses:

```python
from strands import Agent
from your_org.models.custom_model import Model as CustomModel
from pydantic import BaseModel, Field

class PersonInfo(BaseModel):
name: str = Field(description="Full name")
age: int = Field(description="Age in years")
occupation: str = Field(description="Job title")

model = CustomModel(api_key="key", model_id="model")

agent = Agent(model=model)

result = agent.structured_output(PersonInfo, "John Smith is a 30-year-old engineer.")

print(f"Name: {result.name}")
print(f"Age: {result.age}")
print(f"Occupation: {result.occupation}")
```

## Key Implementation Considerations

### 1. Message Formatting
40 changes: 40 additions & 0 deletions docs/user-guide/concepts/model-providers/litellm.md
@@ -57,6 +57,46 @@ The `model_config` configures the underlying model selected for inference. The s

If you encounter the error `ModuleNotFoundError: No module named 'litellm'`, this means you haven't installed the `litellm` dependency in your environment. To fix, run `pip install 'strands-agents[litellm]'`.

## Advanced Features

### Structured Output

LiteLLM supports structured output by proxying requests to underlying model providers that support tool calling. The availability of structured output depends on the specific model and provider you're using through LiteLLM.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models.litellm import LiteLLMModel

class BookAnalysis(BaseModel):
"""Analyze a book's key information."""
title: str = Field(description="The book's title")
author: str = Field(description="The book's author")
genre: str = Field(description="Primary genre or category")
summary: str = Field(description="Brief summary of the book")
rating: int = Field(description="Rating from 1-10", ge=1, le=10)

model = LiteLLMModel(
model_id="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
)

agent = Agent(model=model)

result = agent.structured_output(
BookAnalysis,
"""
Analyze this book: "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
It's a science fiction comedy about Arthur Dent's adventures through space
after Earth is destroyed. It's widely considered a classic of humorous sci-fi.
"""
)

print(f"Title: {result.title}")
print(f"Author: {result.author}")
print(f"Genre: {result.genre}")
print(f"Rating: {result.rating}")
```

## References

- [API](../../../api-reference/models.md)
41 changes: 41 additions & 0 deletions docs/user-guide/concepts/model-providers/llamaapi.md
@@ -63,6 +63,47 @@ The `model_config` configures the underlying model selected for inference. The s

If you encounter the error `ModuleNotFoundError: No module named 'llamaapi'`, this means you haven't installed the `llamaapi` dependency in your environment. To fix, run `pip install 'strands-agents[llamaapi]'`.

## Advanced Features

### Structured Output

Llama API models support structured output through their tool calling capabilities. When you use [`Agent.structured_output()`](../../../api-reference/agent.md#strands.agent.agent.Agent.structured_output), the Strands SDK converts your Pydantic models to tool specifications that Llama models can understand.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models.llamaapi import LlamaAPIModel

class BookAnalysis(BaseModel):
"""Analyze a book's key information."""
title: str = Field(description="The book's title")
author: str = Field(description="The book's author")
genre: str = Field(description="Primary genre or category")
summary: str = Field(description="Brief summary of the book")
rating: int = Field(description="Rating from 1-10", ge=1, le=10)

model = LlamaAPIModel(
client_args={"api_key": "<KEY>"},
model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)

agent = Agent(model=model)

result = agent.structured_output(
BookAnalysis,
"""
Analyze this book: "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
It's a science fiction comedy about Arthur Dent's adventures through space
after Earth is destroyed. It's widely considered a classic of humorous sci-fi.
"""
)

print(f"Title: {result.title}")
print(f"Author: {result.author}")
print(f"Genre: {result.genre}")
print(f"Rating: {result.rating}")
```

## References

- [API](../../../api-reference/models.md)
39 changes: 39 additions & 0 deletions docs/user-guide/concepts/model-providers/ollama.md
@@ -191,6 +191,45 @@ creative_agent = Agent(model=creative_model)
factual_agent = Agent(model=factual_model)
```

### Structured Output

Ollama supports structured output for models that have tool calling capabilities. When you use [`Agent.structured_output()`](../../../api-reference/agent.md#strands.agent.agent.Agent.structured_output), the Strands SDK converts your Pydantic models to tool specifications that compatible Ollama models can understand.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models.ollama import OllamaModel

class BookAnalysis(BaseModel):
"""Analyze a book's key information."""
title: str = Field(description="The book's title")
author: str = Field(description="The book's author")
genre: str = Field(description="Primary genre or category")
summary: str = Field(description="Brief summary of the book")
rating: int = Field(description="Rating from 1-10", ge=1, le=10)

ollama_model = OllamaModel(
host="http://localhost:11434",
model_id="llama3",
)

agent = Agent(model=ollama_model)

result = agent.structured_output(
BookAnalysis,
"""
Analyze this book: "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
It's a science fiction comedy about Arthur Dent's adventures through space
after Earth is destroyed. It's widely considered a classic of humorous sci-fi.
"""
)

print(f"Title: {result.title}")
print(f"Author: {result.author}")
print(f"Genre: {result.genre}")
print(f"Rating: {result.rating}")
```

## Tool Support

[Ollama models that support tool use](https://ollama.com/search?c=tools) can use tools through the Strands tool system: