38 changes: 19 additions & 19 deletions docs/agents.md
@@ -18,7 +18,7 @@ In typing terms, agents are generic in their dependency and result types, e.g.,
Here's a toy example of an agent that simulates a roulette wheel:

```py title="roulette_wheel.py"
-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext

roulette_agent = Agent( # (1)!
'openai:gpt-4o',
@@ -32,7 +32,7 @@ roulette_agent = Agent( # (1)!


@roulette_agent.tool
-async def roulette_wheel(ctx: CallContext[int], square: int) -> str: # (2)!
+async def roulette_wheel(ctx: RunContext[int], square: int) -> str: # (2)!
"""check if the square is a winner"""
return 'winner' if square == ctx.deps else 'loser'

@@ -49,7 +49,7 @@ print(result.data)
```

1. Create an agent, which expects an integer dependency and returns a boolean result. This agent will have type `#!python Agent[int, bool]`.
-2. Define a tool that checks if the square is a winner. Here [`CallContext`][pydantic_ai.dependencies.CallContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error.
+2. Define a tool that checks if the square is a winner. Here [`RunContext`][pydantic_ai.dependencies.RunContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error.
3. In reality, you might want to use a random number here, e.g. `random.randint(0, 36)`.
4. `result.data` will be a boolean indicating if the square is a winner. Pydantic performs the result validation; it'll be typed as a `bool` since its type is derived from the `result_type` generic parameter of the agent.
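
For context, here's a hedged sketch of how the agent above might be run (the run itself is elided from this diff; `run_sync`, `deps`, and `result.data` follow the patterns shown elsewhere in these docs):

```py
# Illustrative only: the winning square number is injected as the dependency.
success_number = 18
result = roulette_agent.run_sync('Put my money on square eighteen', deps=success_number)
print(result.data)  # a bool, validated by Pydantic against the agent's result type
```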

@@ -135,7 +135,7 @@ Here's an example using both types of system prompts:
```py title="system_prompts.py"
from datetime import date

-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext

agent = Agent(
'openai:gpt-4o',
@@ -145,7 +145,7 @@ agent = Agent(


@agent.system_prompt # (3)!
-def add_the_users_name(ctx: CallContext[str]) -> str:
+def add_the_users_name(ctx: RunContext[str]) -> str:
return f"The user's name is {ctx.deps}."


@@ -161,8 +161,8 @@ print(result.data)

1. The agent expects a string dependency.
2. Static system prompt defined at agent creation time.
-3. Dynamic system prompt defined via a decorator with [`CallContext`][pydantic_ai.dependencies.CallContext]; this is called just after `run_sync`, not when the agent is created, so it can benefit from runtime information like the dependencies used on that run.
-4. Another dynamic system prompt; system prompts don't have to have the `CallContext` parameter.
+3. Dynamic system prompt defined via a decorator with [`RunContext`][pydantic_ai.dependencies.RunContext]; this is called just after `run_sync`, not when the agent is created, so it can benefit from runtime information like the dependencies used on that run.
+4. Another dynamic system prompt; system prompts don't have to have the `RunContext` parameter.

_(This example is complete, it can be run "as is")_

@@ -179,8 +179,8 @@ They're useful when it is impractical or impossible to put all the context an ag

There are two different decorator functions to register tools:

-1. [`@agent.tool`][pydantic_ai.Agent.tool] — for tools that need access to the agent [context][pydantic_ai.dependencies.CallContext]
-2. [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] — for tools that do not need access to the agent [context][pydantic_ai.dependencies.CallContext]
+1. [`@agent.tool`][pydantic_ai.Agent.tool] — for tools that need access to the agent [context][pydantic_ai.dependencies.RunContext]
+2. [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] — for tools that do not need access to the agent [context][pydantic_ai.dependencies.RunContext]

`@agent.tool` is the default since in the majority of cases tools will need access to the agent context.

@@ -189,7 +189,7 @@ Here's an example using both:
```py title="dice_game.py"
import random

-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext

agent = Agent(
'gemini-1.5-flash', # (1)!
@@ -209,7 +209,7 @@ def roll_die() -> str:


@agent.tool # (4)!
-def get_player_name(ctx: CallContext[str]) -> str:
+def get_player_name(ctx: RunContext[str]) -> str:
"""Get the player's name."""
return ctx.deps

@@ -222,7 +222,7 @@ print(dice_result.data)
1. This is a pretty simple task, so we can use the fast and cheap Gemini flash model.
2. We pass the user's name as the dependency; to keep things simple we use just the name as a string.
3. This tool doesn't need any context; it just returns a random number. You could probably use a dynamic system prompt in this case.
-4. This tool needs the player's name, so it uses `CallContext` to access dependencies, which are just the player's name in this case.
+4. This tool needs the player's name, so it uses `RunContext` to access dependencies, which are just the player's name in this case.
5. Run the agent, passing the player's name as the dependency.

_(This example is complete, it can be run "as is")_
@@ -325,7 +325,7 @@ As the name suggests, function tools use the model's "tools" or "functions" API

### Function tools and schema

-Function parameters are extracted from the function signature, and all parameters except `CallContext` are used to build the schema for that tool call.
+Function parameters are extracted from the function signature, and all parameters except `RunContext` are used to build the schema for that tool call.

Even better, PydanticAI extracts the docstring from functions and (thanks to [griffe](https://mkdocstrings.github.io/griffe/)) extracts parameter descriptions from the docstring and adds them to the schema.
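
As a hedged illustration of that extraction (the tool below is hypothetical, not part of this PR), the `Args` descriptions in a docstring like this would be lifted into the tool's schema:

```py
import random

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')


@agent.tool_plain
def roll_dice(sides: int, count: int = 1) -> int:
    """Roll one or more dice and sum the results.

    Args:
        sides: Number of sides on each die.
        count: How many dice to roll.
    """
    # griffe parses the docstring above; each `Args` entry becomes the
    # description of the matching parameter in the tool's JSON schema.
    return sum(random.randint(1, sides) for _ in range(count))
```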

@@ -395,15 +395,15 @@ Validation errors from both function tool parameter validation and [structured r
You can also raise [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] from within a [tool](#function-tools) or [result validator function](results.md#result-validators-functions) to tell the model it should retry generating a response.

- The default retry count is **1** but can be altered for the [entire agent][pydantic_ai.Agent.__init__], a [specific tool][pydantic_ai.Agent.tool], or a [result validator][pydantic_ai.Agent.__init__].
-- You can access the current retry count from within a tool or result validator via [`ctx.retry`][pydantic_ai.dependencies.CallContext].
+- You can access the current retry count from within a tool or result validator via [`ctx.retry`][pydantic_ai.dependencies.RunContext].

Here's an example:

```py title="tool_retry.py"
from fake_database import DatabaseConn
from pydantic import BaseModel

-from pydantic_ai import Agent, CallContext, ModelRetry
+from pydantic_ai import Agent, RunContext, ModelRetry


class ChatResult(BaseModel):
@@ -419,7 +419,7 @@ agent = Agent(


@agent.tool(retries=2)
-def get_user_by_name(ctx: CallContext[DatabaseConn], name: str) -> int:
+def get_user_by_name(ctx: RunContext[DatabaseConn], name: str) -> int:
"""Get a user's ID from their full name."""
print(name)
#> John
@@ -533,7 +533,7 @@ Consider the following script with type mistakes:
```py title="type_mistakes.py" hl_lines="18 28"
from dataclasses import dataclass

-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext


@dataclass
@@ -549,7 +549,7 @@ agent = Agent(


@agent.system_prompt
-def add_user_name(ctx: CallContext[str]) -> str: # (2)!
+def add_user_name(ctx: RunContext[str]) -> str: # (2)!
return f"The user's name is {ctx.deps}."


@@ -569,7 +569,7 @@ Running `mypy` on this will give the following output:

```bash
➤ uv run mypy type_mistakes.py
-type_mistakes.py:18: error: Argument 1 to "system_prompt" of "Agent" has incompatible type "Callable[[CallContext[str]], str]"; expected "Callable[[CallContext[User]], str]" [arg-type]
+type_mistakes.py:18: error: Argument 1 to "system_prompt" of "Agent" has incompatible type "Callable[[RunContext[str]], str]"; expected "Callable[[RunContext[User]], str]" [arg-type]
type_mistakes.py:28: error: Argument 1 to "foobar" has incompatible type "bool"; expected "bytes" [arg-type]
Found 2 errors in 1 file (checked 1 source file)
```
38 changes: 19 additions & 19 deletions docs/dependencies.md
@@ -51,15 +51,15 @@ _(This example is complete, it can be run "as is")_

## Accessing Dependencies

-Dependencies are accessed through the [`CallContext`][pydantic_ai.dependencies.CallContext] type; this should be the first parameter of system prompt functions etc.
+Dependencies are accessed through the [`RunContext`][pydantic_ai.dependencies.RunContext] type; this should be the first parameter of system prompt functions etc.


```py title="system_prompt_dependencies.py" hl_lines="20-27"
from dataclasses import dataclass

import httpx

-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext


@dataclass
@@ -75,7 +75,7 @@ agent = Agent(


@agent.system_prompt # (1)!
-async def get_system_prompt(ctx: CallContext[MyDeps]) -> str: # (2)!
+async def get_system_prompt(ctx: RunContext[MyDeps]) -> str: # (2)!
response = await ctx.deps.http_client.get( # (3)!
'https://example.com',
headers={'Authorization': f'Bearer {ctx.deps.api_key}'}, # (4)!
@@ -92,10 +92,10 @@ async def main():
#> Did you hear about the toothpaste scandal? They called it Colgate.
```

-1. [`CallContext`][pydantic_ai.dependencies.CallContext] may optionally be passed to a [`system_prompt`][pydantic_ai.Agent.system_prompt] function as the only argument.
-2. [`CallContext`][pydantic_ai.dependencies.CallContext] is parameterized with the type of the dependencies; if this type is incorrect, static type checkers will raise an error.
-3. Access dependencies through the [`.deps`][pydantic_ai.dependencies.CallContext.deps] attribute.
-4. Access dependencies through the [`.deps`][pydantic_ai.dependencies.CallContext.deps] attribute.
+1. [`RunContext`][pydantic_ai.dependencies.RunContext] may optionally be passed to a [`system_prompt`][pydantic_ai.Agent.system_prompt] function as the only argument.
+2. [`RunContext`][pydantic_ai.dependencies.RunContext] is parameterized with the type of the dependencies; if this type is incorrect, static type checkers will raise an error.
+3. Access dependencies through the [`.deps`][pydantic_ai.dependencies.RunContext.deps] attribute.
+4. Access dependencies through the [`.deps`][pydantic_ai.dependencies.RunContext.deps] attribute.

_(This example is complete, it can be run "as is")_

@@ -117,7 +117,7 @@ from dataclasses import dataclass

import httpx

-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext


@dataclass
@@ -133,7 +133,7 @@ agent = Agent(


@agent.system_prompt
-def get_system_prompt(ctx: CallContext[MyDeps]) -> str: # (2)!
+def get_system_prompt(ctx: RunContext[MyDeps]) -> str: # (2)!
response = ctx.deps.http_client.get(
'https://example.com', headers={'Authorization': f'Bearer {ctx.deps.api_key}'}
)
@@ -165,7 +165,7 @@ from dataclasses import dataclass

import httpx

-from pydantic_ai import Agent, CallContext, ModelRetry
+from pydantic_ai import Agent, ModelRetry, RunContext


@dataclass
@@ -181,14 +181,14 @@ agent = Agent(


@agent.system_prompt
-async def get_system_prompt(ctx: CallContext[MyDeps]) -> str:
+async def get_system_prompt(ctx: RunContext[MyDeps]) -> str:
response = await ctx.deps.http_client.get('https://example.com')
response.raise_for_status()
return f'Prompt: {response.text}'


@agent.tool # (1)!
-async def get_joke_material(ctx: CallContext[MyDeps], subject: str) -> str:
+async def get_joke_material(ctx: RunContext[MyDeps], subject: str) -> str:
response = await ctx.deps.http_client.get(
'https://example.com#jokes',
params={'subject': subject},
@@ -199,7 +199,7 @@ async def get_joke_material(ctx: CallContext[MyDeps], subject: str) -> str:


@agent.result_validator # (2)!
-async def validate_result(ctx: CallContext[MyDeps], final_response: str) -> str:
+async def validate_result(ctx: RunContext[MyDeps], final_response: str) -> str:
response = await ctx.deps.http_client.post(
'https://example.com#validate',
headers={'Authorization': f'Bearer {ctx.deps.api_key}'},
@@ -219,8 +219,8 @@ async def main():
#> Did you hear about the toothpaste scandal? They called it Colgate.
```

-1. To pass `CallContext` to a tool, use the [`tool`][pydantic_ai.Agent.tool] decorator.
-2. `CallContext` may optionally be passed to a [`result_validator`][pydantic_ai.Agent.result_validator] function as the first argument.
+1. To pass `RunContext` to a tool, use the [`tool`][pydantic_ai.Agent.tool] decorator.
+2. `RunContext` may optionally be passed to a [`result_validator`][pydantic_ai.Agent.result_validator] function as the first argument.

_(This example is complete, it can be run "as is")_

@@ -238,7 +238,7 @@ from dataclasses import dataclass

import httpx

-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext


@dataclass
@@ -256,7 +256,7 @@ joke_agent = Agent('openai:gpt-4o', deps_type=MyDeps)


@joke_agent.system_prompt
-async def get_system_prompt(ctx: CallContext[MyDeps]) -> str:
+async def get_system_prompt(ctx: RunContext[MyDeps]) -> str:
return await ctx.deps.system_prompt_factory() # (2)!


@@ -303,7 +303,7 @@ Since dependencies can be any python type, and agents are just python objects, a
```py title="agents_as_dependencies.py"
from dataclasses import dataclass

-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext


@dataclass
@@ -324,7 +324,7 @@ factory_agent = Agent('gemini-1.5-pro', result_type=list[str])


@joke_agent.tool
-async def joke_factory(ctx: CallContext[MyDeps], count: int) -> str:
+async def joke_factory(ctx: RunContext[MyDeps], count: int) -> str:
r = await ctx.deps.factory_agent.run(f'Please generate {count} jokes.')
return '\n'.join(r.data)

10 changes: 5 additions & 5 deletions docs/index.md
@@ -58,7 +58,7 @@ Here is a concise example using PydanticAI to build a support agent for a bank:
from dataclasses import dataclass

from pydantic import BaseModel, Field
-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext

from bank_database import DatabaseConn

@@ -87,14 +87,14 @@ support_agent = Agent( # (1)!


@support_agent.system_prompt # (5)!
-async def add_customer_name(ctx: CallContext[SupportDependencies]) -> str:
+async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
return f"The customer's name is {customer_name!r}"


@support_agent.tool # (6)!
async def customer_balance(
-ctx: CallContext[SupportDependencies], include_pending: bool
+ctx: RunContext[SupportDependencies], include_pending: bool
) -> str:
"""Returns the customer's current account balance.""" # (7)!
balance = await ctx.deps.db.customer_balance(
@@ -126,8 +126,8 @@ async def main():
2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md), you can also set the model when running the agent.
3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](agents.md#function-tools) functions. PydanticAI's system of dependency injection provides a type-safe way to customise the behavior of your agents, and can be especially useful when running unit tests and evals.
4. Static [system prompts](agents.md#system-prompts) can be registered with the [`system_prompt` keyword argument][pydantic_ai.Agent.__init__] to the agent.
-5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`CallContext`][pydantic_ai.dependencies.CallContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it.
-6. [Tools](agents.md#function-tools) let you register "tools" which the LLM may call while responding to a user. Again, dependencies are carried via [`CallContext`][pydantic_ai.dependencies.CallContext], and any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
+5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`RunContext`][pydantic_ai.dependencies.RunContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it.
+6. [Tools](agents.md#function-tools) let you register "tools" which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.dependencies.RunContext], and any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](agents.md#function-tools-and-schema) from the docstring and added to the tool schema sent to the LLM.
8. [Run the agent](agents.md#running-agents) asynchronously, conducting a conversation with the LLM until a final response is reached. Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve a result.
9. The response from the agent is guaranteed to be a `SupportResult`; if validation fails, [reflection](agents.md#reflection-and-self-correction) means the agent is prompted to try again.
4 changes: 2 additions & 2 deletions docs/results.md
@@ -114,7 +114,7 @@ from typing import Union
from fake_database import DatabaseConn, QueryError
from pydantic import BaseModel

-from pydantic_ai import Agent, CallContext, ModelRetry
+from pydantic_ai import Agent, RunContext, ModelRetry


class Success(BaseModel):
@@ -135,7 +135,7 @@ agent: Agent[DatabaseConn, Response] = Agent(


@agent.result_validator
-async def validate_result(ctx: CallContext[DatabaseConn], result: Response) -> Response:
+async def validate_result(ctx: RunContext[DatabaseConn], result: Response) -> Response:
if isinstance(result, InvalidRequest):
return result
try:
8 changes: 4 additions & 4 deletions docs/testing-evals.md
@@ -40,7 +40,7 @@ Let's write unit tests for the following application code:
import asyncio
from datetime import date

-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext

from fake_database import DatabaseConn # (1)!
from weather_service import WeatherService # (2)!
@@ -54,7 +54,7 @@ weather_agent = Agent(

@weather_agent.tool
def weather_forecast(
-ctx: CallContext[WeatherService], location: str, forecast_date: date
+ctx: RunContext[WeatherService], location: str, forecast_date: date
) -> str:
if forecast_date < date.today(): # (3)!
return ctx.deps.get_historic_weather(location, forecast_date)
@@ -301,7 +301,7 @@ import json
from pathlib import Path
from typing import Union

-from pydantic_ai import Agent, CallContext
+from pydantic_ai import Agent, RunContext

from fake_database import DatabaseConn

@@ -349,7 +349,7 @@ sql_agent = Agent(


@sql_agent.system_prompt
-async def system_prompt(ctx: CallContext[SqlSystemPrompt]) -> str:
+async def system_prompt(ctx: RunContext[SqlSystemPrompt]) -> str:
return ctx.deps.build_prompt()

