diff --git a/docs/agents.md b/docs/agents.md index 684835d96f..82c249a8af 100644 --- a/docs/agents.md +++ b/docs/agents.md @@ -18,7 +18,7 @@ In typing terms, agents are generic in their dependency and result types, e.g., Here's a toy example of an agent that simulates a roulette wheel: ```py title="roulette_wheel.py" -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext roulette_agent = Agent( # (1)! 'openai:gpt-4o', @@ -32,7 +32,7 @@ roulette_agent = Agent( # (1)! @roulette_agent.tool -async def roulette_wheel(ctx: CallContext[int], square: int) -> str: # (2)! +async def roulette_wheel(ctx: RunContext[int], square: int) -> str: # (2)! """check if the square is a winner""" return 'winner' if square == ctx.deps else 'loser' @@ -49,7 +49,7 @@ print(result.data) ``` 1. Create an agent, which expects an integer dependency and returns a boolean result. This agent will have type `#!python Agent[int, bool]`. -2. Define a tool that checks if the square is a winner. Here [`CallContext`][pydantic_ai.dependencies.CallContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error. +2. Define a tool that checks if the square is a winner. Here [`RunContext`][pydantic_ai.dependencies.RunContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error. 3. In reality, you might want to use a random number here e.g. `random.randint(0, 36)`. 4. `result.data` will be a boolean indicating if the square is a winner. Pydantic performs the result validation, it'll be typed as a `bool` since its type is derived from the `result_type` generic parameter of the agent. @@ -135,7 +135,7 @@ Here's an example using both types of system prompts: ```py title="system_prompts.py" from datetime import date -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext agent = Agent( 'openai:gpt-4o', @@ -145,7 +145,7 @@ agent = Agent( @agent.system_prompt # (3)! -def add_the_users_name(ctx: CallContext[str]) -> str: +def add_the_users_name(ctx: RunContext[str]) -> str: return f"The user's named is {ctx.deps}." @@ -161,8 +161,8 @@ print(result.data) 1. The agent expects a string dependency. 2. Static system prompt defined at agent creation time. -3. Dynamic system prompt defined via a decorator with [`CallContext`][pydantic_ai.dependencies.CallContext], this is called just after `run_sync`, not when the agent is created, so can benefit from runtime information like the dependencies used on that run. -4. Another dynamic system prompt, system prompts don't have to have the `CallContext` parameter. +3. Dynamic system prompt defined via a decorator with [`RunContext`][pydantic_ai.dependencies.RunContext], this is called just after `run_sync`, not when the agent is created, so can benefit from runtime information like the dependencies used on that run. +4. Another dynamic system prompt, system prompts don't have to have the `RunContext` parameter. _(This example is complete, it can be run "as is")_ @@ -179,8 +179,8 @@ They're useful when it is impractical or impossible to put all the context an ag There are two different decorator functions to register tools: -1. [`@agent.tool`][pydantic_ai.Agent.tool] — for tools that need access to the agent [context][pydantic_ai.dependencies.CallContext] -2. [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] — for tools that do not need access to the agent [context][pydantic_ai.dependencies.CallContext] +1. 
[`@agent.tool`][pydantic_ai.Agent.tool] — for tools that need access to the agent [context][pydantic_ai.dependencies.RunContext] +2. [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] — for tools that do not need access to the agent [context][pydantic_ai.dependencies.RunContext] `@agent.tool` is the default since in the majority of cases tools will need access to the agent context. @@ -189,7 +189,7 @@ Here's an example using both: ```py title="dice_game.py" import random -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext agent = Agent( 'gemini-1.5-flash', # (1)! @@ -209,7 +209,7 @@ def roll_die() -> str: @agent.tool # (4)! -def get_player_name(ctx: CallContext[str]) -> str: +def get_player_name(ctx: RunContext[str]) -> str: """Get the player's name.""" return ctx.deps @@ -222,7 +222,7 @@ print(dice_result.data) 1. This is a pretty simple task, so we can use the fast and cheap Gemini flash model. 2. We pass the user's name as the dependency, to keep things simple we use just the name as a string as the dependency. 3. This tool doesn't need any context, it just returns a random number. You could probably use a dynamic system prompt in this case. -4. This tool needs the player's name, so it uses `CallContext` to access dependencies which are just the player's name in this case. +4. This tool needs the player's name, so it uses `RunContext` to access dependencies which are just the player's name in this case. 5. Run the agent, passing the player's name as the dependency. _(This example is complete, it can be run "as is")_ @@ -325,7 +325,7 @@ As the name suggests, function tools use the model's "tools" or "functions" API ### Function tools and schema -Function parameters are extracted from the function signature, and all parameters except `CallContext` are used to build the schema for that tool call. +Function parameters are extracted from the function signature, and all parameters except `RunContext` are used to build the schema for that tool call. Even better, PydanticAI extracts the docstring from functions and (thanks to [griffe](https://mkdocstrings.github.io/griffe/)) extracts parameter descriptions from the docstring and adds them to the schema. @@ -395,7 +395,7 @@ Validation errors from both function tool parameter validation and [structured r You can also raise [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] from within a [tool](#function-tools) or [result validator function](results.md#result-validators-functions) to tell the model it should retry generating a response. - The default retry count is **1** but can be altered for the [entire agent][pydantic_ai.Agent.__init__], a [specific tool][pydantic_ai.Agent.tool], or a [result validator][pydantic_ai.Agent.__init__]. -- You can access the current retry count from within a tool or result validator via [`ctx.retry`][pydantic_ai.dependencies.CallContext]. +- You can access the current retry count from within a tool or result validator via [`ctx.retry`][pydantic_ai.dependencies.RunContext]. 
Here's an example: @@ -403,7 +403,7 @@ Here's an example: from fake_database import DatabaseConn from pydantic import BaseModel -from pydantic_ai import Agent, CallContext, ModelRetry +from pydantic_ai import Agent, RunContext, ModelRetry class ChatResult(BaseModel): @@ -419,7 +419,7 @@ agent = Agent( @agent.tool(retries=2) -def get_user_by_name(ctx: CallContext[DatabaseConn], name: str) -> int: +def get_user_by_name(ctx: RunContext[DatabaseConn], name: str) -> int: """Get a user's ID from their full name.""" print(name) #> John @@ -533,7 +533,7 @@ Consider the following script with type mistakes: ```py title="type_mistakes.py" hl_lines="18 28" from dataclasses import dataclass -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext @dataclass @@ -549,7 +549,7 @@ agent = Agent( @agent.system_prompt -def add_user_name(ctx: CallContext[str]) -> str: # (2)! +def add_user_name(ctx: RunContext[str]) -> str: # (2)! return f"The user's name is {ctx.deps}." @@ -569,7 +569,7 @@ Running `mypy` on this will give the following output: ```bash ➤ uv run mypy type_mistakes.py -type_mistakes.py:18: error: Argument 1 to "system_prompt" of "Agent" has incompatible type "Callable[[CallContext[str]], str]"; expected "Callable[[CallContext[User]], str]" [arg-type] +type_mistakes.py:18: error: Argument 1 to "system_prompt" of "Agent" has incompatible type "Callable[[RunContext[str]], str]"; expected "Callable[[RunContext[User]], str]" [arg-type] type_mistakes.py:28: error: Argument 1 to "foobar" has incompatible type "bool"; expected "bytes" [arg-type] Found 2 errors in 1 file (checked 1 source file) ``` diff --git a/docs/dependencies.md b/docs/dependencies.md index 70042124be..129b58228a 100644 --- a/docs/dependencies.md +++ b/docs/dependencies.md @@ -51,7 +51,7 @@ _(This example is complete, it can be run "as is")_ ## Accessing Dependencies -Dependencies are accessed through the [`CallContext`][pydantic_ai.dependencies.CallContext] type, this should be the first parameter of system prompt functions etc. +Dependencies are accessed through the [`RunContext`][pydantic_ai.dependencies.RunContext] type, this should be the first parameter of system prompt functions etc. ```py title="system_prompt_dependencies.py" hl_lines="20-27" @@ -59,7 +59,7 @@ from dataclasses import dataclass import httpx -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext @dataclass @@ -75,7 +75,7 @@ agent = Agent( @agent.system_prompt # (1)! -async def get_system_prompt(ctx: CallContext[MyDeps]) -> str: # (2)! +async def get_system_prompt(ctx: RunContext[MyDeps]) -> str: # (2)! response = await ctx.deps.http_client.get( # (3)! 'https://example.com', headers={'Authorization': f'Bearer {ctx.deps.api_key}'}, # (4)! @@ -92,10 +92,10 @@ async def main(): #> Did you hear about the toothpaste scandal? They called it Colgate. ``` -1. [`CallContext`][pydantic_ai.dependencies.CallContext] may optionally be passed to a [`system_prompt`][pydantic_ai.Agent.system_prompt] function as the only argument. -2. [`CallContext`][pydantic_ai.dependencies.CallContext] is parameterized with the type of the dependencies, if this type is incorrect, static type checkers will raise an error. -3. Access dependencies through the [`.deps`][pydantic_ai.dependencies.CallContext.deps] attribute. -4. Access dependencies through the [`.deps`][pydantic_ai.dependencies.CallContext.deps] attribute. +1. 
[`RunContext`][pydantic_ai.dependencies.RunContext] may optionally be passed to a [`system_prompt`][pydantic_ai.Agent.system_prompt] function as the only argument. +2. [`RunContext`][pydantic_ai.dependencies.RunContext] is parameterized with the type of the dependencies, if this type is incorrect, static type checkers will raise an error. +3. Access dependencies through the [`.deps`][pydantic_ai.dependencies.RunContext.deps] attribute. +4. Access dependencies through the [`.deps`][pydantic_ai.dependencies.RunContext.deps] attribute. _(This example is complete, it can be run "as is")_ @@ -117,7 +117,7 @@ from dataclasses import dataclass import httpx -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext @dataclass @@ -133,7 +133,7 @@ agent = Agent( @agent.system_prompt -def get_system_prompt(ctx: CallContext[MyDeps]) -> str: # (2)! +def get_system_prompt(ctx: RunContext[MyDeps]) -> str: # (2)! response = ctx.deps.http_client.get( 'https://example.com', headers={'Authorization': f'Bearer {ctx.deps.api_key}'} ) @@ -165,7 +165,7 @@ from dataclasses import dataclass import httpx -from pydantic_ai import Agent, CallContext, ModelRetry +from pydantic_ai import Agent, ModelRetry, RunContext @dataclass @@ -181,14 +181,14 @@ agent = Agent( @agent.system_prompt -async def get_system_prompt(ctx: CallContext[MyDeps]) -> str: +async def get_system_prompt(ctx: RunContext[MyDeps]) -> str: response = await ctx.deps.http_client.get('https://example.com') response.raise_for_status() return f'Prompt: {response.text}' @agent.tool # (1)! -async def get_joke_material(ctx: CallContext[MyDeps], subject: str) -> str: +async def get_joke_material(ctx: RunContext[MyDeps], subject: str) -> str: response = await ctx.deps.http_client.get( 'https://example.com#jokes', params={'subject': subject}, @@ -199,7 +199,7 @@ async def get_joke_material(ctx: CallContext[MyDeps], subject: str) -> str: @agent.result_validator # (2)! -async def validate_result(ctx: CallContext[MyDeps], final_response: str) -> str: +async def validate_result(ctx: RunContext[MyDeps], final_response: str) -> str: response = await ctx.deps.http_client.post( 'https://example.com#validate', headers={'Authorization': f'Bearer {ctx.deps.api_key}'}, @@ -219,8 +219,8 @@ async def main(): #> Did you hear about the toothpaste scandal? They called it Colgate. ``` -1. To pass `CallContext` to a tool, use the [`tool`][pydantic_ai.Agent.tool] decorator. -2. `CallContext` may optionally be passed to a [`result_validator`][pydantic_ai.Agent.result_validator] function as the first argument. +1. To pass `RunContext` to a tool, use the [`tool`][pydantic_ai.Agent.tool] decorator. +2. `RunContext` may optionally be passed to a [`result_validator`][pydantic_ai.Agent.result_validator] function as the first argument. _(This example is complete, it can be run "as is")_ @@ -238,7 +238,7 @@ from dataclasses import dataclass import httpx -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext @dataclass @@ -256,7 +256,7 @@ joke_agent = Agent('openai:gpt-4o', deps_type=MyDeps) @joke_agent.system_prompt -async def get_system_prompt(ctx: CallContext[MyDeps]) -> str: +async def get_system_prompt(ctx: RunContext[MyDeps]) -> str: return await ctx.deps.system_prompt_factory() # (2)! 
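For reference, here is a minimal runnable sketch of the renamed API that these documentation hunks describe. It assumes only the `Agent`/`RunContext` surface shown in this diff, plus `TestModel` from `pydantic_ai.models.test` (imported the same way in the tests further down), so no real LLM call is made:

```py
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext
from pydantic_ai.models.test import TestModel


@dataclass
class MyDeps:
    api_key: str


# TestModel generates plausible responses without calling a real LLM,
# so this sketch runs offline.
agent = Agent(TestModel(), deps_type=MyDeps)


@agent.system_prompt
async def get_system_prompt(ctx: RunContext[MyDeps]) -> str:
    # After the rename only the annotation changes: `ctx.deps` is still
    # the `MyDeps` instance passed to `run_sync` below.
    return f'The API key ends in {ctx.deps.api_key[-4:]}'


result = agent.run_sync('Tell me a joke.', deps=MyDeps(api_key='xxxx-1234'))
print(result.data)
```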
@@ -303,7 +303,7 @@ Since dependencies can be any python type, and agents are just python objects, a ```py title="agents_as_dependencies.py" from dataclasses import dataclass -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext @dataclass @@ -324,7 +324,7 @@ factory_agent = Agent('gemini-1.5-pro', result_type=list[str]) @joke_agent.tool -async def joke_factory(ctx: CallContext[MyDeps], count: int) -> str: +async def joke_factory(ctx: RunContext[MyDeps], count: int) -> str: r = await ctx.deps.factory_agent.run(f'Please generate {count} jokes.') return '\n'.join(r.data) diff --git a/docs/index.md b/docs/index.md index fe17f49230..414d2d1b6c 100644 --- a/docs/index.md +++ b/docs/index.md @@ -58,7 +58,7 @@ Here is a concise example using PydanticAI to build a support agent for a bank: from dataclasses import dataclass from pydantic import BaseModel, Field -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext from bank_database import DatabaseConn @@ -87,14 +87,14 @@ support_agent = Agent( # (1)! @support_agent.system_prompt # (5)! -async def add_customer_name(ctx: CallContext[SupportDependencies]) -> str: +async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str: customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id) return f"The customer's name is {customer_name!r}" @support_agent.tool # (6)! async def customer_balance( - ctx: CallContext[SupportDependencies], include_pending: bool + ctx: RunContext[SupportDependencies], include_pending: bool ) -> str: """Returns the customer's current account balance.""" # (7)! balance = await ctx.deps.db.customer_balance( @@ -126,8 +126,8 @@ async def main(): 2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md), you can also set the model when running the agent. 3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](agents.md#function-tools) functions. PydanticAI's system of dependency injection provides a type-safe way to customise the behavior of your agents, and can be especially useful when running unit tests and evals. 4. Static [system prompts](agents.md#system-prompts) can be registered with the [`system_prompt` keyword argument][pydantic_ai.Agent.__init__] to the agent. -5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`CallContext`][pydantic_ai.dependencies.CallContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it. -6. [Tools](agents.md#function-tools) let you register "tools" which the LLM may call while responding to a user. Again, dependencies are carried via [`CallContext`][pydantic_ai.dependencies.CallContext], and any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry. +5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`RunContext`][pydantic_ai.dependencies.RunContext] argument, which is parameterized with the `deps_type` from above. 
If the type annotation here is wrong, static type checkers will catch it. +6. [Tools](agents.md#function-tools) let you register "tools" which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.dependencies.RunContext], and any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry. 7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](agents.md#function-tools-and-schema) from the docstring and added to the tool schema sent to the LLM. 8. [Run the agent](agents.md#running-agents) asynchronously, conducting a conversation with the LLM until a final response is reached. Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve a result. 9. The response from the agent will, be guaranteed to be a `SupportResult`, if validation fails [reflection](agents.md#reflection-and-self-correction) will mean the agent is prompted to try again. diff --git a/docs/results.md b/docs/results.md index 4704c824e8..35486e7ccb 100644 --- a/docs/results.md +++ b/docs/results.md @@ -114,7 +114,7 @@ from typing import Union from fake_database import DatabaseConn, QueryError from pydantic import BaseModel -from pydantic_ai import Agent, CallContext, ModelRetry +from pydantic_ai import Agent, RunContext, ModelRetry class Success(BaseModel): @@ -135,7 +135,7 @@ agent: Agent[DatabaseConn, Response] = Agent( @agent.result_validator -async def validate_result(ctx: CallContext[DatabaseConn], result: Response) -> Response: +async def validate_result(ctx: RunContext[DatabaseConn], result: Response) -> Response: if isinstance(result, InvalidRequest): return result try: diff --git a/docs/testing-evals.md b/docs/testing-evals.md index dbddecee61..154ad335bd 100644 --- a/docs/testing-evals.md +++ b/docs/testing-evals.md @@ -40,7 +40,7 @@ Let's write unit tests for the following application code: import asyncio from datetime import date -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext from fake_database import DatabaseConn # (1)! from weather_service import WeatherService # (2)! @@ -54,7 +54,7 @@ weather_agent = Agent( @weather_agent.tool def weather_forecast( - ctx: CallContext[WeatherService], location: str, forecast_date: date + ctx: RunContext[WeatherService], location: str, forecast_date: date ) -> str: if forecast_date < date.today(): # (3)! 
return ctx.deps.get_historic_weather(location, forecast_date) @@ -301,7 +301,7 @@ import json from pathlib import Path from typing import Union -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext from fake_database import DatabaseConn @@ -349,7 +349,7 @@ sql_agent = Agent( @sql_agent.system_prompt -async def system_prompt(ctx: CallContext[SqlSystemPrompt]) -> str: +async def system_prompt(ctx: RunContext[SqlSystemPrompt]) -> str: return ctx.deps.build_prompt() diff --git a/pydantic_ai_examples/bank_support.py b/pydantic_ai_examples/bank_support.py index 021f10b905..3d1a5159b2 100644 --- a/pydantic_ai_examples/bank_support.py +++ b/pydantic_ai_examples/bank_support.py @@ -9,7 +9,7 @@ from pydantic import BaseModel, Field -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext class DatabaseConn: @@ -57,14 +57,14 @@ class SupportResult(BaseModel): @support_agent.system_prompt -async def add_customer_name(ctx: CallContext[SupportDependencies]) -> str: +async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str: customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id) return f"The customer's name is {customer_name!r}" @support_agent.tool async def customer_balance( - ctx: CallContext[SupportDependencies], include_pending: bool + ctx: RunContext[SupportDependencies], include_pending: bool ) -> str: """Returns the customer's current account balance.""" balance = await ctx.deps.db.customer_balance( diff --git a/pydantic_ai_examples/rag.py b/pydantic_ai_examples/rag.py index d64ff50c93..f595d279e1 100644 --- a/pydantic_ai_examples/rag.py +++ b/pydantic_ai_examples/rag.py @@ -34,7 +34,7 @@ from pydantic import TypeAdapter from typing_extensions import AsyncGenerator -from pydantic_ai import CallContext +from pydantic_ai import RunContext from pydantic_ai.agent import Agent # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured @@ -52,7 +52,7 @@ class Deps: @agent.tool -async def retrieve(context: CallContext[Deps], search_query: str) -> str: +async def retrieve(context: RunContext[Deps], search_query: str) -> str: """Retrieve documentation sections based on a search query. Args: diff --git a/pydantic_ai_examples/roulette_wheel.py b/pydantic_ai_examples/roulette_wheel.py index eee4947aaa..1173d567df 100644 --- a/pydantic_ai_examples/roulette_wheel.py +++ b/pydantic_ai_examples/roulette_wheel.py @@ -10,7 +10,7 @@ from dataclasses import dataclass from typing import Literal -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext # Define the dependencies class @@ -34,7 +34,7 @@ class Deps: @roulette_agent.tool async def roulette_wheel( - ctx: CallContext[Deps], square: int + ctx: RunContext[Deps], square: int ) -> Literal['winner', 'loser']: """Check if the bet square is a winner. 
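The example files above all change the same way: the annotation's name is updated, while the fields on the context object (`deps`, `retry`, `tool_name`) are untouched, as the `RunContext(deps, retry, tool_call.tool_name)` construction later in this diff shows. Below is a minimal sketch of a tool using the unchanged retry machinery under the new name, assuming the `ModelRetry` and `retries=` behavior documented in docs/agents.md above:

```py
from pydantic_ai import Agent, ModelRetry, RunContext

agent = Agent('openai:gpt-4o', deps_type=dict[str, int])


@agent.tool(retries=2)  # allow two retries for this tool instead of the default one
async def get_user_id(ctx: RunContext[dict[str, int]], name: str) -> int:
    """Look up a user ID, asking the model to correct unknown names."""
    if name not in ctx.deps:
        # ctx.retry is the number of times this tool has already failed;
        # raising ModelRetry sends the message back to the model so it can try again.
        raise ModelRetry(f'No user named {name!r} (retry {ctx.retry}), use a full name.')
    return ctx.deps[name]
```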
diff --git a/pydantic_ai_examples/sql_gen.py b/pydantic_ai_examples/sql_gen.py index e9ba89e85e..0d23b5e7da 100644 --- a/pydantic_ai_examples/sql_gen.py +++ b/pydantic_ai_examples/sql_gen.py @@ -25,7 +25,7 @@ from pydantic import BaseModel, Field from typing_extensions import TypeAlias -from pydantic_ai import Agent, CallContext, ModelRetry +from pydantic_ai import Agent, ModelRetry, RunContext # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') @@ -109,7 +109,7 @@ async def system_prompt() -> str: @agent.result_validator -async def validate_result(ctx: CallContext[Deps], result: Response) -> Response: +async def validate_result(ctx: RunContext[Deps], result: Response) -> Response: if isinstance(result, InvalidRequest): return result diff --git a/pydantic_ai_examples/weather_agent.py b/pydantic_ai_examples/weather_agent.py index 7e62bf9e64..35d8ea70d0 100644 --- a/pydantic_ai_examples/weather_agent.py +++ b/pydantic_ai_examples/weather_agent.py @@ -20,7 +20,7 @@ from devtools import debug from httpx import AsyncClient -from pydantic_ai import Agent, CallContext, ModelRetry +from pydantic_ai import Agent, ModelRetry, RunContext # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') @@ -43,7 +43,7 @@ class Deps: @weather_agent.tool async def get_lat_lng( - ctx: CallContext[Deps], location_description: str + ctx: RunContext[Deps], location_description: str ) -> dict[str, float]: """Get the latitude and longitude of a location. @@ -72,7 +72,7 @@ async def get_lat_lng( @weather_agent.tool -async def get_weather(ctx: CallContext[Deps], lat: float, lng: float) -> dict[str, Any]: +async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> dict[str, Any]: """Get the weather at a location. 
Args: diff --git a/pydantic_ai_slim/pydantic_ai/__init__.py b/pydantic_ai_slim/pydantic_ai/__init__.py index cd6fc318cc..c4591d2a17 100644 --- a/pydantic_ai_slim/pydantic_ai/__init__.py +++ b/pydantic_ai_slim/pydantic_ai/__init__.py @@ -1,8 +1,8 @@ from importlib.metadata import version from .agent import Agent -from .dependencies import CallContext +from .dependencies import RunContext from .exceptions import ModelRetry, UnexpectedModelBehavior, UserError -__all__ = 'Agent', 'CallContext', 'ModelRetry', 'UnexpectedModelBehavior', 'UserError', '__version__' +__all__ = 'Agent', 'RunContext', 'ModelRetry', 'UnexpectedModelBehavior', 'UserError', '__version__' __version__ = version('pydantic_ai_slim') diff --git a/pydantic_ai_slim/pydantic_ai/_pydantic.py b/pydantic_ai_slim/pydantic_ai/_pydantic.py index c0bdb2c4aa..a948acf051 100644 --- a/pydantic_ai_slim/pydantic_ai/_pydantic.py +++ b/pydantic_ai_slim/pydantic_ai/_pydantic.py @@ -78,13 +78,13 @@ def function_schema(either_function: _tool.ToolEitherFunc[AgentDeps, ToolParams] if index == 0 and takes_ctx: if not _is_call_ctx(annotation): - errors.append('First argument must be a CallContext instance when using `.tool`') + errors.append('First argument must be a RunContext instance when using `.tool`') continue elif not takes_ctx and _is_call_ctx(annotation): - errors.append('CallContext instance can only be used with `.tool`') + errors.append('RunContext instance can only be used with `.tool`') continue elif index != 0 and _is_call_ctx(annotation): - errors.append('CallContext instance can only be used as the first argument') + errors.append('RunContext instance can only be used as the first argument') continue field_name = p.name @@ -191,10 +191,10 @@ def _build_schema( def _is_call_ctx(annotation: Any) -> bool: - from .dependencies import CallContext + from .dependencies import RunContext - return annotation is CallContext or ( - _typing_extra.is_generic_alias(annotation) and get_origin(annotation) is CallContext + return annotation is RunContext or ( + _typing_extra.is_generic_alias(annotation) and get_origin(annotation) is RunContext ) diff --git a/pydantic_ai_slim/pydantic_ai/_result.py b/pydantic_ai_slim/pydantic_ai/_result.py index 06a61826c4..cbd2880810 100644 --- a/pydantic_ai_slim/pydantic_ai/_result.py +++ b/pydantic_ai_slim/pydantic_ai/_result.py @@ -11,7 +11,7 @@ from typing_extensions import Self, TypeAliasType, TypedDict from . import _utils, messages -from .dependencies import AgentDeps, CallContext, ResultValidatorFunc +from .dependencies import AgentDeps, ResultValidatorFunc, RunContext from .exceptions import ModelRetry from .messages import ModelStructuredResponse, ToolCall from .result import ResultData @@ -42,7 +42,7 @@ async def validate( Result of either the validated result data (ok) or a retry message (Err). """ if self._takes_ctx: - args = CallContext(deps, retry, tool_call.tool_name if tool_call else None), result + args = RunContext(deps, retry, tool_call.tool_name if tool_call else None), result else: args = (result,) diff --git a/pydantic_ai_slim/pydantic_ai/_system_prompt.py b/pydantic_ai_slim/pydantic_ai/_system_prompt.py index e44588d722..04096c4e02 100644 --- a/pydantic_ai_slim/pydantic_ai/_system_prompt.py +++ b/pydantic_ai_slim/pydantic_ai/_system_prompt.py @@ -6,7 +6,7 @@ from typing import Any, Callable, Generic, cast from . 
import _utils
-from .dependencies import AgentDeps, CallContext, SystemPromptFunc
+from .dependencies import AgentDeps, RunContext, SystemPromptFunc


 @dataclass
@@ -21,7 +21,7 @@ def __post_init__(self):

     async def run(self, deps: AgentDeps) -> str:
         if self._takes_ctx:
-            args = (CallContext(deps, 0, None),)
+            args = (RunContext(deps, 0, None),)
         else:
             args = ()

diff --git a/pydantic_ai_slim/pydantic_ai/_tool.py b/pydantic_ai_slim/pydantic_ai/_tool.py
index 6ebcad449a..9d82360bde 100644
--- a/pydantic_ai_slim/pydantic_ai/_tool.py
+++ b/pydantic_ai_slim/pydantic_ai/_tool.py
@@ -9,7 +9,7 @@
 from pydantic_core import SchemaValidator

 from . import _pydantic, _utils, messages
-from .dependencies import AgentDeps, CallContext, ToolContextFunc, ToolParams, ToolPlainFunc
+from .dependencies import AgentDeps, RunContext, ToolContextFunc, ToolParams, ToolPlainFunc
 from .exceptions import ModelRetry, UnexpectedModelBehavior

 # Usage `ToolEitherFunc[AgentDependencies, P]`
@@ -87,7 +87,7 @@ def _call_args(
         if self.single_arg_name:
             args_dict = {self.single_arg_name: args_dict}

-        args = [CallContext(deps, self._current_retry, message.tool_name)] if self.function.is_left() else []
+        args = [RunContext(deps, self._current_retry, message.tool_name)] if self.function.is_left() else []
         for positional_field in self.positional_fields:
             args.append(args_dict.pop(positional_field))
         if self.var_positional_field:
diff --git a/pydantic_ai_slim/pydantic_ai/agent.py b/pydantic_ai_slim/pydantic_ai/agent.py
index 1e4d043a5f..1ce066f806 100644
--- a/pydantic_ai_slim/pydantic_ai/agent.py
+++ b/pydantic_ai_slim/pydantic_ai/agent.py
@@ -19,7 +19,7 @@
     models,
     result,
 )
-from .dependencies import AgentDeps, CallContext, ToolContextFunc, ToolParams, ToolPlainFunc
+from .dependencies import AgentDeps, RunContext, ToolContextFunc, ToolParams, ToolPlainFunc
 from .result import ResultData

 __all__ = ('Agent',)
@@ -334,13 +334,13 @@ def override(

     @overload
     def system_prompt(
-        self, func: Callable[[CallContext[AgentDeps]], str], /
-    ) -> Callable[[CallContext[AgentDeps]], str]: ...
+        self, func: Callable[[RunContext[AgentDeps]], str], /
+    ) -> Callable[[RunContext[AgentDeps]], str]: ...

     @overload
     def system_prompt(
-        self, func: Callable[[CallContext[AgentDeps]], Awaitable[str]], /
-    ) -> Callable[[CallContext[AgentDeps]], Awaitable[str]]: ...
+        self, func: Callable[[RunContext[AgentDeps]], Awaitable[str]], /
+    ) -> Callable[[RunContext[AgentDeps]], Awaitable[str]]: ...

     @overload
     def system_prompt(self, func: Callable[[], str], /) -> Callable[[], str]: ...
@@ -353,7 +353,7 @@ def system_prompt(
     ) -> _system_prompt.SystemPromptFunc[AgentDeps]:
         """Decorator to register a system prompt function.

-        Optionally takes [`CallContext`][pydantic_ai.dependencies.CallContext] as it's only argument.
+        Optionally takes [`RunContext`][pydantic_ai.dependencies.RunContext] as its only argument.
         Can decorate a sync or async functions.
Overloads for every possible signature of `system_prompt` are included so the decorator doesn't obscure
@@ -361,7 +361,7 @@ def system_prompt(

         Example:
         ```py
-        from pydantic_ai import Agent, CallContext
+        from pydantic_ai import Agent, RunContext

         agent = Agent('test', deps_type=str)

@@ -370,7 +370,7 @@ def simple_system_prompt() -> str:
             return 'foobar'

         @agent.system_prompt
-        async def async_system_prompt(ctx: CallContext[str]) -> str:
+        async def async_system_prompt(ctx: RunContext[str]) -> str:
             return f'{ctx.deps} is the best'

         result = agent.run_sync('foobar', deps='spam')
@@ -383,13 +383,13 @@ async def async_system_prompt(ctx: CallContext[str]) -> str:

     @overload
     def result_validator(
-        self, func: Callable[[CallContext[AgentDeps], ResultData], ResultData], /
-    ) -> Callable[[CallContext[AgentDeps], ResultData], ResultData]: ...
+        self, func: Callable[[RunContext[AgentDeps], ResultData], ResultData], /
+    ) -> Callable[[RunContext[AgentDeps], ResultData], ResultData]: ...

     @overload
     def result_validator(
-        self, func: Callable[[CallContext[AgentDeps], ResultData], Awaitable[ResultData]], /
-    ) -> Callable[[CallContext[AgentDeps], ResultData], Awaitable[ResultData]]: ...
+        self, func: Callable[[RunContext[AgentDeps], ResultData], Awaitable[ResultData]], /
+    ) -> Callable[[RunContext[AgentDeps], ResultData], Awaitable[ResultData]]: ...

     @overload
     def result_validator(self, func: Callable[[ResultData], ResultData], /) -> Callable[[ResultData], ResultData]: ...
@@ -404,7 +404,7 @@ def result_validator(
     ) -> _result.ResultValidatorFunc[AgentDeps, ResultData]:
         """Decorator to register a result validator function.

-        Optionally takes [`CallContext`][pydantic_ai.dependencies.CallContext] as it's first argument.
+        Optionally takes [`RunContext`][pydantic_ai.dependencies.RunContext] as its first argument.
         Can decorate a sync or async functions.

         Overloads for every possible signature of `result_validator` are included so the decorator doesn't obscure
@@ -412,7 +412,7 @@ def result_validator(

         Example:
         ```py
-        from pydantic_ai import Agent, CallContext, ModelRetry
+        from pydantic_ai import Agent, ModelRetry, RunContext

         agent = Agent('test', deps_type=str)

@@ -423,7 +423,7 @@ def result_validator_simple(data: str) -> str:
             return data

         @agent.result_validator
-        async def result_validator_deps(ctx: CallContext[str], data: str) -> str:
+        async def result_validator_deps(ctx: RunContext[str], data: str) -> str:
             if ctx.deps in data:
                 raise ModelRetry('wrong response')
             return data
@@ -452,7 +452,7 @@ def tool(
         retries: int | None = None,
     ) -> Any:
         """Decorator to register a tool function which takes
-        [`CallContext`][pydantic_ai.dependencies.CallContext] as its first argument.
+        [`RunContext`][pydantic_ai.dependencies.RunContext] as its first argument.

         Can decorate a sync or async functions.

@@ -464,16 +464,16 @@

         Example:
         ```py
-        from pydantic_ai import Agent, CallContext
+        from pydantic_ai import Agent, RunContext

         agent = Agent('test', deps_type=int)

         @agent.tool
-        def foobar(ctx: CallContext[int], x: int) -> int:
+        def foobar(ctx: RunContext[int], x: int) -> int:
             return ctx.deps + x

         @agent.tool(retries=2)
-        async def spam(ctx: CallContext[str], y: float) -> float:
+        async def spam(ctx: RunContext[str], y: float) -> float:
             return ctx.deps + y

         result = agent.run_sync('foobar', deps=1)
@@ -510,7 +510,7 @@ def tool_plain(
     ) -> Callable[[ToolPlainFunc[ToolParams]], ToolPlainFunc[ToolParams]]: ...
def tool_plain(self, func: ToolPlainFunc[ToolParams] | None = None, /, *, retries: int | None = None) -> Any:
-        """Decorator to register a tool function which DOES NOT take `CallContext` as an argument.
+        """Decorator to register a tool function which DOES NOT take `RunContext` as an argument.

         Can decorate a sync or async functions.

@@ -522,16 +522,16 @@ def tool_plain(self, func: ToolPlainFunc[ToolParams] | None = None, /, *, retrie

         Example:
         ```py
-        from pydantic_ai import Agent, CallContext
+        from pydantic_ai import Agent, RunContext

         agent = Agent('test')

         @agent.tool
-        def foobar(ctx: CallContext[int]) -> int:
+        def foobar(ctx: RunContext[int]) -> int:
             return 123

         @agent.tool(retries=2)
-        async def spam(ctx: CallContext[str]) -> float:
+        async def spam(ctx: RunContext[str]) -> float:
             return 3.14

         result = agent.run_sync('foobar', deps=1)
diff --git a/pydantic_ai_slim/pydantic_ai/dependencies.py b/pydantic_ai_slim/pydantic_ai/dependencies.py
index 6c22473040..1f0d300689 100644
--- a/pydantic_ai_slim/pydantic_ai/dependencies.py
+++ b/pydantic_ai_slim/pydantic_ai/dependencies.py
@@ -13,7 +13,7 @@
 __all__ = (
     'AgentDeps',
-    'CallContext',
+    'RunContext',
     'ResultValidatorFunc',
     'SystemPromptFunc',
     'ToolReturnValue',
@@ -28,7 +28,7 @@

 @dataclass
-class CallContext(Generic[AgentDeps]):
+class RunContext(Generic[AgentDeps]):
     """Information about the current call."""

     deps: AgentDeps
@@ -43,19 +43,19 @@ class CallContext(Generic[AgentDeps]):
     """Retrieval function param spec."""

 SystemPromptFunc = Union[
-    Callable[[CallContext[AgentDeps]], str],
-    Callable[[CallContext[AgentDeps]], Awaitable[str]],
+    Callable[[RunContext[AgentDeps]], str],
+    Callable[[RunContext[AgentDeps]], Awaitable[str]],
     Callable[[], str],
     Callable[[], Awaitable[str]],
 ]
-"""A function that may or maybe not take `CallContext` as an argument, and may or may not be async.
+"""A function that may or may not take `RunContext` as an argument, and may or may not be async.

 Usage `SystemPromptFunc[AgentDeps]`.
 """

 ResultValidatorFunc = Union[
-    Callable[[CallContext[AgentDeps], ResultData], ResultData],
-    Callable[[CallContext[AgentDeps], ResultData], Awaitable[ResultData]],
+    Callable[[RunContext[AgentDeps], ResultData], ResultData],
+    Callable[[RunContext[AgentDeps], ResultData], Awaitable[ResultData]],
     Callable[[ResultData], ResultData],
     Callable[[ResultData], Awaitable[ResultData]],
 ]
@@ -71,13 +71,13 @@ class CallContext(Generic[AgentDeps]):
 ToolReturnValue = Union[JsonData, Awaitable[JsonData]]
 """Return value of a tool function."""

-ToolContextFunc = Callable[Concatenate[CallContext[AgentDeps], ToolParams], ToolReturnValue]
-"""A tool function that takes `CallContext` as the first argument.
+ToolContextFunc = Callable[Concatenate[RunContext[AgentDeps], ToolParams], ToolReturnValue]
+"""A tool function that takes `RunContext` as the first argument.

 Usage `ToolContextFunc[AgentDeps, ToolParams]`.
 """
 ToolPlainFunc = Callable[ToolParams, ToolReturnValue]
-"""A tool function that does not take `CallContext` as the first argument.
+"""A tool function that does not take `RunContext` as the first argument.

 Usage `ToolPlainFunc[ToolParams]`.
""" diff --git a/tests/models/test_model_function.py b/tests/models/test_model_function.py index c82fb978b1..e9e5761b88 100644 --- a/tests/models/test_model_function.py +++ b/tests/models/test_model_function.py @@ -10,7 +10,7 @@ from inline_snapshot import snapshot from pydantic import BaseModel -from pydantic_ai import Agent, CallContext, ModelRetry +from pydantic_ai import Agent, ModelRetry, RunContext from pydantic_ai.messages import ( Message, ModelAnyResponse, @@ -121,7 +121,7 @@ async def get_location(location_description: str) -> str: @weather_agent.tool -async def get_weather(_: CallContext[None], lat: int, lng: int): +async def get_weather(_: RunContext[None], lat: int, lng: int): if (lat, lng) == (51, 0): # it always rains in London return 'Raining' @@ -201,7 +201,7 @@ async def call_function_model(messages: list[Message], _: AgentInfo) -> ModelAny @var_args_agent.tool -def get_var_args(ctx: CallContext[int], *args: int): +def get_var_args(ctx: RunContext[int], *args: int): assert ctx.deps == 123 return json.dumps({'args': args}) @@ -235,7 +235,7 @@ def test_deps_none(): agent = Agent(FunctionModel(call_tool)) @agent.tool - async def get_none(ctx: CallContext[None]): + async def get_none(ctx: RunContext[None]): nonlocal called called = True @@ -252,7 +252,7 @@ async def get_none(ctx: CallContext[None]): def test_deps_init(): - def get_check_foobar(ctx: CallContext[tuple[str, str]]) -> str: + def get_check_foobar(ctx: RunContext[tuple[str, str]]) -> str: nonlocal called called = True @@ -279,7 +279,7 @@ def test_model_arg(): @agent_all.tool -async def foo(_: CallContext[None], x: int) -> str: +async def foo(_: RunContext[None], x: int) -> str: return str(x + 1) diff --git a/tests/test_agent.py b/tests/test_agent.py index c10457dc5a..e416bd4fb0 100644 --- a/tests/test_agent.py +++ b/tests/test_agent.py @@ -6,7 +6,7 @@ from inline_snapshot import snapshot from pydantic import BaseModel -from pydantic_ai import Agent, CallContext, ModelRetry, UnexpectedModelBehavior, UserError +from pydantic_ai import Agent, ModelRetry, RunContext, UnexpectedModelBehavior, UserError from pydantic_ai.messages import ( ArgsDict, ArgsJson, @@ -111,7 +111,7 @@ def return_model(messages: list[Message], info: AgentInfo) -> ModelAnyResponse: agent = Agent(FunctionModel(return_model), result_type=Foo) @agent.result_validator - def validate_result(ctx: CallContext[None], r: Foo) -> Foo: + def validate_result(ctx: RunContext[None], r: Foo) -> Foo: assert ctx.tool_name == 'final_result' if r.a == 42: return r @@ -227,7 +227,7 @@ def test_response_union_allow_str(input_union_callable: Callable[[], Any]): got_tool_call_name = 'unset' @agent.result_validator - def validate_result(ctx: CallContext[None], r: Any) -> Any: + def validate_result(ctx: RunContext[None], r: Any) -> Any: nonlocal got_tool_call_name got_tool_call_name = ctx.tool_name return r @@ -303,7 +303,7 @@ class Bar(BaseModel): got_tool_call_name = 'unset' @agent.result_validator - def validate_result(ctx: CallContext[None], r: Any) -> Any: + def validate_result(ctx: RunContext[None], r: Any) -> Any: nonlocal got_tool_call_name got_tool_call_name = ctx.tool_name return r diff --git a/tests/test_deps.py b/tests/test_deps.py index 7a89d73bd9..2a62d4b9ea 100644 --- a/tests/test_deps.py +++ b/tests/test_deps.py @@ -1,6 +1,6 @@ from dataclasses import dataclass -from pydantic_ai import Agent, CallContext +from pydantic_ai import Agent, RunContext from pydantic_ai.models.test import TestModel @@ -14,7 +14,7 @@ class MyDeps: @agent.tool -async def 
example_tool(ctx: CallContext[MyDeps]) -> str: +async def example_tool(ctx: RunContext[MyDeps]) -> str: return f'{ctx.deps}' diff --git a/tests/test_retrievers.py b/tests/test_retrievers.py index 37258c4809..c411c1cce2 100644 --- a/tests/test_retrievers.py +++ b/tests/test_retrievers.py @@ -5,7 +5,7 @@ from inline_snapshot import snapshot from pydantic import BaseModel, Field -from pydantic_ai import Agent, CallContext, UserError +from pydantic_ai import Agent, RunContext, UserError from pydantic_ai.messages import Message, ModelAnyResponse, ModelTextResponse from pydantic_ai.models.function import AgentInfo, FunctionModel from pydantic_ai.models.test import TestModel @@ -22,7 +22,7 @@ def invalid_tool(x: int) -> str: # pragma: no cover assert str(exc_info.value) == snapshot( 'Error generating schema for test_tool_no_ctx..invalid_tool:\n' - ' First argument must be a CallContext instance when using `.tool`' + ' First argument must be a RunContext instance when using `.tool`' ) @@ -32,12 +32,12 @@ def test_tool_plain_with_ctx(): with pytest.raises(UserError) as exc_info: @agent.tool_plain - async def invalid_tool(ctx: CallContext[None]) -> str: # pragma: no cover + async def invalid_tool(ctx: RunContext[None]) -> str: # pragma: no cover return 'Hello' assert str(exc_info.value) == snapshot( 'Error generating schema for test_tool_plain_with_ctx..invalid_tool:\n' - ' CallContext instance can only be used with `.tool`' + ' RunContext instance can only be used with `.tool`' ) @@ -47,13 +47,13 @@ def test_tool_ctx_second(): with pytest.raises(UserError) as exc_info: @agent.tool # pyright: ignore[reportArgumentType] - def invalid_tool(x: int, ctx: CallContext[None]) -> str: # pragma: no cover + def invalid_tool(x: int, ctx: RunContext[None]) -> str: # pragma: no cover return 'Hello' assert str(exc_info.value) == snapshot( 'Error generating schema for test_tool_ctx_second..invalid_tool:\n' - ' First argument must be a CallContext instance when using `.tool`\n' - ' CallContext instance can only be used as the first argument' + ' First argument must be a RunContext instance when using `.tool`\n' + ' RunContext instance can only be used as the first argument' ) diff --git a/tests/typed_agent.py b/tests/typed_agent.py index e21731f3ba..a73f389b82 100644 --- a/tests/typed_agent.py +++ b/tests/typed_agent.py @@ -5,7 +5,7 @@ from dataclasses import dataclass from typing import Callable, Union, assert_type -from pydantic_ai import Agent, CallContext, ModelRetry +from pydantic_ai import Agent, ModelRetry, RunContext from pydantic_ai.result import RunResult @@ -20,7 +20,7 @@ class MyDeps: @typed_agent.system_prompt -async def system_prompt_ok1(ctx: CallContext[MyDeps]) -> str: +async def system_prompt_ok1(ctx: RunContext[MyDeps]) -> str: return f'{ctx.deps}' @@ -30,7 +30,7 @@ def system_prompt_ok2() -> str: # we have overloads for every possible signature of system_prompt, so the type of decorated functions is correct -assert_type(system_prompt_ok1, Callable[[CallContext[MyDeps]], Awaitable[str]]) +assert_type(system_prompt_ok1, Callable[[RunContext[MyDeps]], Awaitable[str]]) assert_type(system_prompt_ok2, Callable[[], str]) @@ -45,14 +45,14 @@ def expect_error(error_type: type[Exception]) -> Iterator[None]: @typed_agent.tool -async def ok_tool(ctx: CallContext[MyDeps], x: str) -> str: +async def ok_tool(ctx: RunContext[MyDeps], x: str) -> str: assert_type(ctx.deps, MyDeps) total = ctx.deps.foo + ctx.deps.bar return f'{x} {total}' # we can't add overloads for every possible signature of tool, so the type of 
ok_tool is obscured -assert_type(ok_tool, Callable[[CallContext[MyDeps], str], str]) # type: ignore[assert-type] +assert_type(ok_tool, Callable[[RunContext[MyDeps], str], str]) # type: ignore[assert-type] @typed_agent.tool_plain @@ -66,13 +66,13 @@ def ok_json_list(x: str) -> list[Union[str, int]]: @typed_agent.tool -async def bad_tool1(ctx: CallContext[MyDeps], x: str) -> str: +async def bad_tool1(ctx: RunContext[MyDeps], x: str) -> str: total = ctx.deps.foo + ctx.deps.spam # type: ignore[attr-defined] return f'{x} {total}' @typed_agent.tool # type: ignore[arg-type] -async def bad_tool2(ctx: CallContext[int], x: str) -> str: +async def bad_tool2(ctx: RunContext[int], x: str) -> str: return f'{x} {ctx.deps}' @@ -94,7 +94,7 @@ def ok_validator_simple(data: str) -> str: @typed_agent.result_validator -async def ok_validator_ctx(ctx: CallContext[MyDeps], data: str) -> str: +async def ok_validator_ctx(ctx: RunContext[MyDeps], data: str) -> str: if ctx.deps.foo == 1: raise ModelRetry('foo is 1') return data @@ -102,11 +102,11 @@ async def ok_validator_ctx(ctx: CallContext[MyDeps], data: str) -> str: # we have overloads for every possible signature of result_validator, so the type of decorated functions is correct assert_type(ok_validator_simple, Callable[[str], str]) -assert_type(ok_validator_ctx, Callable[[CallContext[MyDeps], str], Awaitable[str]]) +assert_type(ok_validator_ctx, Callable[[RunContext[MyDeps], str], Awaitable[str]]) @typed_agent.result_validator # type: ignore[arg-type] -async def result_validator_wrong(ctx: CallContext[int], result: str) -> str: +async def result_validator_wrong(ctx: RunContext[int], result: str) -> str: return result
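Downstream code migrates with the same find and replace these hunks apply: update the import and the annotation name. For codebases that cannot update every module at once, a transitional alias is one option; the shim below is hypothetical and not part of this diff:

```py
# compat.py (hypothetical migration shim for downstream code)
from pydantic_ai import RunContext

# RunContext is a generic dataclass, so a bare alias keeps both runtime
# behavior and `CallContext[MyDeps]`-style parameterization working while
# old imports are updated.
CallContext = RunContext
```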