New intro #64
Merged
@@ -0,0 +1,23 @@
Small but complete example of using PydanticAI to build a support agent for a bank.

Demonstrates:

* [dynamic system prompt](../agents.md#system-prompts)
* [structured `result_type`](../results.md#structured-result-validation)
* [retrievers](../agents.md#retrievers)

## Running the Example

With [dependencies installed and environment variables set](./index.md#usage), run:

```bash
python/uv-run -m pydantic_ai_examples.bank_support
```

(or `PYDANTIC_AI_MODEL=gemini-1.5-flash ...`)

## Example Code

```py title="bank_support.py"
#! pydantic_ai_examples/bank_support.py
```
@@ -1,78 +1,135 @@
# Introduction {.hide}

--8<-- "docs/.partials/index-header.html"

# PydanticAI {.hide}
When I first found FastAPI, I got it immediately. I was excited to find something so genuinely innovative and yet ergonomic, built on Pydantic.

Virtually every Agent Framework and LLM library in Python uses Pydantic, but when we came to use Gen AI in [Pydantic Logfire](https://pydantic.dev/logfire), I couldn't find anything that gave me the same feeling.

PydanticAI is a Python Agent Framework designed to make it less painful to build production-grade applications with Generative AI.

You can think of PydanticAI as an Agent Framework or a shim to use Pydantic with LLMs — they're the same thing.
## Why use PydanticAI

PydanticAI tries to make working with LLMs feel similar to building a web application.
* Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, Langchain, LlamaIndex, AutoGPT, Transformers, Instructor and many more)
* Multi-model — OpenAI and Gemini are currently supported, Anthropic is [coming soon](https://github.com/pydantic/pydantic-ai/issues/63), and there's a simple interface to implement other models or adapt existing ones
* Type-safe
* Built on tried and tested best practices in Python
* Structured response validation with Pydantic
* Streamed responses, including validation of streamed structured responses with Pydantic
* Novel, type-safe dependency injection system
* Logfire integration

!!! example "In Beta"
    PydanticAI is in early beta; the API is subject to change and there's a lot more to do.
    [Feedback](https://github.com/pydantic/pydantic-ai/issues) is very welcome!

## Example — Hello World

Here's a very minimal example of PydanticAI.

```py title="hello_world.py"
from pydantic_ai import Agent

agent = Agent('gemini-1.5-flash', system_prompt='Be concise, reply with one sentence.')

result = agent.run_sync('Where does "hello world" come from?')
print(result.data)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""
```
_(This example is complete, it can be run "as is")_

Not very interesting yet, but we can easily add retrievers, dynamic system prompts and structured responses to build more powerful agents.

## Example — Retrievers and Dependency Injection

Partial example of using retrievers to help an LLM respond to a user's query about the weather:
Small but complete example of using PydanticAI to build a support agent for a bank.

```py title="bank_support.py"
from dataclasses import dataclass

```py title="weather_agent.py"
import httpx
from pydantic import BaseModel, Field

from pydantic_ai import Agent, CallContext

weather_agent = Agent(  # (1)!
from bank_database import DatabaseConn


@dataclass
class SupportDependencies:  # (3)!
    customer_id: int
    db: DatabaseConn


class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description='Whether to block their card')
    risk: int = Field(description='Risk level of query', ge=0, le=10)


support_agent = Agent(  # (1)!
    'openai:gpt-4o',  # (2)!
    deps_type=httpx.AsyncClient,  # (3)!
    system_prompt='Be concise, reply with one sentence.',  # (4)!
    deps_type=SupportDependencies,
    result_type=SupportResult,  # (9)!
    system_prompt=(  # (4)!
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query. '
        "Reply using the customer's name."
    ),
)


@weather_agent.retriever_context  # (5)!
async def get_location(
    ctx: CallContext[httpx.AsyncClient],
    location_description: str,
) -> dict[str, float]:
    """Get the latitude and longitude of a location by its description."""  # (6)!
    response = await ctx.deps.get('https://api.geolocation...')
    ...


@weather_agent.retriever_context  # (7)!
async def get_weather(
    ctx: CallContext[httpx.AsyncClient],
    lat: float,
    lng: float,
) -> dict[str, str]:
    """Get the weather at a location by its latitude and longitude."""
    response = await ctx.deps.get('https://api.weather...')
    ...


async def main():
    async with httpx.AsyncClient() as client:
        result = await weather_agent.run(  # (8)!
            'What is the weather like in West London and in Wiltshire?',
            deps=client,
        )
        print(result.data)  # (9)!
        #> The weather in West London is raining, while in Wiltshire it is sunny.

        messages = result.all_messages()  # (10)!
@support_agent.system_prompt  # (5)!
async def add_customer_name(ctx: CallContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"


@support_agent.retriever_context  # (6)!
async def customer_balance(
    ctx: CallContext[SupportDependencies], include_pending: bool
) -> str:
    """Returns the customer's current account balance."""  # (7)!
    balance = await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )
    return f'${balance:.2f}'


...  # (11)!


deps = SupportDependencies(customer_id=123, db=DatabaseConn())
result = support_agent.run_sync('What is my balance?', deps=deps)  # (8)!
print(result.data)  # (10)!
"""
support_advice='Hello John, your current account balance, including pending transactions, is $123.45.' block_card=False risk=1
"""

result = support_agent.run_sync('I just lost my card!', deps=deps)
print(result.data)
"""
support_advice="I'm sorry to hear that, John. We are temporarily blocking your card to prevent unauthorized transactions." block_card=True risk=8
"""
```

1. An agent that can tell users about the weather in a particular location. Agents combine a system prompt, a response type (here `str`) and "retrievers" (aka tools).
2. Here we configure the agent to use OpenAI's GPT-4o model; you can also customise the model when running the agent.
3. We specify the dependency type for the agent, in this case an HTTP client, which retrievers will use to make requests to external services. PydanticAI's system of dependency injection provides a powerful, type-safe way to customise the behaviour of your agents, including for unit tests and evals.
4. Static system prompts can be registered as keyword arguments to the agent; dynamic system prompts can be registered with the `@agent.system_prompt` decorator and benefit from dependency injection.
5. Retrievers let you register "tools" which the LLM may call while responding to a user. You inject dependencies into the retriever with `CallContext`; any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
6. This docstring is also passed to the LLM as a description of the tool.
7. Multiple retrievers can be registered with the same agent; the LLM can choose which (if any) retrievers to call in order to respond to a user.
8. Run the agent asynchronously, conducting a conversation with the LLM until a final response is reached. You can also run agents synchronously with `run_sync`. Internally agents are all async, so `run_sync` is a helper using `asyncio.run` to call `run()`.
9. The response from the LLM, in this case a `str`. Agents are generic in both the type of `deps` and `result_type`, so calls are typed end-to-end.
10. [`result.all_messages()`](message-history.md) includes details of messages exchanged; this is useful both to understand the conversation that took place and to continue the conversation later, since messages can be passed back to later `run`/`run_sync` calls.
1. An [agent](agents.md) that acts as first-tier support in a bank. Agents are generic in the type of dependencies they take and the type of result they return; in this case `SupportDependencies` and `SupportResult`.
2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md); you can also customise the model when running the agent.
3. The `SupportDependencies` dataclass is used to pass data and connections into the agent that will be needed when running [system prompts](agents.md#system-prompts) and [retrievers](agents.md#retrievers). PydanticAI's system of dependency injection provides a powerful, type-safe way to customise the behaviour of your agents, including for unit tests and evals.

4. Static [system prompts](agents.md#system-prompts) can be registered as keyword arguments to the agent.
5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the `@agent.system_prompt` decorator and benefit from dependency injection.
6. [Retrievers](agents.md#retrievers) let you register "tools" which the LLM may call while responding to a user. You inject dependencies into the retriever with [`CallContext`][pydantic_ai.dependencies.CallContext]; any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
7. The docstring is also passed to the LLM as a description of the tool.
8. [Run the agent](agents.md#running-agents) synchronously, conducting a conversation with the LLM until a final response is reached.
9. The response from the agent is guaranteed to be a `SupportResult`; if validation fails, [reflection](agents.md#reflection-and-self-correction) means the agent is prompted to try again.
10. The result will be validated with Pydantic to guarantee it is a `SupportResult`; since the agent is generic, it'll also be typed as a `SupportResult` to aid with static type checking (see the validation sketch after these notes).
11. In a real use case, you'd add many more retrievers to the agent to extend the context it's equipped with and the support it can provide.
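To see the guarantee in notes 9 and 10 in isolation, here is a minimal sketch using plain Pydantic with no agent involved; the field constraints on `SupportResult` are what reject a malformed result and trigger the retry behaviour described above:

```py
from pydantic import BaseModel, Field, ValidationError


class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description='Whether to block their card')
    risk: int = Field(description='Risk level of query', ge=0, le=10)


# a well-formed payload validates and is fully typed as SupportResult
ok = SupportResult(support_advice='All good, John.', block_card=False, risk=1)
print(ok.risk)  # 1

# an out-of-range risk is rejected by the ge/le constraints on the field
try:
    SupportResult(support_advice='Hmm.', block_card=True, risk=42)
except ValidationError as exc:
    print(exc.errors()[0]['loc'])  # ('risk',)
```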

!!! tip "Complete `weather_agent.py` example"
    This example is incomplete for the sake of brevity; you can find a complete `weather_agent.py` example [here](examples/weather-agent.md).
!!! tip "Complete `bank_support.py` example"
    This example is incomplete for the sake of brevity (the definition of `DatabaseConn` is missing); you can find a complete `bank_support.py` example [here](examples/bank-support.md).
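Since the tip above notes that the definition of `DatabaseConn` is omitted, here is a hypothetical stub that mirrors the two calls the example makes (`customer_name(id=...)` and `customer_balance(id=..., include_pending=...)`). It is not the implementation from `pydantic_ai_examples`, just the shape a stand-in might take, assuming the `SupportDependencies` dataclass from the example above is in scope:

```py
class DatabaseConn:
    """Hypothetical stand-in for the example's `bank_database.DatabaseConn`."""

    async def customer_name(self, *, id: int) -> str:
        # used by the dynamic system prompt via ctx.deps.db.customer_name(...)
        return 'John'

    async def customer_balance(self, *, id: int, include_pending: bool) -> float:
        # used by the customer_balance retriever and formatted as f'${balance:.2f}'
        return 123.45


# building dependencies with a stand-in like this is also how note 3's point
# about unit tests and evals plays out: swap the database, keep the agent.
deps = SupportDependencies(customer_id=123, db=DatabaseConn())
```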

## Example — Result Validation
## Next Steps

TODO
To try PydanticAI yourself, follow the instructions [in the examples](examples/index.md).