# langchain: dynamic system prompt w/ middleware docs #592
@@ -128,10 +128,10 @@ Model instances give you complete control over configuration. Use them when you

#### Dynamic model

:::python

Dynamic models are selected at <Tooltip tip="The execution environment of your agent, containing immutable configuration and contextual data that persists throughout the agent's execution (e.g., user IDs, session details, or application-specific configuration).">runtime</Tooltip> based on the current <Tooltip tip="The data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g., user preferences or tool usage stats).">state</Tooltip> and context. This enables sophisticated routing logic and cost optimization.

To use a dynamic model, you need to provide a function that receives the graph state and runtime and returns an instance of `BaseChatModel` with the tools bound to it using `.bind_tools(tools)`, where `tools` is a subset of the `tools` parameter.

```python

@@ -153,11 +153,6 @@ agent = create_agent(select_model, tools=tools)
```
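
For illustration, a minimal `select_model` implementation might look like the following sketch. The routing heuristic, the model names, and the surrounding `tools` list are placeholder assumptions, not part of this PR:

```python
from langchain.agents import AgentState
from langchain.chat_models import init_chat_model
from langchain_core.language_models import BaseChatModel
from langgraph.runtime import Runtime

def select_model(state: AgentState, runtime: Runtime) -> BaseChatModel:
    # Placeholder heuristic: escalate to a larger model once the
    # conversation grows long; `tools` is assumed to be defined nearby.
    name = "openai:gpt-4o" if len(state["messages"]) > 10 else "openai:gpt-4o-mini"
    return init_chat_model(name).bind_tools(tools)
```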
:::
:::js
<Info>
**`state`**: The data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g. user preferences or tool usage stats).
</Info>

Dynamic models are selected at runtime based on the current state and context. This enables sophisticated routing logic and cost optimization.

To use a dynamic model, you need to provide a function that receives the graph state and runtime and returns an instance of `BaseChatModel` with the tools bound to it using `.bindTools(tools)`, where `tools` is a subset of the `tools` parameter.

@@ -465,8 +460,95 @@ const agent = createAgent({

When no `prompt` is provided, the agent will infer its task from the messages directly.

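For instance, a minimal Python sketch of a prompt-less agent (the model string here is a placeholder):

:::python
```python
from langchain.agents import create_agent

# No `prompt` argument: behavior is driven entirely by the incoming messages.
agent = create_agent(model="openai:gpt-4o", tools=[])
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Summarize our discussion so far."}]}
)
```
:::
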
#### Dynamic prompts with middleware

:::python
For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use the `modify_model_request` decorator to create a simple custom middleware.
:::
:::js
For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use the `modifyModelRequest` decorator to create a simple custom middleware.
:::

A dynamic system prompt is especially useful for personalizing prompts based on user roles, conversation context, or other changing factors:

:::python
```python wrap
from typing import TypedDict

from langchain.agents import create_agent, AgentState
from langchain.agents.middleware.types import ModelRequest, modify_model_request
from langgraph.runtime import Runtime

class Context(TypedDict):
    user_role: str

@modify_model_request
def dynamic_system_prompt(state: AgentState, request: ModelRequest, runtime: Runtime[Context]) -> ModelRequest:
    user_role = runtime.context.get("user_role", "user")
    base_prompt = "You are a helpful assistant."

    if user_role == "expert":
        prompt = f"{base_prompt} Provide detailed technical responses."
    elif user_role == "beginner":
        prompt = f"{base_prompt} Explain concepts simply and avoid jargon."
    else:
        prompt = base_prompt

    request.system_prompt = prompt
    return request

agent = create_agent(
    model="openai:gpt-4o",
    tools=tools,
    middleware=[dynamic_system_prompt],
)

# The system prompt will be set dynamically based on context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Explain machine learning"}]},
    {"context": {"user_role": "expert"}}
)
```
:::

:::js
```typescript wrap
import { z } from "zod";
import { createAgent } from "langchain";
import { dynamicSystemPromptMiddleware } from "langchain/middleware";

const contextSchema = z.object({
  userRole: z.enum(["expert", "beginner"]),
});

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [/* ... */],
  contextSchema,
  middleware: [
    dynamicSystemPromptMiddleware<z.infer<typeof contextSchema>>((state, runtime) => {
      const userRole = runtime.context.userRole || "user";
      const basePrompt = "You are a helpful assistant.";

      if (userRole === "expert") {
        return `${basePrompt} Provide detailed technical responses.`;
      } else if (userRole === "beginner") {
        return `${basePrompt} Explain concepts simply and avoid jargon.`;
      }
      return basePrompt;
    }),
  ],
});

// The system prompt will be set dynamically based on context
const result = await agent.invoke(
  { messages: [{ role: "user", content: "Explain machine learning" }] },
  { context: { userRole: "expert" } }
);
```
:::

<Tip>
For more details on message types and formatting, see [Messages](/oss/langchain/messages).
For more details on message types and formatting, see [Messages](/oss/langchain/messages). For comprehensive middleware documentation, see [Middleware](/oss/langchain/middleware).
</Tip>

## Advanced configuration

@@ -151,6 +151,7 @@ LangChain provides several built in middleware to use off-the-shelf

- [Summarization](#summarization)
- [Human-in-the-loop](#human-in-the-loop)
- [Anthropic prompt caching](#anthropic-prompt-caching)
- [Dynamic system prompt](#dynamic-system-prompt)

### Summarization

@@ -467,6 +468,138 @@ const result = await agent.invoke({ messages: [HumanMessage("What's my name?")]
```
:::

### Dynamic system prompt

:::python
A system prompt can be dynamically set right before each model invocation using the `@modify_model_request` decorator. This middleware is particularly useful when the prompt depends on the current agent state or runtime context.

For example, you can adjust the system prompt based on the user's expertise level:

```python
from typing import TypedDict
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware.types import ModelRequest, modify_model_request
from langgraph.runtime import Runtime

class Context(TypedDict):
    user_role: str

@modify_model_request
def dynamic_system_prompt(state: AgentState, request: ModelRequest, runtime: Runtime[Context]) -> ModelRequest:
    user_role = runtime.context.get("user_role", "user")
    base_prompt = "You are a helpful assistant."

    if user_role == "expert":
        prompt = f"{base_prompt} Provide detailed technical responses."
    elif user_role == "beginner":
        prompt = f"{base_prompt} Explain concepts simply and avoid jargon."
    else:
        prompt = base_prompt

    request.system_prompt = prompt
    return request

agent = create_agent(
    model="openai:gpt-4o",
    tools=[web_search],
    middleware=[dynamic_system_prompt],
)

# Use with context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Explain async programming"}]},
    {"context": {"user_role": "expert"}}
)
```
:::
:::js

A system prompt can be dynamically set right before each model invocation using `dynamicSystemPromptMiddleware`. This middleware is particularly useful when the prompt depends on the current agent state or runtime context.

For example, you can adjust the system prompt based on the user's expertise level:

```typescript
import { z } from "zod";
import { createAgent } from "langchain";
import { dynamicSystemPromptMiddleware } from "langchain/middleware";

const contextSchema = z.object({
  userRole: z.enum(["expert", "beginner"]),
});

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [/* ... */],
  contextSchema,
  middleware: [
    dynamicSystemPromptMiddleware<z.infer<typeof contextSchema>>((state, runtime) => {
      const userRole = runtime.context.userRole || "user";
      const basePrompt = "You are a helpful assistant.";

      if (userRole === "expert") {
        return `${basePrompt} Provide detailed technical responses.`;
      } else if (userRole === "beginner") {
        return `${basePrompt} Explain concepts simply and avoid jargon.`;
      }
      return basePrompt;
    }),
  ],
});

// The system prompt will be set dynamically based on context
const result = await agent.invoke(
  { messages: [{ role: "user", content: "Explain async programming" }] },
  { context: { userRole: "expert" } }
);
```
:::

Alternatively, you can adjust the system prompt based on the conversation length:

:::python
```python
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware.types import ModelRequest, modify_model_request

@modify_model_request
def simple_prompt(state: AgentState, request: ModelRequest) -> ModelRequest:
    message_count = len(state["messages"])

    if message_count > 10:
        prompt = "You are in an extended conversation. Be more concise."
    else:
        prompt = "You are a helpful assistant."

    request.system_prompt = prompt
    return request

agent = create_agent(
    model="openai:gpt-4o",
    tools=[search_tool],
    middleware=[simple_prompt],
)
```
:::

:::js
```typescript
const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [searchTool],
  middleware: [
    dynamicSystemPromptMiddleware((state) => {
      const messageCount = state.messages.length;

      if (messageCount > 10) {
        return "You are in an extended conversation. Be more concise.";
      }
      return "You are a helpful assistant.";
    }),
  ],
});
```
:::

## Custom Middleware

Agent middleware are subclasses of `AgentMiddleware` that implement one or more of its hooks.

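As an illustration only, a subclass-based version of the conversation-length example above might look like the following Python sketch. Treat the details as assumptions: it assumes `AgentMiddleware` is importable from the same module as the types used earlier and that the class hook mirrors the `@modify_model_request` decorator's signature.

```python
from langchain.agents import AgentState
from langchain.agents.middleware.types import AgentMiddleware, ModelRequest

class ConcisePromptMiddleware(AgentMiddleware):
    """Hypothetical middleware: tighten the system prompt in long conversations."""

    # Assumption: the class hook receives the same (state, request, runtime)
    # data as the @modify_model_request decorator shown above.
    def modify_model_request(self, state: AgentState, request: ModelRequest, runtime) -> ModelRequest:
        if len(state["messages"]) > 10:
            request.system_prompt = "You are in an extended conversation. Be more concise."
        return request
```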