diff --git a/src/oss/langchain/middleware/built-in.mdx b/src/oss/langchain/middleware/built-in.mdx index 01dba44eaf..3cec195b11 100644 --- a/src/oss/langchain/middleware/built-in.mdx +++ b/src/oss/langchain/middleware/built-in.mdx @@ -2302,695 +2302,13 @@ result = agent.invoke({ ## Provider-specific middleware -These middleware are optimized for specific LLM providers. - -### Anthropic - -Middleware specifically designed for Anthropic's Claude models. - -:::python - -| Middleware | Description | -|------------|-------------| -| [Prompt caching](#prompt-caching) | Reduce costs by caching repetitive prompt prefixes | -| [Bash tool](#bash-tool) | Execute Claude's native bash tool with local command execution | -| [Text editor](#text-editor) | Provide Claude's text editor tool for file editing | -| [Memory](#memory) | Provide Claude's memory tool for persistent agent memory | -| [File search](#file-search-1) | Search tools for state-based file systems | - -::: - -:::js - -| Middleware | Description | -|------------|-------------| -| [Prompt caching](#prompt-caching) | Reduce costs by caching repetitive prompt prefixes | - -::: - -#### Prompt caching - -Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls. Prompt caching is useful for the following: - -- Applications with long, static system prompts that don't change between requests -- Agents with many tool definitions that remain constant across invocations -- Conversations where early message history is reused across multiple turns -- High-volume deployments where reducing API costs and latency is critical - - - Learn more about [Anthropic prompt caching](https://platform.claude.com/docs/en/build-with-claude/prompt-caching#cache-limitations) strategies and limitations. - - -:::python -**API reference:** @[`AnthropicPromptCachingMiddleware`] - -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import AnthropicPromptCachingMiddleware -from langchain.agents import create_agent - -agent = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - system_prompt="", - middleware=[AnthropicPromptCachingMiddleware(ttl="5m")], -) -``` -::: - -:::js -```typescript -import { createAgent, anthropicPromptCachingMiddleware } from "langchain"; - -const agent = createAgent({ - model: "claude-sonnet-4-5-20250929", - prompt: "", - middleware: [anthropicPromptCachingMiddleware({ ttl: "5m" })], -}); -``` -::: - - - -:::python - - Cache type. Only `'ephemeral'` is currently supported. - - - - Time to live for cached content. Valid values: `'5m'` or `'1h'` - - - - Minimum number of messages before caching starts - - - - Behavior when using non-Anthropic models. Options: `'ignore'`, `'warn'`, or `'raise'` - -::: - -:::js - - Time to live for cached content. Valid values: `'5m'` or `'1h'` - -::: - - - - - -The middleware caches content up to and including the latest message in each request. On subsequent requests within the TTL window (5 minutes or 1 hour), previously seen content is retrieved from cache rather than reprocessed, significantly reducing costs and latency. - -**How it works:** -1. 
First request: System prompt, tools, and the user message "Hi, my name is Bob" are sent to the API and cached -2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message "What's my name?" needs to be processed, plus the model's response from the first request -3. This pattern continues for each turn, with each request reusing the cached conversation history - -:::python -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import AnthropicPromptCachingMiddleware -from langchain.agents import create_agent -from langchain.messages import HumanMessage - - -LONG_PROMPT = """ -Please be a helpful assistant. - - -""" - -agent = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - system_prompt=LONG_PROMPT, - middleware=[AnthropicPromptCachingMiddleware(ttl="5m")], -) - -# First invocation: Creates cache with system prompt, tools, and "Hi, my name is Bob" -agent.invoke({"messages": [HumanMessage("Hi, my name is Bob")]}) - -# Second invocation: Reuses cached system prompt, tools, and previous messages -# Only processes the new message "What's my name?" and the previous AI response -agent.invoke({"messages": [HumanMessage("What's my name?")]}) -``` -::: - -:::js -```typescript -import { createAgent, HumanMessage, anthropicPromptCachingMiddleware } from "langchain"; - -const LONG_PROMPT = ` -Please be a helpful assistant. - - -`; - -const agent = createAgent({ - model: "claude-sonnet-4-5-20250929", - prompt: LONG_PROMPT, - middleware: [anthropicPromptCachingMiddleware({ ttl: "5m" })], -}); - -// First invocation: Creates cache with system prompt, tools, and "Hi, my name is Bob" -await agent.invoke({ - messages: [new HumanMessage("Hi, my name is Bob")] -}); - -// Second invocation: Reuses cached system prompt, tools, and previous messages -// Only processes the new message "What's my name?" and the previous AI response -const result = await agent.invoke({ - messages: [new HumanMessage("What's my name?")] -}); -``` -::: - - - -:::python - -#### Bash tool - -Execute Claude's native `bash_20250124` tool with local command execution. The bash tool middleware is useful for the following: - -- Using Claude's built-in bash tool with local execution -- Leveraging Claude's optimized bash tool interface -- Agents that need persistent shell sessions with Anthropic models - - - This middleware wraps `ShellToolMiddleware` and exposes it as Claude's native bash tool. - - -**API reference:** @[`ClaudeBashToolMiddleware`] - -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import ClaudeBashToolMiddleware -from langchain.agents import create_agent - -agent = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - ClaudeBashToolMiddleware( - workspace_root="/workspace", - ), - ], -) -``` - - - -`ClaudeBashToolMiddleware` accepts all parameters from @[`ShellToolMiddleware`], including: - - - Base directory for the shell session - - - - Commands to run when the session starts - - - - Execution policy (`HostExecutionPolicy`, `DockerExecutionPolicy`, or `CodexSandboxExecutionPolicy`) - - - - Rules for sanitizing command output - - -See [Shell tool](#shell-tool) for full configuration details. 
- - - - - -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import ClaudeBashToolMiddleware -from langchain.agents import create_agent -from langchain.agents.middleware import DockerExecutionPolicy - - -agent = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - ClaudeBashToolMiddleware( - workspace_root="/workspace", - startup_commands=["pip install requests"], - execution_policy=DockerExecutionPolicy( - image="python:3.11-slim", - ), - ), - ], -) - -# Claude can now use its native bash tool -result = agent.invoke({ - "messages": [{"role": "user", "content": "List files in the workspace"}] -}) -``` - - - - -#### Text editor - -Provide Claude's text editor tool (`text_editor_20250728`) for file creation and editing. The text editor middleware is useful for the following: - -- File-based agent workflows -- Code editing and refactoring tasks -- Multi-file project work -- Agents that need persistent file storage - - - Available in two variants: **State-based** (files in LangGraph state) and **Filesystem-based** (files on disk). - - -**API reference:** @[`StateClaudeTextEditorMiddleware`], @[`FilesystemClaudeTextEditorMiddleware`] - -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import StateClaudeTextEditorMiddleware -from langchain.agents import create_agent - -# State-based (files in LangGraph state) -agent = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - StateClaudeTextEditorMiddleware(), - ], -) -``` - - - -**@[`StateClaudeTextEditorMiddleware`] (state-based)** - - - Optional list of allowed path prefixes. If specified, only paths starting with these prefixes are allowed. - - -**@[`FilesystemClaudeTextEditorMiddleware`] (filesystem-based)** - - - Root directory for file operations - - - - Optional list of allowed virtual path prefixes (default: `["/"]`) - - - - Maximum file size in MB - - - - - - -Claude's text editor tool supports the following commands: -- `view` - View file contents or list directory -- `create` - Create a new file -- `str_replace` - Replace string in file -- `insert` - Insert text at line number -- `delete` - Delete a file -- `rename` - Rename/move a file - -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import ( - StateClaudeTextEditorMiddleware, - FilesystemClaudeTextEditorMiddleware, -) -from langchain.agents import create_agent - - -# State-based: Files persist in LangGraph state -agent_state = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - StateClaudeTextEditorMiddleware( - allowed_path_prefixes=["/project"], - ), - ], -) - -# Filesystem-based: Files persist on disk -agent_fs = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - FilesystemClaudeTextEditorMiddleware( - root_path="/workspace", - allowed_prefixes=["/src"], - max_file_size_mb=10, - ), - ], -) -``` - - - - -#### Memory - -Provide Claude's memory tool (`memory_20250818`) for persistent agent memory across conversation turns. 
The memory middleware is useful for the following: - -- Long-running agent conversations -- Maintaining context across interruptions -- Task progress tracking -- Persistent agent state management - - - Claude's memory tool uses a `/memories` directory and automatically injects a system prompt encouraging the agent to check and update memory. - - -**API reference:** @[`StateClaudeMemoryMiddleware`], @[`FilesystemClaudeMemoryMiddleware`] - -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import StateClaudeMemoryMiddleware -from langchain.agents import create_agent - -# State-based memory -agent = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - StateClaudeMemoryMiddleware(), - ], -) -``` - - - -**@[`StateClaudeMemoryMiddleware`] (state-based)** - - - Optional list of allowed path prefixes. Defaults to `["/memories"]`. - - - - System prompt to inject. Defaults to Anthropic's recommended memory prompt that encourages the agent to check and update memory. - - -**@[`FilesystemClaudeMemoryMiddleware`] (filesystem-based)** - - - Root directory for file operations - - - - Optional list of allowed virtual path prefixes. Defaults to `["/memories"]`. - - - - Maximum file size in MB - - - - System prompt to inject - - - - - - -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import ( - StateClaudeMemoryMiddleware, - FilesystemClaudeMemoryMiddleware, -) -from langchain.agents import create_agent - - -# State-based: Memory persists in LangGraph state -agent_state = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - StateClaudeMemoryMiddleware(), - ], -) - -# Filesystem-based: Memory persists on disk -agent_fs = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - FilesystemClaudeMemoryMiddleware( - root_path="/workspace", - ), - ], -) - -# The agent will automatically: -# 1. Check /memories directory at start -# 2. Record progress and thoughts during execution -# 3. Update memory files as work progresses -``` - - - - -#### File search - -Provide Glob and Grep search tools for files stored in LangGraph state. File search middleware is useful for the following: - -- Searching through state-based virtual file systems -- Works with text editor and memory tools -- Finding files by patterns -- Content search with regex - -**API reference:** @[`StateFileSearchMiddleware`] - -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import ( - StateClaudeTextEditorMiddleware, - StateFileSearchMiddleware, -) -from langchain.agents import create_agent - -agent = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - StateClaudeTextEditorMiddleware(), - StateFileSearchMiddleware(), # Search text editor files - ], -) -``` - - - - - State key containing files to search. Use `"text_editor_files"` for text editor files or `"memory_files"` for memory files. - - - - - - -The middleware adds Glob and Grep search tools that work with state-based files. 
- -```python -from langchain_anthropic import ChatAnthropic -from langchain_anthropic.middleware import ( - StateClaudeTextEditorMiddleware, - StateClaudeMemoryMiddleware, - StateFileSearchMiddleware, -) -from langchain.agents import create_agent - - -# Search text editor files -agent = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - StateClaudeTextEditorMiddleware(), - StateFileSearchMiddleware(state_key="text_editor_files"), - ], -) - -# Search memory files -agent_memory = create_agent( - model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), - tools=[], - middleware=[ - StateClaudeMemoryMiddleware(), - StateFileSearchMiddleware(state_key="memory_files"), - ], -) -``` - - - -::: - -### OpenAI - -Middleware specifically designed for OpenAI models. - -| Middleware | Description | -|------------|-------------| -| [Content moderation](#content-moderation) | Moderate agent traffic using OpenAI's moderation endpoint | - -#### Content moderation - -Moderate agent traffic (user input, model output, and tool results) using OpenAI's moderation endpoint to detect and handle unsafe content. Content moderation is useful for the following: - -- Applications requiring content safety and compliance -- Filtering harmful, hateful, or inappropriate content -- Customer-facing agents that need safety guardrails -- Meeting platform moderation requirements - - - Learn more about [OpenAI's moderation models](https://platform.openai.com/docs/guides/moderation) and categories. - - -:::python -**API reference:** @[`OpenAIModerationMiddleware`] - -```python -from langchain_openai import ChatOpenAI -from langchain_openai.middleware import OpenAIModerationMiddleware -from langchain.agents import create_agent - -agent = create_agent( - model=ChatOpenAI(model="gpt-4o"), - tools=[search_tool, database_tool], - middleware=[ - OpenAIModerationMiddleware( - model="omni-moderation-latest", - check_input=True, - check_output=True, - exit_behavior="end", - ), - ], -) -``` -::: - - - -:::python - - OpenAI moderation model to use. Options: `'omni-moderation-latest'`, `'omni-moderation-2024-09-26'`, `'text-moderation-latest'`, `'text-moderation-stable'` - - - - Whether to check user input messages before the model is called - - - - Whether to check model output messages after the model is called - - - - Whether to check tool result messages before the model is called - - - - How to handle violations when content is flagged. Options: - - - `'end'` - End agent execution immediately with a violation message - - `'error'` - Raise `OpenAIModerationError` exception - - `'replace'` - Replace the flagged content with the violation message and continue - - - - Custom template for violation messages. Supports template variables: - - - `{categories}` - Comma-separated list of flagged categories - - `{category_scores}` - JSON string of category scores - - `{original_content}` - The original flagged content - - Default: `"I'm sorry, but I can't comply with that request. It was flagged for {categories}."` - - - - Optional pre-configured OpenAI client to reuse. If not provided, a new client will be created. - - - - Optional pre-configured AsyncOpenAI client to reuse. If not provided, a new async client will be created. 
- -::: - - - - - -The middleware integrates OpenAI's moderation endpoint to check content at different stages: - -**Moderation stages:** -- `check_input` - User messages before model call -- `check_output` - AI messages after model call -- `check_tool_results` - Tool outputs before model call - -**Exit behaviors:** -- `'end'` (default) - Stop execution with violation message -- `'error'` - Raise exception for application handling -- `'replace'` - Replace flagged content and continue - -:::python -```python -from langchain_openai import ChatOpenAI -from langchain_openai.middleware import OpenAIModerationMiddleware -from langchain.agents import create_agent - - -# Basic moderation -agent = create_agent( - model=ChatOpenAI(model="gpt-4o"), - tools=[search_tool, customer_data_tool], - middleware=[ - OpenAIModerationMiddleware( - model="omni-moderation-latest", - check_input=True, - check_output=True, - ), - ], -) - -# Strict moderation with custom message -agent_strict = create_agent( - model=ChatOpenAI(model="gpt-4o"), - tools=[search_tool, customer_data_tool], - middleware=[ - OpenAIModerationMiddleware( - model="omni-moderation-latest", - check_input=True, - check_output=True, - check_tool_results=True, - exit_behavior="error", - violation_message=( - "Content policy violation detected: {categories}. " - "Please rephrase your request." - ), - ), - ], -) - -# Moderation with replacement behavior -agent_replace = create_agent( - model=ChatOpenAI(model="gpt-4o"), - tools=[search_tool], - middleware=[ - OpenAIModerationMiddleware( - check_input=True, - exit_behavior="replace", - violation_message="[Content removed due to safety policies]", - ), - ], -) -``` -::: - - +These middleware are optimized for specific LLM providers. See each provider's documentation for full details and examples. + + + + Prompt caching, bash tool, text editor, memory, and file search middleware for Claude models. + + + Content moderation middleware for OpenAI models. + + diff --git a/src/oss/python/integrations/providers/anthropic.mdx b/src/oss/python/integrations/providers/anthropic.mdx index 7b64d084b8..08c1e2b892 100644 --- a/src/oss/python/integrations/providers/anthropic.mdx +++ b/src/oss/python/integrations/providers/anthropic.mdx @@ -14,3 +14,532 @@ This page covers all LangChain integrations with [Anthropic](https://www.anthrop (Legacy) Anthropic text completion models. + +## Middleware + +Middleware specifically designed for Anthropic's Claude models. Learn more about [middleware](/oss/langchain/middleware/overview). + +:::python + +| Middleware | Description | +|------------|-------------| +| [Prompt caching](#prompt-caching) | Reduce costs by caching repetitive prompt prefixes | +| [Bash tool](#bash-tool) | Execute Claude's native bash tool with local command execution | +| [Text editor](#text-editor) | Provide Claude's text editor tool for file editing | +| [Memory](#memory) | Provide Claude's memory tool for persistent agent memory | +| [File search](#file-search) | Search tools for state-based file systems | + +::: + +:::js + +| Middleware | Description | +|------------|-------------| +| [Prompt caching](#prompt-caching) | Reduce costs by caching repetitive prompt prefixes | + +::: + +### Prompt caching + +Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. 
+This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls. Prompt caching is useful for the following:
+
+- Applications with long, static system prompts that don't change between requests
+- Agents with many tool definitions that remain constant across invocations
+- Conversations where early message history is reused across multiple turns
+- High-volume deployments where reducing API costs and latency is critical
+
+
+  Learn more about [Anthropic prompt caching](https://platform.claude.com/docs/en/build-with-claude/prompt-caching#cache-limitations) strategies and limitations.
+
+
+:::python
+**API reference:** @[`AnthropicPromptCachingMiddleware`]
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import AnthropicPromptCachingMiddleware
+from langchain.agents import create_agent
+
+agent = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    system_prompt="",
+    middleware=[AnthropicPromptCachingMiddleware(ttl="5m")],
+)
+```
+:::
+
+:::js
+```typescript
+import { createAgent, anthropicPromptCachingMiddleware } from "langchain";
+
+const agent = createAgent({
+  model: "claude-sonnet-4-5-20250929",
+  prompt: "",
+  middleware: [anthropicPromptCachingMiddleware({ ttl: "5m" })],
+});
+```
+:::
+
+
+
+:::python
+
+  Cache type. Only `'ephemeral'` is currently supported.
+
+
+
+  Time to live for cached content. Valid values: `'5m'` or `'1h'`
+
+
+
+  Minimum number of messages before caching starts
+
+
+
+  Behavior when using non-Anthropic models. Options: `'ignore'`, `'warn'`, or `'raise'`
+
+:::
+
+:::js
+
+  Time to live for cached content. Valid values: `'5m'` or `'1h'`
+
+:::
+
+
+
+
+
+The middleware caches content up to and including the latest message in each request. On subsequent requests within the TTL window (5 minutes or 1 hour), previously seen content is retrieved from cache rather than reprocessed, significantly reducing costs and latency.
+
+**How it works:**
+1. First request: System prompt, tools, and the user message "Hi, my name is Bob" are sent to the API and cached
+2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message "What's my name?" needs to be processed, plus the model's response from the first request
+3. This pattern continues for each turn, with each request reusing the cached conversation history
+
+:::python
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import AnthropicPromptCachingMiddleware
+from langchain.agents import create_agent
+from langchain.messages import HumanMessage
+
+
+LONG_PROMPT = """
+Please be a helpful assistant.
+
+
+"""
+
+agent = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    system_prompt=LONG_PROMPT,
+    middleware=[AnthropicPromptCachingMiddleware(ttl="5m")],
+)
+
+# First invocation: Creates cache with system prompt, tools, and "Hi, my name is Bob"
+agent.invoke({"messages": [HumanMessage("Hi, my name is Bob")]})
+
+# Second invocation: Reuses cached system prompt, tools, and previous messages
+# Only processes the new message "What's my name?" and the previous AI response
+agent.invoke({"messages": [HumanMessage("What's my name?")]})
+```
+:::
+
+:::js
+```typescript
+import { createAgent, HumanMessage, anthropicPromptCachingMiddleware } from "langchain";
+
+const LONG_PROMPT = `
+Please be a helpful assistant.
+
+
+`;
+
+const agent = createAgent({
+  model: "claude-sonnet-4-5-20250929",
+  prompt: LONG_PROMPT,
+  middleware: [anthropicPromptCachingMiddleware({ ttl: "5m" })],
+});
+
+// First invocation: Creates cache with system prompt, tools, and "Hi, my name is Bob"
+await agent.invoke({
+  messages: [new HumanMessage("Hi, my name is Bob")]
+});
+
+// Second invocation: Reuses cached system prompt, tools, and previous messages
+// Only processes the new message "What's my name?" and the previous AI response
+const result = await agent.invoke({
+  messages: [new HumanMessage("What's my name?")]
+});
+```
+:::
+
+
+
+:::python
+
+### Bash tool
+
+Execute Claude's native `bash_20250124` tool with local command execution. The bash tool middleware is useful for the following:
+
+- Using Claude's built-in bash tool with local execution
+- Leveraging Claude's optimized bash tool interface
+- Agents that need persistent shell sessions with Anthropic models
+
+
+  This middleware wraps `ShellToolMiddleware` and exposes it as Claude's native bash tool.
+
+
+**API reference:** @[`ClaudeBashToolMiddleware`]
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import ClaudeBashToolMiddleware
+from langchain.agents import create_agent
+
+agent = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        ClaudeBashToolMiddleware(
+            workspace_root="/workspace",
+        ),
+    ],
+)
+```
+
+
+
+`ClaudeBashToolMiddleware` accepts all parameters from @[`ShellToolMiddleware`], including:
+
+
+  Base directory for the shell session
+
+
+
+  Commands to run when the session starts
+
+
+
+  Execution policy (`HostExecutionPolicy`, `DockerExecutionPolicy`, or `CodexSandboxExecutionPolicy`)
+
+
+
+  Rules for sanitizing command output
+
+
+See [Shell tool](/oss/langchain/middleware/built-in#shell-tool) for full configuration details.
+
+
+
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import ClaudeBashToolMiddleware
+from langchain.agents import create_agent
+from langchain.agents.middleware import DockerExecutionPolicy
+
+
+agent = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        ClaudeBashToolMiddleware(
+            workspace_root="/workspace",
+            startup_commands=["pip install requests"],
+            execution_policy=DockerExecutionPolicy(
+                image="python:3.11-slim",
+            ),
+        ),
+    ],
+)
+
+# Claude can now use its native bash tool
+result = agent.invoke({
+    "messages": [{"role": "user", "content": "List files in the workspace"}]
+})
+```
+
+
+
+
+### Text editor
+
+Provide Claude's text editor tool (`text_editor_20250728`) for file creation and editing. The text editor middleware is useful for the following:
+
+- File-based agent workflows
+- Code editing and refactoring tasks
+- Multi-file project work
+- Agents that need persistent file storage
+
+
+  Available in two variants: **State-based** (files in LangGraph state) and **Filesystem-based** (files on disk).
+
+
+**API reference:** @[`StateClaudeTextEditorMiddleware`], @[`FilesystemClaudeTextEditorMiddleware`]
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import StateClaudeTextEditorMiddleware
+from langchain.agents import create_agent
+
+# State-based (files in LangGraph state)
+agent = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        StateClaudeTextEditorMiddleware(),
+    ],
+)
+```
+
+
+
+**@[`StateClaudeTextEditorMiddleware`] (state-based)**
+
+
+  Optional list of allowed path prefixes. If specified, only paths starting with these prefixes are allowed.
+
+
+**@[`FilesystemClaudeTextEditorMiddleware`] (filesystem-based)**
+
+
+  Root directory for file operations
+
+
+
+  Optional list of allowed virtual path prefixes (default: `["/"]`)
+
+
+
+  Maximum file size in MB
+
+
+
+
+
+Claude's text editor tool supports the following commands:
+- `view` - View file contents or list directory
+- `create` - Create a new file
+- `str_replace` - Replace string in file
+- `insert` - Insert text at line number
+- `delete` - Delete a file
+- `rename` - Rename/move a file
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import (
+    StateClaudeTextEditorMiddleware,
+    FilesystemClaudeTextEditorMiddleware,
+)
+from langchain.agents import create_agent
+
+
+# State-based: Files persist in LangGraph state
+agent_state = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        StateClaudeTextEditorMiddleware(
+            allowed_path_prefixes=["/project"],
+        ),
+    ],
+)
+
+# Filesystem-based: Files persist on disk
+agent_fs = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        FilesystemClaudeTextEditorMiddleware(
+            root_path="/workspace",
+            allowed_prefixes=["/src"],
+            max_file_size_mb=10,
+        ),
+    ],
+)
+```
+
+
+
+
+### Memory
+
+Provide Claude's memory tool (`memory_20250818`) for persistent agent memory across conversation turns. The memory middleware is useful for the following:
+
+- Long-running agent conversations
+- Maintaining context across interruptions
+- Task progress tracking
+- Persistent agent state management
+
+
+  Claude's memory tool uses a `/memories` directory and automatically injects a system prompt encouraging the agent to check and update memory.
+
+
+**API reference:** @[`StateClaudeMemoryMiddleware`], @[`FilesystemClaudeMemoryMiddleware`]
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import StateClaudeMemoryMiddleware
+from langchain.agents import create_agent
+
+# State-based memory
+agent = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        StateClaudeMemoryMiddleware(),
+    ],
+)
+```
+
+
+
+**@[`StateClaudeMemoryMiddleware`] (state-based)**
+
+
+  Optional list of allowed path prefixes. Defaults to `["/memories"]`.
+
+
+
+  System prompt to inject. Defaults to Anthropic's recommended memory prompt that encourages the agent to check and update memory.
+
+
+**@[`FilesystemClaudeMemoryMiddleware`] (filesystem-based)**
+
+
+  Root directory for file operations
+
+
+
+  Optional list of allowed virtual path prefixes. Defaults to `["/memories"]`.
+
+
+
+  Maximum file size in MB
+
+
+
+  System prompt to inject
+
+
+
+
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import (
+    StateClaudeMemoryMiddleware,
+    FilesystemClaudeMemoryMiddleware,
+)
+from langchain.agents import create_agent
+
+
+# State-based: Memory persists in LangGraph state
+agent_state = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        StateClaudeMemoryMiddleware(),
+    ],
+)
+
+# Filesystem-based: Memory persists on disk
+agent_fs = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        FilesystemClaudeMemoryMiddleware(
+            root_path="/workspace",
+        ),
+    ],
+)
+
+# The agent will automatically:
+# 1. Check /memories directory at start
+# 2. Record progress and thoughts during execution
+# 3. Update memory files as work progresses
+```
+
+
+
+
+### File search
+
+Provide Glob and Grep search tools for files stored in LangGraph state. File search middleware is useful for the following:
+
+- Searching through state-based virtual file systems
+- Working alongside the text editor and memory tools
+- Finding files by patterns
+- Searching file contents with regex
+
+**API reference:** @[`StateFileSearchMiddleware`]
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import (
+    StateClaudeTextEditorMiddleware,
+    StateFileSearchMiddleware,
+)
+from langchain.agents import create_agent
+
+agent = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        StateClaudeTextEditorMiddleware(),
+        StateFileSearchMiddleware(),  # Search text editor files
+    ],
+)
+```
+
+
+
+
+  State key containing files to search. Use `"text_editor_files"` for text editor files or `"memory_files"` for memory files.
+
+
+
+
+
+The middleware adds Glob and Grep search tools that work with state-based files.
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_anthropic.middleware import (
+    StateClaudeTextEditorMiddleware,
+    StateClaudeMemoryMiddleware,
+    StateFileSearchMiddleware,
+)
+from langchain.agents import create_agent
+
+
+# Search text editor files
+agent = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        StateClaudeTextEditorMiddleware(),
+        StateFileSearchMiddleware(state_key="text_editor_files"),
+    ],
+)
+
+# Search memory files
+agent_memory = create_agent(
+    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
+    tools=[],
+    middleware=[
+        StateClaudeMemoryMiddleware(),
+        StateFileSearchMiddleware(state_key="memory_files"),
+    ],
+)
+```
+
+
+
+:::
diff --git a/src/oss/python/integrations/providers/openai.mdx b/src/oss/python/integrations/providers/openai.mdx
index d03201077e..ca56918a29 100644
--- a/src/oss/python/integrations/providers/openai.mdx
+++ b/src/oss/python/integrations/providers/openai.mdx
@@ -64,6 +64,172 @@ Get an [OpenAI Platform](https://platform.openai.com/docs/overview) API key and
+
+
+
+## Middleware
+
+Middleware specifically designed for OpenAI models. Learn more about [middleware](/oss/langchain/middleware/overview).
+
+:::python
+
+| Middleware | Description |
+|------------|-------------|
+| [Content moderation](#content-moderation) | Moderate agent traffic using OpenAI's moderation endpoint |
+
+:::
+
+### Content moderation
+
+Moderate agent traffic (user input, model output, and tool results) using OpenAI's moderation endpoint to detect and handle unsafe content. Content moderation is useful for the following:
+
+- Applications requiring content safety and compliance
+- Filtering harmful, hateful, or inappropriate content
+- Customer-facing agents that need safety guardrails
+- Meeting platform moderation requirements
+
+
+  Learn more about [OpenAI's moderation models](https://platform.openai.com/docs/guides/moderation) and categories.
+
+
+:::python
+**API reference:** @[`OpenAIModerationMiddleware`]
+
+```python
+from langchain_openai import ChatOpenAI
+from langchain_openai.middleware import OpenAIModerationMiddleware
+from langchain.agents import create_agent
+
+agent = create_agent(
+    model=ChatOpenAI(model="gpt-4o"),
+    tools=[search_tool, database_tool],
+    middleware=[
+        OpenAIModerationMiddleware(
+            model="omni-moderation-latest",
+            check_input=True,
+            check_output=True,
+            exit_behavior="end",
+        ),
+    ],
+)
+```
+:::
+
+
+
+:::python
+
+  OpenAI moderation model to use. Options: `'omni-moderation-latest'`, `'omni-moderation-2024-09-26'`, `'text-moderation-latest'`, `'text-moderation-stable'`
+
+
+
+  Whether to check user input messages before the model is called
+
+
+
+  Whether to check model output messages after the model is called
+
+
+
+  Whether to check tool result messages before the model is called
+
+
+
+  How to handle violations when content is flagged. Options:
+
+  - `'end'` - End agent execution immediately with a violation message
+  - `'error'` - Raise `OpenAIModerationError` exception
+  - `'replace'` - Replace the flagged content with the violation message and continue
+
+
+
+  Custom template for violation messages. Supports template variables:
+
+  - `{categories}` - Comma-separated list of flagged categories
+  - `{category_scores}` - JSON string of category scores
+  - `{original_content}` - The original flagged content
+
+  Default: `"I'm sorry, but I can't comply with that request. It was flagged for {categories}."`
+
+
+
+  Optional pre-configured OpenAI client to reuse. If not provided, a new client will be created.
+
+
+
+  Optional pre-configured AsyncOpenAI client to reuse. If not provided, a new async client will be created.
+
+:::
+
+
+
+
+
+The middleware integrates OpenAI's moderation endpoint to check content at different stages:
+
+**Moderation stages:**
+- `check_input` - User messages before model call
+- `check_output` - AI messages after model call
+- `check_tool_results` - Tool outputs before model call
+
+**Exit behaviors:**
+- `'end'` (default) - Stop execution with violation message
+- `'error'` - Raise exception for application handling
+- `'replace'` - Replace flagged content and continue
+
+:::python
+```python
+from langchain_openai import ChatOpenAI
+from langchain_openai.middleware import OpenAIModerationMiddleware
+from langchain.agents import create_agent
+
+
+# Basic moderation
+agent = create_agent(
+    model=ChatOpenAI(model="gpt-4o"),
+    tools=[search_tool, customer_data_tool],
+    middleware=[
+        OpenAIModerationMiddleware(
+            model="omni-moderation-latest",
+            check_input=True,
+            check_output=True,
+        ),
+    ],
+)
+
+# Strict moderation with custom message
+agent_strict = create_agent(
+    model=ChatOpenAI(model="gpt-4o"),
+    tools=[search_tool, customer_data_tool],
+    middleware=[
+        OpenAIModerationMiddleware(
+            model="omni-moderation-latest",
+            check_input=True,
+            check_output=True,
+            check_tool_results=True,
+            exit_behavior="error",
+            violation_message=(
+                "Content policy violation detected: {categories}. "
+                "Please rephrase your request."
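+                # {categories} is filled in with the flagged category names at
+                # runtime (e.g. "harassment, hate"); with exit_behavior="error",
+                # the agent raises OpenAIModerationError for the application to handle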
+ ), + ), + ], +) + +# Moderation with replacement behavior +agent_replace = create_agent( + model=ChatOpenAI(model="gpt-4o"), + tools=[search_tool], + middleware=[ + OpenAIModerationMiddleware( + check_input=True, + exit_behavior="replace", + violation_message="[Content removed due to safety policies]", + ), + ], +) +``` +::: + + + ## Other