3 changes: 3 additions & 0 deletions pipeline/preprocessors/link_map.py
@@ -302,6 +302,9 @@ class LinkMap(TypedDict):
    # @langchain/core references
    "AIMessage": "classes/_langchain_core.messages.AIMessage.html",
    "AIMessageChunk": "classes/_langchain_core.messages.AIMessageChunk.html",
    "SystemMessage": "classes/_langchain_core.messages.SystemMessage.html",
    "SystemMessage.concat": "classes/_langchain_core.messages.SystemMessage.html#concat",
    "ModelRequest": "classes/_langchain_core.messages.ModelRequest.html",
    "BaseChatModel.invoke": "classes/_langchain_core.language_models_chat_models.BaseChatModel.html#invoke",
    "BaseChatModel.stream": "classes/_langchain_core.language_models_chat_models.BaseChatModel.html#stream",
    "BaseChatModel.streamEvents": "classes/_langchain_core.language_models_chat_models.BaseChatModel.html#streamEvents",
82 changes: 75 additions & 7 deletions src/oss/javascript/releases/langchain-v1-1.mdx
@@ -3,7 +3,7 @@ title: What's new in v1.1
sidebarTitle: v1.1 Release notes
---

**LangChain v1.1 focuses on improving agent reliability and flexibility.** This release introduces model profiles for better model capability awareness, new middleware capabilities for retrying model calls and content moderation, improved system message handling, and enhanced compatibility with Zod v4.

To upgrade,

@@ -24,19 +24,87 @@ bun add @langchain/core @langchain/langgraph

## Model profiles

Model profiles provide a standardized way to understand model capabilities and constraints. Every chat model now exposes a `.profile` getter that returns information about context window size, structured output support, and other model-specific characteristics.

```typescript
import { initChatModel } from "langchain";

const model = await initChatModel("gpt-4o");
const profile = model.profile;

console.log(profile.maxTokens); // Maximum context window size
console.log(profile.supportsStructuredOutput); // Native structured output support
```

Profiles are automatically generated from [models.dev](https://models.dev) and enable middleware like summarization to use accurate token limits, while `createAgent` can automatically detect native structured output support.
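To illustrate the structured output point, here is a minimal sketch (the response schema is invented for this example, and it assumes the agent's `responseFormat` option accepts a Zod schema):

```typescript
import * as z from "zod";
import { createAgent } from "langchain";

// Sketch: with model profiles available, createAgent can consult the model's
// profile and, when the provider supports it, use native structured output
// for the requested response format.
const agent = createAgent({
  model: "gpt-4o",
  tools: [],
  responseFormat: z.object({
    answer: z.string(),
    confidence: z.number(),
  }),
});
```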

## System message improvements

You can now pass a `SystemMessage` instance directly to the `systemPrompt` parameter when creating agents, and use the new `concat` method to extend system messages. This enables advanced features like cache control (e.g., Anthropic's ephemeral cache) and structured content blocks.

```typescript
import { createAgent, SystemMessage } from "langchain";

// SystemMessage instance with cache control
const agent = createAgent({
  model: "anthropic:claude-3-5-sonnet",
  tools: [myTool],
  systemPrompt: new SystemMessage({
    content: [
      {
        type: "text",
        text: "You are a helpful assistant.",
      },
      {
        type: "text",
        text: "Today's date is 2024-06-01.",
        cache_control: { type: "ephemeral", ttl: "5m" },
      },
    ],
  }),
});
```

When using middleware with `wrapModelCall`, you can modify system prompts using either `systemPrompt` (string) or `systemMessage` (SystemMessage object). See the [custom middleware documentation](/oss/langchain/middleware/custom#working-with-system-messages) for detailed examples and best practices.

## Model retry middleware

A new `modelRetryMiddleware` automatically retries failed model calls with configurable exponential backoff, improving agent reliability by handling transient model failures gracefully.

```typescript
import { createAgent, modelRetryMiddleware } from "langchain";

const agent = createAgent({
  model: "gpt-4o",
  tools: [searchTool, databaseTool],
  middleware: [
    modelRetryMiddleware({
      maxRetries: 3,
      backoffFactor: 2.0,
      initialDelayMs: 1000,
    }),
  ],
});
```

See the [built-in middleware documentation](/oss/langchain/middleware/built-in) for configuration options and detailed examples.

## OpenAI content moderation middleware

A new middleware integrates OpenAI's moderation endpoint to detect and handle unsafe content in agent interactions. This middleware is useful for applications requiring content safety and compliance.

The middleware can check content at multiple stages:
- **Input checking**: User messages before model calls
- **Output checking**: AI messages after model calls
- **Tool results**: Tool outputs before model calls

You can configure how violations are handled with options like ending execution, raising errors, or replacing flagged content. See the [middleware documentation](/oss/langchain/middleware/built-in#content-moderation) for detailed usage examples.
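As a rough sketch of how this could be wired into an agent, under the assumption that the middleware is exported as `openAIModerationMiddleware` and takes options along these lines (the option names are illustrative, not the confirmed API; see the linked documentation for the exact signature):

```typescript
import { createAgent, openAIModerationMiddleware } from "langchain"; // export name assumed

const agent = createAgent({
  model: "gpt-4o",
  tools: [searchTool],
  middleware: [
    // Illustrative option names for the behaviors described above:
    // which stages to check and how to handle flagged content.
    openAIModerationMiddleware({
      checkInput: true, // moderate user messages before the model call
      checkOutput: true, // moderate AI messages after the model call
      checkToolResults: true, // moderate tool outputs before the model sees them
      violationBehavior: "replace", // or "end" / "error"
    }),
  ],
});
```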

## Compatibility improvements

### Zod v4 support

LangChain.js now supports Zod v4, ensuring seamless integration with the latest version of the schema validation library. This update maintains backward compatibility while enabling you to use the latest Zod features for structured output and tool schemas.
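For example, a tool schema declared with Zod v4 (a brief sketch; the weather tool is invented for illustration):

```typescript
import * as z from "zod"; // Zod v4
import { tool, createAgent } from "langchain";

// A tool whose input schema is defined with Zod v4.
const getWeather = tool(
  async ({ city }) => `It's always sunny in ${city}.`,
  {
    name: "get_weather",
    description: "Look up the current weather for a city.",
    schema: z.object({
      city: z.string().describe("The city to look up"),
    }),
  }
);

const agent = createAgent({
  model: "gpt-4o",
  tools: [getWeather],
});
```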

## Reporting issues

73 changes: 73 additions & 0 deletions src/oss/langchain/middleware/custom.mdx
@@ -908,6 +908,79 @@ const agent = createAgent({

:::

### Working with system messages

You can modify system prompts in middleware using either `systemPrompt` (`string`) or `systemMessage` (@[`SystemMessage`]). The @[`ModelRequest`] provides both for maximum flexibility.

**Key behavior:**
- Middleware receives both `systemPrompt` (string) and `systemMessage` (@[`SystemMessage`]) in the request
- You can modify either `systemPrompt` or `systemMessage`, but **cannot set both in the same middleware call**; this restriction prevents conflicts
- Using `systemPrompt` creates a new simple system message (may overwrite cache control metadata)
- Using `systemMessage` (JavaScript) or manually combining content blocks (Python) preserves existing cache control and structured content blocks
- Multiple middleware can chain modifications sequentially across different middleware calls

:::python

```python
@wrap_model_call
def add_context_preserve_cache(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:
    new_system_message = SystemMessage(content="Additional context.")
    return handler(request.override(system_message=new_system_message))
```
:::

:::js
**Example: Chaining middleware** - Different middleware can use different approaches:

```typescript
import { createMiddleware, SystemMessage, createAgent } from "langchain";

// Middleware 1: Uses systemPrompt (string)
const myMiddleware = createMiddleware({
  name: "MyMiddleware",
  wrapModelCall: async (request, handler) => {
    return handler({
      ...request,
      systemPrompt: `${request.systemPrompt}\nAdditional context.`,
    });
  },
});

// Middleware 2: Uses systemMessage (preserves structure)
const myOtherMiddleware = createMiddleware({
  name: "MyOtherMiddleware",
  wrapModelCall: async (request, handler) => {
    return handler({
      ...request,
      systemMessage: request.systemMessage.concat(
        new SystemMessage({
          content: [
            {
              type: "text",
              text: " More additional context. This will be cached.",
              cache_control: { type: "ephemeral", ttl: "5m" },
            },
          ],
        })
      ),
    });
  },
});

const agent = createAgent({
  model: "anthropic:claude-3-5-sonnet",
  systemPrompt: "You are a helpful assistant.",
  middleware: [myMiddleware, myOtherMiddleware],
});
```

Use @[`SystemMessage.concat`] to preserve cache control metadata or structured content blocks created by other middleware.

:::

## Additional resources

- [Middleware API reference](https://reference.langchain.com/python/langchain/middleware/)