Merged
5 changes: 5 additions & 0 deletions .changeset/afraid-worms-yell.md
@@ -0,0 +1,5 @@
---
'@ai-sdk/amazon-bedrock': patch
---

fix(provider/amazon-bedrock): resolve opus 4.1 reasoning mode validation error
5 changes: 5 additions & 0 deletions .changeset/metal-shrimps-fix.md
@@ -0,0 +1,5 @@
---
'@ai-sdk/openai': patch
---

feat(provider/openai): add code interpreter tool (responses api)
11 changes: 0 additions & 11 deletions content/docs/02-foundations/02-providers-and-models.mdx
@@ -112,17 +112,6 @@ Here are the capabilities of popular models:
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-5-mini` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-5-nano` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-5-chat-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4.1` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4.1-mini` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4.1-nano` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4o` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4o-mini` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4.1` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `o3-mini` | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `o3` | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `o4-mini` | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `o1` | <Check size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-opus-4-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-sonnet-4-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-3-7-sonnet-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
84 changes: 84 additions & 0 deletions content/docs/07-reference/01-ai-sdk-core/70-step-count-is.mdx
@@ -0,0 +1,84 @@
---
title: stepCountIs
description: API Reference for stepCountIs.
---

# `stepCountIs()`

Creates a stop condition that triggers once the number of executed steps reaches a specified count.

This function is used with `stopWhen` in `generateText` and `streamText` to control when a tool-calling loop should stop based on the number of steps executed.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText, stepCountIs } from 'ai';

const result = await generateText({
model: openai('gpt-4o'),
tools: {
// your tools
},
// Stop after 5 steps
stopWhen: stepCountIs(5),
});
```

## Import

<Snippet text={`import { stepCountIs } from "ai"`} prompt={false} />

## API Signature

### Parameters

<PropertiesTable
content={[
{
name: 'count',
type: 'number',
description:
'The maximum number of steps to execute before stopping the tool-calling loop.',
},
]}
/>

### Returns

A `StopCondition` function that returns `true` when the step count reaches the specified number. The function can be used with the `stopWhen` parameter in `generateText` and `streamText`.
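Conceptually, the returned condition is just a predicate over the steps executed so far. A minimal sketch of the assumed shape (the actual AI SDK types and implementation are richer; this is for illustration only):

```ts
// Assumed minimal shapes for illustration; the real AI SDK types carry
// much more information per step.
type StepLike = object;
type StopCondition = (ctx: { steps: StepLike[] }) => boolean;

const stepCountIs =
  (count: number): StopCondition =>
  ({ steps }) =>
    steps.length >= count;

const condition = stepCountIs(3);
console.log(condition({ steps: [{}, {}] })); // false: only 2 steps executed
console.log(condition({ steps: [{}, {}, {}] })); // true: reached 3 steps
```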

## Examples

### Basic Usage

Stop after 3 steps:

```ts
import { generateText, stepCountIs } from 'ai';

const result = await generateText({
model: yourModel,
tools: yourTools,
stopWhen: stepCountIs(3),
});
```

### Combining with Other Conditions

You can combine multiple stop conditions in an array:

```ts
import { generateText, stepCountIs, hasToolCall } from 'ai';

const result = await generateText({
model: yourModel,
tools: yourTools,
// Stop after 10 steps OR when finalAnswer tool is called
stopWhen: [stepCountIs(10), hasToolCall('finalAnswer')],
});
```

## See also

- [`hasToolCall()`](/docs/reference/ai-sdk-core/has-tool-call)
- [`generateText()`](/docs/reference/ai-sdk-core/generate-text)
- [`streamText()`](/docs/reference/ai-sdk-core/stream-text)
120 changes: 120 additions & 0 deletions content/docs/07-reference/01-ai-sdk-core/71-has-tool-call.mdx
@@ -0,0 +1,120 @@
---
title: hasToolCall
description: API Reference for hasToolCall.
---

# `hasToolCall()`

Creates a stop condition that triggers when a specific tool is called.

This function is used with `stopWhen` in `generateText` and `streamText` to control when a tool-calling loop should stop based on whether a particular tool has been invoked.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText, hasToolCall } from 'ai';

const result = await generateText({
model: openai('gpt-4o'),
tools: {
weather: weatherTool,
finalAnswer: finalAnswerTool,
},
// Stop when the finalAnswer tool is called
stopWhen: hasToolCall('finalAnswer'),
});
```

## Import

<Snippet text={`import { hasToolCall } from "ai"`} prompt={false} />

## API Signature

### Parameters

<PropertiesTable
content={[
{
name: 'toolName',
type: 'string',
description:
'The name of the tool that should trigger the stop condition when called.',
},
]}
/>

### Returns

A `StopCondition` function that returns `true` when the specified tool is called in the current step. The function can be used with the `stopWhen` parameter in `generateText` and `streamText`.
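As with `stepCountIs`, the returned value is a predicate over the executed steps. A minimal sketch of the assumed behavior, checking the most recent step's tool calls (the actual AI SDK implementation may differ):

```ts
// Assumed minimal shapes for illustration; the real AI SDK types are richer.
type ToolCall = { toolName: string };
type StepLike = { toolCalls: ToolCall[] };
type StopCondition = (ctx: { steps: StepLike[] }) => boolean;

const hasToolCall =
  (toolName: string): StopCondition =>
  ({ steps }) => {
    const lastStep = steps[steps.length - 1];
    return (
      lastStep?.toolCalls.some(call => call.toolName === toolName) ?? false
    );
  };

const condition = hasToolCall('finalAnswer');
console.log(condition({ steps: [{ toolCalls: [{ toolName: 'search' }] }] })); // false
console.log(condition({ steps: [{ toolCalls: [{ toolName: 'finalAnswer' }] }] })); // true
```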

## Examples

### Basic Usage

Stop when a specific tool is called:

```ts
import { generateText, hasToolCall } from 'ai';

const result = await generateText({
model: yourModel,
tools: {
submitAnswer: submitAnswerTool,
search: searchTool,
},
stopWhen: hasToolCall('submitAnswer'),
});
```

### Combining with Other Conditions

You can combine multiple stop conditions in an array:

```ts
import { generateText, hasToolCall, stepCountIs } from 'ai';

const result = await generateText({
model: yourModel,
tools: {
weather: weatherTool,
search: searchTool,
finalAnswer: finalAnswerTool,
},
// Stop when weather tool is called OR finalAnswer is called OR after 5 steps
stopWhen: [
hasToolCall('weather'),
hasToolCall('finalAnswer'),
stepCountIs(5),
],
});
```

### Agent Pattern

Common pattern for agents that run until they provide a final answer:

```ts
import { generateText, hasToolCall } from 'ai';
import { z } from 'zod';

const result = await generateText({
model: yourModel,
tools: {
search: searchTool,
calculate: calculateTool,
finalAnswer: {
description: 'Provide the final answer to the user',
parameters: z.object({
answer: z.string(),
}),
execute: async ({ answer }) => answer,
},
},
stopWhen: hasToolCall('finalAnswer'),
});
```

## See also

- [`stepCountIs()`](/docs/reference/ai-sdk-core/step-count-is)
- [`generateText()`](/docs/reference/ai-sdk-core/generate-text)
- [`streamText()`](/docs/reference/ai-sdk-core/stream-text)
36 changes: 34 additions & 2 deletions content/providers/01-ai-sdk-providers/03-openai.mdx
@@ -729,7 +729,7 @@ The following OpenAI-specific metadata is returned:

#### Web Search

The OpenAI responses provider supports web search through the `openai.tools.webSearchPreview` tool.
The OpenAI responses API supports web search through the `openai.tools.webSearchPreview` tool.

You can force the use of the web search tool by setting the `toolChoice` parameter to `{ type: 'tool', toolName: 'web_search_preview' }`.

@@ -830,7 +830,7 @@ The `textVerbosity` parameter scales output length without changing the underlyi

#### File Search

The OpenAI responses provider supports file search through the `openai.tools.fileSearch` tool.
The OpenAI responses API supports file search through the `openai.tools.fileSearch` tool.

You can force the use of the file search tool by setting the `toolChoice` parameter to `{ type: 'tool', toolName: 'file_search' }`.

@@ -866,6 +866,38 @@ const result = await generateText({
be customized.
</Note>

#### Code Interpreter

The OpenAI responses API supports code execution through the `openai.tools.codeInterpreter` tool. This allows models to write and execute Python code in a sandboxed environment.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
model: openai.responses('gpt-5'),
prompt: 'Write and run Python code to calculate the factorial of 10',
tools: {
code_interpreter: openai.tools.codeInterpreter({
// optional configuration:
container: {
fileIds: ['file-123', 'file-456'], // optional file IDs to make available
},
}),
},
});
```

The code interpreter tool can be configured with:

- **container**: Either a container ID string or an object with `fileIds` to specify uploaded files that should be available to the code interpreter
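When a container has already been created, its ID can be passed directly instead of an object. A hedged sketch of this form (`cntr_abc123` is a hypothetical placeholder, not a real container ID):

```ts
tools: {
  code_interpreter: openai.tools.codeInterpreter({
    // reuse an existing container by ID ('cntr_abc123' is a placeholder)
    container: 'cntr_abc123',
  }),
},
```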

<Note>
The tool must be named `code_interpreter` when using OpenAI's code interpreter
functionality. This name is required by OpenAI's API specification and cannot
be customized.
</Note>

#### Image Support

The OpenAI Responses API supports image inputs for appropriate models.
8 changes: 0 additions & 8 deletions content/providers/01-ai-sdk-providers/index.mdx
@@ -33,14 +33,6 @@ Not all providers support all AI SDK features. Here's a quick comparison of the
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-5-mini` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-5-nano` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-5-chat-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4.1` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4.1-mini` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4.1-nano` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4o` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4o-mini` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4.1` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [OpenAI](/providers/ai-sdk-providers/openai) | `o1` | <Check size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-3.7-sonnet-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-3.5-sonnet-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-3.5-haiku-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
22 changes: 22 additions & 0 deletions examples/ai-core/src/stream-text/openai-code-interpreter.ts
@@ -0,0 +1,22 @@
import { openai } from '@ai-sdk/openai';
import { stepCountIs, streamText } from 'ai';
import 'dotenv/config';

async function main() {
const result = streamText({
model: openai.responses('gpt-5'),
stopWhen: stepCountIs(5),
tools: {
code_interpreter: openai.tools.codeInterpreter({}),
},
prompt:
'Write and run Python code to simulate rolling two dice 10000 times and show a table of the results. ' +
'The table should have three columns: "Sum", "Count", and "Percentage".',
});

for await (const chunk of result.textStream) {
process.stdout.write(chunk);
}
}

main().catch(console.error);
@@ -1434,9 +1434,8 @@ describe('doStream', () => {
budget_tokens: 2000,
},
},
// Should have adjusted maxOutputTokens (100 + 2000)
inferenceConfig: {
maxOutputTokens: 2100,
maxTokens: 2100,
},
});

@@ -1667,7 +1666,7 @@ describe('doGenerate', () => {

expect(await server.calls[0].requestBodyJson).toMatchObject({
inferenceConfig: {
maxOutputTokens: 100,
maxTokens: 100,
temperature: 0.5,
topP: 0.5,
topK: 1,
@@ -2044,9 +2043,8 @@ describe('doGenerate', () => {
budget_tokens: 2000,
},
},
// Should have adjusted maxOutputTokens (100 + 2000)
inferenceConfig: {
maxOutputTokens: 2100,
maxTokens: 2100,
},
});

10 changes: 5 additions & 5 deletions packages/amazon-bedrock/src/bedrock-chat-language-model.ts
@@ -159,19 +159,19 @@ export class BedrockChatLanguageModel implements LanguageModelV2 {
const thinkingBudget = bedrockOptions.reasoningConfig?.budgetTokens;

const inferenceConfig = {
...(maxOutputTokens != null && { maxOutputTokens }),
...(maxOutputTokens != null && { maxTokens: maxOutputTokens }),
...(temperature != null && { temperature }),
...(topP != null && { topP }),
...(topK != null && { topK }),
...(stopSequences != null && { stopSequences }),
};

// Adjust maxOutputTokens if thinking is enabled
// Adjust maxTokens if thinking is enabled
if (isThinking && thinkingBudget != null) {
if (inferenceConfig.maxOutputTokens != null) {
inferenceConfig.maxOutputTokens += thinkingBudget;
if (inferenceConfig.maxTokens != null) {
inferenceConfig.maxTokens += thinkingBudget;
} else {
inferenceConfig.maxOutputTokens = thinkingBudget + 4096; // Default + thinking budget maxOutputTokens = 4096, TODO update default in v5
inferenceConfig.maxTokens = thinkingBudget + 4096; // Default + thinking budget maxTokens = 4096, TODO update default in v5
}
// Add them to additional model request fields
// Add thinking config to additionalModelRequestFields
2 changes: 1 addition & 1 deletion packages/openai/src/openai-provider.ts
@@ -33,7 +33,7 @@ export interface OpenAIProvider extends ProviderV2 {
/**
Creates an OpenAI model for text generation.
*/
languageModel(modelId: OpenAIResponsesModelId): OpenAIResponsesLanguageModel;
languageModel(modelId: OpenAIResponsesModelId): LanguageModelV2;

/**
Creates an OpenAI chat model for text generation.