
Commit b5d909c

🤖 Add truncation: auto to OpenAI Responses API
Enables automatic conversation truncation for the OpenAI Responses API to prevent context-overflow errors. When set to 'auto', the API automatically drops input items from the middle of the conversation so the request fits within the model's context window. This avoids failures when the conversation history exceeds the available context size and allows long conversations to continue seamlessly. _Generated with `cmux`_
1 parent 56b251c commit b5d909c
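
To make the behavior concrete, here is a minimal sketch of the underlying Responses API parameter, written against the official `openai` Node SDK; the model name and placeholder input are illustrative assumptions, not part of this commit:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  const response = await client.responses.create({
    model: "gpt-4o-mini", // hypothetical model choice for illustration
    input: "...a very long conversation history...", // placeholder input
    // With "auto", the API drops items from the middle of the conversation
    // instead of returning a context-overflow error when the input exceeds
    // the model's context window (the default, "disabled", errors instead).
    truncation: "auto",
  });
  console.log(response.output_text);
}

main();
```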

File tree

1 file changed (+1, -0 lines)

src/utils/ai/providerOptions.ts

Lines changed: 1 addition & 0 deletions
@@ -111,6 +111,7 @@ export function buildProviderOptions(
     parallelToolCalls: true, // Always enable concurrent tool execution
     // TODO: allow this to be configured
     serviceTier: "priority", // Always use priority tier for best performance
+    truncation: "auto", // Automatically truncate conversation to fit context window
     // Conditionally add reasoning configuration
     ...(reasoningEffort && {
       reasoningEffort,
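
For context, a rough sketch of the options object the patched builder plausibly produces, based only on the keys visible in the diff; the interface name, types, and surrounding function signature are assumptions rather than the repository's actual code:

```ts
// Sketch only; everything outside the keys shown in the diff is assumed.
type ReasoningEffort = "low" | "medium" | "high";

interface OpenAIResponsesProviderOptions {
  parallelToolCalls: boolean;
  serviceTier: string;
  truncation: "auto" | "disabled";
  reasoningEffort?: ReasoningEffort;
}

function buildOpenAIResponsesOptions(
  reasoningEffort?: ReasoningEffort
): OpenAIResponsesProviderOptions {
  return {
    parallelToolCalls: true, // always enable concurrent tool execution
    serviceTier: "priority", // always use the priority tier
    truncation: "auto", // new in this commit: auto-truncate to fit the context window
    // Conditionally add reasoning configuration
    ...(reasoningEffort && { reasoningEffort }),
  };
}
```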
