Codex server_is_overloaded stream errors are not retried #25730

@ItsWendell

Description

OpenAI Codex subscription streams can return transient overload errors in this shape:

{"type":"error","sequence_number":2,"error":{"type":"service_unavailable_error","code":"server_is_overloaded","message":"Our servers are currently overloaded. Please try again later.","param":null}}

OpenCode should treat this as a retryable provider overload. Today, retry classification can miss this stream error shape because the overload code and type are nested under the error object rather than at the top level, so the request fails instead of being retried.

Related broader reports: #16214 and #21893.
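For illustration, a minimal classifier sketch that would handle the nested shape above (hypothetical names, not OpenCode's actual retry code; the event type and the set of retryable codes are assumptions based on the example payload):

```typescript
// Shape of the streamed error event shown in the description.
interface StreamErrorEvent {
  type: string;
  sequence_number?: number;
  error?: {
    type?: string;
    code?: string;
    message?: string;
    param?: string | null;
  };
}

// Assumed retryable markers, taken from the example payload above.
const RETRYABLE_CODES = new Set(["server_is_overloaded"]);
const RETRYABLE_TYPES = new Set(["service_unavailable_error"]);

// Classify by looking at the *nested* error object, so the stream shape
// above is recognized as a retryable overload rather than a fatal error.
function isRetryableStreamError(event: StreamErrorEvent): boolean {
  if (event.type !== "error" || !event.error) return false;
  const { code, type } = event.error;
  return (
    (code !== undefined && RETRYABLE_CODES.has(code)) ||
    (type !== undefined && RETRYABLE_TYPES.has(type))
  );
}
```

The key point is descending into `event.error` before checking the code, instead of only inspecting top-level fields.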

Plugins

None

OpenCode version

Latest dev

Steps to reproduce

  1. Use an OpenAI Codex subscription model.
  2. Hit a transient streamed overload response with error.code: server_is_overloaded.
  3. Observe that the stream terminates with the error instead of entering the existing retry path.
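The expected behavior in step 3 can be sketched as a retry wrapper with exponential backoff (hypothetical names and delays; OpenCode's real retry path differs):

```typescript
// Retry an async attempt when it fails with the nested overload code.
// `maxRetries` and the backoff base are illustrative assumptions.
async function withOverloadRetry<T>(
  attempt: () => Promise<T>,
  maxRetries = 3,
): Promise<T> {
  for (let tries = 0; ; tries++) {
    try {
      return await attempt();
    } catch (err: any) {
      // Read the code from the nested `error` object (falling back to
      // a top-level `code` for non-stream errors).
      const code = err?.error?.code ?? err?.code;
      if (code !== "server_is_overloaded" || tries >= maxRetries) throw err;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** tries));
    }
  }
}
```

A transient overload then resolves on a later attempt instead of surfacing as a fatal error.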

Screenshot and/or share link

No response

Operating System

No response

Terminal

No response
