## Problem
Production telemetry from 2026-05-04 shows users receiving the error string:

```
APIError: Bad Request: {?:?}
```

(The `{?:?}` is the result of the App Insights pipeline scrubbing string values from the raw structured JSON body.)

This happens whenever an OpenAI request returns a 4xx error with the standard shape `{error: {message, type, code}}`. The user gets no actionable information about what actually went wrong (e.g., "model not found", "invalid parameter", quota issues).
## Root cause
In `packages/opencode/src/provider/error.ts:65`, the OR-chain short-circuits on the OpenAI error shape:

```ts
const errMsg = body.message || body.error || body.error?.message
//                             ^^^^^^^^^^ truthy object — assigned to errMsg
if (errMsg && typeof errMsg === "string") { // typeof check fails
  return `${msg}: ${errMsg}`
}
// falls through to: return `${msg}: ${e.responseBody}` — raw body dump
```
For OpenAI's nested `{error: {message: "..."}}` shape, `body.error` is the object itself, not the message. The OR chain short-circuits there, the `typeof` string guard fails, and the parser falls through to dumping the raw body.

The sibling function `parseStreamError` at line 153 already uses the correct defensive pattern (`typeof body?.error?.message === "string" ? ...`); only `parseAPICallError` had the bug.
## Telemetry context
Today (2026-05-04 PDT), 3 of 13 turns from session `ses_20cde7243ffeX4k3gVGNzMATpx` (machine `5aa5633c`) failed with this `Bad Request: {?:?}` error after the user selected `gpt-5-codex` as their model. The user retried 3 times because the error message gave them no hint that the model name was the problem.
## Proposed fix
Switch to an explicit-`typeof` ternary that matches `parseStreamError`'s pattern (line 153). This handles all three known shapes (`{error: {message}}`, `{message}`, `{error: "string"}`) and is robust against any non-string truthy value short-circuiting the chain at any level.
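A sketch of the proposed extraction follows. The helper name `extractErrorMessage` is illustrative; in the actual fix the ternary sits inline in `parseAPICallError`:

```ts
// Explicit-typeof ternary, mirroring parseStreamError's pattern.
// Each branch matches only when the value at that level is actually a string.
function extractErrorMessage(body: any): string | undefined {
  return typeof body?.error?.message === "string"
    ? body.error.message // OpenAI nested shape: {error: {message, type, code}}
    : typeof body?.message === "string"
      ? body.message // flat shape: {message}
      : typeof body?.error === "string"
        ? body.error // string-error shape: {error: "..."}
        : undefined // unknown shape: caller falls back to the raw body
}
```

With this ordering the nested OpenAI shape is checked first, so a truthy `body.error` object can no longer shadow its own `message`.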
## Upstream
This is also being filed as an upstream PR. The local fix is wrapped in `// altimate_change start — upstream_fix:` markers per the project's upstream-fix policy.