
SSE JSON parse failure with GLM-5.1 (Z.AI) — malformed JSON in content field kills stream #23442

@darkhipo

Description

When using GLM-5.1 via Z.AI's OpenAI-compatible API, the SSE stream occasionally contains invalid JSON. The model hallucinates SSE-formatted text (e.g. data: {"id":"...","choices":[...]}) as its actual response content. Z.AI's server does not properly escape the quotes in the content field, producing malformed JSON that fails parsing.

Observed Error

JSON parsing failed: Text: {"id":"20260420032348d6275404213948ac","created":1776626628,"object":"chat.completion.chunk","model":"glm-5.1","choices":[{
  "index":0,"delta":{"role":"assistant","content":"data: {"id":"20260420032457d424e96cb1da4f11","created":1776626698,"object":"chat.completion.chunk",
  "model":"glm-5.1","choices":[{"index":0,"delta":{"role":"assistant","content":"Two"}}]}.
  Error message: JSON Parse error: Expected '}'

Root Cause

The SSE data: line is:

data: {"id":"...","choices":[{"delta":{"content":"data: {"id":"...","choices":[{"delta":{"content":"Two"}}]}}]}

The unescaped inner quotes in the hallucinated "data: {"id":"..." content terminate the outer JSON string early, so everything after them is invalid. Z.AI's server should be escaping these quotes as \" but isn't.

Where It Fails

  1. EventSourceParserStream (eventsource-parser) correctly extracts the line after data:
  2. safeParseJSON() in @ai-sdk/provider-utils/src/parse-json-event-stream.ts calls JSON.parse() on the malformed string → throws
  3. The error propagates up through processor.ts and kills the stream

Suggested Fix

Skip-and-continue: When a chunk fails JSON parsing, log a warning and continue to the next chunk instead of killing the entire stream. The hallucinated SSE chunks are typically noise — the real content arrives in adjacent valid chunks. A single malformed chunk shouldn't terminate the conversation.

// In the TransformStream inside parseJsonEventStream:
async transform({ data }, controller) {
  if (data === "[DONE]") return;
  try {
    controller.enqueue(await safeParseJSON({ text: data, schema }));
  } catch (err) {
    // Provider sent malformed JSON in this chunk — skip it
    // rather than killing the entire stream.
    console.warn("SSE chunk JSON parse failed, skipping:", data.slice(0, 200));
  }
}
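To make the skip-and-continue behavior easy to verify in isolation, here is a self-contained sketch using the web-standard TransformStream available globally in Node 18+. JSON.parse stands in for the real safeParseJSON call, and the stream/variable names are illustrative, not the actual @ai-sdk/provider-utils internals.

```typescript
// Lenient SSE JSON transform: a malformed chunk is logged and dropped
// instead of erroring the whole stream.
const lenientJsonStream = new TransformStream<string, unknown>({
  transform(data, controller) {
    if (data === "[DONE]") return;
    try {
      controller.enqueue(JSON.parse(data));
    } catch {
      console.warn("SSE chunk JSON parse failed, skipping:", data.slice(0, 200));
    }
  },
});

async function demo() {
  const chunks = [
    '{"delta":{"content":"Hello"}}',
    '{"delta":{"content":"data: {"broken"}}', // malformed — gets skipped
    '{"delta":{"content":"world"}}',
    "[DONE]",
  ];
  const readable = new ReadableStream<string>({
    start(controller) {
      for (const c of chunks) controller.enqueue(c);
      controller.close();
    },
  });
  const out: unknown[] = [];
  // ReadableStream async iteration (Node 18+); cast for TS lib typings.
  for await (const v of readable.pipeThrough(lenientJsonStream) as any) {
    out.push(v);
  }
  console.log(out.length); // 2 — both valid chunks survive the bad one
}
demo();
```

With the current behavior the second chunk would error the stream and the "world" delta would never be delivered; with skip-and-continue both valid deltas come through.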

Environment

  • opencode v1.14.x
  • Provider: Z.AI (api.z.ai/api/coding/paas/v4)
  • Model: glm-5.1
  • Stream mode: SSE

Reproduction

Any multi-turn conversation with GLM-5.1 via opencode will eventually trigger this. The model sporadically hallucinates SSE chunks in its response text. The failure is intermittent but frequent enough to disrupt normal usage.

Workaround

We added content sanitization in our downstream provider layer that strips data: {...} artifacts from response content, but the fix belongs in the SSE parser itself so all providers benefit.
