Description
When using GLM-5.1 via Z.AI's OpenAI-compatible API, the SSE stream occasionally contains invalid JSON. The model hallucinates SSE-formatted text (e.g. `data: {"id":"...","choices":[...]}`) as its actual response content. Z.AI's server does not properly escape the quotes in the `content` field, producing malformed JSON that fails parsing.
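The failure mode can be reproduced in isolation. A minimal sketch (assuming a simplified payload shape, not Z.AI's full chunk schema):

```typescript
// Minimal repro of the escaping bug (simplified shape, not Z.AI's full schema).

// What the server SHOULD send: inner quotes escaped, so the outer JSON is valid.
const escaped = '{"delta":{"content":"data: {\\"id\\":\\"x\\"}"}}';
const ok = JSON.parse(escaped);

// What Z.AI actually sends: hallucinated content interpolated without escaping.
const hallucinated = 'data: {"id":"x"}';
const malformed = `{"delta":{"content":"${hallucinated}"}}`;

let parseFailed = false;
try {
  JSON.parse(malformed); // the first inner " ends the outer string early
} catch {
  parseFailed = true;
}
```

With escaping, the round-trip recovers the hallucinated text intact; without it, `JSON.parse` throws exactly the kind of error shown below.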
Observed Error
```
JSON parsing failed: Text: {"id":"20260420032348d6275404213948ac","created":1776626628,"object":"chat.completion.chunk","model":"glm-5.1","choices":[{"index":0,"delta":{"role":"assistant","content":"data: {"id":"20260420032457d424e96cb1da4f11","created":1776626698,"object":"chat.completion.chunk","model":"glm-5.1","choices":[{"index":0,"delta":{"role":"assistant","content":"Two"}}]}.
Error message: JSON Parse error: Expected '}'
```
Root Cause
The SSE `data:` line is:

```
data: {"id":"...","choices":[{"delta":{"content":"data: {"id":"...","choices":[{"delta":{"content":"Two"}}]}}]}
```
The inner `"` in the hallucinated `data: {"id":"..."` breaks the outer JSON string boundary. Z.AI's server should be escaping these quotes (`\"`) but isn't.
Where It Fails
- `EventSourceParserStream` (eventsource-parser) correctly extracts the line after `data:`
- `safeParseJSON()` in `@ai-sdk/provider-utils/src/parse-json-event-stream.ts` calls `JSON.parse()` on the malformed string → throws
- The error propagates up through `processor.ts` and kills the stream
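The steps above can be sketched with web streams: a `transform()` that throws errors the whole `TransformStream`, so valid chunks queued behind the bad one are never delivered. A simplified stand-in (not the actual opencode pipeline, and plain `JSON.parse` in place of the real parser):

```typescript
// Demonstrates error propagation: one malformed chunk kills the whole stream.
async function collectUntilError(chunks: string[]): Promise<{ got: unknown[]; errored: boolean }> {
  const source = new ReadableStream<string>({
    start(c) {
      for (const ch of chunks) c.enqueue(ch);
      c.close();
    },
  });
  const parsed = source.pipeThrough(
    new TransformStream<string, unknown>({
      transform(data, controller) {
        controller.enqueue(JSON.parse(data)); // throws on malformed JSON → errors the stream
      },
    }),
  );
  const got: unknown[] = [];
  const reader = parsed.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      got.push(value);
    }
    return { got, errored: false };
  } catch {
    return { got, errored: true };
  }
}
```

Feeding it a valid chunk, a malformed one, and another valid one yields only the first chunk before the read rejects: everything after the bad chunk is lost.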
Suggested Fix
Skip-and-continue: When a chunk fails JSON parsing, log a warning and continue to the next chunk instead of killing the entire stream. The hallucinated SSE chunks are typically noise — the real content arrives in adjacent valid chunks. A single malformed chunk shouldn't terminate the conversation.
```typescript
// In the TransformStream inside parseJsonEventStream:
async transform({ data }, controller) {
  if (data === "[DONE]") return;
  try {
    controller.enqueue(await safeParseJSON({ text: data, schema }));
  } catch (err) {
    // Provider sent malformed JSON in this chunk — skip it
    // rather than killing the entire stream.
    console.warn("SSE chunk JSON parse failed, skipping:", data.slice(0, 200));
  }
}
```
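A standalone, synchronous stand-in for the same policy (a hypothetical helper, not the actual `parseJsonEventStream` code) shows that valid chunks survive a malformed neighbor:

```typescript
// Skip-and-continue over a batch of SSE data payloads: malformed chunks are
// logged and dropped; valid chunks and the [DONE] sentinel are handled as usual.
function parseChunks(chunks: string[]): unknown[] {
  const out: unknown[] = [];
  for (const data of chunks) {
    if (data === "[DONE]") continue;
    try {
      out.push(JSON.parse(data));
    } catch {
      console.warn("SSE chunk JSON parse failed, skipping:", data.slice(0, 200));
    }
  }
  return out;
}
```

Given `['{"a":1}', '{bad', '{"b":2}', '[DONE]']`, the malformed second chunk is skipped and both valid chunks come through, which is exactly the behavior the fix asks of the real stream.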
Environment
- opencode v1.14.x
- Provider: Z.AI (`api.z.ai/api/coding/paas/v4`)
- Model: `glm-5.1`
- Stream mode: SSE
Reproduction
Any multi-turn conversation with GLM-5.1 via opencode will eventually trigger this. The model sporadically hallucinates SSE chunks in its response text. The failure is intermittent but frequent enough to disrupt normal usage.
Workaround
We added content sanitization in our downstream provider layer that strips `data: {...}` artifacts from response content, but the fix belongs in the SSE parser itself so all providers benefit.
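A sketch of that sanitizer (a hypothetical helper in our own layer, not opencode code; the regex is an assumption tuned to the artifacts we observed):

```typescript
// Strips hallucinated "data: {...}" SSE artifacts from model response text.
// Assumption: artifacts always look like a "data: " prefix followed by a
// JSON-ish object, optionally ending with a stray period.
function stripSseArtifacts(content: string): string {
  return content.replace(/data:\s*\{.*\}\.?/g, "").trim();
}
```

Clean text passes through unchanged; a pure-artifact chunk collapses to an empty string. This works for our traffic, but it is inherently fragile, which is why we think the parser-level skip is the right fix.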