
[BUG] GLM-5 model pollutes context window with broken thinking output #16903

@tayroneoliveira2023

Description


When using the GLM-5 model (Z.AI), OpenCode's context window gets polluted with broken/malformed thinking output. The terminal displays ? characters scattered throughout the interface, the rendered content becomes garbled, and the TUI layout breaks completely. The context window shows raw thinking tokens that should be hidden or properly parsed.

The GLM 4.7 model works correctly without any of these rendering issues under the same conditions.

Steps to Reproduce

  1. Open OpenCode on Windows (cmd or PowerShell)
  2. Select the GLM-5 (Z.AI Coding Plan) model
  3. Run any task that triggers the model's thinking/reasoning mode (e.g., a build or code generation task)
  4. Observe the terminal output becoming corrupted

Expected Behavior

The context window should display clean, properly formatted output — thinking tokens should be hidden or correctly rendered, just like the GLM 4.7 model does.
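As a rough illustration of the expected handling, the renderer would strip or divert the model's reasoning spans before they reach the visible transcript. The sketch below is hypothetical, not OpenCode's actual code, and the `<think>…</think>` delimiters are an assumption; GLM-5's real thinking output may arrive in a separate response field instead of inline tags.

```typescript
// Hypothetical sketch: remove inline reasoning spans from a model
// response before rendering it in the context window. The <think> tag
// name is an assumption about the model's output format.
function stripThinking(text: string): string {
  // Drop every <think>…</think> span, including multi-line ones
  // ([\s\S] matches across newlines; the non-greedy quantifier keeps
  // separate spans from merging).
  return text.replace(/<think>[\s\S]*?<\/think>/g, "");
}
```

For example, `stripThinking("<think>plan the build steps</think>Done.")` returns just `"Done."`, which is the kind of clean output GLM 4.7 already produces.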

Actual Behavior

  • Multiple ? characters appear throughout the terminal (top, bottom, status bar)
  • The thinking/reasoning output leaks into the visible context window in a broken/malformed state
  • The TUI layout breaks — code snippets, markdown, and status indicators become garbled
  • The status bar at the bottom shows ???■■■■■ instead of proper indicators
  • The overall context window becomes unusable and filled with noise

Specific observations from the screenshot

  • The title bar and content area show ? characters on the left margin where they shouldn't be
  • Code content (Java — EvidenciaRunner.java) is partially rendered but interspersed with broken formatting
  • The bottom status bar shows: ? Build · glm-5 followed by more ? lines, then ? Build GLM-5 Z.AI Coding Plan
  • The very bottom shows ???■■■■■ esc interrupt — the progress/status indicators are corrupted
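One plausible (unconfirmed) factor behind the ? characters: on Windows consoles, characters that the active code page cannot map are often rendered as ?. A TUI can mitigate this by substituting ASCII fallbacks for non-ASCII glyphs when the terminal can't render them. The sketch below is illustrative only; the glyph table and function are not OpenCode's.

```typescript
// Hedged sketch: substitute ASCII fallbacks for glyphs the terminal may
// not render. The mapping is illustrative, not OpenCode's actual table.
const ASCII_FALLBACKS: Record<string, string> = {
  "■": "#", // solid square seen in the corrupted status bar
  "·": "-", // middle dot used as a separator in "? Build · glm-5"
};

function withAsciiFallbacks(line: string): string {
  // Replace any character outside printable ASCII with its mapped
  // fallback, or "?" when no mapping exists.
  return Array.from(line)
    .map((ch) => {
      const code = ch.codePointAt(0) ?? 0;
      if (code >= 0x20 && code <= 0x7e) return ch;
      return ASCII_FALLBACKS[ch] ?? "?";
    })
    .join("");
}
```

For example, `withAsciiFallbacks("Build · glm-5 ■")` yields `"Build - glm-5 #"` instead of leaking unmappable glyphs to the console.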

Comparison

| Model | Behavior |
| --- | --- |
| GLM 4.7 | Clean output, no artifacts, thinking is properly handled |
| GLM-5 | Broken thinking tokens leak into the context window, ? characters everywhere, TUI breaks |

Environment

  • OS: Windows 10/11
  • Terminal: cmd / PowerShell
  • OpenCode version: Latest
  • Model: GLM-5 (Z.AI Coding Plan)

Screenshot

[Screenshot: OpenCode GLM-5 broken thinking output]


  • I have verified that this issue has not already been reported
  • I have read the contributing guidelines

Metadata

Labels

opentui (This relates to changes in v1.0, now that opencode uses opentui), windows
