ECONNRESET with zai-coding-plan provider (api.z.ai) #15350

@gletushov

Description

Summary

Requests to the Z.AI API (api.z.ai) fail with ECONNRESET after roughly 40-100 seconds when using the zai-coding-plan provider. The same API works correctly via curl and in other tools (e.g., OpenClaw, which proxies through a VPS).

Environment

  • OpenCode: v1.2.15
  • Platform: macOS (arm64)
  • Provider: zai-coding-plan
  • Model: glm-5
  • Endpoint: https://api.z.ai/api/coding/paas/v4/chat/completions

Error Details

From ~/Library/Logs/ai.opencode.desktop/opencode-desktop_*.log:

ERROR service=llm providerID=zai-coding-plan modelID=glm-5 sessionID=ses_xxx small=false agent=build mode=primary 
error={"error":{"code":"ECONNRESET","path":"https://api.z.ai/api/coding/paas/v4/chat/completions","errno":0}} stream error

Timing pattern of failures (from logs):

  • Most failures occur after 39-100 seconds
  • Some as long as 1000+ seconds
  • 0 successful requests logged (101 ECONNRESET errors in one session)
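For anyone triaging similar logs, the error code can be pulled out of a log line mechanically. A minimal sketch (TypeScript; the helper name and regex are illustrative and based only on the log format quoted above):

```typescript
// Sketch: extract the error code from a sidecar log line of the form
// shown above. The line format is taken from this report; the helper
// name and regex are hypothetical, not part of OpenCode.
function errorCodeFromLogLine(line: string): string | null {
  // Grab the JSON payload after "error=" (greedy, to the last brace).
  const match = line.match(/error=(\{.*\})/);
  if (!match) return null;
  try {
    return JSON.parse(match[1]).error?.code ?? null;
  } catch {
    return null;
  }
}
```

Piping the log through such a helper is how the counts above (101 ECONNRESET errors, 0 successes in one session) were tallied.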

Reproduction

  1. Configure zai-coding-plan provider with a valid API key via opencode auth login
  2. Select glm-5 model
  3. Send any prompt
  4. After ~40-100 seconds: ECONNRESET error

Verification that API works

Direct curl requests work fine:

curl -X POST "https://api.z.ai/api/coding/paas/v4/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <key>" \
  -d '{"model":"glm-5","messages":[{"role":"user","content":"hi"}],"stream":true}'
# Returns streaming response successfully

Same API also works when routed through a VPS proxy (tested with OpenClaw which proxies through a France VPS).

Analysis

The issue appears to be with Bun's HTTP client in the OpenCode sidecar:

  1. Bun's connection pooling: Bun aggressively reuses HTTP connections (keep-alive)
  2. Z.AI's idle timeout: Server likely closes idle connections after 30-60 seconds
  3. Race condition: when Bun tries to reuse a connection that the server has already closed, the request fails with ECONNRESET
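One way to test this theory is to opt a request out of keep-alive so no pooled socket is ever reused. A sketch of building such a request (the function name and env var are hypothetical; note that some fetch implementations manage the Connection header themselves, so this may need to happen at the agent/pool level instead):

```typescript
// Hypothetical helper: "Connection: close" asks the server to tear the
// socket down after the response, so a stale pooled connection cannot be
// reused on the next request. Endpoint and model are from this report.
function buildZaiRequestInit(apiKey: string, prompt: string): RequestInit {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
      Connection: "close", // opt out of keep-alive for this request
    },
    body: JSON.stringify({
      model: "glm-5",
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  };
}

// Usage (not run here):
// const res = await fetch(
//   "https://api.z.ai/api/coding/paas/v4/chat/completions",
//   buildZaiRequestInit(process.env.ZAI_API_KEY!, "hi"),
// );
```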

The sidecar process (opencode-cli serve) makes the actual API calls:

/Applications/OpenCode.app/Contents/MacOS/opencode-cli --print-logs --log-level WARN serve --hostname 127.0.0.1 --port XXXXX

Suggested Fix

  1. Disable keep-alive for Z.AI endpoints, or
  2. Add retry logic with fresh connection on ECONNRESET, or
  3. Reduce connection pool lifetime for streaming endpoints
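Option 2 could look roughly like the following. This is a sketch under stated assumptions, not OpenCode's actual client code; all names are invented:

```typescript
// Sketch of suggested fix #2: retry when a request dies with ECONNRESET.
// The helper takes the request as a callback so it stays transport-agnostic.
async function withConnResetRetry<T>(
  attempt: () => Promise<T>, // e.g. a closure that performs the fetch
  retries = 2,
): Promise<T> {
  let lastError: unknown;
  for (let tries = 0; tries <= retries; tries++) {
    try {
      return await attempt();
    } catch (err) {
      const code = (err as { code?: string })?.code;
      if (code !== "ECONNRESET") throw err; // only retry connection resets
      lastError = err;
      // A real client would also evict the pooled socket here so the retry
      // opens a fresh connection instead of reusing the dead one.
    }
  }
  throw lastError;
}
```

Note that blindly retrying a streaming completion can resend tokens already billed, so a real fix would likely combine this with fresh-connection handling rather than rely on retries alone.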

Workaround Attempts

  • Setting the HTTPS_PROXY environment variable: doesn't help (the sidecar runs as a separate process)
  • Bun config (~/.bunfig.toml): the OpenCode binary appears to ignore it
  • VPN: issue persists regardless of network route

Metadata

Labels: core (Anything pertaining to core functionality of the application (opencode server stuff))