Status: Open
Labels: bug (Something isn't working), core (Anything pertaining to core functionality of the application (opencode server stuff)), perf (Indicates a performance issue or need for optimization)
Description
Summary
Add two models via a custom provider in opencode.jsonc:
- GLM-5 (text-only model)
- GLM-4.6v (vision model)
```jsonc
// ....
"models": {
  "Qwen/Qwen3-VL-235B-A22B-Instruct": {
    "name": "Qwen: Qwen3 VL 235B Instruct",
    "attachment": true,
    "modalities": {
      "input": ["text", "image"],
      "output": ["text"]
    }
  },
  "zai-org/GLM-5": {
    "name": "z-ai glm 5"
  },
  "zai-org/GLM-4.6v": {
    "name": "z-ai glm 4.6v",
    "attachment": true,
    "modalities": {
      "input": ["text", "image"],
      "output": ["text"]
    }
  }
}
// ...
```
GLM-5 works great, but both Qwen and GLM-4.6v struggle to get responses: there is a lag of 3-7 seconds, sometimes more.
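For context, here is a minimal sketch of where a `models` block like the one above typically sits in opencode.jsonc. The provider id `my-provider`, the `npm` package, and the `baseURL` are placeholders/assumptions for an OpenAI-compatible endpoint, not values taken from this report:

```jsonc
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    // "my-provider" is a hypothetical id for an OpenAI-compatible endpoint
    "my-provider": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://example.invalid/v1" // placeholder
      },
      "models": {
        "zai-org/GLM-4.6v": {
          "name": "z-ai glm 4.6v",
          "attachment": true,
          "modalities": {
            "input": ["text", "image"],
            "output": ["text"]
          }
        }
      }
    }
  }
}
```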
Steps to reproduce
- Add the models above in opencode.jsonc
- Restart opencode
- The models appear; select one of them and try having a chat
- Notice the extreme lag with those models
I thought it might be an inference issue, but it is not: inference is as fast as with other models.
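To support the claim that raw inference is not the bottleneck, the provider endpoint can be timed directly and the result compared with what the opencode TUI shows. This is a hedged sketch, not opencode code: the base URL, API key, and model id below are placeholders for your provider's real values, and the endpoint is assumed to be OpenAI-compatible.

```python
import json
import time
import urllib.request


def summarize(latencies):
    """Min/avg/max in seconds for a list of request latencies."""
    return {
        "min": min(latencies),
        "avg": sum(latencies) / len(latencies),
        "max": max(latencies),
    }


def time_completion(base_url, api_key, model, prompt):
    """Time one non-streaming chat completion against an
    OpenAI-compatible endpoint (URL, key, and model are placeholders)."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    start = time.monotonic()
    with urllib.request.urlopen(req) as resp:
        resp.read()
    return time.monotonic() - start


if __name__ == "__main__":
    # Placeholders -- substitute your provider's real values.
    lats = [
        time_completion("https://example.invalid/v1", "API_KEY",
                        "zai-org/GLM-4.6v", "hello")
        for _ in range(3)
    ]
    print(summarize(lats))
```

If the raw endpoint answers in well under a second while the TUI takes 8-10 seconds per step, the delay is on the client side rather than in inference.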
Expected behavior
It should work as smoothly as the text-only models.
Actual behavior
Laggy: 8-10 seconds of processing time for each step.
Plugins
No response
OpenCode version
No response
Steps to reproduce
No response
Screenshot and/or share link
No response
Operating System
No response
Terminal
No response