Closed
Labels
bug — Something isn't working
core — Anything pertaining to core functionality of the application (opencode server stuff)
Description
When I use new-api as a third-party relay to serve the Codex model, most OpenCode sessions do not get cache reads. As a result, each request fails to reuse the cache from the previous request, wasting a significant amount of quota.
My configuration is as follows:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "test-openai": {
      "npm": "@ai-sdk/openai",
      "name": "openai",
      "options": {
        "baseURL": "https://test.net/v1",
        "setCacheKey": true
      },
      "models": {
        "codex/gpt-5.4": {
          "name": "codex/gpt-5.4",
          "variants": {
            "xhigh": { "reasoningEffort": "xhigh" },
            "high": { "reasoningEffort": "high" },
            "medium": { "reasoningEffort": "medium" }
          }
        }
      }
    }
  }
}
```
When I use the official Codex connection, all sessions correctly utilize the cache.
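For context, a minimal sketch of how prefix-based prompt caching is generally expected to behave. This is an assumption about the mechanism, not OpenCode's actual implementation: the `prompt_cache_key` field and the session id value here are hypothetical, and the point is only that cache reads require both a stable key and a byte-identical message prefix across consecutive requests — if a relay rewrites either, cache hits are lost.

```python
# Sketch of two consecutive requests from one session, assuming OpenAI-style
# prompt caching: a hit requires the new request's message prefix to be
# identical to the previous request, optionally routed by a stable cache key.

def build_request(session_id: str, history: list[dict]) -> dict:
    """Build a chat-completions-style payload with a per-session cache key."""
    return {
        "model": "codex/gpt-5.4",
        # Hypothetical stable key so requests land on the same cache shard.
        "prompt_cache_key": f"opencode-session-{session_id}",
        "messages": history,
    }

# First turn of the session:
first = build_request("abc123", [{"role": "user", "content": "hello"}])

# Second turn appends to the history without modifying earlier messages:
second = build_request("abc123", [
    {"role": "user", "content": "hello"},
    {"role": "assistant", "content": "hi"},
    {"role": "user", "content": "continue"},
])

# A cache read is only possible if the key matches and the old
# messages are an unmodified prefix of the new ones:
assert second["prompt_cache_key"] == first["prompt_cache_key"]
assert second["messages"][: len(first["messages"])] == first["messages"]
```

If the relay rewrites the cache key (or re-serializes the history differently on each request), the prefix check above fails and every request is billed as uncached, which matches the behavior reported here.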
Plugins
none
OpenCode version
v1.2.26
Steps to reproduce
No response
Screenshot and/or share link
According to the screenshot, only one cache read occurred among all these requests.
Operating System
Windows 11
Terminal
Windows Terminal