Conversation

@ibetitsmike ibetitsmike commented Dec 15, 2025

Wire AI SDK's providerOptions.openai.promptCacheKey to improve OpenAI prompt cache hit rates.

Changes

  • Derive cache key as mux-v1-{workspaceId} for OpenAI requests
  • Pass workspaceId from AIService.streamMessage to buildProviderOptions
  • Only set promptCacheKey when a workspaceId is available (always the case for real requests)

This enables OpenAI to route requests to cached prefixes within a workspace, improving cache hit rates for repeated calls.
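The derivation described above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the `buildProviderOptions` name and the `mux-v1-{workspaceId}` key format come from the description, but the exact signature and return shape are assumptions.

```typescript
// Hypothetical sketch of the cache-key wiring described in this PR.
// The signature and return shape are assumptions; only the function name
// and the mux-v1-{workspaceId} key format come from the PR description.

interface ProviderOptions {
  openai?: { promptCacheKey?: string };
}

function buildProviderOptions(workspaceId?: string): ProviderOptions {
  // Only set promptCacheKey when a workspace ID is available.
  if (!workspaceId) {
    return {};
  }
  return {
    openai: {
      // A stable per-workspace key lets OpenAI route repeated requests
      // from the same workspace to the same cached prompt prefix.
      promptCacheKey: `mux-v1-${workspaceId}`,
    },
  };
}
```

Keying the cache per workspace (rather than per request) is what makes repeated calls within a workspace likely to share a cached prefix.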


Generated with mux • Model: anthropic:claude-opus-4-5 • Thinking: high

@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Repo admins can enable using credits for code reviews in their settings.

Wire AI SDK's providerOptions.openai.promptCacheKey to improve OpenAI
prompt cache hit rates.

- Derive default key as mux-v1-{workspaceId} when workspace ID available
- Fall back to mux-v1 when workspace ID is unavailable
- Pass workspaceId from AIService.streamMessage to buildProviderOptions

This enables OpenAI to route requests to cached prefixes within a
workspace, improving cache hit rates for repeated calls.

---
_Generated with `mux` • Model: `anthropic:claude-opus-4-5` • Thinking: `high`_
@ibetitsmike ibetitsmike added this pull request to the merge queue Dec 16, 2025
Merged via the queue into main with commit 429a6dd Dec 16, 2025
20 checks passed
@ibetitsmike ibetitsmike deleted the openai-caching-xc3r branch December 16, 2025 12:24
