Problem
The OpenAI provider in TablePro uses `/v1/chat/completions`. OpenAI's official guidance is that the Responses API is the path for new projects. Reasoning models (`gpt-5-codex`, `gpt-5.3-codex`, `gpt-5.5`) lose capability on Chat Completions: tool calling with reasoning is not supported on GPT-5.4+, there are no reasoning summaries, no parallel tool calls, and weaker caching. Today, users who pick a Codex or GPT-5 model in TablePro get a degraded version of it.
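The capability gap follows from the request shape. A minimal sketch of the same turn encoded for each endpoint (field names as in OpenAI's public API docs; model names from this proposal):

```swift
// Chat Completions: history goes in "messages"; no reasoning controls.
let chatCompletionsBody: [String: Any] = [
    "model": "gpt-5.5",
    "messages": [["role": "user", "content": "Explain this query plan"]]
]

// Responses: history goes in "input"; reasoning effort and summaries
// are configured via the "reasoning" object, which Chat Completions lacks.
let responsesBody: [String: Any] = [
    "model": "gpt-5.5",
    "input": [["role": "user", "content": "Explain this query plan"]],
    "reasoning": ["effort": "medium", "summary": "auto"]
]
```

Staying stateless means the client rebuilds `input` from local history on every call, so nothing else about request construction changes.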
Proposed solution
Refactor the OpenAI side of the AI layer:
- Split `OpenAICompatibleProvider` into two providers:
  - `OpenAIResponsesProvider` — speaks `/v1/responses`; used by the OpenAI provider type.
  - `ChatCompletionsProvider` — keeps `/v1/chat/completions` for OpenRouter, Ollama, and Custom.
- Add `reasoningDelta` / `reasoningSummary` cases to `ChatStreamEvent` and route them in `AnthropicProvider` and `OpenAIResponsesProvider`. Render them in a collapsible panel above each assistant reply.
- Pass `strict: true` on function tools through `ChatToolSpec`.
- Stay stateless — send the full `[ChatTurn]` history on each call; do not use `previous_response_id`. Local chat history stays as-is.
- Curate the OpenAI model list: surface `gpt-5-codex`, `gpt-5.3-codex`, `gpt-5.5`, and `gpt-5.4-mini` at the top.
- Add an optional reasoning-effort dropdown (low/medium/high/xhigh), shown only for reasoning models.
- Support image input in the chat composer for OpenAI and Anthropic.
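The new stream events could be sketched as follows. `ChatStreamEvent` and its existing cases are assumptions about TablePro's internals; only the two reasoning cases are new:

```swift
// Sketch of the extended stream-event enum. Existing cases are placeholders;
// the point is that reasoning arrives as its own event kind, so the chat view
// can route it to a collapsible panel instead of the reply body.
enum ChatStreamEvent {
    case textDelta(String)                              // existing: assistant text
    case toolCall(name: String, argumentsJSON: String)  // existing: tool invocation
    case reasoningDelta(String)                         // new: incremental reasoning text
    case reasoningSummary(String)                       // new: final summary, collapsed by default

    /// True for events the reasoning panel consumes.
    var isReasoning: Bool {
        switch self {
        case .reasoningDelta, .reasoningSummary: return true
        default: return false
        }
    }
}
```

Anthropic's extended thinking and OpenAI's reasoning summaries would both map onto these two cases, so the panel UI stays provider-agnostic.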
Auth: existing `AIKeyStorage` (Keychain). No new credentials. ChatGPT Plus/Pro sign-in is not available to third-party apps; surface that in the API key field copy.
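On the wire, strict mode is one extra field on the function tool definition. A sketch, assuming `ChatToolSpec` carries a JSON Schema for its parameters (the Responses API flattens function tools, with `name` at the top level rather than nested under `function`):

```swift
// Hypothetical shape for ChatToolSpec; only `strict` handling is the point.
struct ChatToolSpec {
    let name: String
    let description: String
    let parametersJSONSchema: [String: Any]
}

// Build the Responses API tool entry, opting in to strict schema
// enforcement so the model must emit schema-valid arguments.
func responsesToolPayload(_ spec: ChatToolSpec) -> [String: Any] {
    [
        "type": "function",
        "name": spec.name,
        "description": spec.description,
        "parameters": spec.parametersJSONSchema,
        "strict": true
    ]
}
```

Strict mode requires the schema itself to be strict-compatible (e.g. `additionalProperties: false`, all properties required), which is worth validating when `ChatToolSpec` is constructed rather than at request time.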
Out of scope (explicitly)
- No `codex` CLI binary bundled.
- No `codex app-server` subprocess.
- No new "Codex" provider type — Codex is a model under OpenAI.
- No `previous_response_id` server-side state.
- No Agents SDK / multi-agent handoffs.
- Azure OpenAI stays out for now (different endpoint shape, header style).
Alternatives considered
- Embed `codex app-server` over JSONL JSON-RPC (mirrors Copilot's LSP approach). Rejected: ships a large Rust binary, and agentic file editing is not a database-client feature.
- Add a separate Codex provider type. Rejected: marketing leaking into architecture; the canonical surface is the model name on the OpenAI provider.
- Keep Chat Completions everywhere. Rejected: blocks reasoning + tools on GPT-5.4+, worse caching, official guidance is to migrate.
Related database type
N/A / General