fix(ai): rebuild cached provider on any config change#1003

Merged
datlechin merged 2 commits into TableProApp:main from tonghs:fix/ai-provider-cache
May 6, 2026

Conversation

@tonghs
Contributor

@tonghs tonghs commented May 5, 2026

Summary

Fixes a stale-cache bug in AIProviderFactory where editing an unsaved provider's endpoint (or any other field) was silently ignored. The factory kept handing back the instance built from the original draft state until the user hit Save.

Repro

  1. Settings → AI → Add OpenAI
  2. Replace the default endpoint (https://api.openai.com) with another OpenAI-compatible URL, e.g. https://api.deepseek.com
  3. Click Test Connection

Expected: success.
Actual: unsupported URL. The model list's Reload button shows the same error.

Saving the provider invalidates the cache and the next call works, which is why chat itself functions — the broken state only surfaces while editing.

Root cause

AIProviderFactory.createProvider(for:apiKey:) keyed its cache on (id, apiKey) only:

// Cache hit as long as id and apiKey match — config changes are ignored.
if let cached = cache[config.id], cached.apiKey == apiKey {
    return cached.provider
}

The endpoint TextField in AIProviderDetailSheet calls scheduleFetchModels() (500 ms debounce) on every keystroke. When the field is briefly empty (cleared before re-typing) and the debounce fires, fetchModels calls createProvider and stashes a provider built with endpoint = "". Later calls under the same draft id — including Test Connection — get that stale instance back, regardless of the now-correct endpoint. The provider then composes URL(string: "/v1/models"), which URLSession rejects with URLError.unsupportedURL.
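The failure mode reproduces in isolation: an empty endpoint leaves the composed string without a scheme, and URLSession refuses scheme-less URLs. A minimal Foundation sketch, independent of the TablePro code:

```swift
import Foundation

// With a stale empty endpoint, the composed string is a bare path.
let endpoint = ""                       // draft state captured by the cache
let url = URL(string: "\(endpoint)/v1/models")

print(url?.absoluteString ?? "nil")     // "/v1/models"
print(url?.scheme ?? "nil")             // "nil" — no scheme, so URLSession
                                        // fails the request with .unsupportedURL
```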

Cache invalidation only happens inside saveProvider, so the bad state persists for the entire edit session.

Fix

Compare the full AIProviderConfig (already Equatable) along with apiKey. Any mutation rebuilds the provider:

// Cache hit only when id, apiKey, AND the full config all match.
if let cached = cache[config.id], cached.apiKey == apiKey, cached.config == config {
    return cached.provider
}
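Put together, the corrected cache logic might look like this. This is a self-contained sketch; the Entry layout, field names, and NSObject stand-in are assumptions, not the actual AIProviderFactory source:

```swift
import Foundation

struct ProviderConfig: Equatable {
    let id: UUID
    var endpoint: String
    var model: String
}

final class ProviderFactorySketch {
    private struct Entry {
        let config: ProviderConfig
        let apiKey: String?
        let provider: NSObject          // stand-in for the real provider instance
    }
    private var cache: [UUID: Entry] = [:]

    func provider(for config: ProviderConfig, apiKey: String?) -> NSObject {
        // Hit only when id, apiKey, AND the full Equatable config all match.
        if let cached = cache[config.id], cached.apiKey == apiKey, cached.config == config {
            return cached.provider
        }
        let built = NSObject()          // a real factory would construct the provider here
        cache[config.id] = Entry(config: config, apiKey: apiKey, provider: built)
        return built
    }

    func invalidateCache(for id: UUID) {
        cache[id] = nil
    }
}
```

Editing any field now produces a fresh instance, while repeated calls with an unchanged draft still hit the cache.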

Tests

New AIProviderFactoryCacheTests (11 cases):

  • Identical config + apiKey returns the same instance (cache hit preserved)
  • endpoint / model / maxOutputTokens / name change each rebuild
  • apiKey change rebuilds; nil → value transition rebuilds
  • Mid-edit empty-then-final endpoint scenario (the actual repro)
  • Different ids stay isolated
  • invalidateCache(for:) only affects the targeted id
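For illustration, the mid-edit case might be shaped like this in Swift Testing. All type and method names here are stand-ins so the sketch is self-contained; the real test file is AIProviderFactoryCacheTests:

```swift
import Foundation
import Testing

@Suite @MainActor
struct ProviderCacheSketchTests {
    // Minimal stand-in factory keyed on (id, endpoint).
    final class StubFactory {
        private var cache: [UUID: (endpoint: String, provider: NSObject)] = [:]
        func provider(id: UUID, endpoint: String) -> NSObject {
            if let hit = cache[id], hit.endpoint == endpoint { return hit.provider }
            let built = NSObject()
            cache[id] = (endpoint, built)
            return built
        }
    }

    @Test func emptyThenFinalEndpointRebuilds() {
        let factory = StubFactory()
        let id = UUID()                                     // UUID isolation per test
        let stale = factory.provider(id: id, endpoint: "")  // debounce fired on empty field
        let fresh = factory.provider(id: id, endpoint: "https://api.deepseek.com")
        #expect(stale !== fresh)                            // changed endpoint must rebuild
        #expect(fresh === factory.provider(id: id, endpoint: "https://api.deepseek.com"))
    }
}
```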

Out of scope

While debugging I noticed two unrelated issues in OpenAICompatibleProvider. Not touching them here, can send a separate PR if wanted:

  1. URL composition is "\(endpoint)/v1/...". Endpoints that already include /v1 (e.g. SiliconFlow's https://api.siliconflow.cn/v1) become /v1/v1/... and 404.
  2. testConnection POSTs /v1/chat/completions with "model": "test" and decides success based on Content-Type. Fragile across providers; GET /v1/models would be lighter and more reliable.
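For issue 1, one possible normalization is stripping an existing /v1 suffix before composing paths. A hypothetical helper, illustrative only and not part of this PR:

```swift
// Hypothetical helper: avoid "/v1/v1/..." when the endpoint already ends in /v1.
func normalizedBase(_ endpoint: String) -> String {
    var base = endpoint
    while base.hasSuffix("/") { base.removeLast() }   // drop trailing slashes
    if base.hasSuffix("/v1") { base.removeLast(3) }   // drop an existing /v1
    return base
}

// normalizedBase("https://api.siliconflow.cn/v1") + "/v1/models"
//   → "https://api.siliconflow.cn/v1/models"
// normalizedBase("https://api.deepseek.com") + "/v1/models"
//   → "https://api.deepseek.com/v1/models"
```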

@github-actions
Contributor

github-actions Bot commented May 5, 2026

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@tonghs
Contributor Author

tonghs commented May 5, 2026

I have read the CLA Document and I hereby sign the CLA.

github-actions Bot added a commit that referenced this pull request May 5, 2026
@tonghs
Contributor Author

tonghs commented May 5, 2026

Heads up: I haven't been able to run the new unit tests locally yet — my machine is still on macOS 15.6.1, and the test target's deployment target is macOS 26.2, so xcodebuild test fails to build. I'll upgrade and re-run them shortly. The file follows the existing conventions (Swift Testing, @Suite/@MainActor, UUID isolation + defer cache invalidation) and passes swiftlint lint --strict.

@tonghs
Contributor Author

tonghs commented May 5, 2026

Separate thought, not for this PR:
would it be worth adding dedicated DeepSeek and SiliconFlow provider types? SiliconFlow's /v1-prefixed endpoint and similar vendor quirks would fit naturally in their own descriptors instead of layering on the generic OpenAI-compatible path.

Context:

  • DeepSeek is a Chinese LLM lab (deepseek-v4 / r1) widely used in China for its price/perf.
  • SiliconFlow is a Chinese inference platform hosting many open models (Qwen, DeepSeek, Llama, GLM, MiniMax) behind an OpenAI-compatible API.

Both are popular enough among Chinese developers that first-class entries would help onboarding.
Happy to send a separate PR if useful.

Member

@datlechin datlechin left a comment


LGTM, thank you.

@datlechin datlechin merged commit 0991f2d into TableProApp:main May 6, 2026
1 check passed