fix(ai): rebuild cached provider on any config change #1003
datlechin merged 2 commits into TableProApp:main from
Conversation
Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits. |
All contributors have signed the CLA ✍️ ✅ |
I have read the CLA Document and I hereby sign the CLA. |
Heads up: I haven't been able to run the new unit tests locally yet. My machine is still on macOS 15.6.1, and the test target's deployment target is macOS 26.2, so `xcodebuild test` fails to build. I'll upgrade and re-run them shortly. The file follows the existing conventions (Swift Testing, `@Suite`/`@MainActor`, UUID isolation + `defer` cache invalidation) and passes
Separate thought, not for this PR. Context: both are popular enough among Chinese developers that first-class entries would help onboarding.
Summary
Fixes a stale-cache bug in `AIProviderFactory` where editing an unsaved provider's endpoint (or any other field) was silently ignored. The factory kept handing back the instance built from the original draft state until the user hit Save.

Repro
1. Replace the default endpoint (`https://api.openai.com`) with another OpenAI-compatible URL, e.g. `https://api.deepseek.com`.
2. Hit Test Connection.

Expected: success.
Actual: `unsupported URL`. The model list's Reload button shows the same error.

Saving the provider invalidates the cache and the next call works, which is why chat itself functions; the broken state only surfaces while editing.
Root cause
`AIProviderFactory.createProvider(for:apiKey:)` keyed its cache on `(id, apiKey)` only.

The endpoint TextField in `AIProviderDetailSheet` calls `scheduleFetchModels()` (500 ms debounce) on every keystroke. When the field is briefly empty (cleared before re-typing) and the debounce fires, `fetchModels` calls `createProvider` and stashes a provider built with `endpoint = ""`. Later calls under the same draft id, including Test Connection, get that stale instance back, regardless of the now-correct endpoint. The provider then composes `URL(string: "/v1/models")`, which `URLSession` rejects with `URLError.unsupportedURL`.

Cache invalidation only happens inside `saveProvider`, so the bad state persists for the entire edit session.

Fix
Compare the full `AIProviderConfig` (already `Equatable`) along with `apiKey`. Any mutation rebuilds the provider.

Tests
New `AIProviderFactoryCacheTests` (11 cases):

- `endpoint` / `model` / `maxOutputTokens` / `name` change each rebuild
- `apiKey` change rebuilds; `nil → value` transition rebuilds
- `invalidateCache(for:)` only affects the targeted id
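Sketched in isolation, the cache check described under Fix might look like the following. The `AIProviderConfig` fields and the `AIProvider` shape here are simplified stand-ins for illustration, not the app's real types:

```swift
import Foundation

// Simplified stand-in for the app's config type (real field set assumed).
struct AIProviderConfig: Equatable {
    let id: UUID
    var name: String
    var endpoint: String
    var model: String
    var maxOutputTokens: Int
}

// Simplified stand-in for a built provider instance.
final class AIProvider {
    let config: AIProviderConfig
    let apiKey: String?
    init(config: AIProviderConfig, apiKey: String?) {
        self.config = config
        self.apiKey = apiKey
    }
}

final class AIProviderFactory {
    // Cache entry keeps the full config + apiKey the provider was built from,
    // so ANY field mutation misses the cache, not just an (id, apiKey) change.
    private var cache: [UUID: (config: AIProviderConfig, apiKey: String?, provider: AIProvider)] = [:]

    func createProvider(for config: AIProviderConfig, apiKey: String?) -> AIProvider {
        if let entry = cache[config.id],
           entry.config == config,   // full Equatable comparison
           entry.apiKey == apiKey {
            return entry.provider    // still fresh: reuse
        }
        let provider = AIProvider(config: config, apiKey: apiKey)
        cache[config.id] = (config, apiKey, provider)
        return provider
    }

    func invalidateCache(for id: UUID) {
        cache[id] = nil
    }
}
```

With this shape, the repro above stops failing: mutating `endpoint` on the draft makes the next `createProvider` call rebuild instead of returning the instance built with the stale endpoint.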
While debugging I noticed two unrelated issues in `OpenAICompatibleProvider`. Not touching them here; can send a separate PR if wanted:

- URLs are composed as `"\(endpoint)/v1/..."`. Endpoints that already include `/v1` (e.g. SiliconFlow's `https://api.siliconflow.cn/v1`) become `/v1/v1/...` and 404.
- `testConnection` POSTs `/v1/chat/completions` with `"model": "test"` and decides success based on Content-Type. That's fragile across providers; `GET /v1/models` would be lighter and more reliable.
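For the double-`/v1` issue, one possible shape for a follow-up fix (a hypothetical helper, not code from this PR) is to normalize the base URL by stripping a trailing `/v1` before the provider appends its own `/v1/...` path:

```swift
import Foundation

// Hypothetical normalizer: drop trailing slashes and a trailing "/v1"
// so that "\(base)/v1/models" never becomes ".../v1/v1/models".
func normalizedBaseURL(_ endpoint: String) -> String {
    var base = endpoint
    while base.hasSuffix("/") { base.removeLast() }
    if base.lowercased().hasSuffix("/v1") {
        base.removeLast(3)
        while base.hasSuffix("/") { base.removeLast() }
    }
    return base
}
```

This keeps plain endpoints like `https://api.openai.com` untouched while making SiliconFlow-style `.../v1` endpoints compose correctly.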