
feat(provider): add NewAPI provider #1443

Merged
zerob13 merged 3 commits into dev from feat/newapi-provider on Apr 9, 2026

Conversation

@yyhhyyyyyy (Collaborator) commented Apr 9, 2026

feat(provider): add NewAPI provider

Summary by CodeRabbit

  • New Features

    • Added "New API" LLM provider supporting multiple upstream endpoint types and per-model endpoint selection.
    • UI: model "endpoint type" selector for New API, editable base URL for New API, and provider icon.
    • Model lists now show only chat-selectable models where appropriate.
  • Documentation

    • Added localized strings for the new endpoint-type UI across multiple languages.
  • Tests

    • New test suites covering New API routing and updated model/config UI behavior.

coderabbitai bot (Contributor) commented Apr 9, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a0e45182-7147-41bc-b8f9-da8fbc5baa3b

📥 Commits

Reviewing files that changed from the base of the PR and between ed8427e and f40bef0.

📒 Files selected for processing (10)
  • src/renderer/src/i18n/da-DK/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/src/i18n/he-IL/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
✅ Files skipped from review due to trivial changes (9)
  • src/renderer/src/i18n/ja-JP/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/da-DK/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/src/i18n/he-IL/settings.json
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/renderer/src/i18n/zh-TW/settings.json

📝 Walkthrough

Walkthrough

Adds a new new-api LLM provider that routes per-model requests to underlying delegates (OpenAI, Anthropic, Gemini) via a resolved endpoint type. Introduces endpointType model config and metadata, capability-provider resolution, chat-selectable model filtering, UI changes for endpoint selection, and tests.
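The per-model routing described above can be sketched as follows; the class, delegate interface, and config lookup here are simplified stand-ins for illustration, not the PR's actual implementation:

```typescript
// Simplified stand-ins for the provider/delegate shapes; not the PR's real types.
type NewApiEndpointType =
  | 'openai'
  | 'openai-response'
  | 'anthropic'
  | 'gemini'
  | 'image-generation'

interface Delegate {
  completions(modelId: string, messages: unknown[]): Promise<string>
}

class NewApiProviderSketch {
  constructor(
    private delegates: Record<'openai' | 'anthropic' | 'gemini', Delegate>,
    // Looks up the per-model endpoint type from model config/metadata.
    private endpointTypeFor: (modelId: string) => NewApiEndpointType | undefined
  ) {}

  async completions(modelId: string, messages: unknown[]): Promise<string> {
    // Resolve the endpoint type per model; unknown models default to OpenAI-compatible.
    switch (this.endpointTypeFor(modelId) ?? 'openai') {
      case 'anthropic':
        return this.delegates.anthropic.completions(modelId, messages)
      case 'gemini':
        return this.delegates.gemini.completions(modelId, messages)
      default:
        return this.delegates.openai.completions(modelId, messages)
    }
  }
}
```

The real provider also resolves capability lookups (reasoning, thinking budgets) against the same endpoint type, which is what the capability-routing changes below are about.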

Changes

Cohort / File(s) | Summary

• new-api Provider Implementation
  src/main/presenter/llmProviderPresenter/providers/newApiProvider.ts, src/main/presenter/llmProviderPresenter/providerInstanceManager.ts, src/main/presenter/configPresenter/providers.ts
  Added NewApiProvider class with per-model endpoint resolution, delegate construction (OpenAI/Responses/Gemini/Anthropic), model discovery via /v1/models, proxy-aware Anthropic init, and provider registration.

• Capability Routing Updates
  src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/baseProvider.ts, src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  Added getCapabilityProviderId and switched reasoning/capability checks to use the resolved capability provider id (enables correct capability lookups for new-api).

• Model Configuration & Storage
  src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/managers/modelManager.ts, src/main/presenter/llmProviderPresenter/managers/providerModelHelper.ts
  Propagate and merge endpointType from model configs into models returned to presenters; use the config endpointType when available.

• Types & Utilities
  src/shared/model.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/shared/types/presenters/llmprovider.presenter.d.ts
  Added NEW_API_ENDPOINT_TYPES, NewApiEndpointType, isNewApiEndpointType, resolveNewApiCapabilityProviderId, and isChatSelectableModelType. Extended ModelConfig/MODEL_META/RENDERER_MODEL_META/LLM_PROVIDER types with endpoint fields and an optional capabilityProviderId.

• Model Selection Filtering
  src/renderer/src/components/chat/ChatStatusBar.vue, src/renderer/src/pages/NewThreadPage.vue, src/renderer/src/stores/modelStore.ts
  Filter model lists to chat-selectable types (Chat and ImageGeneration); adjust resolution/fallback logic and normalize supportedEndpointTypes/endpointType into renderer model metadata.

• Configuration UI
  src/renderer/src/components/settings/ModelConfigDialog.vue, src/renderer/settings/components/ProviderApiConfig.vue, src/renderer/src/components/icons/ModelIcon.vue
  Added an endpointType selector for new-api, sync logic to derive apiEndpoint/type from the selection, an editable base URL for new-api, a computed provider API key URL for new-api hosts, and the new-api icon registration.

• Localization
  src/renderer/src/i18n/*/settings.json (13 files)
  Added endpointType UI strings (label/description/placeholder/required and options) across languages.

• Tests
  test/main/presenter/llmProviderPresenter/newApiProvider.test.ts, test/renderer/components/ChatStatusBar.test.ts, test/renderer/components/ModelConfigDialog.test.ts
  Added tests for new-api capability routing to delegates, chat-selectable filtering/fallback behavior, and endpointType normalization/save flows in the model config dialog.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant NewApiProvider
    participant ConfigPresenter
    participant OpenAIDel as OpenAI Delegate
    participant AnthropicDel as Anthropic Delegate
    participant GeminiDel as Gemini Delegate
    participant ModelCapabilities

    Client->>NewApiProvider: completions(messages, modelId)
    NewApiProvider->>ConfigPresenter: getModelConfig(modelId)
    ConfigPresenter-->>NewApiProvider: config { endpointType? }
    NewApiProvider->>NewApiProvider: resolveEndpointType(modelId)

    alt endpointType == 'openai' or default
        NewApiProvider->>OpenAIDel: completions(...)
        OpenAIDel->>ModelCapabilities: getThinkingBudgetRange(openai, modelId)
        ModelCapabilities-->>OpenAIDel: range
        OpenAIDel-->>NewApiProvider: LLMResponse
    else endpointType == 'anthropic'
        NewApiProvider->>AnthropicDel: completions(...)
        AnthropicDel->>ModelCapabilities: supportsReasoningCapability(anthropic, modelId)
        ModelCapabilities-->>AnthropicDel: bool
        AnthropicDel-->>NewApiProvider: LLMResponse
    else endpointType == 'gemini'
        NewApiProvider->>GeminiDel: completions(...)
        GeminiDel->>ModelCapabilities: getReasoningPortrait(gemini, modelId)
        ModelCapabilities-->>GeminiDel: portrait
        GeminiDel-->>NewApiProvider: LLMResponse
    end

    NewApiProvider-->>Client: LLMResponse

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • zerob13

Poem

🐰 A rabbit hops through endpoints new,
Routing whispers to the crew—
OpenAI, Anthropic, Gemini in a row,
One little provider, many places to go! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and concisely describes the main change: adding a new provider named 'NewAPI'. It is directly relevant to the extensive changeset which implements the complete NewAPI provider feature.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 15

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/main/presenter/configPresenter/index.ts`:
- Around line 539-579: The code currently hard-codes the literal string
'new-api' when deciding NewAPI behavior in resolveNewApiCapabilityEndpointType
and resolveCapabilityProviderId; instead look up the provider's apiType for the
given providerId and use that apiType when calling
getModelConfig/getProviderModels/getCustomModels and when checking whether to
run NewAPI logic. Concretely: in resolveCapabilityProviderId, query the provider
object (e.g., this.getProvider(providerId) or equivalent) to get
providerApiType; return providerId early if providerApiType !== 'new-api';
otherwise call resolveNewApiCapabilityEndpointType with the resolved
providerApiType (or modify resolveNewApiCapabilityEndpointType to fetch
providerApiType internally) and replace all hard-coded 'new-api' bucket
references in resolveNewApiCapabilityEndpointType with the providerApiType
variable so cloned/custom providers with apiType 'new-api' are handled correctly
while preserving fallback behavior and the call to
resolveNewApiCapabilityProviderId.

In `@src/main/presenter/llmProviderPresenter/providers/newApiProvider.ts`:
- Around line 617-625: In the 'gemini' streaming branch, normalize the messages
before delegating to Gemini by converting the raw messages array with
toGeminiMessages() and passing that result into geminiDelegate.coreStream;
update the case 'gemini' handling (where coreStream(...) is called) to call
toGeminiMessages(messages) (same normalization used by completions() and
summaryTitles()) so unsupported roles/content parts are filtered out on the
streaming path as well.
- Around line 249-260: The inferModelType function currently promotes any model
that lists 'image-generation' in supported to ModelType.ImageGeneration; change
this so a model is classified as ImageGeneration only when its rawModel.type (or
rawModel.id) explicitly indicates an image-only model or when supported includes
'image-generation' and does not include chat-like endpoints (e.g., 'openai');
specifically, update inferModelType to check normalizedRawType for
'image'/'imagegeneration' OR (supported includes 'image-generation' AND
supported does NOT include 'openai' or other non-image endpoint types), and
consider also checking rawModel.id for image-specific identifiers before
returning ModelType.ImageGeneration to avoid promoting dual-mode models to
image-only.

In `@src/renderer/settings/components/ProviderApiConfig.vue`:
- Line 174: The anchor that opens external links in ProviderApiConfig.vue (the
<a> with :href="providerApiKeyUrl" and target="_blank" displaying {{
provider.name }}) should include rel="noopener noreferrer" to prevent
reverse-tabnabbing; update that <a> element to add the rel attribute while
keeping the existing :href and target bindings.

In `@src/renderer/src/components/settings/ModelConfigDialog.vue`:
- Around line 791-800: The computed availableEndpointTypes currently returns the
full NEW_API_ENDPOINT_TYPES when supportedEndpointTypes is absent, which exposes
endpoints for persisted models that only have a single persisted default; change
availableEndpointTypes to: if providerModelMeta.value?.supportedEndpointTypes is
a non-empty array use the filtered isNewApiEndpointType list, else if
providerModelMeta.value?.endpointType exists return
[providerModelMeta.value.endpointType] (validated with isNewApiEndpointType),
and only fall back to [...NEW_API_ENDPOINT_TYPES] when neither
supportedEndpointTypes nor providerModelMeta.endpointType exist (i.e., truly
new/custom models). This touches availableEndpointTypes, providerModelMeta,
supportedEndpointTypes, endpointType, isNewApiEndpointType, and
NEW_API_ENDPOINT_TYPES.

In `@src/renderer/src/i18n/da-DK/settings.json`:
- Around line 402-413: Translate the English text for the endpointType object
into Danish: update the values for the keys endpointType.label,
endpointType.description, endpointType.placeholder, endpointType.required and
each option under endpointType.options (openai, openai-response, anthropic,
gemini, image-generation) so the UI shows Danish strings instead of English;
keep the key names intact and only replace the English string values with
appropriate Danish translations.

In `@src/renderer/src/i18n/fa-IR/settings.json`:
- Around line 456-467: The Persian locale file contains English text for the new
"endpointType" block; translate every string under the endpointType object
(keys: label, description, placeholder, required and each options value:
"openai", "openai-response", "anthropic", "gemini", "image-generation") into
Persian so the fa-IR settings.json is fully localized; update those values in
the endpointType object (e.g., endpointType.label, endpointType.description,
endpointType.placeholder, endpointType.required, and endpointType.options.*)
with the proper Persian translations.

In `@src/renderer/src/i18n/fr-FR/settings.json`:
- Around line 456-467: The "endpointType" locale block is still in English;
translate the values for label, description, placeholder, required and each
option under options ("openai", "openai-response", "anthropic", "gemini",
"image-generation") into French so the fr-FR settings.json is fully localized;
update the strings for the "endpointType" object (label, description,
placeholder, required, and options keys) with appropriate French text while
keeping keys unchanged.

In `@src/renderer/src/i18n/he-IL/settings.json`:
- Around line 456-467: The strings under the endpointType object (keys: label,
description, placeholder, required, and options including openai,
openai-response, anthropic, gemini, image-generation) are still in English in
the he-IL file; replace each English string with the correct Hebrew translations
to avoid mixed-language UI for Hebrew users—update "endpointType.label",
"endpointType.description", "endpointType.placeholder", "endpointType.required"
and each "endpointType.options.*" value with their Hebrew equivalents while
preserving the key names and JSON structure.

In `@src/renderer/src/i18n/ja-JP/settings.json`:
- Around line 456-467: Translate the English strings under the endpointType
object into Japanese: update endpointType.label, endpointType.description,
endpointType.placeholder, endpointType.required and each endpointType.options
key (openai, openai-response, anthropic, gemini, image-generation) with
appropriate Japanese text; keep the same JSON keys and punctuation, preserve
Unicode/encoding, and ensure the resulting values read naturally in Japanese for
the settings UI.

In `@src/renderer/src/i18n/ko-KR/settings.json`:
- Around line 456-467: The endpointType localization block is still in English;
update the "endpointType" object keys (label, description, placeholder,
required) and each options entry ("openai", "openai-response", "anthropic",
"gemini", "image-generation") with Korean translations so the ko-KR
settings.json is fully localized; keep keys unchanged but replace the English
strings with appropriate Korean equivalents for label, description, placeholder,
required, and each option value.

In `@src/renderer/src/i18n/pt-BR/settings.json`:
- Around line 456-467: The endpointType translation entries are still in
English; update the "endpointType" object (keys: "label", "description",
"placeholder", "required", and each "options" value: "openai",
"openai-response", "anthropic", "gemini", "image-generation") to Portuguese so
the pt-BR locale is consistent; replace the English strings with appropriate
Brazilian Portuguese equivalents for the label, description, placeholder,
required message, and each option display name while preserving the JSON keys
and structure.

In `@src/renderer/src/i18n/ru-RU/settings.json`:
- Around line 456-467: The ru-RU locale's endpointType block is still English;
update the keys under "endpointType" (label, description, placeholder, required)
and each "options" entry ("openai", "openai-response", "anthropic", "gemini",
"image-generation") with Russian translations so the settings UI is fully
localized; locate the "endpointType" object in the ru-RU settings.json and
replace the English strings with appropriate Russian text for the label,
description, placeholder, required message and all option names.

In `@src/renderer/src/i18n/zh-HK/settings.json`:
- Around line 456-467: The endpointType translation block is still in English;
update the object keys under endpointType (label, description, placeholder,
required, and each options key: openai, openai-response, anthropic, gemini,
image-generation) to Traditional Chinese (zh-HK) so the UI is fully
localized—replace "Endpoint Type", "Select which upstream protocol New API
should use for this model.", "Select endpoint type", "Endpoint type is
required", and the option values "OpenAI Chat", "OpenAI Responses", "Anthropic
Messages", "Gemini Native", "Image Generation" with appropriate zh-HK
translations while preserving the same JSON keys and structure.

In `@src/renderer/src/i18n/zh-TW/settings.json`:
- Around line 456-467: The endpointType block contains English strings; update
the Traditional Chinese (zh-TW) translations for "endpointType.label",
"endpointType.description", "endpointType.placeholder", "endpointType.required"
and each option key ("openai", "openai-response", "anthropic", "gemini",
"image-generation") so the UI is fully localized—e.g., replace label with
"端點類型", description with "為此模型選擇上游通訊協定(New API)", placeholder with "選擇端點類型",
required with "需選擇端點類型", and option values with appropriate zh-TW equivalents
such as "OpenAI 聊天", "OpenAI 回應", "Anthropic 訊息", "Gemini 原生", "影像生成". Ensure
you update the strings under the endpointType object only.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3e8d8e59-9b45-4f0e-ab84-d61f59c007db

📥 Commits

Reviewing files that changed from the base of the PR and between c5afc64 and ed8427e.

⛔ Files ignored due to path filters (1)
  • src/renderer/src/assets/llm-icons/newapi.svg is excluded by !**/*.svg
📒 Files selected for processing (35)
  • src/main/presenter/configPresenter/index.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/configPresenter/providerModelHelper.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/newApiProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/renderer/settings/components/ProviderApiConfig.vue
  • src/renderer/src/components/chat/ChatStatusBar.vue
  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/i18n/da-DK/settings.json
  • src/renderer/src/i18n/en-US/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/src/i18n/he-IL/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/pages/NewThreadPage.vue
  • src/renderer/src/stores/modelStore.ts
  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • test/main/presenter/llmProviderPresenter/newApiProvider.test.ts
  • test/renderer/components/ChatStatusBar.test.ts
  • test/renderer/components/ModelConfigDialog.test.ts

Comment on lines +539 to +579
private resolveNewApiCapabilityEndpointType(modelId: string): NewApiEndpointType {
  const modelConfig = this.getModelConfig(modelId, 'new-api')
  if (isNewApiEndpointType(modelConfig.endpointType)) {
    return modelConfig.endpointType
  }

  const storedModel =
    this.getProviderModels('new-api').find((model) => model.id === modelId) ??
    this.getCustomModels('new-api').find((model) => model.id === modelId)

  if (storedModel) {
    if (isNewApiEndpointType(storedModel.endpointType)) {
      return storedModel.endpointType
    }

    const supportedEndpointTypes =
      storedModel.supportedEndpointTypes?.filter(isNewApiEndpointType) ?? []
    if (
      storedModel.type === ModelType.ImageGeneration &&
      supportedEndpointTypes.includes('image-generation')
    ) {
      return 'image-generation'
    }
    if (supportedEndpointTypes.length > 0) {
      return supportedEndpointTypes[0]
    }
    if (storedModel.type === ModelType.ImageGeneration) {
      return 'image-generation'
    }
  }

  return 'openai'
}

private resolveCapabilityProviderId(providerId: string, modelId: string): string {
  if (providerId.trim().toLowerCase() !== 'new-api') {
    return providerId
  }

  return resolveNewApiCapabilityProviderId(this.resolveNewApiCapabilityEndpointType(modelId))
}

⚠️ Potential issue | 🟠 Major

Avoid hard-coding 'new-api' in capability resolution.

These helpers only activate for a literal provider id of 'new-api' and they also read config/model metadata from the 'new-api' bucket. The renderer already treats any provider whose apiType is 'new-api' as a NewAPI provider, so cloned/custom NewAPI providers will fall back to the raw custom id here and resolve capabilities from the wrong config store. Reasoning/verbosity support will be wrong for those providers.

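A minimal sketch of the suggested apiType-based gate; the lookup signature and helper names here are hypothetical stand-ins for the presenter's real API, not the repository's actual code:

```typescript
// Hypothetical sketch: gate NewAPI handling on the provider's apiType instead
// of the literal id 'new-api', so cloned/custom NewAPI providers resolve
// capabilities correctly. All names are illustrative stand-ins.
type ProviderLookup = (providerId: string) => { apiType: string } | undefined

function resolveCapabilityProviderId(
  providerId: string,
  modelId: string,
  getProvider: ProviderLookup,
  // Resolves the delegate id; receives providerId so config lookups can hit
  // the correct per-provider bucket rather than a hard-coded 'new-api' one.
  resolveEndpoint: (providerId: string, modelId: string) => string
): string {
  if (getProvider(providerId)?.apiType !== 'new-api') {
    return providerId // non-NewAPI providers keep their own capability id
  }
  return resolveEndpoint(providerId, modelId)
}
```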

Comment on lines +249 to +260
private inferModelType(rawModel: NewApiModelRecord, supported: NewApiEndpointType[]) {
  const normalizedRawType =
    typeof rawModel.type === 'string' ? rawModel.type.trim().toLowerCase() : ''
  const normalizedModelId = typeof rawModel.id === 'string' ? rawModel.id.toLowerCase() : ''

  if (
    normalizedRawType === 'imagegeneration' ||
    normalizedRawType === 'image-generation' ||
    normalizedRawType === 'image' ||
    supported.includes('image-generation')
  ) {
    return ModelType.ImageGeneration

⚠️ Potential issue | 🟠 Major

Don’t treat mixed chat/image endpoint support as an image-only model.

A model can support both 'openai' and 'image-generation' without being an image-generation model by default. Promoting every model that advertises 'image-generation' to ModelType.ImageGeneration will make dual-mode chat models default to the image route, which breaks normal chat routing and downstream filtering.

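One way the tightened classification could look; the function name and type shapes are illustrative stand-ins, assuming the endpoint-type vocabulary used elsewhere in this PR:

```typescript
// Hedged sketch of the tightened check: classify as image-generation only when
// the raw type says so explicitly, or when image-generation is the model's
// sole endpoint family. Dual-mode models stay on the chat route by default.
type EndpointType =
  | 'openai'
  | 'openai-response'
  | 'anthropic'
  | 'gemini'
  | 'image-generation'

function isImageOnlyModel(rawType: string | undefined, supported: EndpointType[]): boolean {
  const normalized = (rawType ?? '').trim().toLowerCase()
  if (['image', 'imagegeneration', 'image-generation'].includes(normalized)) {
    return true
  }
  // Image-only if no chat-like endpoint family is advertised alongside it.
  const chatLike = supported.filter((t) => t !== 'image-generation')
  return supported.includes('image-generation') && chatLike.length === 0
}
```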

Comment on lines +617 to +625
case 'gemini':
  yield* this.geminiDelegate.coreStream(
    messages,
    modelId,
    modelConfig,
    temperature,
    maxTokens,
    tools
  )

⚠️ Potential issue | 🟠 Major

Normalize Gemini messages on the streaming path too.

The non-streaming Gemini branches already call toGeminiMessages(), but coreStream() forwards the raw messages array. That makes streaming behavior diverge from completions()/summaryTitles() and can pass unsupported roles or content parts into GeminiProvider.

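A minimal sketch of the streaming-path normalization being suggested; toGeminiMessages here is a simplified stand-in for the project's real helper, and the filter logic is assumed:

```typescript
// Sketch of the suggested fix: route messages through the same normalization
// the non-streaming paths use before delegating to Gemini's coreStream.
type ChatMessage = { role: string; content: string }

function toGeminiMessages(messages: ChatMessage[]): ChatMessage[] {
  // Drop roles Gemini does not accept (illustrative filter only).
  const allowed = new Set(['system', 'user', 'assistant'])
  return messages.filter((m) => allowed.has(m.role))
}

async function* geminiStreamSketch(
  messages: ChatMessage[],
  coreStream: (msgs: ChatMessage[]) => AsyncIterable<string>
): AsyncIterable<string> {
  // Streaming now normalizes exactly like completions()/summaryTitles().
  yield* coreStream(toGeminiMessages(messages))
}
```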

-<a :href="providerWebsites?.apiKey" target="_blank" class="text-primary">{{
-  provider.name
-}}</a>
+<a :href="providerApiKeyUrl" target="_blank" class="text-primary">{{ provider.name }}</a>

⚠️ Potential issue | 🟡 Minor

Harden external link opened with target="_blank"

Add rel="noopener noreferrer" to prevent reverse-tabnabbing when opening the API key page.

🔒 Suggested fix
-<a :href="providerApiKeyUrl" target="_blank" class="text-primary">{{ provider.name }}</a>
+<a :href="providerApiKeyUrl" target="_blank" rel="noopener noreferrer" class="text-primary">
+  {{ provider.name }}
+</a>

Comment on lines +791 to +800
const availableEndpointTypes = computed<NewApiEndpointType[]>(() => {
  const supportedEndpointTypes = providerModelMeta.value?.supportedEndpointTypes
  if (Array.isArray(supportedEndpointTypes) && supportedEndpointTypes.length > 0) {
    const normalizedEndpointTypes = supportedEndpointTypes.filter(isNewApiEndpointType)
    if (normalizedEndpointTypes.length > 0) {
      return normalizedEndpointTypes
    }
  }

  return [...NEW_API_ENDPOINT_TYPES]

⚠️ Potential issue | 🟠 Major

Don’t expose every endpoint when the model only has a persisted default.

If supportedEndpointTypes is absent, this offers the full NEW_API_ENDPOINT_TYPES list even for existing provider models. Older/newly-fetched entries can still carry a single endpointType, so the dialog will let users save delegates the model never advertised and NewApiProvider may route those requests to an incompatible backend. Fall back to [providerModelMeta.endpointType] when that exists, and reserve the full list for brand-new custom models.

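The suggested fallback order could look like this; the meta shape is a simplified assumption, and NEW_API_ENDPOINT_TYPES mirrors the option set shown elsewhere in this PR:

```typescript
// Sketch of the suggested fallback order: advertised list first, then the
// single persisted default, and the full set only for brand-new models.
const NEW_API_ENDPOINT_TYPES = [
  'openai',
  'openai-response',
  'anthropic',
  'gemini',
  'image-generation'
] as const
type NewApiEndpointType = (typeof NEW_API_ENDPOINT_TYPES)[number]

const isNewApiEndpointType = (v: unknown): v is NewApiEndpointType =>
  typeof v === 'string' && (NEW_API_ENDPOINT_TYPES as readonly string[]).includes(v)

function availableEndpointTypes(meta?: {
  supportedEndpointTypes?: unknown[]
  endpointType?: unknown
}): NewApiEndpointType[] {
  // 1) Prefer the endpoint types the model explicitly advertises.
  const supported = meta?.supportedEndpointTypes?.filter(isNewApiEndpointType) ?? []
  if (supported.length > 0) return supported

  // 2) Persisted models with a single stored default only offer that default.
  const single = meta?.endpointType
  if (isNewApiEndpointType(single)) return [single]

  // 3) Truly new/custom models may pick any endpoint type.
  return [...NEW_API_ENDPOINT_TYPES]
}
```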

Comment on lines +456 to +467
"endpointType": {
  "label": "Endpoint Type",
  "description": "Select which upstream protocol New API should use for this model.",
  "placeholder": "Select endpoint type",
  "required": "Endpoint type is required",
  "options": {
    "openai": "OpenAI Chat",
    "openai-response": "OpenAI Responses",
    "anthropic": "Anthropic Messages",
    "gemini": "Gemini Native",
    "image-generation": "Image Generation"
  }

⚠️ Potential issue | 🟡 Minor

Korean locale block is not localized yet.

The new endpointType labels/descriptions are English, so users on Korean locale will see mixed UI language.

💡 Suggested localized replacement
       "endpointType": {
-        "label": "Endpoint Type",
-        "description": "Select which upstream protocol New API should use for this model.",
-        "placeholder": "Select endpoint type",
-        "required": "Endpoint type is required",
+        "label": "엔드포인트 유형",
+        "description": "이 모델에 대해 New API가 사용할 업스트림 프로토콜을 선택하세요.",
+        "placeholder": "엔드포인트 유형 선택",
+        "required": "엔드포인트 유형은 필수입니다",
         "options": {
-          "openai": "OpenAI Chat",
+          "openai": "OpenAI 채팅",
           "openai-response": "OpenAI Responses",
           "anthropic": "Anthropic Messages",
-          "gemini": "Gemini Native",
-          "image-generation": "Image Generation"
+          "gemini": "Gemini 네이티브",
+          "image-generation": "이미지 생성"
         }
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/src/i18n/ko-KR/settings.json` around lines 456 - 467, The
endpointType localization block is still in English; update the "endpointType"
object keys (label, description, placeholder, required) and each options entry
("openai", "openai-response", "anthropic", "gemini", "image-generation") with
Korean translations so the ko-KR settings.json is fully localized; keep keys
unchanged but replace the English strings with appropriate Korean equivalents
for label, description, placeholder, required, and each option value.

Comment on lines +456 to +467
"endpointType": {
"label": "Endpoint Type",
"description": "Select which upstream protocol New API should use for this model.",
"placeholder": "Select endpoint type",
"required": "Endpoint type is required",
"options": {
"openai": "OpenAI Chat",
"openai-response": "OpenAI Responses",
"anthropic": "Anthropic Messages",
"gemini": "Gemini Native",
"image-generation": "Image Generation"
}

⚠️ Potential issue | 🟡 Minor

Translate endpointType copy to pt-BR to avoid mixed-language UI.

Line 457 through Line 466 are English-only in the Portuguese locale file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/src/i18n/pt-BR/settings.json` around lines 456 - 467, The
endpointType translation entries are still in English; update the "endpointType"
object (keys: "label", "description", "placeholder", "required", and each
"options" value: "openai", "openai-response", "anthropic", "gemini",
"image-generation") to Portuguese so the pt-BR locale is consistent; replace the
English strings with appropriate Brazilian Portuguese equivalents for the label,
description, placeholder, required message, and each option display name while
preserving the JSON keys and structure.

Comment on lines +456 to +467
"endpointType": {
"label": "Endpoint Type",
"description": "Select which upstream protocol New API should use for this model.",
"placeholder": "Select endpoint type",
"required": "Endpoint type is required",
"options": {
"openai": "OpenAI Chat",
"openai-response": "OpenAI Responses",
"anthropic": "Anthropic Messages",
"gemini": "Gemini Native",
"image-generation": "Image Generation"
}

⚠️ Potential issue | 🟡 Minor

Please translate the new endpointType block to zh-HK.

Line 457 through Line 466 are currently English, which will create mixed-language UI in this locale.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/src/i18n/zh-HK/settings.json` around lines 456 - 467, The
endpointType translation block is still in English; update the object keys under
endpointType (label, description, placeholder, required, and each options key:
openai, openai-response, anthropic, gemini, image-generation) to Traditional
Chinese (zh-HK) so the UI is fully localized—replace "Endpoint Type", "Select
which upstream protocol New API should use for this model.", "Select endpoint
type", "Endpoint type is required", and the option values "OpenAI Chat", "OpenAI
Responses", "Anthropic Messages", "Gemini Native", "Image Generation" with
appropriate zh-HK translations while preserving the same JSON keys and
structure.

@zerob13 zerob13 merged commit fe3c8b1 into dev Apr 9, 2026
3 checks passed