AI setting custom #1586
⚠️ Caution: Review failed. The pull request is closed.

📝 Walkthrough

Refactors provider definitions from an object-based `baseUrl` structure (`{ value, immutable }`) to plain strings and adds a new "custom" provider, letting users configure their own OpenAI- or Deepgram-compatible endpoints.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant UI as ProviderCard (config)
    participant Form as FormState
    participant Context as ProviderContext
    participant Store as aiProviderSchema

    Note right of UI: ProviderCard driven by config (id, icon, displayName, baseUrl, apiKey)
    User->>UI: open provider panel
    UI->>Form: set defaults (base_url = config.baseUrl ?? "", api_key = existing or "")
    alt config.baseUrl truthy
        UI->>UI: show Advanced section with editable base_url
    else config.baseUrl falsy
        UI->>UI: show base_url input in main form
    end
    User->>Form: edit fields
    Form->>Context: update selected provider id = config.id
    Form->>Store: validate (aiProviderSchema)
    Store-->>Form: validation result (api_key required if base_url startsWith https:)
    Form-->>UI: show errors / submit
```
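The validation rule in the final step can be read as the following minimal sketch, assuming a zod-style schema; the real `aiProviderSchema` lives in `apps/desktop/src/store/tinybase/internal.ts` and its exact shape is not reproduced in this review:

```typescript
import { z } from "zod";

// Hypothetical reconstruction of the rule "api_key required if
// base_url startsWith https:" from the diagram above.
const aiProviderSchema = z
  .object({
    base_url: z.string(),
    api_key: z.string().optional(),
  })
  .superRefine((value, ctx) => {
    // Remote (https) endpoints need credentials; local endpoints
    // such as http://localhost:11434 do not.
    if (value.base_url.startsWith("https:") && !value.api_key) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        path: ["api_key"],
        message: "API key is required for https endpoints",
      });
    }
  });

// Usage: aiProviderSchema.parse({ base_url: "https://api.openai.com/v1", api_key: "sk-..." })
```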
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30–45 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning, 2 inconclusive)

📜 Recent review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- apps/desktop/src/components/settings/ai/llm/configure.tsx (3 hunks)
- apps/desktop/src/components/settings/ai/llm/shared.tsx (2 hunks)
- apps/desktop/src/components/settings/ai/shared/index.tsx (1 hunks)
- apps/desktop/src/components/settings/ai/stt/configure.tsx (4 hunks)
- apps/desktop/src/components/settings/ai/stt/shared.tsx (2 hunks)
- apps/desktop/src/hooks/useLLMConnection.ts (1 hunks)
- apps/desktop/src/store/tinybase/internal.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
apps/desktop/**/*.{tsx,jsx}
📄 CodeRabbit inference engine (apps/desktop/.cursor/rules/style.mdc)
apps/desktop/**/*.{tsx,jsx}:
- When there are many Tailwind classNames with conditional logic, use the `cn` utility imported as `import { cn } from "@hypr/utils"`
- When using `cn` for Tailwind classNames, always pass an array
- Group Tailwind classNames by logical sections when using `cn` (split array items by grouping); see the sketch after the file list below
Files:
- apps/desktop/src/components/settings/ai/shared/index.tsx
- apps/desktop/src/components/settings/ai/llm/configure.tsx
- apps/desktop/src/components/settings/ai/stt/configure.tsx
- apps/desktop/src/components/settings/ai/stt/shared.tsx
- apps/desktop/src/components/settings/ai/llm/shared.tsx
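As an illustration of the rule above, a hypothetical component (not from this PR) might group its Tailwind classes like this:

```tsx
import { cn } from "@hypr/utils";

// Hypothetical example: cn receives a single array, with items
// grouped by logical section (layout / typography / state).
function StatusBadge({ active }: { active: boolean }) {
  return (
    <span
      className={cn([
        "inline-flex items-center rounded-full px-2 py-0.5",
        "text-xs font-medium",
        active ? "bg-green-100 text-green-700" : "bg-gray-100 text-gray-500",
      ])}
    >
      {active ? "Connected" : "Disconnected"}
    </span>
  );
}
```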
🧬 Code graph analysis (3)
apps/desktop/src/components/settings/ai/llm/configure.tsx (2)
- apps/desktop/src/components/settings/ai/llm/shared.tsx (1): `PROVIDERS` (6-63)
- apps/desktop/src/components/settings/ai/shared/index.tsx (2): `useProvider` (9-21), `FormField` (23-68)

apps/desktop/src/components/settings/ai/stt/configure.tsx (2)
- crates/transcribe-deepgram/src/service.rs (1): `config` (34-37)
- apps/desktop/src/components/settings/ai/shared/index.tsx (1): `FormField` (23-68)

apps/desktop/src/components/settings/ai/stt/shared.tsx (1)
- plugins/local-stt/js/bindings.gen.ts (1): `SupportedSttModel` (98-98)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: zizmor
- GitHub Check: ci (macos, macos-14)
🔇 Additional comments (15)
apps/desktop/src/hooks/useLLMConnection.ts (1)

56-56: LGTM! The change correctly updates the `baseUrl` retrieval to match the new string-based provider schema. The fallback logic properly handles the new "custom" provider (where `baseUrl` is `undefined`) by falling back to the user-provided `base_url` from config, or an empty string, which is then validated on lines 59-61.
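A condensed sketch of that fallback, with approximate names (the hook itself is not reproduced in this review):

```typescript
// Predefined providers carry a plain-string baseUrl; the "custom"
// provider has baseUrl: undefined, so we fall back to the value the
// user entered in config, then to "" (rejected by validation below).
const provider = PROVIDERS.find((p) => p.id === config?.provider);
const baseUrl = provider?.baseUrl ?? config?.base_url ?? "";

if (!baseUrl) {
  return null; // mirrors the validation on lines 59-61 of the hook
}
```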
apps/desktop/src/components/settings/ai/shared/index.tsx (1)

43-43: LGTM! The removal of the `hidden` prop simplifies the component. Visibility control is now handled at a higher level through conditional rendering (e.g., `{!config.baseUrl && <form.Field ...>}` in the parent components), which is a cleaner approach.

apps/desktop/src/components/settings/ai/llm/shared.tsx (3)
1-1: LGTM! The `Icon` import is added to support the new "custom" provider's icon rendering.

13-53: LGTM! The migration from the object-based `baseUrl` structure (`{ value, immutable }`) to a plain string is consistent across all providers and aligns with the PR's goal of simplifying provider configuration.

55-62: LGTM! The new "custom" provider entry is well-structured:
- `baseUrl: undefined` to require user input
- `apiKey: true` to indicate the API key requirement
- Appropriate icon and display name

This enables users to configure custom OpenAI-compatible endpoints.
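Roughly, the new shape looks like the sketch below (an approximation, not the verbatim `llm/shared.tsx`; ids and URLs are illustrative):

```typescript
// Before: baseUrl: { value: "https://api.openai.com/v1", immutable: true }
// After:  baseUrl is a plain string, or undefined for the custom entry.
const PROVIDERS = [
  {
    id: "openai",
    displayName: "OpenAI",
    icon: "openai",
    baseUrl: "https://api.openai.com/v1",
    apiKey: true,
  },
  // ...other predefined providers (lines 13-53)...
  {
    id: "custom",
    displayName: "Custom",
    icon: "custom",
    baseUrl: undefined, // forces the user to supply a base URL
    apiKey: true,       // custom endpoints still take an API key
  },
];
```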
apps/desktop/src/components/settings/ai/stt/configure.tsx (3)
55-55: LGTM! The default value now properly falls back to `config.baseUrl`, which provides sensible defaults for predefined providers while allowing the "custom" provider to start with an empty string.
99-138: LGTM! Well-designed conditional UI pattern. The conditional rendering creates two distinct UX flows:
- Custom provider (`!config.baseUrl`): Base URL field shown upfront (required input)
- Predefined providers (`config.baseUrl`): Base URL pre-configured, editable in the Advanced section (optional override)

This is intuitive and follows good UX principles (a sketch of this pattern appears under the llm/configure.tsx comments below). As per coding guidelines, the `cn` usage properly passes an array with logical grouping.
391-396: No issues found; the messaging is intentionally different based on backend API requirements. The STT and LLM custom provider messages reflect accurate, distinct backend implementations:
- STT (Deepgram): uses the `deepgram::Deepgram` SDK, which requires "Deepgram compatible" endpoints
- LLM (OpenAI): uses the `async_openai` SDK with custom `api_base` support, which requires "OpenAI compatible" endpoints

Both backends support custom base URLs through their respective SDKs. The messaging differences are intentional and correct, not inconsistencies requiring updates.

Likely an incorrect or invalid review comment.
apps/desktop/src/components/settings/ai/llm/configure.tsx (5)
19-28: LGTM! The refactoring to a single `config` prop improves maintainability and aligns with the config-driven provider model. This reduces prop drilling and makes the component more cohesive.
35-35: LGTM! The default `base_url` properly uses `config.baseUrl ?? ""`, which provides sensible defaults for predefined providers while allowing the "custom" provider to start with an empty string.
60-66: LGTM! The UI elements now correctly derive from the `config` object properties (`icon`, `displayName`, `badge`), maintaining consistency with the config-driven approach. As per coding guidelines, the `cn` usage properly passes an array.
78-119: LGTM! Excellent conditional rendering logic. The multi-tiered conditional rendering is well-designed:
- Base URL field (lines 78-88): shown upfront when `!config.baseUrl` (custom provider)
- API Key field (lines 89-101): shown only when `config.apiKey` is true (excludes Ollama, LM Studio, Hyprnote)
- Advanced section (lines 102-119): shown when `config.baseUrl` exists, allowing users to override predefined Base URLs

This creates an intuitive UX that adapts to each provider's requirements, as sketched below. As per coding guidelines, `cn` is not applicable here as there are no conditional Tailwind classes.
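A self-contained sketch of that three-tier logic; `ProviderConfig`, `Field`, and `ProviderForm` are simplified stand-ins, not the actual `configure.tsx` implementation:

```tsx
type ProviderConfig = {
  id: string;
  displayName: string;
  baseUrl?: string; // undefined => "custom" provider
  apiKey: boolean;  // false for Ollama, LM Studio, Hyprnote
};

function Field({ name, label }: { name: string; label: string }) {
  return (
    <label>
      {label} <input name={name} />
    </label>
  );
}

function ProviderForm({ config }: { config: ProviderConfig }) {
  return (
    <form>
      {/* Tier 1: custom provider exposes Base URL upfront */}
      {!config.baseUrl && <Field name="base_url" label="Base URL" />}

      {/* Tier 2: API key only where the provider requires one */}
      {config.apiKey && <Field name="api_key" label="API Key" />}

      {/* Tier 3: predefined providers can override Base URL in Advanced */}
      {config.baseUrl && (
        <details>
          <summary>Advanced</summary>
          <Field name="base_url" label="Base URL" />
        </details>
      )}
    </form>
  );
}
```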
129-130: LGTM! The custom provider context message appropriately indicates "OpenAI compatible" endpoint support, which is accurate for LLM providers.
apps/desktop/src/components/settings/ai/stt/shared.tsx (2)
1-1: LGTM! The `Icon` import is added to support the new "custom" provider's icon rendering.
30-65: LGTM! The changes are consistent with the LLM provider refactoring:
- `baseUrl` migrated from an object structure to a plain string across all providers
- New "custom" provider added with `baseUrl: undefined` and an empty `models` array

The empty `models` array for the custom provider is appropriate since custom endpoints would have their own model lists.
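For illustration, the custom STT entry would look roughly like this; the field names and URLs are assumptions based on the review, not the verbatim `stt/shared.tsx`:

```typescript
// Assumed shape; the real entries live in stt/shared.tsx (30-65).
const STT_PROVIDERS = [
  {
    id: "deepgram",
    displayName: "Deepgram",
    baseUrl: "https://api.deepgram.com", // plain string after the refactor
    models: ["nova-2"],                  // illustrative model id
  },
  {
    id: "custom",
    displayName: "Custom",
    baseUrl: undefined, // user supplies a Deepgram-compatible endpoint
    models: [],         // custom endpoints expose their own model lists
  },
];
```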
No description provided.