input-modality support in llm listing functions #1746
Return partition results plus metadata map

Stop doing extractMetadata during partitioning and instead return the partitioned results alongside an extracted metadata map. This keeps partition focused on splitting data and moves metadata extraction to a separate map, simplifying responsibilities and allowing callers to access both partitioned items and their metadata. The change wraps partition output in an object spread so additional fields (like the metadata map) can be returned without altering partition logic.
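The object-spread pattern this commit describes can be sketched as follows. The helper names mirror those discussed in the review (`partition`, `extractMetadataMap`), but the signatures here are illustrative assumptions, not the exact code in `list-common.ts`:

```typescript
type InputModality = "text" | "image";
type ModelMetadata = { input_modalities: InputModality[] };

// Minimal partition: split raw items into accepted and ignored model ids.
function partition<T>(
  items: T[],
  isSupported: (item: T) => boolean,
  toId: (item: T) => string,
): { models: string[]; ignored: string[] } {
  const models: string[] = [];
  const ignored: string[] = [];
  for (const item of items) {
    (isSupported(item) ? models : ignored).push(toId(item));
  }
  return { models, ignored };
}

// Metadata extraction lives outside partition, as the commit describes.
function extractMetadataMap<T>(
  items: T[],
  extractId: (item: T) => string,
  extractMetadata: (item: T) => ModelMetadata,
): Record<string, ModelMetadata> {
  const map: Record<string, ModelMetadata> = {};
  for (const item of items) {
    map[extractId(item)] = extractMetadata(item);
  }
  return map;
}

// Callers spread the partition output and attach the metadata map:
type Raw = { id: string; vision: boolean };
const raw: Raw[] = [
  { id: "gpt-x", vision: true },
  { id: "text-only", vision: false },
];
const result = {
  ...partition(raw, () => true, (m) => m.id),
  metadata: extractMetadataMap(
    raw,
    (m) => m.id,
    (m): ModelMetadata => ({
      input_modalities: m.vision ? ["text", "image"] : ["text"],
    }),
  ),
};
```

Because the metadata map is a sibling field rather than something computed inside `partition`, either half can change independently.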
✅ Deploy Preview for hyprnote ready!
📝 Walkthrough

Adds per-model metadata (input modalities) to AI model listing results. Introduces `InputModality` and `ModelMetadata` types plus an `extractMetadataMap` helper in `list-common`, and threads a `metadata` map through each provider's listing result.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    autonumber
    participant Caller as Caller/UI
    participant Provider as Provider Module (list-*)
    participant Common as list-common
    Caller->>Provider: request listModels(config)
    Provider->>Provider: fetch raw models\npartition -> { models, ignored }
    Provider->>Common: extractMetadataMap(items, extractId, extractMetadata)
    Common->>Common: build Record<string, ModelMetadata>
    Common-->>Provider: metadata map
    Provider->>Provider: combine -> { ...partition, metadata }
    Provider-->>Caller: { models, ignored, metadata }
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 0
🧹 Nitpick comments (4)
apps/desktop/src/components/settings/ai/shared/list-openrouter.ts (1)
71-85: Input modality metadata wiring looks solid; consider default for unknown modalities
`extractMetadataMap` plus `getInputModalities` is wired correctly and keeps the metadata shape constrained to `"text" | "image"`. One nuance: `supportsTextInput` treats a missing `architecture.input_modalities` as text-capable, while `getInputModalities` returns `[]` in that case, causing `hasMetadata` to drop the entry entirely. If downstream callers should assume "text" when modalities are unknown (to align with `supportsTextInput`), consider defaulting `getInputModalities` to `["text"]` for the non-array/absent case, or document that missing metadata means "text-only/unknown" so consumers can apply that default themselves.

Also applies to: 88-99
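A sketch of the suggested default, assuming `architecture.input_modalities` is an optional string array on the OpenRouter model payload (the interface below is a simplified stand-in, not the project's actual type):

```typescript
type InputModality = "text" | "image";

interface OpenRouterModel {
  id: string;
  architecture?: { input_modalities?: string[] };
}

// Hypothetical variant of getInputModalities: treat absent/non-array
// modalities as text-capable, matching supportsTextInput's assumption,
// instead of returning [] and having hasMetadata drop the entry.
function getInputModalities(model: OpenRouterModel): InputModality[] {
  const raw = model.architecture?.input_modalities;
  if (!Array.isArray(raw)) {
    return ["text"]; // unknown => assume text-only rather than no metadata
  }
  return raw.filter((m): m is InputModality => m === "text" || m === "image");
}
```

With this default, every OpenRouter model gets a metadata entry, and "unknown" collapses to "text-only" in one place instead of in every consumer.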
apps/desktop/src/components/settings/ai/shared/list-anthropic.ts (1)
44-61: Anthropic metadata is structurally correct; verify that all models are truly multimodal

The new `metadata: extractMetadataMap(...)` wiring is consistent with other providers and the helper `getInputModalities` is simple and type-safe. However, returning `["text", "image"]` for every Anthropic model id may over-declare capabilities if any models are text-only. Please verify against the latest Anthropic model docs/API; if there are text-only variants, consider a per-id mapping (or a default of `["text"]` with an explicit allowlist for image-capable models) to avoid enabling image input in the UI where it will fail.

Also applies to: 68-70
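One way to express the per-id mapping suggested above. The prefixes are purely illustrative placeholders, not taken from Anthropic's documentation:

```typescript
type InputModality = "text" | "image";

// Hypothetical allowlist: default every model to ["text"] and opt known
// image-capable families in by id prefix. Prefixes here are examples only
// and would need to be checked against Anthropic's model docs.
const IMAGE_CAPABLE_PREFIXES = ["claude-3", "claude-sonnet", "claude-opus"];

function getInputModalities(modelId: string): InputModality[] {
  const imageCapable = IMAGE_CAPABLE_PREFIXES.some((prefix) =>
    modelId.startsWith(prefix),
  );
  return imageCapable ? ["text", "image"] : ["text"];
}
```

The failure mode of an allowlist is conservative: an unlisted multimodal model loses image input in the UI, rather than a text-only model being offered image input that will fail at request time.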
apps/desktop/src/components/settings/ai/shared/list-openai.ts (1)
33-50: OpenAI metadata wiring is consistent; confirm image support assumptions

Both `listOpenAIModels` and `listGenericModels` now enrich results with `metadata` using `extractMetadataMap`, and the shapes (`["text", "image"]` vs `["text"]`) line up with the shared `ModelMetadata`/`InputModality` types. The remaining question is behavioral: `listOpenAIModels` marks every returned model as `["text", "image"]`. If the `/models` endpoint can return text-only or otherwise non-vision models, this may cause the UI or callers to offer image input where it is not actually supported. It might be safer to default to `["text"]` and whitelist known vision-capable IDs, or derive capabilities from model naming/docs.

Also applies to: 68-85
apps/desktop/src/components/settings/ai/shared/list-common.ts (1)
63-71: hasMetadata + extractMetadataMap enforce non-empty metadata; ensure callers and providers align
`hasMetadata` and `extractMetadataMap` correctly avoid storing entries where `input_modalities` is missing or an empty array, so the returned `metadata` map is intentionally sparse. That means providers need to return a non-empty `input_modalities` array (e.g., `["text"]`) to create an entry, and consumers must treat missing keys as "no metadata / fallback behavior" rather than an error. If you later extend `ModelMetadata` with more fields, remember to update `hasMetadata` accordingly so valid metadata isn't filtered out just because `input_modalities` is empty.

Also applies to: 73-93, 95-111
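The sparse-map behavior described here can be sketched as follows. This is an assumed reconstruction of the helpers at the referenced lines of `list-common.ts`; the real implementation may differ in detail:

```typescript
type InputModality = "text" | "image";
type ModelMetadata = { input_modalities: InputModality[] };

// Only metadata with a non-empty input_modalities array counts.
function hasMetadata(meta: ModelMetadata | undefined): meta is ModelMetadata {
  return (
    !!meta &&
    Array.isArray(meta.input_modalities) &&
    meta.input_modalities.length > 0
  );
}

// Build a sparse map: entries without real metadata are simply omitted.
function extractMetadataMap<T>(
  items: T[],
  extractId: (item: T) => string,
  extractMetadata: (item: T) => ModelMetadata | undefined,
): Record<string, ModelMetadata> {
  const map: Record<string, ModelMetadata> = {};
  for (const item of items) {
    const meta = extractMetadata(item);
    if (hasMetadata(meta)) {
      map[extractId(item)] = meta;
    }
  }
  return map;
}

// Consumers must treat a missing key as "no metadata", not an error:
const map = extractMetadataMap(
  [
    { id: "a", mods: ["text"] as InputModality[] },
    { id: "b", mods: [] as InputModality[] }, // empty => dropped
  ],
  (m) => m.id,
  (m) => ({ input_modalities: m.mods }),
);
```

Here `"b"` never appears in the map, which is exactly why a provider returning `[]` for unknown modalities ends up with no entry at all.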
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- `apps/desktop/src/components/settings/ai/shared/list-anthropic.ts` (3 hunks)
- `apps/desktop/src/components/settings/ai/shared/list-common.ts` (2 hunks)
- `apps/desktop/src/components/settings/ai/shared/list-google.ts` (2 hunks)
- `apps/desktop/src/components/settings/ai/shared/list-lmstudio.ts` (4 hunks)
- `apps/desktop/src/components/settings/ai/shared/list-ollama.ts` (3 hunks)
- `apps/desktop/src/components/settings/ai/shared/list-openai.ts` (5 hunks)
- `apps/desktop/src/components/settings/ai/shared/list-openrouter.ts` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
apps/desktop/src/components/settings/ai/shared/list-ollama.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (1)
`ModelMetadata` (16-18)
apps/desktop/src/components/settings/ai/shared/list-google.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (5)
`partition` (73-93), `extractMetadataMap` (95-111), `REQUEST_TIMEOUT` (31-31), `DEFAULT_RESULT` (26-30), `InputModality` (14-14)
apps/desktop/src/components/settings/ai/shared/list-openai.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (2)
`partition` (73-93), `extractMetadataMap` (95-111)
apps/desktop/src/components/settings/ai/shared/list-anthropic.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (5)
`partition` (73-93), `extractMetadataMap` (95-111), `REQUEST_TIMEOUT` (31-31), `DEFAULT_RESULT` (26-30), `InputModality` (14-14)
apps/desktop/src/components/settings/ai/shared/list-lmstudio.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (1)
`ModelMetadata` (16-18)
apps/desktop/src/components/settings/ai/shared/list-openrouter.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (5)
`partition` (73-93), `extractMetadataMap` (95-111), `REQUEST_TIMEOUT` (31-31), `DEFAULT_RESULT` (26-30), `InputModality` (14-14)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Redirect rules - hyprnote
- GitHub Check: Header rules - hyprnote
- GitHub Check: Pages changed - hyprnote
- GitHub Check: fmt
🔇 Additional comments (4)
apps/desktop/src/components/settings/ai/shared/list-google.ts (1)
55-64: Google model metadata enrichment and Gemini heuristic look correct; keep an eye on naming

The metadata enrichment via `extractMetadataMap(models, extractModelId, ...)` is consistent with other providers and correctly constrains `input_modalities` to `["text"]` or `["text", "image"]`. The `getInputModalities` heuristic based on `/gemini/` in the model id is reasonable today, but it does assume that all and only multimodal models are branded "gemini". Please double-check this against the current Google models API and be prepared to update the heuristic if new multimodal families use different prefixes or if text-only "gemini" variants appear.

Also applies to: 70-74
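The heuristic reads roughly like this sketch (an assumption based on the review's description; the real `getInputModalities` in `list-google.ts` may differ):

```typescript
type InputModality = "text" | "image";

// Name-based heuristic: ids matching /gemini/ are treated as multimodal,
// everything else as text-only. Brittle by design, as the review notes.
function getInputModalities(modelId: string): InputModality[] {
  return /gemini/.test(modelId) ? ["text", "image"] : ["text"];
}
```

Because the branding is the only signal, a future text-only "gemini" variant or a differently named multimodal family silently gets the wrong modalities, which is why the review suggests re-checking against the models API.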
apps/desktop/src/components/settings/ai/shared/list-ollama.ts (1)
89-127: Ollama metadata integration is consistent and conservative

Adding a `metadata: Record<string, ModelMetadata>` alongside `models`/`ignored` is consistent with the shared `ListModelsResult` shape, and defaulting supported Ollama models to `{ input_modalities: ["text"] }` is a sensible conservative choice given the lack of modality introspection. Just ensure downstream consumers treat `metadata` as partial (present only for supported models) and handle missing entries gracefully.

apps/desktop/src/components/settings/ai/shared/list-lmstudio.ts (1)
72-79: LM Studio metadata addition matches shared contract

The added `metadata` map in `processLMStudioModels` is correctly typed via `ModelMetadata` and conservatively marks accepted models as `["text"]`, which aligns with the TODO about lacking modality introspection. Returning `{ models, ignored, metadata }` keeps this in sync with `ListModelsResult` and other providers. As with Ollama, just ensure any consumer code tolerates missing metadata entries for ignored or unprocessed models.

Also applies to: 94-98, 114-115
apps/desktop/src/components/settings/ai/shared/list-common.ts (1)
14-24: New metadata types and ListModelsResult shape are coherent

Defining `InputModality`, `ModelMetadata`, and extending `ListModelsResult`/`DEFAULT_RESULT` with a `metadata` map gives a clear, shared contract for per-model capabilities and matches how the provider-specific list functions are now structured. This should make it straightforward to attach additional metadata fields later if needed without breaking existing callers.

Also applies to: 26-30
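Putting the pieces together, the shared contract looks roughly like this. `ListModelsResult` matches the definition quoted elsewhere in this review; `IgnoredModel`'s shape is an assumption:

```typescript
export type InputModality = "text" | "image";

export type ModelMetadata = {
  input_modalities: InputModality[];
};

// Assumed shape; the real IgnoredModel in list-common.ts may carry more fields.
export type IgnoredModel = { id: string; reason?: string };

export type ListModelsResult = {
  models: string[];
  ignored: IgnoredModel[];
  metadata: Record<string, ModelMetadata>;
};

// Shared empty result, so callers always see all three fields.
export const DEFAULT_RESULT: ListModelsResult = {
  models: [],
  ignored: [],
  metadata: {},
};
```

Keeping `metadata` non-optional in the result type is what later forces the `select.tsx` fallback to include `metadata: {}` rather than silently omitting it.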
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src/components/settings/ai/llm/select.tsx (1)
149-164: Add `metadata` field to the `listModels` fallback in `apps/desktop/src/components/settings/ai/llm/select.tsx` line 151
{ models: [], ignored: [] }, butListModelsResultrequires ametadatafield. The type is defined as:export type ListModelsResult = { models: string[]; ignored: IgnoredModel[]; metadata: Record<string, ModelMetadata>; };The codebase already establishes the correct pattern in
DEFAULT_RESULTwithmetadata: {}. Apply the proposed diff to match:const listModels = () => { if (!maybeListModels || locked) { return { models: [], ignored: [], metadata: {}, }; } return maybeListModels(); };This resolves the TS2322 type error at the
listModels={listModels}prop assignment.
🧹 Nitpick comments (1)
apps/desktop/src/components/settings/ai/llm/select.tsx (1)
185-188: Align `useConfiguredMapping` return type with `undefined` sentinel (`null` no longer used)

`useConfiguredMapping` is typed and cast as returning `Record<string, null | (() => Promise<ListModelsResult>)>`, but all "disabled/unavailable" paths now return `undefined`, not `null`. The cast hides this mismatch, but it's a potential source of confusion.

Consider updating the signature and cast to use `undefined` instead of `null`:

```diff
-function useConfiguredMapping(): Record<
-  string,
-  null | (() => Promise<ListModelsResult>)
-> {
+function useConfiguredMapping(): Record<
+  string,
+  (() => Promise<ListModelsResult>) | undefined
+> {
 …
-  ) as Record<string, null | (() => Promise<ListModelsResult>)>;
+  ) as Record<string, (() => Promise<ListModelsResult>) | undefined>;
```

This matches the actual values you return in the `requiresPro`, unauthenticated hyprnote, missing `base_url`, and missing `api_key` branches, and keeps the sentinel semantics (`!configured`) but with accurate typing.

Also applies to: 196-200, 203-206, 221-225, 227-229, 263-266
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`apps/desktop/src/components/settings/ai/llm/select.tsx` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/desktop/src/components/settings/ai/llm/select.tsx (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (2)
`ListModelsResult` (20-24), `InputModality` (14-14)
🪛 GitHub Actions: .github/workflows/desktop_ci.yaml
apps/desktop/src/components/settings/ai/llm/select.tsx
[error] 163-163: Type '() => Promise<ListModelsResult> | { models: never[]; ignored: never[]; }' is not assignable to type '() => ListModelsResult | Promise<ListModelsResult>'. (TS2322) during 'pnpm -F desktop typecheck' (tsc --noEmit).
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Redirect rules - hyprnote
- GitHub Check: Header rules - hyprnote
- GitHub Check: Pages changed - hyprnote
- GitHub Check: fmt
🔇 Additional comments (2)
apps/desktop/src/components/settings/ai/llm/select.tsx (2)
18-21: New metadata-related type imports look correct

Importing `InputModality` and `ListModelsResult` from `list-common` aligns this component with the shared metadata shape used across providers; usage in `useConfiguredMapping` and the hyprnote block is consistent.
203-219: Hyprnote `Auto` metadata wiring is consistent with the new model list contract

The static `ListModelsResult` for the `"hyprnote"` provider (single `"Auto"` model with `input_modalities: ["text", "image"]`) matches the updated `ListModelsResult` shape and enables downstream consumers to treat hyprnote like other providers with metadata. This looks good as a first pass; just ensure the declared modalities stay in sync with hyprnote's actual capabilities over time.
Actionable comments posted: 0
🧹 Nitpick comments (2)
apps/desktop/src/components/settings/ai/llm/select.tsx (2)
203-214: Hyprnote Auto metadata wiring looks good; consider a slightly safer typing pattern

The `ListModelsResult` stub for `hyprnote` with `Auto` and `input_modalities: ["text", "image"]` is consistent with the new shared types and gives the UI enough information to know this model supports both text and image.

If you want a bit more type safety, you could avoid a broad cast and use `satisfies` to ensure the array really matches `InputModality[]`:

```diff
-      metadata: {
-        Auto: {
-          input_modalities: ["text", "image"] as InputModality[],
-        },
-      },
+      metadata: {
+        Auto: {
+          input_modalities: ["text", "image"] satisfies InputModality[],
+        },
+      },
```

This way, if someone ever mistypes a modality string, TypeScript will catch it at compile time instead of being hidden by the cast.

Please confirm your TypeScript version supports `satisfies` (TS ≥ 4.9); if not, the current cast is fine functionally.
180-183: `useConfiguredMapping` null→undefined change and return type look consistent

Switching the mapping type to

```typescript
function useConfiguredMapping(): Record<
  string,
  undefined | (() => Promise<ListModelsResult>)
>
```

and returning `undefined` for:

- locked providers (`provider.requiresPro && !billing.isPro`),
- unauthenticated `hyprnote`,
- missing `base_url`,
- missing `api_key`,

is consistent with the call sites in this file, which already use truthy checks (`!!configured`, `!maybeListModels || locked`) rather than explicit `null` comparisons. The final cast on `Object.fromEntries` also matches this type.

Two small suggestions:

- To reduce duplication and improve readability, you might introduce a local alias `type ListModelsFn = () => Promise<ListModelsResult>;` and use `Record<string, ListModelsFn | undefined>` both in the return type and the `as` cast.
- If `useConfiguredMapping` is used elsewhere, double-check those callers for any `=== null` checks on mapping entries; they should now only handle `undefined`.

Overall, the semantics of "unavailable provider → `undefined` entry" are clear and line up with the UI logic here. If this hook has external consumers, please verify there are no remaining null checks against its return map.
Also applies to: 195-201, 219-224, 260-261
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`apps/desktop/src/components/settings/ai/llm/select.tsx` (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/desktop/src/components/settings/ai/llm/select.tsx (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (2)
`ListModelsResult` (20-24), `InputModality` (14-14)
🪛 GitHub Actions: .github/workflows/desktop_ci.yaml
apps/desktop/src/components/settings/ai/llm/select.tsx
[error] 158-158: Type '(() => Promise<ListModelsResult>) | undefined' is not assignable to type '() => ListModelsResult | Promise<ListModelsResult>'.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: fmt
🔇 Additional comments (2)
apps/desktop/src/components/settings/ai/llm/select.tsx (2)
18-21: Shared list-common type imports look correct

Importing `InputModality` and `ListModelsResult` from `list-common` matches the shared type definitions and keeps the hook aligned with the new model-listing shape. No issues here.
149-159: TypeScript type error confirmed; fix restores type safety

The issue is real and accurately described. `listModels` can be `undefined` when `locked` is true, but `ModelCombobox` requires a non-optional function. TypeScript cannot use the `disabled` prop to narrow the union type.

Your proposed fix is correct:

- Always passes a function (never `undefined`) to satisfy the prop type
- Returns an empty `ListModelsResult` with the correct structure when locked or unconfigured
- Maintains runtime safety (no network calls)

Only one call site exists for `ModelCombobox`, so this fix is isolated and low-risk.
No description provided.