
Conversation

@yujonglee
Contributor

No description provided.

Return partition results plus metadata map

Stop extracting metadata during partitioning; instead, return the partition results
alongside a separately extracted metadata map. This keeps partition focused on
splitting data, moves metadata extraction into its own step, and lets callers access
both the partitioned items and their metadata. The change wraps the partition output
in an object spread so additional fields (like the metadata map) can be returned
without altering the partition logic.
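
A minimal sketch of that pattern; the bodies below are simplified stand-ins for the real partition and metadata extraction in list-common.ts and the provider modules, not the actual implementations:

type InputModality = "text" | "image";
type ModelMetadata = { input_modalities: InputModality[] };
type RawModel = { id: string }; // placeholder for a provider's raw API model shape

// Stub partition: its only job remains splitting accepted vs. ignored models.
function partition(models: RawModel[]): { models: string[]; ignored: string[] } {
  return { models: models.map((m) => m.id), ignored: [] };
}

function listProviderModels(rawModels: RawModel[]) {
  const partitioned = partition(rawModels);
  // Metadata is extracted separately, outside of partition.
  const metadata: Record<string, ModelMetadata> = {};
  for (const m of rawModels) {
    metadata[m.id] = { input_modalities: ["text"] };
  }
  // The object spread lets the metadata map ride along without altering partition's logic.
  return { ...partitioned, metadata };
}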
@netlify

netlify bot commented Nov 20, 2025

Deploy Preview for hyprnote ready!

🔨 Latest commit: a2db269
🔍 Latest deploy log: https://app.netlify.com/projects/hyprnote/deploys/691ece78eab26f000896b5f2
😎 Deploy Preview: https://deploy-preview-1746--hyprnote.netlify.app

To edit notification comments on pull requests, go to your Netlify project configuration.

@coderabbitai

coderabbitai bot commented Nov 20, 2025

Warning

Rate limit exceeded

@yujonglee has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 4 minutes and 24 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between ffe8d63 and a2db269.

📒 Files selected for processing (3)
  • apps/desktop/src/components/settings/ai/shared/model-combobox.tsx (3 hunks)
  • apps/desktop/src/hooks/useCurrentModelModalitySupport.ts (1 hunks)
  • apps/desktop/src/hooks/useModelMetadata.ts (1 hunks)
📝 Walkthrough

Adds per-model metadata (input modalities) to AI model listing results. Introduces InputModality and ModelMetadata types and extractMetadataMap utility in shared code, then updates multiple provider modules and UI code to return partitioned model lists augmented with a metadata map.
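
For orientation, a rough sketch of the new shared pieces in list-common.ts as summarized below; the IgnoredModel shape and the exact filtering logic are assumptions rather than verbatim source:

type InputModality = "text" | "image";

type ModelMetadata = {
  input_modalities: InputModality[];
};

type IgnoredModel = { id: string; reason: string }; // placeholder; this type predates the PR

type ListModelsResult = {
  models: string[];
  ignored: IgnoredModel[];
  metadata: Record<string, ModelMetadata>;
};

// Builds a sparse map keyed by model id; entries without useful metadata are skipped,
// mirroring the internal hasMetadata check.
function extractMetadataMap<T>(
  items: T[],
  extractId: (item: T) => string,
  extractMetadata: (item: T) => ModelMetadata,
): Record<string, ModelMetadata> {
  const map: Record<string, ModelMetadata> = {};
  for (const item of items) {
    const metadata = extractMetadata(item);
    if (metadata.input_modalities.length > 0) {
      map[extractId(item)] = metadata;
    }
  }
  return map;
}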

Changes

Cohort / File(s) Summary
Common infrastructure
apps/desktop/src/components/settings/ai/shared/list-common.ts
Added InputModality and ModelMetadata types; extended ListModelsResult to include metadata: Record<string, ModelMetadata>; updated DEFAULT_RESULT; narrowed partition return to { models, ignored }; added extractMetadataMap and internal hasMetadata.
Anthropic provider
apps/desktop/src/components/settings/ai/shared/list-anthropic.ts
Imported extractMetadataMap and InputModality; replaced direct partition return with { ...partitionResult, metadata: extractMetadataMap(...) }; added getInputModalities(modelId) returning ["text","image"].
Google provider
apps/desktop/src/components/settings/ai/shared/list-google.ts
Imported extractMetadataMap and InputModality; added getInputModalities(modelId) returning ["text","image"] for Gemini-like ids, else ["text"]; return augmented { ...partition, metadata }.
OpenAI & Generic
apps/desktop/src/components/settings/ai/shared/list-openai.ts
Imported extractMetadataMap; listOpenAIModels and listGenericModels now return { ...partition, metadata }; OpenAI models labeled ["text","image"], generic models ["text"].
OpenRouter provider
apps/desktop/src/components/settings/ai/shared/list-openrouter.ts
Imported extractMetadataMap and InputModality; added getInputModalities(model) that derives modalities from model.architecture.input_modalities; return augmented { ...partition, metadata }.
LMStudio provider
apps/desktop/src/components/settings/ai/shared/list-lmstudio.ts
Imported ModelMetadata; added local metadata map and populate metadata[model.path] = { input_modalities: ["text"] } for accepted models; return { models, ignored, metadata }.
Ollama provider
apps/desktop/src/components/settings/ai/shared/list-ollama.ts
Imported ModelMetadata; added metadata map and populate entries with input_modalities: ["text"] for supported models; return { models, ignored, metadata }.
UI selection
apps/desktop/src/components/settings/ai/llm/select.tsx
Replaced several null returns with undefined; for hyprnote provider, when authenticated return a concrete ListModelsResult including metadata (e.g., "Auto" model → ["text","image"]); imported InputModality and ListModelsResult; updated useConfiguredMapping return type to use undefined instead of null.

Sequence Diagram

sequenceDiagram
    autonumber
    participant Caller as Caller/UI
    participant Provider as Provider Module (list-*)
    participant Common as list-common

    Caller->>Provider: request listModels(config)
    Provider->>Provider: fetch raw models\npartition -> { models, ignored }
    Provider->>Common: extractMetadataMap(items, extractId, extractMetadata)
    Common->>Common: build Record<string, ModelMetadata>
    Common-->>Provider: metadata map
    Provider->>Provider: combine -> { ...partition, metadata }
    Provider-->>Caller: { models, ignored, metadata }

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Focus areas:
    • apps/desktop/src/components/settings/ai/shared/list-common.ts: verify type additions, DEFAULT_RESULT change, partition return narrowing, and extractMetadataMap filtering.
    • Provider modules (list-*.ts): confirm correct id extraction and input-modality heuristics (Anthropic, Google/Gemini, OpenRouter architecture).
    • apps/desktop/src/components/settings/ai/llm/select.tsx: validate undefined/null semantics and the hyprnote branch return shape.

Possibly related PRs

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check: ❓ Inconclusive. No description was provided by the author, making it impossible to assess relevance to the changeset. Resolution: add a pull request description explaining the purpose and scope of the input-modality support changes.
✅ Passed checks (1 passed)
  • Title check: ✅ Passed. The PR title accurately describes the main change: adding input-modality support across all LLM listing functions.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
apps/desktop/src/components/settings/ai/shared/list-openrouter.ts (1)

71-85: Input modality metadata wiring looks solid; consider default for unknown modalities

extractMetadataMap plus getInputModalities is wired correctly and keeps the metadata shape constrained to "text" | "image". One nuance: supportsTextInput treats missing architecture.input_modalities as text-capable, while getInputModalities returns [] in that case, causing hasMetadata to drop the entry entirely. If downstream callers should assume “text” when modalities are unknown (to align with supportsTextInput), consider defaulting getInputModalities to ["text"] for the non-array/absent case, or document that missing metadata means “text-only/unknown” so consumers can apply that default themselves.

Also applies to: 88-99
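
A sketch of the suggested fallback; the OpenRouterModel shape is inferred from the architecture.input_modalities field referenced above and is not copied from the source:

type InputModality = "text" | "image";

type OpenRouterModel = {
  id: string;
  architecture?: { input_modalities?: string[] };
};

function getInputModalities(model: OpenRouterModel): InputModality[] {
  const raw = model.architecture?.input_modalities;
  if (!Array.isArray(raw)) {
    // Align with supportsTextInput: treat missing/unknown modalities as text-capable
    // rather than returning [] and letting hasMetadata drop the entry.
    return ["text"];
  }
  return raw.filter((m): m is InputModality => m === "text" || m === "image");
}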

apps/desktop/src/components/settings/ai/shared/list-anthropic.ts (1)

44-61: Anthropic metadata is structurally correct; verify that all models are truly multimodal

The new metadata: extractMetadataMap(...) wiring is consistent with other providers and the helper getInputModalities is simple and type-safe. However, returning ["text", "image"] for every Anthropic model id may over-declare capabilities if any models are text-only. Please verify against the latest Anthropic model docs/API; if there are text-only variants, consider a per-id mapping (or a default of ["text"] with an explicit allowlist for image-capable models) to avoid enabling image input in the UI where it will fail.

Also applies to: 68-70
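
If text-only variants do exist, the allowlist idea could look roughly like this; the id fragments are illustrative placeholders, not a verified capability list:

type InputModality = "text" | "image";

// Hypothetical allowlist of image-capable model-id fragments; verify against Anthropic's docs.
const IMAGE_CAPABLE_FRAGMENTS = ["claude-3", "claude-sonnet-4"];

function getInputModalities(modelId: string): InputModality[] {
  const imageCapable = IMAGE_CAPABLE_FRAGMENTS.some((fragment) => modelId.includes(fragment));
  return imageCapable ? ["text", "image"] : ["text"];
}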

apps/desktop/src/components/settings/ai/shared/list-openai.ts (1)

33-50: OpenAI metadata wiring is consistent; confirm image support assumptions

Both listOpenAIModels and listGenericModels now enrich results with metadata using extractMetadataMap, and the shapes (["text", "image"] vs ["text"]) line up with the shared ModelMetadata/InputModality types. The remaining question is behavioral: listOpenAIModels marks every returned model as ["text", "image"]. If the /models endpoint can return text-only or otherwise non-vision models, this may cause the UI or callers to offer image input where it is not actually supported. It might be safer to default to ["text"] and whitelist known vision-capable IDs, or derive capabilities from model naming/docs.

Also applies to: 68-85

apps/desktop/src/components/settings/ai/shared/list-common.ts (1)

63-71: hasMetadata + extractMetadataMap enforce non-empty metadata; ensure callers and providers align

hasMetadata and extractMetadataMap correctly avoid storing entries where input_modalities is missing or an empty array, so the returned metadata map is intentionally sparse. That means providers need to return a non-empty input_modalities array (e.g., ["text"]) to create an entry, and consumers must treat missing keys as “no metadata / fallback behavior” rather than an error. If you later extend ModelMetadata with more fields, remember to update hasMetadata accordingly so valid metadata isn’t filtered out just because input_modalities is empty.

Also applies to: 73-93, 95-111
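
On the consumer side, that contract might be handled like this; a hypothetical call site, with the fallback to ["text"] being a policy choice rather than something the PR prescribes:

type InputModality = "text" | "image";
type ListModelsResult = {
  models: string[];
  metadata: Record<string, { input_modalities: InputModality[] }>;
  // ignored omitted for brevity
};

function getInputModalities(result: ListModelsResult, modelId: string): InputModality[] {
  // A missing key means "no metadata / fallback behavior", not an error.
  return result.metadata[modelId]?.input_modalities ?? ["text"];
}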

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 364e53c and 1b3dbf4.

📒 Files selected for processing (7)
  • apps/desktop/src/components/settings/ai/shared/list-anthropic.ts (3 hunks)
  • apps/desktop/src/components/settings/ai/shared/list-common.ts (2 hunks)
  • apps/desktop/src/components/settings/ai/shared/list-google.ts (2 hunks)
  • apps/desktop/src/components/settings/ai/shared/list-lmstudio.ts (4 hunks)
  • apps/desktop/src/components/settings/ai/shared/list-ollama.ts (3 hunks)
  • apps/desktop/src/components/settings/ai/shared/list-openai.ts (5 hunks)
  • apps/desktop/src/components/settings/ai/shared/list-openrouter.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
apps/desktop/src/components/settings/ai/shared/list-ollama.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (1)
  • ModelMetadata (16-18)
apps/desktop/src/components/settings/ai/shared/list-google.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (5)
  • partition (73-93)
  • extractMetadataMap (95-111)
  • REQUEST_TIMEOUT (31-31)
  • DEFAULT_RESULT (26-30)
  • InputModality (14-14)
apps/desktop/src/components/settings/ai/shared/list-openai.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (2)
  • partition (73-93)
  • extractMetadataMap (95-111)
apps/desktop/src/components/settings/ai/shared/list-anthropic.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (5)
  • partition (73-93)
  • extractMetadataMap (95-111)
  • REQUEST_TIMEOUT (31-31)
  • DEFAULT_RESULT (26-30)
  • InputModality (14-14)
apps/desktop/src/components/settings/ai/shared/list-lmstudio.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (1)
  • ModelMetadata (16-18)
apps/desktop/src/components/settings/ai/shared/list-openrouter.ts (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (5)
  • partition (73-93)
  • extractMetadataMap (95-111)
  • REQUEST_TIMEOUT (31-31)
  • DEFAULT_RESULT (26-30)
  • InputModality (14-14)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Redirect rules - hyprnote
  • GitHub Check: Header rules - hyprnote
  • GitHub Check: Pages changed - hyprnote
  • GitHub Check: fmt
🔇 Additional comments (4)
apps/desktop/src/components/settings/ai/shared/list-google.ts (1)

55-64: Google model metadata enrichment and Gemini heuristic look correct; keep an eye on naming

The metadata enrichment via extractMetadataMap(models, extractModelId, ...) is consistent with other providers and correctly constrains input_modalities to ["text"] or ["text", "image"]. The getInputModalities heuristic based on /gemini/ in the model id is reasonable today, but it does assume that all and only multimodal models are branded “gemini”. Please double-check this against the current Google models API and be prepared to update the heuristic if new multimodal families use different prefixes or if text-only “gemini” variants appear.

Also applies to: 70-74

apps/desktop/src/components/settings/ai/shared/list-ollama.ts (1)

89-127: Ollama metadata integration is consistent and conservative

Adding a metadata: Record<string, ModelMetadata> alongside models/ignored is consistent with the shared ListModelsResult shape, and defaulting supported Ollama models to { input_modalities: ["text"] } is a sensible conservative choice given the lack of modality introspection. Just ensure downstream consumers treat metadata as partial (present only for supported models) and handle missing entries gracefully.

apps/desktop/src/components/settings/ai/shared/list-lmstudio.ts (1)

72-79: LM Studio metadata addition matches shared contract

The added metadata map in processLMStudioModels is correctly typed via ModelMetadata and conservatively marks accepted models as ["text"], which aligns with the TODO about lacking modality introspection. Returning { models, ignored, metadata } keeps this in sync with ListModelsResult and other providers. As with Ollama, just ensure any consumer code tolerates missing metadata entries for ignored or unprocessed models.

Also applies to: 94-98, 114-115

apps/desktop/src/components/settings/ai/shared/list-common.ts (1)

14-24: New metadata types and ListModelsResult shape are coherent

Defining InputModality, ModelMetadata, and extending ListModelsResult/DEFAULT_RESULT with a metadata map gives a clear, shared contract for per-model capabilities and matches how the provider-specific list functions are now structured. This should make it straightforward to attach additional metadata fields later if needed without breaking existing callers.

Also applies to: 26-30


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/desktop/src/components/settings/ai/llm/select.tsx (1)

149-164: Add metadata field to the listModels fallback in apps/desktop/src/components/settings/ai/llm/select.tsx line 151

The current fallback returns { models: [], ignored: [] }, but ListModelsResult requires a metadata field. The type is defined as:

export type ListModelsResult = {
  models: string[];
  ignored: IgnoredModel[];
  metadata: Record<string, ModelMetadata>;
};

The codebase already establishes the correct pattern in DEFAULT_RESULT with metadata: {}. Apply the proposed diff to match:

              const listModels = () => {
                if (!maybeListModels || locked) {
                  return {
                    models: [],
                    ignored: [],
                    metadata: {},
                  };
                }
                return maybeListModels();
              };

This resolves the TS2322 type error at the listModels={listModels} prop assignment.

🧹 Nitpick comments (1)
apps/desktop/src/components/settings/ai/llm/select.tsx (1)

185-188: Align useConfiguredMapping return type with undefined sentinel (null no longer used)

useConfiguredMapping is typed and cast as returning Record<string, null | (() => Promise<ListModelsResult>)>, but all “disabled/unavailable” paths now return undefined, not null. The cast hides this mismatch, but it’s a potential source of confusion.

Consider updating the signature and cast to use undefined instead of null:

-function useConfiguredMapping(): Record<
-  string,
-  null | (() => Promise<ListModelsResult>)
-> {
+function useConfiguredMapping(): Record<
+  string,
+  (() => Promise<ListModelsResult>) | undefined
+> {
-    ) as Record<string, null | (() => Promise<ListModelsResult>)>;
+    ) as Record<string, (() => Promise<ListModelsResult>) | undefined>;

This matches the actual values you return in the requiresPro, unauthenticated hyprnote, missing base_url, and missing api_key branches, and keeps the sentinel semantics (!configured) but with accurate typing.

Also applies to: 196-200, 203-206, 221-225, 227-229, 263-266

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1b3dbf4 and 2bb7d23.

📒 Files selected for processing (1)
  • apps/desktop/src/components/settings/ai/llm/select.tsx (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/desktop/src/components/settings/ai/llm/select.tsx (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (2)
  • ListModelsResult (20-24)
  • InputModality (14-14)
🪛 GitHub Actions: .github/workflows/desktop_ci.yaml
apps/desktop/src/components/settings/ai/llm/select.tsx

[error] 163-163: Type '() => Promise<ListModelsResult> | { models: never[]; ignored: never[]; }' is not assignable to type '() => ListModelsResult | Promise<ListModelsResult>'. (TS2322) during 'pnpm -F desktop typecheck' (tsc --noEmit).

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Redirect rules - hyprnote
  • GitHub Check: Header rules - hyprnote
  • GitHub Check: Pages changed - hyprnote
  • GitHub Check: fmt
🔇 Additional comments (2)
apps/desktop/src/components/settings/ai/llm/select.tsx (2)

18-21: New metadata-related type imports look correct

Importing InputModality and ListModelsResult from list-common aligns this component with the shared metadata shape used across providers; usage in useConfiguredMapping and the hyprnote block is consistent.


203-219: Hyprnote Auto metadata wiring is consistent with the new model list contract

The static ListModelsResult for the "hyprnote" provider (single "Auto" model with input_modalities: ["text", "image"]) matches the updated ListModelsResult shape and enables downstream consumers to treat hyprnote like other providers with metadata. This looks good as a first pass; just ensure the declared modalities stay in sync with hyprnote’s actual capabilities over time.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
apps/desktop/src/components/settings/ai/llm/select.tsx (2)

203-214: Hyprnote Auto metadata wiring looks good; consider a slightly safer typing pattern

The ListModelsResult stub for hyprnote with Auto and input_modalities: ["text", "image"] is consistent with the new shared types and gives the UI enough information to know this model supports both text and image.

If you want a bit more type safety, you could avoid a broad cast and use satisfies to ensure the array really matches InputModality[]:

-            metadata: {
-              Auto: {
-                input_modalities: ["text", "image"] as InputModality[],
-              },
-            },
+            metadata: {
+              Auto: {
+                input_modalities: ["text", "image"] satisfies InputModality[],
+              },
+            },

This way, if someone ever mistypes a modality string, TypeScript will catch it at compile time instead of being hidden by the cast.

Please confirm your TypeScript version supports satisfies (TS ≥ 4.9); if not, the current cast is fine functionally.


180-183: useConfiguredMapping null→undefined change and return type look consistent

Switching the mapping type to

function useConfiguredMapping(): Record<
  string,
  undefined | (() => Promise<ListModelsResult>)
>

and returning undefined for:

  • locked providers (provider.requiresPro && !billing.isPro),
  • unauthenticated hyprnote,
  • missing base_url,
  • missing api_key,

is consistent with the call sites in this file, which already use truthy checks (!!configured, !maybeListModels || locked) rather than explicit null comparisons. The final cast on Object.fromEntries also matches this type.

Two small suggestions:

  1. To reduce duplication and improve readability, you might introduce a local alias:
type ListModelsFn = () => Promise<ListModelsResult>;

and use Record<string, ListModelsFn | undefined> both in the return type and the as cast.

  2. If useConfiguredMapping is used elsewhere, double-check those callers for any === null checks on mapping entries; they should now only handle undefined.

Overall, the semantics of “unavailable provider → undefined entry” are clear and line up with the UI logic here.

If this hook has external consumers, please verify there are no remaining null checks against its return map.

Also applies to: 195-201, 219-224, 260-261

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2bb7d23 and ffe8d63.

📒 Files selected for processing (1)
  • apps/desktop/src/components/settings/ai/llm/select.tsx (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/desktop/src/components/settings/ai/llm/select.tsx (1)
apps/desktop/src/components/settings/ai/shared/list-common.ts (2)
  • ListModelsResult (20-24)
  • InputModality (14-14)
🪛 GitHub Actions: .github/workflows/desktop_ci.yaml
apps/desktop/src/components/settings/ai/llm/select.tsx

[error] 158-158: Type '(() => Promise<ListModelsResult>) | undefined' is not assignable to type '() => ListModelsResult | Promise<ListModelsResult>'.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: fmt
🔇 Additional comments (2)
apps/desktop/src/components/settings/ai/llm/select.tsx (2)

18-21: Shared list-common type imports look correct

Importing InputModality and ListModelsResult from list-common matches the shared type definitions and keeps the hook aligned with the new model-listing shape. No issues here.


149-159: TypeScript type error confirmed—fix restores type safety

The issue is real and accurately described. listModels can be undefined when locked is true, but ModelCombobox requires a non-optional function. TypeScript cannot use the disabled prop to narrow the union type.

Your proposed fix is correct:

  • Always passes a function (never undefined) to satisfy the prop type
  • Returns an empty ListModelsResult with the correct structure when locked or unconfigured
  • Maintains runtime safety (no network calls)

Only one call site exists for ModelCombobox, so this fix is isolated and low-risk.

@yujonglee yujonglee merged commit c12740d into main Nov 20, 2025
9 checks passed
@yujonglee yujonglee deleted the yl-branch-24 branch November 20, 2025 08:28