Add support for more providers and conversation history #71
Conversation
Claude finished @tysonthomas9's task — View job

PR Review: Add support for more providers and conversation history

This is a substantial PR that adds significant functionality to the AI Chat panel. After reviewing the code, I've identified several areas for improvement and found both strengths and concerns.

✅ Strengths
- Code Organization & Architecture
- Conversation History Implementation
- Security Practices
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds multiple new LLM providers and a registry-based provider system with custom-provider management, conversation persistence (IndexedDB) and history UI, a Readability-based extractor and chunking utilities, refactors provider settings to a generic/config-driven surface, and integrates these across agent/tooling and UI.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as AIChat UI
    participant Client as LLMClient
    participant Registry as LLMProviderRegistry
    participant Storage as localStorage/CustomProviderManager
    participant Provider as Provider (built-in/custom)
    UI->>Client: getProviderCredentials(providerId)
    Client->>Registry: getProviderCredentials(providerId)
    Registry->>Storage: read apiKey/endpoint or custom provider config
    Storage-->>Registry: values
    Registry-->>Client: credentials
    Client-->>UI: credential result
```
```mermaid
sequenceDiagram
    participant Agent as AgentService
    participant CM as ConversationManager
    participant SM as ConversationStorageManager
    participant DB as IndexedDB
    Agent->>Agent: scheduleAutoSave() (debounced)
    Note over Agent: wait ~1000ms
    Agent->>CM: autoSaveConversation(state, sessions)
    CM->>SM: saveConversation(serialized)
    SM->>DB: transaction put conversation & metadata
    DB-->>SM: success
    SM-->>CM: saved id
    CM-->>Agent: conversationId
    Agent->>Agent: dispatch CONVERSATION_SAVED
```
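The debounced auto-save step in the diagram above can be sketched as follows. The ~1000 ms delay comes from the diagram; the class and member names here are illustrative, not the actual AgentService implementation.

```typescript
// Collapses bursts of state changes into a single trailing save,
// mirroring the scheduleAutoSave() step in the sequence diagram.
class AutoSaveScheduler {
  private timer: ReturnType<typeof setTimeout> | null = null;
  saveCount = 0;

  constructor(private readonly delayMs: number) {}

  scheduleAutoSave(): void {
    // Reset any pending timer so only the last call in a burst fires.
    if (this.timer !== null) {
      clearTimeout(this.timer);
    }
    this.timer = setTimeout(() => {
      this.timer = null;
      this.saveCount++;  // real code would call ConversationManager.autoSaveConversation
    }, this.delayMs);
  }
}
```

Calling `scheduleAutoSave()` repeatedly within the delay window results in exactly one save once the window elapses.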
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
💡 Codex Review
Here are some automated review suggestions for this pull request.
```ts
case 'cerebras':
  return new CerebrasProvider('');
case 'anthropic':
  return new AnthropicProvider('');
case 'googleai':
```
Temporary provider instances ignore API keys when fetching models
The registry builds fallback provider instances with empty API keys (e.g. new CerebrasProvider('')) when a provider hasn’t been registered yet. The settings UI calls LLMProviderRegistry.fetchProviderModels and passes the user’s key, but the new provider classes read credentials from the instance’s constructor and ignore the arguments. As a result, model fetch/tests for providers that have not yet been initialised will always hit the API without authentication and fail, making it impossible to discover models before saving the configuration. The temporary instance needs the supplied key injected before performing requests.
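A minimal sketch of the fix described above: inject the user-supplied key into the temporary fallback instance before any request. The provider shape (constructor key plus a `setApiKey` setter) is an assumption for illustration, not the PR's actual API.

```typescript
interface ModelInfo { id: string; }

class FallbackProvider {
  constructor(private apiKey: string) {}

  setApiKey(key: string): void {
    this.apiKey = key;
  }

  async fetchModels(): Promise<ModelInfo[]> {
    // Stand-in for the real HTTP call; the key must be present by now.
    if (!this.apiKey) {
      throw new Error('fetchModels called without an API key');
    }
    return [{ id: 'example-model' }];
  }
}

async function fetchProviderModels(userKey: string): Promise<ModelInfo[]> {
  const provider = new FallbackProvider('');  // registry fallback: empty key
  provider.setApiKey(userKey);                // the missing injection step
  return provider.fetchModels();
}
```

Without the `setApiKey` call, the temporary instance would always request models unauthenticated, which is the failure mode the comment describes.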
Pull Request Overview
This PR adds support for additional LLM providers (Cerebras, Anthropic, Google AI) and implements conversation history persistence. The changes refactor the provider architecture to use a generic settings class and centralized configurations, introduce custom provider support, and add full conversation management with IndexedDB storage.
Key Changes
- Refactored provider settings to use GenericProviderSettings with declarative configurations
- Added three new LLM providers: Cerebras, Anthropic, and Google AI
- Implemented conversation history with IndexedDB persistence
- Added custom provider management allowing users to add OpenAI-compatible providers
Reviewed Changes
Copilot reviewed 30 out of 30 changed files in this pull request and generated 12 comments.
Show a summary per file
| File | Description |
|---|---|
| types.ts | Changed provider types from union to string for extensibility |
| GenericProviderSettings.ts | New generic settings class replacing provider-specific implementations |
| providerConfigs.ts | Centralized provider configurations |
| CerebrasProvider.ts | New Cerebras API provider implementation |
| AnthropicProvider.ts | New Anthropic API provider implementation |
| GoogleAIProvider.ts | New Google AI (Gemini) provider implementation |
| GenericOpenAIProvider.ts | Generic OpenAI-compatible provider for custom endpoints |
| ConversationTypes.ts | Type definitions for conversation persistence |
| ConversationStorageManager.ts | IndexedDB storage layer for conversations |
| ConversationManager.ts | High-level conversation lifecycle management |
| ConversationHistoryList.ts | UI component for conversation history |
| CustomProviderManager.ts | Manager for custom provider CRUD operations |
| CustomProviderDialog.ts | UI dialog for managing custom providers |
| AIChatPanel.ts | Integration of conversation history and custom providers |
| AgentService.ts | Auto-save and conversation loading functionality |
| FileStorageManager.ts | Session ID management for file-conversation association |
| LLMClient.ts | Support for custom providers and new provider methods |
| LLMProviderRegistry.ts | Centralized provider credential and validation management |
```diff
  value: string;
  label: string;
- type: 'openai' | 'litellm' | 'groq' | 'openrouter' | 'browseroperator';
+ type: string; // Supports standard providers and custom providers (e.g., 'custom:my-provider')
```
Copilot AI · Nov 18, 2025
Changing from a union type to `string` loses type safety for provider types. This could lead to runtime errors if invalid provider IDs are passed. Consider using a branded type, or keeping the union type with a fallback for custom providers (e.g., ``type: LLMProvider | `custom:${string}` ``).
```ts
}
const template = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx';
return template.replace(/[xy]/g, c => {
  const r = Math.random() * 16 | 0;
```
The fallback UUID generation uses a bitwise OR, which has lower precedence than multiplication, so `Math.random() * 16 | 0` parses as `(Math.random() * 16) | 0`. It works, but relying on implicit precedence is error-prone; write the parentheses explicitly or use `Math.floor()` for clarity.
```diff
- const r = Math.random() * 16 | 0;
+ const r = Math.floor(Math.random() * 16);
```
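For context, a self-contained version of the fallback under discussion, using the explicit `Math.floor()` form. The template line is from the snippet above; the handling of the `y` nibble follows the common UUID v4 pattern and is an assumption about the surrounding code.

```typescript
function fallbackUuidV4(): string {
  const template = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx';
  return template.replace(/[xy]/g, c => {
    const r = Math.floor(Math.random() * 16);   // clearer than `* 16 | 0`
    const v = c === 'x' ? r : (r & 0x3) | 0x8;  // 'y' nibble must be 8, 9, a, or b
    return v.toString(16);
  });
}
```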
```ts
 */
private getGenerateContentEndpoint(modelName: string): string {
  // Model name format: "models/gemini-2.5-pro" or just "gemini-2.5-pro"
  const normalizedModel = modelName.startsWith('models/') ? modelName : `models/${modelName}`;
```
[nitpick] The hardcoded 'models/' prefix should be extracted as a constant (e.g., MODEL_PREFIX = 'models/') for consistency and easier maintenance, especially since it's used in multiple places including the model ID extraction at line 389.
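A sketch of the constant extraction suggested above; the function names are illustrative, not the actual GoogleAIProvider members.

```typescript
const MODEL_PREFIX = 'models/';

function normalizeModelName(modelName: string): string {
  // "gemini-2.5-pro" -> "models/gemini-2.5-pro"; already-prefixed names pass through.
  return modelName.startsWith(MODEL_PREFIX) ? modelName : `${MODEL_PREFIX}${modelName}`;
}

function extractModelId(resourceName: string): string {
  // Inverse operation, used when listing models.
  return resourceName.startsWith(MODEL_PREFIX) ? resourceName.slice(MODEL_PREFIX.length) : resourceName;
}
```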
```ts
}

// Simple confirmation
if (confirm(`Are you sure you want to delete the provider "${provider.name}"?`)) {
```
[nitpick] Using native confirm() dialogs is not ideal for DevTools UI consistency. Consider using the DevTools dialog system (UI.Dialog.Dialog) with custom buttons for better UX and consistency with the rest of the codebase.
```ts
CustomProviderManager.saveProviders(providers);

// Also clean up the API key from localStorage
const apiKeyStorageKey = `ai_chat_custom_${id}_api_key`;
```
When deleting a provider, the API key cleanup uses the provider ID directly, but the API key storage key should use the same format as getApiKeyStorageKey(). If the ID contains the 'custom:' prefix, this will create a mismatch. Use getApiKeyStorageKey(id) instead of constructing the key inline.
```diff
- const apiKeyStorageKey = `ai_chat_custom_${id}_api_key`;
+ const apiKeyStorageKey = CustomProviderManager.getApiKeyStorageKey(id);
```
```ts
} else {
  // If API key is optional and empty, remove from storage
  if (this.config.apiKeyOptional) {
    localStorage.removeItem(this.config.apiKeyStorageKey);
```
[nitpick] Direct use of localStorage.removeItem() bypasses the setStorageItem() utility function. For consistency with the rest of the file, consider using setStorageItem(this.config.apiKeyStorageKey, '') or extracting a removeStorageItem() utility.
```diff
- localStorage.removeItem(this.config.apiKeyStorageKey);
+ setStorageItem(this.config.apiKeyStorageKey, '');
```
```ts
try {
  // Create a temporary custom provider config
  const tempConfig = {
    id: `custom:${name.toLowerCase().replace(/\s+/g, '-')}`,
```
The temporary custom provider config generation in testCustomProviderConnection doesn't sanitize special characters beyond spaces. This could create invalid IDs if the name contains characters like quotes, slashes, or other special characters. Use the same sanitization logic as CustomProviderManager.generateId() for consistency.
```diff
- id: `custom:${name.toLowerCase().replace(/\s+/g, '-')}`,
+ id: CustomProviderManager.generateId(name),
```
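One possible shared sanitizer along the lines the comment asks for; the exact rules of `CustomProviderManager.generateId()` are assumed, not quoted from the PR.

```typescript
function generateProviderId(name: string): string {
  const slug = name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse quotes, slashes, spaces, etc. into dashes
    .replace(/^-+|-+$/g, '');     // trim leading/trailing dashes
  return `custom:${slug}`;
}
```

Routing both the dialog and the test-connection path through one such helper keeps the generated IDs consistent.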
```ts
// Try a simple completion to verify the API works
const testPrompt = 'Respond with "OK" to confirm connection.';
const response = await this.call(testModel, testPrompt, '', {
```
Unused variable `response`.
```ts
import {
  type ConversationMetadata,
  deserializeAgentSession,
  deserializeAgentState,
  extractMetadata,
  generatePreview,
  generateTitle,
  serializeAgentSession,
  serializeAgentState,
  type StoredConversation,
} from './ConversationTypes.js';
```
Unused import `extractMetadata`.
```ts
import {createLogger} from '../core/Logger.js';
import type {ConversationMetadata} from '../persistence/ConversationTypes.js';
import {getConversationHistoryStyles} from './conversationHistoryStyles.js';

const logger = createLogger('ConversationHistoryList');
```
Unused variable `logger`.

```diff
- import {createLogger} from '../core/Logger.js';
  import type {ConversationMetadata} from '../persistence/ConversationTypes.js';
  import {getConversationHistoryStyles} from './conversationHistoryStyles.js';
- const logger = createLogger('ConversationHistoryList');
```
Actionable comments posted: 20

Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
front_end/panels/ai_chat/core/LLMConfigurationManager.ts (1)

121-140: Custom provider API key/endpoint are never read from CustomProviderManager

`getApiKey`/`getEndpoint` always delegate to `LLMProviderRegistry`, so when the active provider is custom (e.g. `custom:z-ai`), these methods will return empty values even if an API key exists in `CustomProviderManager`. If any downstream code expects `LLMConfigurationManager.getApiKey()` to work for custom providers, this will silently break those flows.

Consider branching here: use `CustomProviderManager.getApiKey(provider)` (and the baseURL from the custom provider config) for `isCustomProvider(provider)`, and keep the registry path for built-ins.

front_end/panels/ai_chat/ui/SettingsDialog.ts (2)
CustomProviderManager.getApiKey(provider)(and baseURL from the custom provider config) forisCustomProvider(provider), and keep the registry path for built‑ins.front_end/panels/ai_chat/ui/SettingsDialog.ts (2)
5-28: Fix import ordering and re-export placement to satisfy ESLintESLint is complaining that several imports (GenericProviderSettings, LiteLLMSettings, OpenRouterSettings, providerConfigs) appear “in the body” and that
CustomProviderManagershould be beforeLogger. This is because:
export { isVectorDBEnabled };appears before some imports.- There are comment blocks splitting what ESLint treats as one import group.
To address:
- Move all imports (including provider settings and providerConfigs) to the very top of the module, before any
exportstatements.- Optionally fold the re-export into a single line:
export { isVectorDBEnabled } from './settings/utils/storage.js';and drop the separate import, which also simplifies the import groups.
533-545: Update disclaimer text to reflect new providers

The disclaimer still says:

> "We currently support OpenAI, Groq and OpenRouter providers directly. And we support LiteLLM as a proxy…"

Now that Cerebras, Anthropic, GoogleAI, and custom providers are supported, this is out of date and may confuse users.

Consider updating this line to a more generic statement (or listing all supported providers, including custom providers) to match current functionality.
front_end/panels/ai_chat/ui/AIChatPanel.ts (3)

29-76: Avoid logging credential values in StorageMonitor

`StorageMonitor` overrides `localStorage.setItem`/`removeItem` and, for keys containing 'openrouter' or 'ai_chat', logs:

- `Value exists`,
- `Value length`, and
- a `Value preview` (first 50 characters of the stored value).

Given that these keys can contain API keys and other sensitive configuration, logging even a truncated preview of the actual value is a security/privacy risk and can leak credentials into logs.

Please remove the value preview entirely and avoid logging raw credential values. At most, keep boolean/length metadata if useful for debugging, and consider gating this entire monitor behind a debug flag or development-only build.
78-93: Move imports at lines 78–92 to the top of the file, before the StorageMonitor class, to comply with ESLint's import/first rule.

The imports for `chatViewStyles` (line 78) and `ConversationHistoryList` (line 92) appear after the `StorageMonitor` class definition (lines 29–76) and the logger initialization (line 23), violating the `import/first` rule. Consolidate all imports at the top of the file before any executable code to maintain proper module structure and keep ESLint compliant.
262-299: Fix return types to support generic providers in getNanoModelWithProvider, getMiniModelWithProvider, and getProviderForModel

These functions declare `provider: LLMProvider` in their return types but perform unsafe casts on values that may be custom provider strings (e.g., `custom:my-provider`). The code already supports custom providers throughout the codebase, but the type signatures don't reflect this.

Specific issues:

- Line 288: `(modelOption?.type as LLMProvider)` unsafely casts a string that may be a custom provider ID
- Line 296: `return currentProvider as LLMProvider` unsafely casts a localStorage string
- Return types should be `ProviderType` (string) to accurately represent what these functions handle

Change return signatures from `{ model: string, provider: LLMProvider }` to `{ model: string, provider: ProviderType }` and remove the unsafe casts. Update call sites to be explicit: `LLMProviderRegistry.hasProvider()` only supports built-in `LLMProvider` values, so callers that pass custom providers should handle them separately.
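The explicit branching this comment asks for can be sketched as below. The `ProviderType` alias and `isCustomProvider` helper mirror names mentioned elsewhere in this review, but their exact definitions in the PR are assumptions.

```typescript
type ProviderType = string;  // a built-in provider id or 'custom:<slug>'

function isCustomProvider(provider: ProviderType): boolean {
  return provider.startsWith('custom:');
}

// Callers branch explicitly instead of casting to LLMProvider:
function resolveProvider(provider: ProviderType, builtIns: Set<string>): 'builtin' | 'custom' | 'unknown' {
  if (isCustomProvider(provider)) {
    return 'custom';  // handled via CustomProviderManager, not the registry
  }
  return builtIns.has(provider) ? 'builtin' : 'unknown';
}
```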
🧹 Nitpick comments (41)
front_end/panels/ai_chat/ui/ChatView.ts (1)
285-294: Improve type safety and add error handling.

Line 289 uses `any` to bypass type checking. Consider defining a proper interface for the file list display element, and add error handling around the async refresh call to prevent unhandled promise rejections.

Apply this diff to improve type safety and error handling:

```diff
 /**
  * Refreshes the file list display
  */
 async refreshFileList(): Promise<void> {
-  const fileListDisplay = this.#shadow.querySelector('ai-file-list-display') as any;
-  if (fileListDisplay && typeof fileListDisplay.refresh === 'function') {
-    await fileListDisplay.refresh();
-    logger.debug('FileListDisplay refreshed');
+  try {
+    const fileListDisplay = this.#shadow.querySelector('ai-file-list-display') as HTMLElement & { refresh?: () => Promise<void> };
+    if (fileListDisplay && typeof fileListDisplay.refresh === 'function') {
+      await fileListDisplay.refresh();
+      logger.debug('FileListDisplay refreshed');
+    }
+  } catch (error) {
+    logger.error('Failed to refresh FileListDisplay:', error);
   }
 }
```

front_end/panels/ai_chat/ui/conversationHistoryStyles.ts (1)
8-170: Conversation history dialog styles look solid

The layout, hover/active states, and delete-button visibility behavior are coherent and match expected UX for a history list. I don't see any functional issues here. If you ever tune perf, you might narrow `transition: all` to only the properties you actually animate, but it's fine as-is.

front_end/panels/ai_chat/core/CustomProviderManager.ts (3)
161-181: Potential ID collisions when generating provider IDs from names

`generateId` derives the ID purely from a sanitized name, while `addProvider` only checks for name collisions, not ID collisions. Names like "Z AI" and "Z-AI" would produce the same ID (`custom:z-ai`) but pass the name-uniqueness check, resulting in duplicate IDs in storage.

Consider also checking `providerExists(id)` before pushing, and, on collision, either rejecting the name or appending a numeric suffix/UUID fragment to keep IDs unique.
231-252: Unify API-key storage key derivation and avoid hardcoded format

`deleteProvider` uses a hardcoded template for the API-key storage key:

```ts
const apiKeyStorageKey = `ai_chat_custom_${id}_api_key`;
localStorage.removeItem(apiKeyStorageKey);
```

Given you already have `getApiKeyStorageKey`, it's safer to reuse it so key derivation stays consistent:

```diff
- const apiKeyStorageKey = `ai_chat_custom_${id}_api_key`;
- localStorage.removeItem(apiKeyStorageKey);
+ const apiKeyStorageKey = CustomProviderManager.getApiKeyStorageKey(id);
+ localStorage.removeItem(apiKeyStorageKey);
```

This reduces drift if the storage key format ever changes.
12-20: Two different CustomProviderConfig interfaces may cause confusion

This file defines a `CustomProviderConfig` that includes `models`, `enabled`, `createdAt`, and `updatedAt`, while GenericOpenAIProvider.ts also exports a `CustomProviderConfig` with a different shape (optional `models`, optional `apiKey`, no timestamps).

Even though they're in separate modules, having two exported interfaces with the same name but different contracts is easy to misuse. Consider:

- Renaming one of them (e.g., `PersistedCustomProviderConfig` vs `RuntimeCustomProviderConfig`), or
- Reusing a single shared type and extending it in the specific module.

This will make cross-module usage less error-prone.
front_end/panels/ai_chat/LLM/LLMProviderRegistry.ts (1)

294-309: Clarify typing and expectations around fetchModels

`fetchProviderModels` relies on a non-interface method:

```ts
if ('fetchModels' in provider && typeof provider.fetchModels === 'function') {
  return await provider.fetchModels(apiKey, endpoint);
}
```

This assumes:

- Some providers expose a `fetchModels(apiKey, endpoint?)` method.
- `LLMProviderInterface` either includes this method or allows arbitrary members (e.g., via an index signature).

If `LLMProviderInterface` doesn't define `fetchModels`, this will be a type hole. Consider either:

- Extending `LLMProviderInterface` with an optional `fetchModels?(apiKey: string, endpoint?: string): Promise<ModelInfo[]>`, or
- Narrowing the type when you call it, e.g., `(provider as any).fetchModels(apiKey, endpoint)` with a clear comment that this is an ad-hoc extension.

Also, make sure all providers that are expected to be used via `fetchProviderModels` actually implement `fetchModels` with a compatible signature.

front_end/panels/ai_chat/persistence/ConversationTypes.ts (2)
255-274: Preview/title helpers are straightforward; consider using enum value for user entity

The logic for `generatePreview`/`generateTitle` (first user message, truncated to 100/50 chars) is simple and matches the intended UI usage.

If `ChatMessageEntity.USER` exists, you might optionally prefer it over the string literal 'user' to keep this consistent with the AGENT_SESSION checks above, but it's not required for correctness.
129-140: Apply typed helper to eliminate `any` casts in both serialize and deserialize functions

The review correctly identifies two `any` casts at lines 132 and 165 that trigger `no-explicit-any`. The proposed helper type `ChatMessage & { agentSession: AgentSession }` is type-safe and valid for this discriminated union pattern. Since `AgentSession` is already imported, apply the suggested refactoring to both `serializeAgentState` and `deserializeAgentState`:

```diff
+type AgentSessionChatMessage = ChatMessage & { agentSession: AgentSession };
+
 export function serializeAgentState(state: AgentState): SerializableAgentState {
   return {
     messages: state.messages.map(msg => {
       if (msg.entity === ChatMessageEntity.AGENT_SESSION) {
-        const agentSessionMsg = msg as any;
+        const agentSessionMsg = msg as AgentSessionChatMessage;
         return {
           ...agentSessionMsg,
           agentSession: serializeAgentSession(agentSessionMsg.agentSession),
         };
       }
       return msg;
     }),

 export function deserializeAgentState(serialized: SerializableAgentState): AgentState {
   return {
     messages: serialized.messages.map(msg => {
       if (msg.entity === ChatMessageEntity.AGENT_SESSION) {
-        const agentSessionMsg = msg as any;
+        const agentSessionMsg = msg as AgentSessionChatMessage;
         return {
           ...agentSessionMsg,
           agentSession: deserializeAgentSession(agentSessionMsg.agentSession),
         };
       }
       return msg;
     }),
```

front_end/panels/ai_chat/ui/settings/types.ts (1)
8-12: Loosening ModelOption.type to string is flexible but loses some type-safety

Changing `type` from a union of known providers to `string` enables custom providers (e.g., `custom:my-provider`), which makes sense for this PR. The trade-off is that call sites no longer benefit from compile-time checks against the known provider set.

If you want a middle ground, you could consider tying this to your LLM provider union plus a custom prefix, e.g.:

```ts
// Pseudocode, assuming you can import LLMProvider from LLMTypes:
export type ProviderType = LLMProvider | `custom:${string}`;

export interface ModelOption {
  value: string;
  label: string;
  type: ProviderType;
}
```

That still supports arbitrary `custom:` providers while keeping built-in provider names typed.

front_end/panels/ai_chat/tools/FileStorageManager.ts (1)
47-78: Session ID mutability looks good; verify caller usage timing

Making `sessionId` mutable with `getSessionId`/`setSessionId` and a default of 'default' keeps file scoping intact, since all queries already use `this.sessionId`. The main requirement now is that callers reliably call `setSessionId()` with the conversation ID before performing any file operations, and avoid flipping it while operations for a previous conversation are still in flight.

If that assumption holds in AIChatPanel / conversation management, this change is sound. Otherwise, consider passing `sessionId` as an explicit argument into the file APIs for stronger guarantees.

front_end/panels/ai_chat/ui/chatView.css (1)
1650-1669: Conversation history dialog overlay matches settings dialog pattern

The `.conversation-history-dialog` and nested `.widget` rules mirror the full-screen settings dialog, which should give a consistent overlay experience for history. If you end up adding more full-screen dialogs, consider factoring these into a shared "full-screen dialog" class to avoid duplication, but it's fine as-is.

front_end/panels/ai_chat/core/LLMConfigurationManager.ts (3)
297-336: Custom provider validation is minimal but coherent

The new validation path for custom providers (existence, enabled flag, and non-empty models list) is sensible and avoids credential checks in automated mode. You might later want to validate that `mainModel` is one of `customProvider.models`, but as-is the behavior is consistent and non-breaking.

342-355: Saving provider-specific settings doesn't clear custom provider API keys

For custom providers you only call `CustomProviderManager.setApiKey` when `config.apiKey` is truthy, so saving with an empty/undefined `apiKey` leaves any previous key in place. That's fine if custom API keys are meant to be managed exclusively via CustomProviderDialog, but it's different from the built-in path where passing `null` clears credentials.

If you expect `saveConfiguration` to also support clearing a custom provider API key, add an `else` branch that removes it via `CustomProviderManager.setApiKey(provider, '')` or a dedicated delete helper.

360-379: Storage-change handling only reacts to legacy keys

Even though you now collect API-key storage keys from `LLMProviderRegistry`, the handler still early-returns unless `event.key` is in `STORAGE_KEYS`. That means storage events for any provider keys not listed there (e.g. newly added providers or custom-provider keys) won't trigger logging or listeners.

If you want cross-tab updates/logging when new provider keys change, consider broadening the condition to also include registry-derived keys (and possibly custom-provider API keys from `CustomProviderManager`).

front_end/panels/ai_chat/ui/CustomProviderDialog.ts (1)
146-315: Add/edit dialog flow is well structured; minor API-key UX gap

The add/edit dialog, test-connection, and model-management flows are solid, and you correctly keep user input in `textContent` and per-node properties, so the ast-grep `innerHTML` warnings (used only to clear containers) are not a real XSS risk.

One UX gap: when editing an existing provider, leaving the API key field blank never clears a previously stored key because you only call `CustomProviderManager.setApiKey` when `apiKey` is truthy. If you want users to be able to remove an API key, consider explicitly clearing it when the field is emptied.

front_end/panels/ai_chat/LLM/LLMTypes.ts (2)
246-269: Helper guards for built-in vs custom providers

`isCustomProvider` and `isBuiltInProvider` do what they say and provide a useful runtime/type-level split. One small maintenance improvement would be to derive `builtInProviders` from a single source (e.g. `Object.keys(builtInNames)`) so the list can't drift from the union and display-name map.

271-302: Display-name helper behavior vs comment

`getProviderDisplayName` correctly maps built-in IDs to friendly names and derives custom provider names from IDs by splitting on `-` and capitalizing. Note that this yields "Z Ai" for `custom:z-ai`, whereas the comment example says "Z.AI"; the code is fine, but the example is slightly misleading. Either adjust the comment or tweak the formatting if the punctuation matters.

front_end/panels/ai_chat/ui/settings/providers/GenericProviderSettings.ts (2)
126-208: Dynamic fetchModels call works but typing and selector updates can be tightened

The dynamic call to `LLMClient[this.config.fetchMethodName]` is correct given the current LLMClient API set, but you're casting it to `(apiKey: string) => Promise<any[]>` and relying on `any[]`. You could narrow the type with a dedicated interface or union for the supported fetch methods to avoid `any`.

Also, within the success path you both:

- Refresh existing selectors via `refreshModelSelectOptions`, and
- Call `this.updateModelSelectors()` again at the end (which tears down and recreates the selectors).

The second call makes the first redundant. Dropping one of these (most likely the explicit `refreshModelSelectOptions` calls, relying on `updateModelSelectors` alone) would simplify the flow without changing behavior.

Finally, the non-null assertions on `this.fetchModelsStatus!` are unnecessary given the early guard; assigning `const statusEl = this.fetchModelsStatus;` after the guard and using `statusEl` avoids the lint violation.
284-299: API-key save semantics vs comment

`save()` uses `setStorageItem`, which removes the key when passed an empty string. That means both optional and required API keys are effectively removed from storage when the field is left blank, even though the comment suggests only optional keys should be removed.

Behavior is still consistent with `getStorageItem` (both cases read back as empty), but to match the intent you could either:

- Call `localStorage.setItem(this.config.apiKeyStorageKey, '')` directly for required keys, or
- Simplify and just rely on `setStorageItem` in all cases and update the comment to reflect the actual behavior.

front_end/panels/ai_chat/LLM/GenericOpenAIProvider.ts (3)
32-62: Clarify config type and provider name typing for custom providers

Two minor type-safety issues here:

- `CustomProviderConfig` in this file duplicates the name of the config type used by `CustomProviderManager` but with a different shape. That's easy to confuse and silently incompatible if someone imports the other one later. Prefer renaming this to something like `GenericOpenAIProviderConfig`, or reusing the shared type and picking only the needed fields.
- `name` is typed as `LLMProvider` but assigned `this.providerId as LLMProvider`, and `providerId` can be an arbitrary custom ID (e.g. `custom:z-ai`) that is not part of the `LLMProvider` union. This cast sidesteps the type system and can leak non-standard provider IDs into places expecting a known provider. Consider widening the type (e.g. `readonly name: LLMProvider | string`) or introducing an explicit `providerId: string` separate from a strict `LLMProvider` enum.

16-27: Replace `any` index signature on GenericOpenAIModel

`[key: string]: any` triggers the linter and loses type safety for model metadata. If you don't need strongly typed extras, prefer something like:

```ts
export interface GenericOpenAIModel {
  id: string;
  object: string;
  created?: number;
  owned_by?: string;
  [key: string]: unknown;
}
```

This keeps it flexible while avoiding `any`.
239-273: Reduce `any` usage in payload construction

`payloadBody` and several intermediates are typed as `any`, which is why ESLint flags those lines. You don't need a perfect schema, but you can usually get by with `Record<string, unknown>` plus small helper types for `messages` and `tools`, which will catch key typos and obvious shape errors.

If you want to keep it light, even:

```ts
const payloadBody: Record<string, unknown> = {
  model: modelName,
  messages: this.convertMessagesToOpenAI(messages),
};
```

would already address the lints here and in `makeAPIRequest`.

front_end/panels/ai_chat/persistence/ConversationManager.ts (2)
5-19: Clean up imports and remove unused `extractMetadata`

- ESLint is flagging `extractMetadata` as unused; it's imported but never referenced. It should be removed from the import list.
- The linter also wants type imports grouped and separated from value imports (and value imports at the top). You can fix this by moving the type-only imports (`AgentState`, `AgentSession`, `ConversationMetadata`, `StoredConversation`) together and keeping a blank line between type and value import groups, per your existing style.

These are low-risk cleanups but will keep CI happy.
152-174: Simplify `autoSaveConversation` control flow

The `if (!currentConversationId) { ... } else { ... }` pattern after early returns is what ESLint is flagging (no-else-return). You can simplify:

```ts
try {
  if (!currentConversationId) {
    const conversation = await this.createConversation(state, agentSessions);
    return conversation.id;
  }
  await this.saveConversation(currentConversationId, state, agentSessions);
  return currentConversationId;
} catch (error) {
  logger.error('Failed to auto-save conversation', {error, currentConversationId});
  return currentConversationId || '';
}
```

This is purely stylistic but will clear the lint.
front_end/panels/ai_chat/ui/SettingsDialog.ts (5)
40-57: Avoid reusing the `ProviderConfig` name and reduce `any` usage

This local `ProviderConfig` interface has a different shape than the `ProviderConfig` used in `GenericProviderSettings.ts`, but shares the same name. That's confusing and makes it easy to mix up types across files.

Two suggestions:

- Rename this interface to something like `ProviderRegistryEntry` to clearly distinguish it from the settings-level `ProviderConfig`.
- Replace the `config?: any` with the actual provider config type imported from `GenericProviderSettings.ts` (or `providerConfigs.ts`), e.g.:

```ts
import type { ProviderConfig as GenericProviderConfig } from './settings/providers/GenericProviderSettings.js';

interface ProviderRegistryEntry {
  id: ProviderType;
  i18nKey: keyof typeof UIStrings;
  config?: GenericProviderConfig;
  settingsClass?: 'generic' | 'litellm' | 'openrouter';
}
```

This will clear several no-explicit-any lints and make registry entries type-safe.
119-166: Harden custom provider helpers and type them explicitly

A few small robustness and typing points in the custom provider helpers:

- `createCustomProviderConfig` and `createCustomModelOptionsGetter` both use `any`. You can import the concrete type from `CustomProviderManager` and the `ModelOption` type from `./settings/types.js` to avoid `any` entirely.
- `createCustomModelOptionsGetter` assumes `config.models` is always defined; if an older config without models slipped through, `config.models.map` would throw. A cheap guard avoids that:

```ts
if (!config.models || !config.models.length) {
  return [];
}
return config.models.map(/* ... */);
```

- For readability and to satisfy the curly rule, convert single-line returns like `if (!customConfig) return null;` into block form.

These are minor but make the custom provider path more resilient.
246-308: Type the provider settings map and avoid the non-null assertion on `provider.config`

In the provider settings initialization:

- `const providerSettings = new Map<ProviderType, any>();` and the later `provider.config!` both trigger lints and weaken type safety.
- If a registry entry with `settingsClass: 'generic'` accidentally misses `config`, the non-null assertion will mask it until runtime.

You can improve this by:

```ts
type ProviderSettingsInstance =
  | GenericProviderSettings
  | LiteLLMSettings
  | OpenRouterSettings;

const providerSettings = new Map<ProviderType, ProviderSettingsInstance>();

// ...
} else {
  if (!provider.config) {
    logger.error(`Missing config for provider ${provider.id} with generic settingsClass`);
    return;
  }
  const hasUpdateModelOptions = ['groq', 'cerebras', 'anthropic', 'googleai'].includes(provider.id);
  settings = new GenericProviderSettings(
    content,
    provider.config,
    getModelOptions,
    addCustomModelOption,
    removeCustomModelOption,
    hasUpdateModelOptions ? updateModelOptions : undefined,
  );
}
```

This addresses both the `any` and `!` concerns and gives you compile-time coverage when updating the registry.
358-431: Provider change handler is correct but can be tightened

The change handler correctly:

- Differentiates custom providers vs built-ins.
- Handles LiteLLM as a special case (endpoint + wildcard models).
- Uses `providerAutoFetchMap` for Groq / OpenRouter / Cerebras / Anthropic / GoogleAI and refreshes selectors via the corresponding settings instance.

Minor improvements:

- Several values and objects here are implicitly `any` (`models`, `settings`), which you can type using `ModelOption` and the `ProviderSettingsInstance` union suggested earlier.
- For readability and to satisfy curly, wrap single-line statements like `if (litellmSettings) litellmSettings.updateModelSelectors();` (and similar) in braces.

Otherwise, the flow looks solid.
578-648: Save flow for provider and models is correct; change `currentProviderSettings` to `const`

The save handler:

- Persists the selected provider ID.
- Calls `save()` on all provider settings and any active custom provider settings.
- Picks the appropriate settings instance (custom vs standard) to store mini/nano models into `MINI_MODEL_STORAGE_KEY` and `NANO_MODEL_STORAGE_KEY`.
- Saves advanced features and shows a clear success state.

Two small tweaks:

- `currentProviderSettings` is never reassigned; change `let` to `const` to quiet prefer-const.
- Optionally guard the custom branch with a null check to make it explicit that `customProviderSettings` might be `null` if initialization failed.

Functionally, the flow is good.
front_end/panels/ai_chat/persistence/ConversationStorageManager.ts (1)
78-100: Simplify `loadConversation` control flow and address lint

`loadConversation` currently uses an `if/else` with returns in both branches, which ESLint flags as an unnecessary `else` after `return`:

```ts
if (conversation) {
  logger.info(...);
  return conversation;
} else {
  logger.warn(...);
  return null;
}
```

You can simplify to:

```ts
if (conversation) {
  logger.info('Loaded conversation', {conversationId: id, title: conversation.title});
  return conversation;
}
logger.warn('Conversation not found', {conversationId: id});
return null;
```

Behavior is unchanged and the lint is satisfied.
front_end/panels/ai_chat/LLM/AnthropicProvider.ts (1)
144-155: Define OpenAI and Anthropic tool types to reduce `any[]` usage

The mapping logic is correct per Anthropic's API spec (the name, description, and input_schema fields match). However, adding type definitions will improve type safety and catch shape mismatches early.

Add to `LLMTypes.ts`:

```ts
export interface OpenAIFunctionTool {
  type: 'function';
  function: {
    name: string;
    description?: string;
    parameters?: Record<string, unknown>;
  };
}

export interface AnthropicToolSchema {
  name: string;
  description: string;
  input_schema: Record<string, unknown>;
}
```

Then update `AnthropicProvider.ts`:

```ts
import type { /* ... */, OpenAIFunctionTool, AnthropicToolSchema } from './LLMTypes.js';

private convertToolsToAnthropic(tools: OpenAIFunctionTool[]): AnthropicToolSchema[] {
  // existing implementation
}
```

Also update `LLMCallOptions` and `UnifiedLLMOptions` in `LLMTypes.ts` to use `tools?: OpenAIFunctionTool[]` instead of `tools?: any[]`.

front_end/panels/ai_chat/LLM/LLMClient.ts (3)
26-30: Align provider typing across config, registry, and call sites
`LLMProviderConfig.provider` is now a plain `string`, while other APIs (e.g. `LLMCallRequest.provider`, `ModelInfo.provider`, registry APIs) still appear to use `LLMProvider` unions. This can lead to unsafe casts for custom IDs like `custom:foo` and makes it harder to reason about where custom providers are allowed.

Consider introducing a shared `ProviderType = string` (or similar) and using it consistently in config, registry, and call/request types, or explicitly separating `BuiltInProvider` vs `CustomProvider` instead of relying on `as LLMProvider` casts. Please also re-run the TypeScript compiler / ESLint to ensure there are no hidden type errors around these changes.
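The separation suggested above can be sketched with a template-literal type and a type guard instead of `as` casts. The union members and the `custom:` prefix here are illustrative assumptions, not the PR's actual declarations:

```typescript
// Sketch: a strict built-in union plus a 'custom:'-prefixed string form,
// with a type guard so call sites never need `as LLMProvider` casts.
type BuiltInProvider = 'openai' | 'litellm' | 'groq' | 'cerebras' | 'anthropic' | 'googleai';
type ProviderType = BuiltInProvider | `custom:${string}`;

function isCustomProvider(id: ProviderType): id is `custom:${string}` {
  return id.startsWith('custom:');
}

function describe(id: ProviderType): string {
  // Inside this branch, TypeScript narrows `id` to the custom form.
  return isCustomProvider(id)
    ? `custom provider ${id.slice('custom:'.length)}`
    : `built-in provider ${id}`;
}
```

With a shape like this, accidentally passing `custom:z-ai` where only built-ins are valid becomes a compile-time error rather than a silent cast.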
517-557: `getProviderCredentials` custom vs standard provider handling

The new `getProviderCredentials` nicely unifies:

- Custom providers: validate via `LLMClient.validateProviderCredentials`, then pull the API key and storage key from `CustomProviderManager`.
- Standard providers: delegate to `LLMProviderRegistry.getProviderCredentials`.

The error path logs with `logger.error` and returns `canProceed: false`, which is a safe default for UI and initialization flows.

The only nit is ESLint's member-delimiter-style complaining about semicolons vs commas in the inline return type. If your config treats that as an error, you can resolve it by switching to comma delimiters or a named interface.
562-603: `testCustomProviderConnection` is a reasonable generic connectivity check

Creating a temporary `CustomProviderConfig` and `GenericOpenAIProvider` to probe model listing is a sensible way to test arbitrary OpenAI-compatible endpoints without polluting persistent state. The success/failure branches and error messaging are clear.

Given ESLint's no-else-return warning, you could slightly flatten the control flow after the `if (modelObjects && modelObjects.length > 0)` early return, but that's purely stylistic and not functionally significant.

front_end/panels/ai_chat/LLM/GoogleAIProvider.ts (2)
59-97: Content conversion for Gemini is reasonable; minor style nit

`convertContentToGoogleAI` correctly:

- Treats plain strings as `{ text }`,
- Converts multimodal arrays into Gemini `parts` (including base64 `inline_data` for data URLs, and a warning for plain URLs),
- Falls back to stringifying unknown content.

This is a good abstraction for `MessageContent`. ESLint's no-else-return warning around the `image_url` branch is stylistic; if you care about that rule, you can early-return from the `if (matches)` branch and remove the trailing `else`.
- Separates system messages into a
systemInstructionstring (then prepends it as a first user message),- Maps user/assistant/tool roles into appropriate Gemini
role/partsstructures,- Handles assistant
tool_callsby converting each to afunctionCallwith parsed args,- Encodes tool results as
functionResponse.
convertToolsToGoogleAIconverts OpenAI‑style tools into Geminifunction_declarations. Together this should enable function calling with Gemini models.One small improvement would be to store the result of
this.convertContentToGoogleAI(msg.content)in a variable to avoid calling it twice on each message.Also applies to: 174-189
front_end/panels/ai_chat/ui/AIChatPanel.ts (3)
553-597: `addCustomModelOption` still assumes a small set of providers

`addCustomModelOption` now takes an optional `modelType: ProviderType` but only adjusts the label for `'litellm'`, `'groq'`, and `'openrouter'`, defaulting to OpenAI otherwise. This is fine if the method is only used for those providers, but if you ever use it for the new providers (cerebras/anthropic/googleai) or custom IDs, the label will be misleading.

If you anticipate broader use, consider deriving the label prefix from `finalModelType` (e.g., capitalizing or mapping known IDs) rather than hardcoding just three cases.
669-697: Conversation events are wired correctly; mind lint on single-line ifs

Binding `#boundOnConversationSaved`/`#boundOnConversationChanged` in the constructor and registering/unregistering them inside `#initializeAgentService` ensures you don't accumulate duplicate listeners across reinitializations. The handlers themselves are straightforward, and the event integration with `AgentEvents.CONVERSATION_*` looks good.

ESLint is flagging the single-line `if (this.#boundOnConversationSaved) this.#agentService.removeEventListener(...)` style (curly rule). If your config treats this as an error, wrap these in braces to match the rest of the file's style.

Also applies to: 1400-1412
1711-1737: FileStorageManager session handling on conversation changes

`#handleConversationChanged` and `#onNewChatClick` both set the FileStorageManager `sessionId`:

- On a non-null `conversationId`, the handler updates the session to that ID and refreshes the file list.
- On "New chat", `#onNewChatClick` explicitly resets the sessionId to `'default'`.

This correctly associates file storage with the active conversation. Note that `#loadConversation` also sets the sessionId and refreshes the file list, so when selecting a conversation from history you'll do this work twice (event + direct call). Not a bug, but you could rely solely on the `CONVERSATION_CHANGED` event to avoid duplication.

Also applies to: 2185-2195
front_end/panels/ai_chat/core/AgentService.ts (2)
18-32: New imports for registry, custom providers, and conversation persistence

Adding `LLMProviderRegistry`, `CustomProviderManager`, `isCustomProvider`, and `ConversationManager`/`ConversationMetadata` is consistent with the new provider and persistence architecture. These imports are all used in later logic and help centralize config and storage handling.

ESLint is flagging import-order issues (import groups and positioning). Consider reordering per your lint rules (e.g., external vs internal modules, then types) to keep CI green.
190-223: Provider config builder centralizes provider-specific requirements

`#initializeLLMClient` now:

- Validates the overall configuration via `LLMConfigurationManager.validateConfiguration`,
- Builds a single provider config using `#buildProviderConfig(provider, apiKey, endpoint)`, and
- Initializes `LLMClient` with that provider list.

`#buildProviderConfig` sensibly handles:

- Custom providers via `isCustomProvider` + `CustomProviderManager.getProvider`, using the stored `baseURL` and optional apiKey,
- `litellm` requiring an endpoint and treating apiKey as optional,
- `browseroperator` not requiring an apiKey, and
- All other providers requiring an apiKey.

This is a nice consolidation of provider-specific logic. Just note that ESLint's member-delimiter-style warning on the return type can be fixed by moving the object type to a named interface or adjusting delimiters if needed.

Also applies to: 229-281
```ts
      try {
        const url = new URL(config.baseURL);
        if (!url.protocol.startsWith('http')) {
          errors.push('Base URL must use HTTP or HTTPS protocol');
        }
      } catch (e) {
        errors.push('Base URL is not a valid URL');
      }
    }
```
🧩 Analysis chain
Fix unused catch parameter in URL validation

The `catch (e)` block doesn't use `e`, which triggers the ESLint no-unused-vars error. You can switch to a bare `catch` or log the error:
```diff
-  } else {
-    // Validate URL format
-    try {
-      const url = new URL(config.baseURL);
+  } else {
+    // Validate URL format
+    try {
+      const url = new URL(config.baseURL);
       if (!url.protocol.startsWith('http')) {
         errors.push('Base URL must use HTTP or HTTPS protocol');
       }
-    } catch (e) {
+    } catch {
       errors.push('Base URL is not a valid URL');
     }
   }
```
🏁 Script executed:

```shell
#!/bin/bash
# Locate and inspect the CustomProviderManager.ts file
find . -type f -name "CustomProviderManager.ts" | head -20
```

Length of output: 56

🏁 Script executed:

```shell
#!/bin/bash
# Display lines 50-70 from the CustomProviderManager.ts file to verify the code snippet
fd -type f -name "CustomProviderManager.ts" --exec sed -n '50,70p' {}
```

Length of output: 170

🏁 Script executed:

```shell
#!/bin/bash
# Check for ESLint configuration files and their content
find . -type f \( -name ".eslintrc*" -o -name "eslint.config.*" \) | head -10
```

Length of output: 460

🏁 Script executed:

```shell
#!/bin/bash
# Check TypeScript configuration for target language version
fd -type f -name "tsconfig.json" --exec cat {} \; | head -100
```

Length of output: 170

🏁 Script executed:

```shell
#!/bin/bash
# Search for other catch blocks in the file to see error handling patterns
fd -type f -name "CustomProviderManager.ts" --exec grep -n "catch" {} \;
```

Length of output: 170

🏁 Script executed:

```shell
#!/bin/bash
# Display lines 50-70 from the CustomProviderManager.ts file
cat ./front_end/panels/ai_chat/core/CustomProviderManager.ts | sed -n '50,70p'
```

Length of output: 582

🏁 Script executed:

```shell
#!/bin/bash
# Show the complete validateConfig method to understand context
cat ./front_end/panels/ai_chat/core/CustomProviderManager.ts | sed -n '40,80p'
```

Length of output: 1113

🏁 Script executed:

```shell
#!/bin/bash
# Read the main ESLint config
cat ./eslint.config.mjs | head -150
```

Length of output: 4489

🏁 Script executed:

```shell
#!/bin/bash
# Find and read tsconfig.json
find . -name "tsconfig.json" -not -path "*/node_modules/*" | head -5 | xargs cat
```

Length of output: 2041

🏁 Script executed:

```shell
#!/bin/bash
# Search all catch blocks in the file
cat ./front_end/panels/ai_chat/core/CustomProviderManager.ts | grep -n "catch"
```

Length of output: 73

🏁 Script executed:

```shell
#!/bin/bash
# Get more of the ESLint config to see no-unused-vars or related rules
cat ./eslint.config.mjs | grep -A 5 -B 5 "no-unused"
```

Length of output: 1346

🏁 Script executed:

```shell
#!/bin/bash
# Search for no-unused-vars rule in eslint config
cat ./eslint.config.mjs | grep -i "unused"
```

Length of output: 327

🏁 Script executed:

```shell
#!/bin/bash
# Check if bare catch is used anywhere in the codebase
grep -r "} catch {" ./front_end --include="*.ts" --include="*.js" | head -20
```

Length of output: 1070

🏁 Script executed:

```shell
#!/bin/bash
# Get the lines around the catch blocks to see error parameter usage
cat ./front_end/panels/ai_chat/core/CustomProviderManager.ts | sed -n '88,110p'
```

Length of output: 667

🏁 Script executed:

```shell
#!/bin/bash
# Check TypeScript ESLint plugin rules in the config
cat ./eslint.config.mjs | grep -A 2 "@typescript-eslint" | head -50
```

Length of output: 1357
Fix unused catch parameter in URL validation

The `catch (e)` block has an unused parameter that violates the @typescript-eslint/no-unused-vars rule (parameters must start with `_` to be ignored). Switch to a bare `catch` block, which is already used throughout the codebase:
```diff
-      } catch (e) {
+      } catch {
        errors.push('Base URL is not a valid URL');
      }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
      try {
        const url = new URL(config.baseURL);
        if (!url.protocol.startsWith('http')) {
          errors.push('Base URL must use HTTP or HTTPS protocol');
        }
      } catch {
        errors.push('Base URL is not a valid URL');
      }
    }
```
🧰 Tools
🪛 ESLint
[error] 60-60: 'e' is defined but never used.
(@typescript-eslint/no-unused-vars)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/core/CustomProviderManager.ts around lines 55 to 63,
the catch clause uses an unused parameter `e` which triggers the
@typescript-eslint/no-unused-vars rule; replace the current `catch (e)` with a
bare `catch {` block and keep the existing body (errors.push('Base URL is not a
valid URL')) so the unused parameter is removed and the code matches the
project's bare-catch style.
```ts
  /**
   * Clear all custom providers (mainly for testing/reset)
   */
  static clearAllProviders(): void {
    localStorage.removeItem(CustomProviderManager.STORAGE_KEY);

    // Also clean up all custom provider API keys
    const providers = CustomProviderManager.loadProviders();
    providers.forEach(provider => {
      const apiKeyStorageKey = `ai_chat_custom_${provider.id}_api_key`;
      localStorage.removeItem(apiKeyStorageKey);
    });

    logger.info('Cleared all custom providers');
  }
```
🧩 Analysis chain
clearAllProviders doesn't actually clean up API keys

`clearAllProviders` removes the main providers entry from localStorage before calling `loadProviders`, so `loadProviders` always returns `[]` and no per-provider API keys are removed:

```ts
localStorage.removeItem(STORAGE_KEY);
const providers = CustomProviderManager.loadProviders(); // always []
```

This means `ai_chat_custom_*_api_key` entries are left behind.
Reorder to load and clean keys first, then remove the providers blob, and reuse `getApiKeyStorageKey`:

```diff
 static clearAllProviders(): void {
-  localStorage.removeItem(CustomProviderManager.STORAGE_KEY);
-
-  // Also clean up all custom provider API keys
-  const providers = CustomProviderManager.loadProviders();
-  providers.forEach(provider => {
-    const apiKeyStorageKey = `ai_chat_custom_${provider.id}_api_key`;
-    localStorage.removeItem(apiKeyStorageKey);
-  });
-
-  logger.info('Cleared all custom providers');
+  // Load existing providers to clean up their API keys
+  const providers = CustomProviderManager.loadProviders();
+  providers.forEach(provider => {
+    const apiKeyStorageKey = CustomProviderManager.getApiKeyStorageKey(provider.id);
+    localStorage.removeItem(apiKeyStorageKey);
+  });
+
+  // Remove the providers list itself
+  localStorage.removeItem(CustomProviderManager.STORAGE_KEY);
+
+  logger.info('Cleared all custom providers');
 }
```
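The clear-before-load ordering bug is easy to reproduce with a stand-in for localStorage. The `Map`-backed store and function names below are a sketch; the key names follow the review text:

```typescript
// Reproduces the ordering bug described above: deleting the provider list
// before reading it means the per-provider API keys are never found.
const storage = new Map<string, string>();
const STORAGE_KEY = 'ai_chat_custom_providers';

function loadProviders(): Array<{id: string}> {
  const raw = storage.get(STORAGE_KEY);
  return raw ? JSON.parse(raw) : [];
}

function clearAllProvidersBuggy(): void {
  storage.delete(STORAGE_KEY);        // removes the list first...
  for (const p of loadProviders()) {  // ...so this loop never runs
    storage.delete(`ai_chat_custom_${p.id}_api_key`);
  }
}

function clearAllProvidersFixed(): void {
  for (const p of loadProviders()) {  // clean per-provider keys first
    storage.delete(`ai_chat_custom_${p.id}_api_key`);
  }
  storage.delete(STORAGE_KEY);        // then drop the list itself
}

// Seed one provider plus its API key, then run the buggy version.
storage.set(STORAGE_KEY, JSON.stringify([{id: 'z-ai'}]));
storage.set('ai_chat_custom_z-ai_api_key', 'sk-123');
clearAllProvidersBuggy();
const leaked = storage.has('ai_chat_custom_z-ai_api_key'); // key left behind
```

The fix is purely a reordering, which is why the suggested diff above only moves statements around.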
---
🏁 Script executed:
```shell
#!/bin/bash
# Find and inspect the loadProviders method
rg -nA 10 "loadProviders\s*\(" front_end/panels/ai_chat/core/CustomProviderManager.ts
```

Length of output: 3381
🏁 Script executed:
```shell
#!/bin/bash
# Find and inspect the getApiKeyStorageKey method
rg -nA 3 "getApiKeyStorageKey\s*\(" front_end/panels/ai_chat/core/CustomProviderManager.ts
```

Length of output: 398
🏁 Script executed:
```shell
#!/bin/bash
# Find and inspect the STORAGE_KEY definition
rg -nA 2 "STORAGE_KEY" front_end/panels/ai_chat/core/CustomProviderManager.ts
```

Length of output: 571
🏁 Script executed:
```shell
#!/bin/bash
# Get broader context around clearAllProviders to see exact current implementation
rg -nB 2 -A 15 "static clearAllProviders" front_end/panels/ai_chat/core/CustomProviderManager.ts
```

Length of output: 638
Fix `clearAllProviders()` to actually remove API keys

The bug is confirmed: `localStorage.removeItem(STORAGE_KEY)` executes at line 286 before `loadProviders()` is called at line 289, causing it to return an empty array. Per-provider API keys are never cleaned up.

Reorder to load providers first, then clean keys, then remove the providers storage key. Also use the `getApiKeyStorageKey()` helper instead of hardcoding the key format:
```diff
 static clearAllProviders(): void {
-  localStorage.removeItem(CustomProviderManager.STORAGE_KEY);
-
-  // Also clean up all custom provider API keys
-  const providers = CustomProviderManager.loadProviders();
-  providers.forEach(provider => {
-    const apiKeyStorageKey = `ai_chat_custom_${provider.id}_api_key`;
-    localStorage.removeItem(apiKeyStorageKey);
-  });
-
-  logger.info('Cleared all custom providers');
+  // Load existing providers to clean up their API keys
+  const providers = CustomProviderManager.loadProviders();
+  providers.forEach(provider => {
+    const apiKeyStorageKey = CustomProviderManager.getApiKeyStorageKey(provider.id);
+    localStorage.removeItem(apiKeyStorageKey);
+  });
+
+  // Remove the providers list itself
+  localStorage.removeItem(CustomProviderManager.STORAGE_KEY);
+
+  logger.info('Cleared all custom providers');
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
  /**
   * Clear all custom providers (mainly for testing/reset)
   */
  static clearAllProviders(): void {
    // Load existing providers to clean up their API keys
    const providers = CustomProviderManager.loadProviders();
    providers.forEach(provider => {
      const apiKeyStorageKey = CustomProviderManager.getApiKeyStorageKey(provider.id);
      localStorage.removeItem(apiKeyStorageKey);
    });

    // Remove the providers list itself
    localStorage.removeItem(CustomProviderManager.STORAGE_KEY);

    logger.info('Cleared all custom providers');
  }
```
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/core/CustomProviderManager.ts around lines 283 to
296, clearAllProviders() currently removes the providers storage key before
calling loadProviders(), so loadProviders() returns empty and per-provider API
keys are never removed; change the order to first call
CustomProviderManager.loadProviders() to get the providers list, iterate it and
remove each provider's API key using the existing
getApiKeyStorageKey(provider.id) helper, and only after cleaning per-provider
keys call localStorage.removeItem(CustomProviderManager.STORAGE_KEY); finally
keep the logger.info call as-is.
```ts
import { createLogger } from './Logger.js';
import type { LLMProvider } from '../LLM/LLMTypes.js';
import { isCustomProvider } from '../LLM/LLMTypes.js';
import { LLMProviderRegistry } from '../LLM/LLMProviderRegistry.js';
import { CustomProviderManager } from './CustomProviderManager.js';
```
Import ordering triggers ESLint rules

ESLint is flagging these imports (import/order + inline-type-imports). You'll likely need to reorder/merge them, e.g. group the `LLMProviderRegistry` import before the `LLMProvider` import and combine `isCustomProvider` into the same import statement to satisfy the configured rules.
🧰 Tools
🪛 ESLint
[error] 6-6: There should be at least one empty line between import groups
(import/order)
[error] 6-6: ./Logger.js import should occur after import of ./CustomProviderManager.js
(import/order)
[error] 7-7: ../LLM/LLMTypes.js type import should occur after import of ../LLM/LLMProviderRegistry.js
(import/order)
[error] 7-7: Type imports must be imported in the same import statement as values, using the type keyword
(rulesdir/inline-type-imports)
[error] 8-8: ../LLM/LLMTypes.js import should occur after import of ../LLM/LLMProviderRegistry.js
(import/order)
[error] 9-9: There should be at least one empty line between import groups
(import/order)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/core/LLMConfigurationManager.ts around lines 6–10,
the import order and inline type import violate ESLint import/order and
inline-type-imports rules; fix by reordering and merging imports: place
LLMProviderRegistry import before the LLM type import, combine isCustomProvider
with the LLM types in a single import statement, and import the interface using
an explicit type-only import (import type { LLMProvider } ...) while keeping
createLogger and CustomProviderManager imports in their current positions so the
final order satisfies the linter.
```ts
  /**
   * Convert MessageContent to Anthropic format
   */
  private convertContentToAnthropic(content: MessageContent | undefined): any {
    if (!content) {
      return [];
    }

    if (typeof content === 'string') {
      return [{ type: 'text', text: content }];
    }

    if (Array.isArray(content)) {
      return content.map(item => {
        if (item.type === 'text') {
          return { type: 'text', text: item.text };
        } else if (item.type === 'image_url') {
          // Anthropic uses a different image format
          const url = item.image_url.url;
          if (url.startsWith('data:')) {
            // Extract mime type and base64 data
            const matches = url.match(/^data:([^;]+);base64,(.+)$/);
            if (matches) {
              return {
                type: 'image',
                source: {
                  type: 'base64',
                  media_type: matches[1],
                  data: matches[2]
                }
              };
            }
          }
          // For URLs, Anthropic supports URL type
          return {
            type: 'image',
            source: {
              type: 'url',
              url: url
            }
          };
        }
        return { type: 'text', text: String(item) };
      });
    }

    return [{ type: 'text', text: String(content) }];
  }
```
🧩 Analysis chain
Content conversion to Anthropic format is generally correct

`convertContentToAnthropic` correctly:

- Maps string content to `{ type: 'text', text: ... }`.
- Translates OpenAI-style `image_url` blocks into Anthropic's `{ type: 'image', source: { type: 'base64' | 'url', ... } }`.
- Falls back to text for unknown types.

One minor robustness improvement: ensure `item.image_url` and `item.image_url.url` exist before calling `url.startsWith`, or you'll risk a runtime error if the input is malformed. A simple guard:

```ts
if (item.type === 'image_url' && item.image_url?.url) {
  const url = item.image_url.url;
  // ...
}
```

would be enough.
🌐 Web query:
Confirm Anthropic Messages API image content format (base64 vs url source) to ensure this mapping matches current docs.
💡 Result:
Short answer: Anthropic's Messages API accepts image content blocks in the message.content array; each image block uses `source.type = "base64"` (with `media_type` and `data`), and the API also supports `source.type = "url"` for URL-referenced images. The example JSON patterns match your mapping.

Example (equivalent to Anthropic docs):

- Base64: `{"type":"image","source":{"type":"base64","media_type":"image/jpeg","data":"<BASE64_DATA>"}}`
- URL: `{"type":"image","source":{"type":"url","url":"https://.../image.jpg"}}`
Sources: Anthropic Messages reference and Vision/examples pages. [1][2][3]
References
[1] Anthropic Messages API docs (messages content / image block).
[2] Anthropic Vision guide — base64 example.
[3] Anthropic Messages examples — URL and base64 examples.
Guard `item.image_url` access to prevent runtime errors on malformed input

`convertContentToAnthropic` correctly maps to Anthropic's image format (verified against current API docs). However, add a guard check before accessing `item.image_url.url`:

```ts
if (item.type === 'image_url' && item.image_url?.url) {
  const url = item.image_url.url;
  // ...
}
```

Without this guard, a malformed input (missing `image_url` or `url` properties) will throw a runtime error.
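The guarded branch can be sketched standalone. The item shape follows OpenAI-style content blocks; the helper name and text fallback are illustrative assumptions, not the provider's actual code:

```typescript
// Sketch of the guard suggested above: a malformed image_url item degrades
// to a text block instead of throwing on a missing `url` property.
type ContentItem =
  | { type: 'text', text: string }
  | { type: 'image_url', image_url?: { url?: string } };

function toAnthropicBlock(item: ContentItem): { type: string, [k: string]: unknown } {
  if (item.type === 'image_url' && item.image_url?.url) {
    return { type: 'image', source: { type: 'url', url: item.image_url.url } };
  }
  if (item.type === 'text') {
    return { type: 'text', text: item.text };
  }
  // Malformed image_url (or unknown) item: degrade gracefully rather than throw.
  return { type: 'text', text: JSON.stringify(item) };
}
```

The optional-chaining check is cheap and keeps a single bad content item from aborting the whole message conversion.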
🧰 Tools
🪛 ESLint
[error] 38-38: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 51-76: Unnecessary 'else' after 'return'.
(no-else-return)
[error] 73-73: Expected property shorthand.
(object-shorthand)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/LLM/AnthropicProvider.ts around lines 35 to 82, the
conversion accesses item.image_url.url directly which can throw on malformed
input; add a guard so you only read item.image_url.url when item.type ===
'image_url' && item.image_url?.url is truthy, and if the guard fails handle it
gracefully (e.g. fall back to returning a text entry or skipping the image) to
avoid runtime errors.
```ts
  private convertMessagesToAnthropic(messages: LLMMessage[]): { system?: string, messages: any[] } {
    let systemPrompt: string | undefined;
    const anthropicMessages: any[] = [];

    for (const msg of messages) {
      if (msg.role === 'system') {
        // Anthropic uses a separate system parameter
        systemPrompt = typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content);
        continue;
      }

      if (msg.role === 'user') {
        anthropicMessages.push({
          role: 'user',
          content: this.convertContentToAnthropic(msg.content)
        });
      } else if (msg.role === 'assistant') {
        if (msg.tool_calls && msg.tool_calls.length > 0) {
          // Convert tool calls to Anthropic format
          const toolUseBlocks = msg.tool_calls.map(tc => ({
            type: 'tool_use',
            id: tc.id,
            name: tc.function.name,
            input: typeof tc.function.arguments === 'string'
              ? JSON.parse(tc.function.arguments)
              : tc.function.arguments
          }));
          anthropicMessages.push({
            role: 'assistant',
            content: toolUseBlocks
          });
        } else {
          // Regular assistant message
          anthropicMessages.push({
            role: 'assistant',
            content: this.convertContentToAnthropic(msg.content)
          });
        }
      } else if (msg.role === 'tool') {
        // Tool result - Anthropic expects this in a user message with tool_result type
        anthropicMessages.push({
          role: 'user',
          content: [{
            type: 'tool_result',
            tool_use_id: msg.tool_call_id,
            content: typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content)
          }]
        });
      }
    }

    return { system: systemPrompt, messages: anthropicMessages };
  }
```
🧩 Analysis chain
Wrap JSON.parse for tool call arguments to avoid hard failures
In convertMessagesToAnthropic, tool call arguments are parsed without a safety net:
```ts
input: typeof tc.function.arguments === 'string'
  ? JSON.parse(tc.function.arguments)
  : tc.function.arguments
```
If arguments is a string but not valid JSON (e.g. produced by a buggy tool call generator), this will throw and prevent the request from being sent.
Consider wrapping this in try/catch and falling back to a string or {}:
```ts
let input: unknown = tc.function.arguments;
if (typeof tc.function.arguments === 'string') {
  try {
    input = JSON.parse(tc.function.arguments);
  } catch {
    logger.warn('Failed to parse tool arguments for Anthropic; sending raw string');
    input = tc.function.arguments;
  }
}
```
This keeps the provider robust even when upstream formatting isn’t perfect.
🌐 Web query:
Check Anthropic Messages API docs for expected type of `tool_use.input` to confirm JSON object requirement and whether raw strings are acceptable.
💡 Result:
Short answer: tool_use.input must be a JSON object matching the tool's input_schema (not an arbitrary raw string). See the Messages API examples showing input: { "ticker": "^GSPC" } and the tool-use docs stating input is "an object containing the input" that "conform[s] to the tool’s input_schema." The tool-use system prompt also instructs lists/objects must use JSON format (strings/scalars may be specified directly). [1][2]
Sources:
- Messages API / tools and tool_use examples. [1]
- "How to implement tool use" (system prompt guidance: "lists and objects should use JSON format"). [2]
The proposed fallback to raw strings violates Anthropic's API contract; revise error handling strategy
The concern about wrapping JSON.parse for robustness is valid—it can throw on malformed input. However, the suggested fallback to sending the raw string is incorrect.
Per Anthropic's Messages API documentation, tool_use.input must be a JSON object conforming to the tool's input_schema, not an arbitrary string. Falling back to a raw string would either be rejected by the API or cause schema validation failures downstream.
Instead, consider:
- Wrap `JSON.parse` in try/catch and log a clear error
- Either reject the message or fail gracefully with diagnostic information
- Do not attempt to bypass the object requirement
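A sketch of that revised strategy, using a hypothetical `safeParseToolInput` helper (the name and the `null`-on-failure contract are illustrative, not the PR's actual code): the caller drops or rejects the tool call when `null` is returned, and never sends a raw string as `tool_use.input`.

```typescript
type ToolInput = Record<string, unknown>;

// Parse tool-call arguments defensively. Returns a JSON object on success,
// or null when the input cannot satisfy Anthropic's object requirement.
function safeParseToolInput(args: unknown): ToolInput | null {
  if (typeof args === 'object' && args !== null) {
    return args as ToolInput; // already an object, pass through
  }
  if (typeof args === 'string') {
    try {
      const parsed = JSON.parse(args);
      if (typeof parsed === 'object' && parsed !== null) {
        return parsed as ToolInput;
      }
      // Valid JSON but not an object (e.g. a bare string or number):
      // still not acceptable as tool_use.input.
    } catch (e) {
      console.error('Malformed tool arguments; dropping tool call', e);
    }
  }
  return null; // caller should skip or reject this tool call
}
```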
🧰 Tools
🪛 ESLint
[error] 87-87: Optional property 'system' should be defined after required properties.
(rulesdir/enforce-optional-properties-last)
[error] 87-87: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 89-89: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
```ts
export class ConversationHistoryList extends HTMLElement {
  static readonly litTagName = Lit.StaticHtml.literal`ai-conversation-history-list`;
  readonly #shadow = this.attachShadow({mode: 'open'});
  readonly #boundRender = this.#render.bind(this);

  #conversations: ConversationMetadata[] = [];
  #currentConversationId: string | null = null;
  #onConversationSelected: ((id: string) => void) | null = null;
  #onDeleteConversation: ((id: string) => void) | null = null;
  #onClose: (() => void) | null = null;

  get conversations(): ConversationMetadata[] {
    return this.#conversations;
  }

  set conversations(value: ConversationMetadata[]) {
    this.#conversations = value;
    void ComponentHelpers.ScheduledRender.scheduleRender(this, this.#boundRender);
  }

  get currentConversationId(): string | null {
    return this.#currentConversationId;
  }

  set currentConversationId(value: string | null) {
    this.#currentConversationId = value;
    void ComponentHelpers.ScheduledRender.scheduleRender(this, this.#boundRender);
  }

  get onConversationSelected(): ((id: string) => void) | null {
    return this.#onConversationSelected;
  }

  set onConversationSelected(value: ((id: string) => void) | null) {
    this.#onConversationSelected = value;
  }

  get onDeleteConversation(): ((id: string) => void) | null {
    return this.#onDeleteConversation;
  }

  set onDeleteConversation(value: ((id: string) => void) | null) {
    this.#onDeleteConversation = value;
  }

  get onClose(): (() => void) | null {
    return this.#onClose;
  }

  set onClose(value: (() => void) | null) {
    this.#onClose = value;
  }

  connectedCallback(): void {
    void ComponentHelpers.ScheduledRender.scheduleRender(this, this.#boundRender);
  }

  #handleClose(): void {
    if (this.#onClose) {
      this.#onClose();
    }
  }

  #handleConversationSelected(id: string): void {
    if (id !== this.#currentConversationId && this.#onConversationSelected) {
      this.#onConversationSelected(id);
    }
    this.#handleClose();
  }

  #handleDeleteConversation(event: Event, conversation: ConversationMetadata): void {
    event.stopPropagation();
    if (this.#onDeleteConversation) {
      this.#onDeleteConversation(conversation.id);
    }
    this.#handleClose();
  }

  #formatDate(timestamp: number): string {
    const date = new Date(timestamp);
    const now = new Date();
    const diffMs = now.getTime() - date.getTime();
    const diffMins = Math.floor(diffMs / 60000);
    const diffHours = Math.floor(diffMs / 3600000);
    const diffDays = Math.floor(diffMs / 86400000);

    if (diffMins < 1) {
      return 'Just now';
    } else if (diffMins < 60) {
      return `${diffMins}m ago`;
    } else if (diffHours < 24) {
      return `${diffHours}h ago`;
    } else if (diffDays < 7) {
      return `${diffDays}d ago`;
    } else {
      return date.toLocaleDateString();
    }
  }

  #render(): void {
    Lit.render(
      html`
        <style>
          ${unsafeHTML(getConversationHistoryStyles())}
        </style>

        <div class="history-content">
          <div class="history-header">
            <h2 class="history-title">Chat History</h2>
            <button
              class="history-close-button"
              aria-label="Close chat history"
              @click=${() => this.#handleClose()}
            >
              ×
            </button>
          </div>

          <div class="history-conversations-list">
            ${this.#conversations.length === 0
              ? html`
                <div class="history-empty-state">
                  <p>No saved conversations yet</p>
                  <p style="font-size: 12px; opacity: 0.8;">Start a new chat to begin</p>
                </div>
              `
              : html`
                ${this.#conversations.map(
                  conversation => html`
                    <div
                      class="history-conversation-item ${conversation.id === this.#currentConversationId
                        ? 'active'
                        : ''}"
                      @click=${() => this.#handleConversationSelected(conversation.id)}
                    >
                      <div class="history-conversation-content">
                        <div class="history-conversation-title">${conversation.title}</div>
                        ${conversation.preview
                          ? html`<div class="history-conversation-preview">
                              ${conversation.preview}
                            </div>`
                          : nothing}
                        <div class="history-conversation-metadata">
                          <span>${this.#formatDate(conversation.updatedAt)}</span>
                          <span>${conversation.messageCount} messages</span>
                        </div>
                      </div>
                      <button
                        class="history-delete-button"
                        title="Delete conversation"
                        @click=${(e: Event) => this.#handleDeleteConversation(e, conversation)}
                      >
                        🗑️
                      </button>
                    </div>
                  `,
                )}
              `}
          </div>
        </div>
      `,
      this.#shadow,
      {host: this},
    );
  }
```
🧩 Analysis chain
Align Lit + custom-element patterns and add localization for UI strings
The component logic looks solid (render scheduling, click handling, delete behavior, and relative date formatting), but a few structural and localization issues need attention:
- Lit render placement (lint: `no-lit-render-outside-of-view`)
  `#render()` calls `Lit.render(...)` directly from the custom element. The DevTools pattern is to have Lit render calls inside a dedicated view function (e.g., `ConversationHistoryListView(...)`) and let the element call that. Refactor so the render call lives in such a view function to satisfy the lint rule and match other components. As per coding guidelines.
- Custom element definition location (lint: `enforce-custom-element-definitions-location`)
  Defining the element in `ui/ConversationHistoryList.ts` appears to violate the project rule that custom elements live in `ui/components/` or a `components` subfolder under the panel. Consider moving this class/registration into a `components` directory (and making this file a thin wrapper if needed) to keep CI and structure consistent. As per coding guidelines.
- Unnecessary `else` after `return` in `#formatDate`
  The `if (diffMins < 1) { return 'Just now'; } else if (...)` chain can be flattened (remove the `else` keywords after returns) to appease the `no-else-return` rule. Logic remains the same. As per coding guidelines.
- Hard-coded English UI strings
  Strings like "Chat History", "Close chat history", "No saved conversations yet", "Start a new chat to begin", "Delete conversation", and "messages" are hard-coded. For consistency with the rest of DevTools, these should be moved into `i18n-strings.ts` and consumed via the project's localization helper (likely the same lazy i18n pattern used elsewhere). As per coding guidelines.
- Behavioral nuance on selecting the current conversation
  `#handleConversationSelected` always closes the history view, even when the clicked ID matches `#currentConversationId` (the callback is skipped but the dialog closes). This is probably intended, but if you expect "no-op" behavior when clicking the current conversation, you may want to only close when a different ID is chosen.
I'll verify the review comment by exploring the codebase structure and patterns. Let me gather the necessary information.
---
🏁 Script executed:
```shell
#!/bin/bash
# Find similar component files to understand the pattern
fd -e ts -e tsx 'ui.*\.ts$' front_end/panels/ai_chat/ | head -20
```

Length of output: 44
🏁 Script executed:

```shell
#!/bin/bash
# Look for components directories in ai_chat panel
fd -type d 'components' front_end/panels/ai_chat/
```

Length of output: 170

🏁 Script executed:

```shell
#!/bin/bash
# Search for i18n-strings.ts file in the project
fd -e ts 'i18n.*strings' front_end/
```

Length of output: 52

🏁 Script executed:

```shell
#!/bin/bash
# Look for Lit.render patterns in ai_chat panel to understand established usage
rg -n 'Lit\.render\(' front_end/panels/ai_chat/ -A2 -B2
```

Length of output: 6409

🏁 Script executed:

```shell
#!/bin/bash
# Check for lint configuration files related to the mentioned rules
fd -e json -e js -e yaml 'eslint|lint' front_end/panels/ai_chat/ 2>/dev/null | head -10
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Search for custom element definitions to understand the pattern
rg -n 'customElements\.define|extends HTMLElement' front_end/panels/ai_chat/ui/ -B2 -A2 | head -30
```

Length of output: 2333

🏁 Script executed:

```shell
#!/bin/bash
# Look for i18n usage patterns in ai_chat panel
rg -n 'i18n|localization' front_end/panels/ai_chat/ -B2 -A2 | head -40
```

Length of output: 3321
I need to gather more specific information about the lint rules and project patterns.
---
🏁 Script executed:
```shell
#!/bin/bash
# Search for @customElement decorator usage vs direct customElements.define
rg -n '@customElement|customElements\.define' front_end/panels/ai_chat/ui/ | head -20
```

Length of output: 1511
🏁 Script executed:

```shell
#!/bin/bash
# Look for any DevTools-wide components directory structure
fd -type d 'components' front_end/panels/ | head -20
```

Length of output: 170

🏁 Script executed:

```shell
#!/bin/bash
# Search for i18nString usage in ai_chat components (not just settings)
rg -n 'i18nString|UIStrings' front_end/panels/ai_chat/ui/ -B2 -A2 | head -50
```

Length of output: 4250

🏁 Script executed:

```shell
#!/bin/bash
# Look for lint rules configuration at project root or panels level
fd -e json -e yml -e yaml -e js '.eslintrc|eslint|lint' front_end/ -maxdepth 3 2>/dev/null
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Search for view function patterns in the codebase (exported render functions)
rg -n 'export.*function.*View|export.*function.*render' front_end/panels/ai_chat/ui/ -B1 -A3 | head -40
```

Length of output: 3463
---
🏁 Script executed:
```shell
#!/bin/bash
# Compare ChatView structure (uses @customElement) to understand if Lit.render() in #render is pattern
rg -n 'class ChatView|#render\(\)|Lit\.render' front_end/panels/ai_chat/ui/ChatView.ts -B2 -A5 | head -60
```

Length of output: 1567
🏁 Script executed:

```shell
#!/bin/bash
# Check for any coding guidelines or patterns documentation in the repo
fd -e md -e txt 'CODING|PATTERN|GUIDELINE|STYLE' front_end/ -maxdepth 3 2>/dev/null | head -10
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Search for other hard-coded English strings in similar ui components to compare
rg -n '(aria-label|placeholder|title).*=.*"[A-Z]' front_end/panels/ai_chat/ui/*.ts | head -15
```

Length of output: 1014

🏁 Script executed:

```shell
#!/bin/bash
# Look at the i18n-strings.ts file to understand its scope
cat front_end/panels/ai_chat/ui/settings/i18n-strings.ts | head -50
```

Length of output: 1163
Based on my codebase verification, I've identified significant issues with the review comment's core claims.
---
🏁 Script executed:
```shell
#!/bin/bash
# Final verification: search for any project-wide documentation on lint rules
find front_end -name '.eslintrc*' -o -name 'eslint.config*' 2>/dev/null | head -5
```

Length of output: 107
🏁 Script executed:

```shell
#!/bin/bash
# Check if @customElement and customElements.define patterns are both used elsewhere
rg 'customElements\.define|@customElement' front_end/panels/ai_chat/ui/ | wc -l
rg 'customElements\.define' front_end/panels/ai_chat/ui/
```

Length of output: 248

🏁 Script executed:

```shell
#!/bin/bash
# Verify if other components in ui/ root also use hard-coded strings without flagging
rg -n 'aria-label|title|textContent.*=.*"[A-Z]' front_end/panels/ai_chat/ui/ChatView.ts | head -10
```

Length of output: 907
Remove issues #1, #2, and partially #4; fix unnecessary else keywords in #formatDate
Analysis of the review against the codebase reveals:
- Lit render placement is not a violation: ChatView (which uses the `@customElement` decorator) calls `Lit.render()` directly in its `#render()` method at lines 867 and 888, identical to ConversationHistoryList. No lint rule configuration files were found for `no-lit-render-outside-of-view`. This pattern is consistent across the ai_chat panel.
- Custom element location is correct: No `ui/components/` directory exists in the project. ConversationHistoryList resides in the `ui/` root, matching FileListDisplay (which also uses direct `customElements.define()`). Other components use semantic subdirectories (oauth/, version/, model_selector/, etc.) or the `ui/` root, which is the actual established pattern.
- Unnecessary `else` keywords (Issue #3): Valid. In the `#formatDate` method, remove `else` after each `return` statement (the lines with `if (diffMins < 1) { return ... } else if ...`). All branches return, making `else` redundant.
- Hard-coded strings: While i18n-strings.ts exists for settings, other components (ChatView, SearchableAgentSelector, etc.) also have hard-coded English strings ("Running", "Completed", "Try one of these") without localization. This is inconsistently flagged and not currently enforced across the panel.
- Behavioral nuance: Correctly noted. The dialog closes regardless of whether the selection changed.

Action: Fix the unnecessary `else` keywords in `#formatDate` only. Issues #1 and #2 are not applicable; issue #4 is not currently enforced.
🧰 Tools
🪛 ESLint
[error] 19-184: A custom element definition was found in a folder that should not contain element definitions. If you want to define a custom element, either place it in ui/components/ or in a components sub-folder of a panel. E.g. panels/elements/components/.
(rulesdir/enforce-custom-element-definitions-location)
[error] 107-115: Unnecessary 'else' after 'return'.
(no-else-return)
[error] 119-182: Lit render calls should be inside of a view function
(rulesdir/no-lit-render-outside-of-view)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/ui/ConversationHistoryList.ts (around the
#formatDate method between lines ~ eighty to one hundred forty within the
provided snippet), remove the unnecessary else keywords following return
statements so each conditional branch returns directly (e.g., change "if (...) {
return ... } else if (...) { return ... }" to "if (...) { return ... } if (...)
{ return ... }" etc.), keeping the same logic and return values and ensuring
formatting remains consistent.
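For reference, the flattened relative-date logic could look like the sketch below. It is extracted here as a standalone function purely for illustration (the component keeps it as a private `#formatDate` method, and the injectable `now` parameter is an addition for testability):

```typescript
// Relative-date formatting with the redundant `else` branches removed;
// every branch returns, so plain sequential `if` statements suffice.
function formatRelativeDate(timestamp: number, now: number = Date.now()): string {
  const diffMs = now - timestamp;
  const diffMins = Math.floor(diffMs / 60000);
  const diffHours = Math.floor(diffMs / 3600000);
  const diffDays = Math.floor(diffMs / 86400000);

  if (diffMins < 1) {
    return 'Just now';
  }
  if (diffMins < 60) {
    return `${diffMins}m ago`;
  }
  if (diffHours < 24) {
    return `${diffHours}h ago`;
  }
  if (diffDays < 7) {
    return `${diffDays}d ago`;
  }
  return new Date(timestamp).toLocaleDateString();
}
```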
```ts
import { i18nString, UIStrings } from './i18n-strings.js';
import type { ProviderConfig } from './providers/GenericProviderSettings.js';
import {
  OPENAI_API_KEY_STORAGE_KEY,
  BROWSEROPERATOR_API_KEY_STORAGE_KEY,
  GROQ_API_KEY_STORAGE_KEY,
  CEREBRAS_API_KEY_STORAGE_KEY,
  ANTHROPIC_API_KEY_STORAGE_KEY,
  GOOGLEAI_API_KEY_STORAGE_KEY,
} from './constants.js';

/**
 * OpenAI provider configuration
 * - API key only
 * - No model fetching (uses static model list)
 * - Has model selectors
 */
export const OpenAIConfig: ProviderConfig = {
  id: 'openai',
  displayName: 'OpenAI',
  apiKeyStorageKey: OPENAI_API_KEY_STORAGE_KEY,
  apiKeyLabel: i18nString(UIStrings.apiKeyLabel),
  apiKeyHint: i18nString(UIStrings.apiKeyHint),
  apiKeyPlaceholder: 'Enter your OpenAI API key',
  hasModelSelectors: true,
  hasFetchButton: false,
};

/**
 * BrowserOperator provider configuration
 * - Optional API key
 * - No model fetching
 * - No model selectors (managed service)
 */
export const BrowserOperatorConfig: ProviderConfig = {
  id: 'browseroperator',
  displayName: 'BrowserOperator',
  apiKeyStorageKey: BROWSEROPERATOR_API_KEY_STORAGE_KEY,
  apiKeyLabel: i18nString(UIStrings.browseroperatorApiKeyLabel),
  apiKeyHint: i18nString(UIStrings.browseroperatorApiKeyHint),
  apiKeyPlaceholder: 'Enter your BrowserOperator API key (optional)',
  hasModelSelectors: false,
  hasFetchButton: false,
  apiKeyOptional: true,
};

/**
 * Groq provider configuration
 * - API key required
 * - Has model fetching
 * - Has model selectors
 * - Uses model.id for label (no name field)
 */
export const GroqConfig: ProviderConfig = {
  id: 'groq',
  displayName: 'Groq',
  apiKeyStorageKey: GROQ_API_KEY_STORAGE_KEY,
  apiKeyLabel: i18nString(UIStrings.groqApiKeyLabel),
  apiKeyHint: i18nString(UIStrings.groqApiKeyHint),
  apiKeyPlaceholder: 'Enter your Groq API key',
  hasModelSelectors: true,
  hasFetchButton: true,
  fetchButtonLabel: i18nString(UIStrings.fetchGroqModelsButton),
  fetchMethodName: 'fetchGroqModels',
  useNameAsLabel: false,
};

/**
 * Cerebras provider configuration
 * - API key required
 * - Has model fetching
 * - Has model selectors
 * - Uses model.id for label (no name field)
 */
export const CerebrasConfig: ProviderConfig = {
  id: 'cerebras',
  displayName: 'Cerebras',
  apiKeyStorageKey: CEREBRAS_API_KEY_STORAGE_KEY,
  apiKeyLabel: 'Cerebras API Key',
  apiKeyHint: 'Your Cerebras API key for authentication',
  apiKeyPlaceholder: 'Enter your Cerebras API key',
  hasModelSelectors: true,
  hasFetchButton: true,
  fetchButtonLabel: 'Fetch Cerebras Models',
  fetchMethodName: 'fetchCerebrasModels',
  useNameAsLabel: false,
};

/**
 * Anthropic provider configuration
 * - API key required
 * - Has model fetching
 * - Has model selectors
 * - Uses model.name || model.id for label
 */
export const AnthropicConfig: ProviderConfig = {
  id: 'anthropic',
  displayName: 'Anthropic',
  apiKeyStorageKey: ANTHROPIC_API_KEY_STORAGE_KEY,
  apiKeyLabel: 'Anthropic API Key',
  apiKeyHint: 'Your Anthropic API key for authentication',
  apiKeyPlaceholder: 'Enter your Anthropic API key',
  hasModelSelectors: true,
  hasFetchButton: true,
  fetchButtonLabel: 'Fetch Anthropic Models',
  fetchMethodName: 'fetchAnthropicModels',
  useNameAsLabel: true,
};

/**
 * Google AI provider configuration
 * - API key required
 * - Has model fetching
 * - Has model selectors
 * - Uses model.name || model.id for label
 */
export const GoogleAIConfig: ProviderConfig = {
  id: 'googleai',
  displayName: 'Google AI',
  apiKeyStorageKey: GOOGLEAI_API_KEY_STORAGE_KEY,
  apiKeyLabel: 'Google AI API Key',
  apiKeyHint: 'Your Google AI API key for authentication',
  apiKeyPlaceholder: 'Enter your Google AI API key',
  hasModelSelectors: true,
  hasFetchButton: true,
  fetchButtonLabel: 'Fetch Google AI Models',
  fetchMethodName: 'fetchGoogleAIModels',
  useNameAsLabel: true,
};

/**
 * List of all generic providers for easy iteration
 */
export const GENERIC_PROVIDERS = [
  OpenAIConfig,
  BrowserOperatorConfig,
  GroqConfig,
  CerebrasConfig,
  AnthropicConfig,
  GoogleAIConfig,
];
```
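As a sanity check on the config-driven approach, here is a minimal sketch of how a settings surface might iterate such a list. The `MiniProviderConfig` shape and `describeProviders` helper are reduced stand-ins for illustration, not the real `ProviderConfig` or any code in this PR:

```typescript
// A reduced subset of the provider-config fields shown above.
interface MiniProviderConfig {
  id: string;
  displayName: string;
  hasFetchButton: boolean;
  fetchMethodName?: string;
}

// Render one descriptive line per provider, branching on the fetch flag
// the same way a settings UI would decide whether to show a fetch button.
function describeProviders(configs: MiniProviderConfig[]): string[] {
  return configs.map(c =>
    c.hasFetchButton
      ? `${c.displayName} (${c.id}): fetches models via ${c.fetchMethodName}`
      : `${c.displayName} (${c.id}): static model list`);
}

const sample: MiniProviderConfig[] = [
  { id: 'openai', displayName: 'OpenAI', hasFetchButton: false },
  { id: 'groq', displayName: 'Groq', hasFetchButton: true, fetchMethodName: 'fetchGroqModels' },
];
```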
🧩 Analysis chain
Fix i18n usage at module scope and complete localization for new providers
Functionally the configs look coherent (IDs, storage keys, fetch flags, and fetchMethodName values line up with expectations), but there are a few issues to address:
- `i18nString` at module instantiation time (lint error)
  The calls like `i18nString(UIStrings.apiKeyLabel)` / `apiKeyHint` in `OpenAIConfig`, `BrowserOperatorConfig`, and `GroqConfig` are executed at module load, which violates the `l10n-no-i18nString-calls-module-instantiation` rule. These should be converted to use the lazy i18n pattern (e.g., `i18nLazyString` or whatever helper this codebase standardizes on), and `ProviderConfig`/its consumers adjusted accordingly to resolve the ESLint error. As per coding guidelines.
- Hard-coded English strings for Cerebras/Anthropic/GoogleAI
  `CerebrasConfig`, `AnthropicConfig`, and `GoogleAIConfig` have hard-coded label/hint/button text instead of going through the `UIStrings` catalog. For consistency with the rest of DevTools and to keep the panel fully localizable, these should use `UIStrings` entries plus the same lazy i18n helper as above (e.g., `UIStrings.cerebrasApiKeyLabel`, etc.), rather than raw strings. As per coding guidelines.
- LLMClient method presence
  The `fetchMethodName` assignments (`'fetchGroqModels'`, `'fetchCerebrasModels'`, `'fetchAnthropicModels'`, `'fetchGoogleAIModels'`) rely on `LLMClient` actually exporting these methods (and with matching names). TypeScript should catch a mismatch, but please double-check that `LLMClient` defines all four methods and that they behave consistently with the flags (`hasFetchButton`, `useNameAsLabel`).
- Import ordering lint
  ESLint also flagged the import order between `./i18n-strings.js` and `./constants.js`. Reordering to satisfy `import/order` will keep the CI green. As per coding guidelines.
🏁 Script executed:

```shell
#!/bin/bash
# Find LLMClient class and verify fetch methods exist
find . -type f -name "*.ts" -o -name "*.js" | xargs grep -l "class LLMClient\|export.*LLMClient" | head -5
```

Length of output: 163

🏁 Script executed:

```shell
#!/bin/bash
# Find the i18nString function definition and usage pattern
rg -n "export.*i18nString|function i18nString" --type ts --type js -A 5
```

Length of output: 24793

🏁 Script executed:

```shell
#!/bin/bash
# Check UIStrings catalog for available keys related to these providers
rg -n "cerebras|anthropic|googleai|Cerebras|Anthropic|GoogleAI" front_end/panels/ai_chat/ui/settings/ --type ts --type js
```

Length of output: 3162

🏁 Script executed:

```shell
#!/bin/bash
# Look for lazy i18n patterns (i18nLazyString or similar)
rg -n "i18nLazy|getLazyString|UIStrings" front_end/panels/ai_chat/ui/settings/ --type ts --type js -B 2 -A 2 | head -50
```

Length of output: 4399

🏁 Script executed:

```shell
#!/bin/bash
# Find and inspect the ProviderConfig type definition
rg -n "type ProviderConfig|interface ProviderConfig" --type ts -A 15
```

Length of output: 3518
🏁 Script executed:

```shell
#!/bin/bash
# Verify LLMClient has all referenced fetch methods
rg -n "fetchGroqModels|fetchCerebrasModels|fetchAnthropicModels|fetchGoogleAIModels" --type ts --type js
```

Length of output: 1999

🏁 Script executed:

```shell
#!/bin/bash
# Check ESLint configuration for import order and i18n rules
find . -name ".eslintrc*" -o -name "eslint.config.*" | xargs cat 2>/dev/null | head -100
```

Length of output: 3376

🏁 Script executed:

```shell
#!/bin/bash
# Search for the ESLint rule definition
find . -type f -name "*.ts" -o -name "*.js" | xargs grep -l "l10n-no-i18nString-calls-module-instantiation" | head -5
```

Length of output: 202

🏁 Script executed:

```shell
#!/bin/bash
# Get full UIStrings catalog to see available keys
cat front_end/panels/ai_chat/ui/settings/i18n-strings.ts | head -150
```

Length of output: 3973

🏁 Script executed:

```shell
#!/bin/bash
# Search for all UIStrings keys containing "Api" or "Key" or "Label"
rg -n "apiKeyLabel|apiKeyHint|fetchButtonLabel|Fetch.*Button|Api.*Label|Api.*Hint" front_end/panels/ai_chat/ui/settings/i18n-strings.ts
```

Length of output: 727
🏁 Script executed:

```shell
#!/bin/bash
# Check if there's a lazy i18n pattern used in the ai_chat codebase
rg -n "i18nLazy|UIStrings\." front_end/panels/ai_chat/ui/ --type ts -B 2 -A 2 | head -80
```

Length of output: 6135

🏁 Script executed:

```shell
#!/bin/bash
# Verify current import order in providerConfigs.ts
head -15 front_end/panels/ai_chat/ui/settings/providerConfigs.ts
```

Length of output: 531

🏁 Script executed:

```shell
#!/bin/bash
# Search for lazy i18n patterns in DevTools codebase
rg -n "LocalizedString|i18nLazyString|UIStrings as const|Lazy" front_end/core/i18n/ --type ts -B 1 -A 1 | head -50
```

Length of output: 3657

🏁 Script executed:

```shell
#!/bin/bash
# Check how ProviderConfig is consumed to see if it could use lazy values
rg -n "apiKeyLabel|apiKeyHint|fetchButtonLabel" front_end/panels/ai_chat/ui/settings/providers/GenericProviderSettings.ts -B 2 -A 2
```

Length of output: 1350

🏁 Script executed:

```shell
#!/bin/bash
# Look for how other configs handle i18n at module level
rg -n "export const.*Config.*=" front_end/panels/ai_chat/ui/ --type ts -B 1 -A 5 | head -60
```

Length of output: 4011
Fix i18n usage at module scope and complete localization for new providers
Verification confirms all identified issues. The code has ESLint violations and incomplete localization coverage that must be addressed:
- `i18nString` at module instantiation time (ESLint violation)
  Calls like `i18nString(UIStrings.apiKeyLabel)` in `OpenAIConfig` (lines 26–27), `BrowserOperatorConfig` (lines 43–44), `GroqConfig` (lines 62–63, 67), and the hard-coded strings in `CerebrasConfig`, `AnthropicConfig`, and `GoogleAIConfig` violate the `l10n-no-i18nString-calls-module-instantiation` ESLint rule. These must be refactored using `getLazilyComputedLocalizedString()` from the i18n module, and `ProviderConfig` must accept lazy string callbacks instead of immediate values.
- Missing UIStrings entries for new providers
  Add localization keys for `CerebrasConfig`, `AnthropicConfig`, and `GoogleAIConfig` to the UIStrings catalog: `cerebrasApiKeyLabel`, `cerebrasApiKeyHint`, `fetchCerebrasModelsButton`, `anthropicApiKeyLabel`, `anthropicApiKeyHint`, `fetchAnthropicModelsButton`, `googleaiApiKeyLabel`, `googleaiApiKeyHint`, `fetchGoogleAIModelsButton`.
- LLMClient method presence
  Verified: all four fetch methods (`fetchGroqModels`, `fetchCerebrasModels`, `fetchAnthropicModels`, `fetchGoogleAIModels`) are correctly defined in `LLMClient.ts` and match the `fetchMethodName` references. [resolved]
- Import ordering
  Reorder imports to satisfy ESLint's `import/order` rule if CI flags violations.
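A minimal sketch of the lazy-string shape this refactor implies: the config stores `() => string` thunks that are invoked at render time rather than at module load. The `lazyString` closure below stands in for `getLazilyComputedLocalizedString`, and all names are illustrative:

```typescript
type LazyString = () => string;

// Defers the catalog lookup until call time, mirroring the semantics of a
// lazily computed localized string (the catalog may change locale later).
function lazyString(catalog: Record<string, string>, key: string): LazyString {
  return () => catalog[key] ?? key;
}

// A ProviderConfig reduced to one field, with the label now a thunk.
interface LazyProviderConfig {
  id: string;
  apiKeyLabel: LazyString;
}

const catalog: Record<string, string> = { cerebrasApiKeyLabel: 'Cerebras API Key' };

// Module-scope construction is now safe: no string is resolved until
// a consumer calls cerebras.apiKeyLabel().
const cerebras: LazyProviderConfig = {
  id: 'cerebras',
  apiKeyLabel: lazyString(catalog, 'cerebrasApiKeyLabel'),
};
```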
🧰 Tools
🪛 ESLint
[error] 7-14: ./constants.js import should occur before import of ./i18n-strings.js
(import/order)
[error] 26-26: Calls to i18nString are disallowed at module instantiation time. Use i18nLazyString instead.
(rulesdir/l10n-no-i18nString-calls-module-instantiation)
[error] 27-27: Calls to i18nString are disallowed at module instantiation time. Use i18nLazyString instead.
(rulesdir/l10n-no-i18nString-calls-module-instantiation)
[error] 43-43: Calls to i18nString are disallowed at module instantiation time. Use i18nLazyString instead.
(rulesdir/l10n-no-i18nString-calls-module-instantiation)
[error] 44-44: Calls to i18nString are disallowed at module instantiation time. Use i18nLazyString instead.
(rulesdir/l10n-no-i18nString-calls-module-instantiation)
[error] 62-62: Calls to i18nString are disallowed at module instantiation time. Use i18nLazyString instead.
(rulesdir/l10n-no-i18nString-calls-module-instantiation)
[error] 63-63: Calls to i18nString are disallowed at module instantiation time. Use i18nLazyString instead.
(rulesdir/l10n-no-i18nString-calls-module-instantiation)
[error] 67-67: Calls to i18nString are disallowed at module instantiation time. Use i18nLazyString instead.
(rulesdir/l10n-no-i18nString-calls-module-instantiation)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/ui/settings/providerConfigs.ts lines 5-145, replace
all module-scope calls to i18nString(...) with lazy localization callbacks using
getLazilyComputedLocalizedString from the i18n module (e.g., set apiKeyLabel,
apiKeyHint, fetchButtonLabel to functions that call
getLazilyComputedLocalizedString(UIStrings.xxx)) and update the ProviderConfig
type to accept these lazy string callbacks instead of immediate strings; add the
missing UIStrings keys (cerebrasApiKeyLabel, cerebrasApiKeyHint,
fetchCerebrasModelsButton, anthropicApiKeyLabel, anthropicApiKeyHint,
fetchAnthropicModelsButton, googleaiApiKeyLabel, googleaiApiKeyHint,
fetchGoogleAIModelsButton) to the i18n catalog file; remove hard-coded
label/hint strings in Cerebras/Anthropic/GoogleAI configs and use the new lazy
UIStrings entries; and finally reorder imports at the top to satisfy
import/order linting.
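The lazy-localization refactor described in the prompt above can be sketched in isolation. The snippet below is a minimal, self-contained approximation of the pattern: `UIStrings`, `lazyString`, and `ProviderConfig` are simplified stand-ins for the real DevTools i18n module and types, not the actual API.

```typescript
// Sketch of the lazy-localization pattern the lint rule asks for.
// UIStrings, lazyString, and ProviderConfig are simplified stand-ins for
// the real DevTools i18n module, not its actual API.
type LocalizedString = string;

const UIStrings = {
  cerebrasApiKeyLabel: 'Cerebras API Key',
} as const;

// Instead of resolving the string at module-instantiation time (the
// i18nString(...) call the rule forbids), store a thunk that resolves
// on first use, after locale data has been registered.
function lazyString<K extends keyof typeof UIStrings>(key: K): () => LocalizedString {
  return () => UIStrings[key]; // a real impl would consult the active locale
}

interface ProviderConfig {
  apiKeyLabel: () => LocalizedString; // lazy callback instead of an eager string
}

const cerebrasConfig: ProviderConfig = {
  apiKeyLabel: lazyString('cerebrasApiKeyLabel'),
};

console.log(cerebrasConfig.apiKeyLabel());
```

The key difference from the flagged code is that nothing is localized while the module loads; the lookup happens only when the label is rendered.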
```ts
import { BaseProviderSettings } from './BaseProviderSettings.js';
import { createModelSelector, refreshModelSelectOptions } from '../components/ModelSelectorFactory.js';
import { i18nString, UIStrings } from '../i18n-strings.js';
import { getValidModelForProvider } from '../utils/validation.js';
import { getStorageItem, setStorageItem } from '../utils/storage.js';
import { MINI_MODEL_STORAGE_KEY, NANO_MODEL_STORAGE_KEY } from '../constants.js';
import type { UpdateModelOptionsFunction, GetModelOptionsFunction, AddCustomModelOptionFunction, RemoveCustomModelOptionFunction, ModelOption, ProviderType } from '../types.js';
import { LLMClient } from '../../../LLM/LLMClient.js';
import { createLogger } from '../../../core/Logger.js';
```
ESLint/type-lint cleanups
Lint hints here mainly cover:
- Import ordering/grouping (bring the `../*` imports before `./BaseProviderSettings.js`, and group type imports inline).
- `any` usages (logger and several casts).
- Non-null assertions on `fetchModelsStatus`.
None of these affect runtime behavior but will need addressing to keep CI green.
🧰 Tools
🪛 ESLint
[error] 5-5: There should be at least one empty line between import groups
(import/order)
[error] 6-6: ../components/ModelSelectorFactory.js import should occur before import of ./BaseProviderSettings.js
(import/order)
[error] 7-7: ../i18n-strings.js import should occur before import of ./BaseProviderSettings.js
(import/order)
[error] 8-8: ../utils/validation.js import should occur before import of ./BaseProviderSettings.js
(import/order)
[error] 9-9: ../utils/storage.js import should occur before import of ./BaseProviderSettings.js
(import/order)
[error] 10-10: ../constants.js import should occur before import of ./BaseProviderSettings.js
(import/order)
[error] 11-11: ../types.js type import should occur before import of ./BaseProviderSettings.js
(import/order)
[error] 12-12: ../../../LLM/LLMClient.js import should occur before import of ./BaseProviderSettings.js
(import/order)
[error] 13-13: ../../../core/Logger.js import should occur before import of ./BaseProviderSettings.js
(import/order)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/ui/settings/providers/GenericProviderSettings.ts
around lines 5 to 13, the PR needs ESLint/type-lint cleanups: reorder imports so
relative parent imports (../*) come before same-folder/child imports
(./BaseProviderSettings.js), combine and inline type-only imports instead of
separate groups, remove any uses of the any type by adding proper types (e.g.,
type the logger from createLogger and avoid casting to any), and eliminate
non-null assertions on fetchModelsStatus by narrowing its type (add runtime
null/undefined checks or default values) so all variables are explicitly typed
and safe. Ensure imports and types are updated together and run the linter to
confirm CI passes.
```ts
export class GenericProviderSettings extends BaseProviderSettings {
  private config: ProviderConfig;
  private logger: any;
  private apiKeyInput: HTMLInputElement | null = null;
  private fetchModelsButton: HTMLButtonElement | null = null;
  private fetchModelsStatus: HTMLElement | null = null;
  private updateModelOptions: UpdateModelOptionsFunction | null = null;

  constructor(
    container: HTMLElement,
    config: ProviderConfig,
    getModelOptions: GetModelOptionsFunction,
    addCustomModelOption: AddCustomModelOptionFunction,
    removeCustomModelOption: RemoveCustomModelOptionFunction,
    updateModelOptions?: UpdateModelOptionsFunction
  ) {
    super(container, config.id, getModelOptions, addCustomModelOption, removeCustomModelOption);
    this.config = config;
    this.logger = createLogger(`${config.displayName}Settings`);
    this.updateModelOptions = updateModelOptions || null;
  }
```
Logger typed as any
`logger` is declared as `any`, and ESLint is complaining. You can import the `Logger` type from your Logger module and tighten this:

```ts
import type { Logger } from '../../../core/Logger.js';

private logger: Logger;
```

This keeps the dynamic module name while avoiding `any`.
🧰 Tools
🪛 ESLint
[error] 39-39: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/ui/settings/providers/GenericProviderSettings.ts
around lines 37 to 57, the logger field is typed as any; update the file to
import the Logger type from the core Logger module (using a type-only import)
and change the private logger declaration to use that Logger type instead of
any, keeping the existing createLogger call unchanged so runtime behavior is
unaffected.
|
Claude finished @tysonthomas9's task —— View job

PR Review: Add support for more providers and conversation history

I've conducted a thorough review of this substantial PR that adds significant functionality to the AI Chat panel. After examining 36 changed files and analyzing the architecture, here are my findings:

Todo List:
✅ Strengths

Code Organization & Architecture
Conversation History Implementation
Security Practices
Actionable comments posted: 0
♻️ Duplicate comments (2)
front_end/panels/ai_chat/ui/CustomProviderDialog.ts (2)
514-514: Use DevTools dialog system instead of native `confirm()`.

As previously noted, native `confirm()` dialogs are inconsistent with DevTools UI patterns. Consider using `UI.Dialog.Dialog` with custom buttons for better UX and consistency.
5-12: ESLint: Fix import grouping and inline type imports.

The imports need adjustments to satisfy ESLint rules:

- Combine `CustomProviderManager` and `type CustomProviderConfig` into a single import using the `type` keyword
- Reorder imports according to the project's import/order configuration

Apply this diff:

```diff
 import * as UI from '../../../ui/legacy/legacy.js';
 import * as Geometry from '../../../models/geometry/geometry.js';
-import { CustomProviderManager } from '../core/CustomProviderManager.js';
-import type { CustomProviderConfig } from '../core/CustomProviderManager.js';
-import { LLMClient } from '../LLM/LLMClient.js';
 import { createLogger } from '../core/Logger.js';
+import { CustomProviderManager, type CustomProviderConfig } from '../core/CustomProviderManager.js';
+import { LLMClient } from '../LLM/LLMClient.js';
 import { PROVIDER_SELECTION_KEY } from './settings/constants.js';
 import { applyCustomProviderStyles } from './customProviderStyles.js';
```
🧹 Nitpick comments (3)
front_end/panels/ai_chat/ui/CustomProviderDialog.ts (1)
150-458: Consider extracting helper methods to reduce complexity.This method is over 300 lines and handles multiple responsibilities (dialog creation, form setup, test connection, model management, save logic). While it works correctly, extracting helpers would improve maintainability.
Consider extracting:
- Form field creation logic
- Test connection handler
- Model management UI
- Save handler logic
This would make the method easier to test and maintain.
front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1)
13-16: Chunking path is sound; consider small robustness and consistency tweaks

The token-based gating and chunked path look correct: you estimate tokens from the accessibility tree, branch to `processWithChunking` above `TOKEN_LIMIT_FOR_CHUNKING`, and then process chunks in bounded parallel batches (size 4), preserving order via `markdownChunks[chunkIndex]`. This should exercise the new accessibility-tree strategy as intended.

Two follow-ups to consider:

- Keep token heuristics in sync: `execute` uses `CHARS_PER_TOKEN`, while `ContentChunker` uses its own default chars-per-token. To avoid future drift, consider explicitly passing `charsPerToken: this.CHARS_PER_TOKEN` into `this.contentChunker.chunk(...)` so both thresholds are derived from the same constant.
- Surfacing partial failures: `processWithChunking` replaces failed chunks with empty strings and still returns a success result. That's reasonable for resilience, but if many chunks fail you may want to either:
  - Track a `hadChunkErrors` flag and propagate a warning/error in the `HTMLToMarkdownResult`, or
  - At least log a final summary that includes the number of failed chunks, so callers (and tests) can distinguish "complete" from "partial" successes.

These are refinements rather than blockers; the current implementation should function correctly with the new evaluation tests.
Also applies to: 37-47, 68-88, 115-154, 352-435
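The order-preserving batched pattern discussed in the comment above can be sketched independently of the tool. This is an illustrative stand-in, with `processChunk` replacing the real per-chunk LLM call:

```typescript
// Minimal sketch of the order-preserving batched pattern: results are
// written into a preallocated array at each chunk's original index, so
// concurrency within a batch never scrambles output order.
// processChunk is an illustrative stand-in for the per-chunk LLM call.
async function processChunk(chunk: string): Promise<string> {
  return chunk.toUpperCase();
}

async function processInBatches(chunks: string[], batchSize = 4): Promise<string[]> {
  const results: string[] = new Array(chunks.length).fill('');
  for (let start = 0; start < chunks.length; start += batchSize) {
    const batch = chunks.slice(start, start + batchSize);
    await Promise.all(batch.map(async (chunk, offset) => {
      try {
        results[start + offset] = await processChunk(chunk);
      } catch {
        results[start + offset] = ''; // a failed chunk degrades to empty output
      }
    }));
  }
  return results;
}
```

Each batch caps in-flight requests at `batchSize`, matching the size-4 batches described above, and a failed chunk degrades to an empty slot instead of aborting the run.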
front_end/panels/ai_chat/utils/ContentChunker.ts (1)
29-56: Align chunk metadata across strategies and address unused `preserveContext`

The overall structure of `ContentChunker` looks good and the strategies cover the expected cases, but there are a couple of details worth tightening:

- Sub‑chunk positions for large heading sections

When a heading section exceeds `maxTokens`, `chunkByHeadings` falls back to `chunkByParagraphs(section.content, ...)` and then spreads each `subChunk` into the final array:

```ts
const subChunks = this.chunkByParagraphs(section.content, maxTokens, charsPerToken);
subChunks.forEach((subChunk) => {
  chunks.push({
    ...subChunk,
    id: `chunk-${chunkId++}`,
    heading: section.heading,
    level: section.level,
  });
});
```

`chunkByParagraphs` computes `startPosition`/`endPosition` relative to its own `content` argument, so these sub‑chunks end up with positions relative to `section.content`, not the original full `content`. Other paths (e.g. `chunkByAccessibilityNodes`, direct paragraph chunking) use coordinates in the full string.

If downstream consumers rely on `startPosition`/`endPosition` being global, this inconsistency can be surprising. You can fix this by offsetting sub‑chunk positions by `section.start`:

```diff
-      const subChunks = this.chunkByParagraphs(section.content, maxTokens, charsPerToken);
-      subChunks.forEach((subChunk) => {
-        chunks.push({
-          ...subChunk,
-          id: `chunk-${chunkId++}`,
-          heading: section.heading,
-          level: section.level,
-        });
-      });
+      const subChunks = this.chunkByParagraphs(section.content, maxTokens, charsPerToken);
+      subChunks.forEach((subChunk) => {
+        chunks.push({
+          ...subChunk,
+          id: `chunk-${chunkId++}`,
+          heading: section.heading,
+          level: section.level,
+          startPosition: section.start + subChunk.startPosition,
+          endPosition: section.start + subChunk.endPosition,
+        });
+      });
```

- `preserveContext` is currently a no‑op

`ChunkingOptions.preserveContext` is threaded into `chunk`, `chunkByHeadings`, and `chunkHybrid`, but it is not actually used to modify chunk contents (e.g., by injecting parent headings). If you don't need this behavior yet, consider either:

- Removing the option from the public API for now, or
- Adding a brief TODO comment clarifying that context-preserving behavior is planned but not implemented, so callers don't assume it's effective today.

These adjustments will make the chunk metadata more reliable for future consumers while keeping the current behavior intact.
Also applies to: 95-182, 248-308, 334-429
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- `front_end/panels/ai_chat/evaluation/test-cases/html-to-markdown-tests.ts` (1 hunks)
- `front_end/panels/ai_chat/tools/FetcherTool.ts` (3 hunks)
- `front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts` (4 hunks)
- `front_end/panels/ai_chat/ui/CustomProviderDialog.ts` (1 hunks)
- `front_end/panels/ai_chat/ui/EvaluationDialog.ts` (2 hunks)
- `front_end/panels/ai_chat/ui/customProviderStyles.ts` (1 hunks)
- `front_end/panels/ai_chat/utils/ContentChunker.ts` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
front_end/panels/ai_chat/ui/EvaluationDialog.ts (1)
front_end/panels/ai_chat/evaluation/test-cases/html-to-markdown-tests.ts (1)
htmlToMarkdownTests(241-248)
front_end/panels/ai_chat/utils/ContentChunker.ts (1)
front_end/panels/ai_chat/core/Logger.ts (1)
createLogger(316-318)
front_end/panels/ai_chat/tools/FetcherTool.ts (1)
front_end/panels/ai_chat/tools/Tools.ts (2)
`Tool` (43-52), `LLMContext` (57-65)
front_end/panels/ai_chat/evaluation/test-cases/html-to-markdown-tests.ts (2)
front_end/panels/ai_chat/evaluation/framework/types.ts (1)
`TestCase` (12-26)
front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1)
HTMLToMarkdownArgs(29-32)
🪛 ast-grep (0.40.0)
front_end/panels/ai_chat/ui/CustomProviderDialog.ts
[warning] 281-281: Direct modification of innerHTML or outerHTML properties detected. Modifying these properties with unsanitized user input can lead to XSS vulnerabilities. Use safe alternatives or sanitize content first.
Context: modelsListContainer.innerHTML = ''
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://owasp.org/www-community/xss-filter-evasion-cheatsheet
- https://cwe.mitre.org/data/definitions/79.html
(dom-content-modification)
[warning] 483-483: Direct modification of innerHTML or outerHTML properties detected. Modifying these properties with unsanitized user input can lead to XSS vulnerabilities. Use safe alternatives or sanitize content first.
Context: element.innerHTML = ''
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://owasp.org/www-community/xss-filter-evasion-cheatsheet
- https://cwe.mitre.org/data/definitions/79.html
(dom-content-modification)
🔇 Additional comments (14)
front_end/panels/ai_chat/ui/customProviderStyles.ts (2)
8-481: LGTM! Well-structured CSS for custom provider dialog.The CSS is comprehensive and follows the established patterns from the codebase (similar to EvaluationDialog.ts styling). The use of CSS custom properties ensures consistency with the theming system, and the organization is logical.
486-490: LGTM! Simple and effective style application.The approach of creating a style element and appending it to the dialog works well for this use case.
front_end/panels/ai_chat/ui/CustomProviderDialog.ts (9)
32-34: LGTM! Simple provider loading.The method correctly loads providers from the CustomProviderManager.
39-98: LGTM! Well-structured dialog initialization.The method properly creates and displays the dialog with appropriate guards, styling, and event handlers.
103-145: LGTM! Clean list item construction.The method correctly creates provider list items with appropriate actions and information display.
180-180: Verify: Provider name cannot be changed when editing.

The name field is disabled when editing an existing provider. This prevents users from fixing typos or updating provider names. Is this intentional, or should name editing be allowed?
If name changes should be supported, remove the disabled attribute and ensure the update logic in `CustomProviderManager.updateProvider` handles name changes correctly.
281-281: Static analysis false positive: `innerHTML` clearing is safe.

The static analysis tool flagged this line, but it's a false positive. The code is simply clearing the container (`innerHTML = ''`) before re-rendering, with no unsanitized user input involved. This is safe.
463-477: LGTM! Clean status message display.

The method correctly handles different status types and safely sets message content using `textContent`.
482-502: LGTM! Safe model list rendering.

The static analysis flagged line 484, but it's a false positive (clearing container). Model names are safely rendered using `textContent` (line 496).
518-524: LGTM! Good fallback handling for deleted active provider.The code correctly checks if the deleted provider is currently selected and switches to a default provider, preventing a broken state.
540-545: LGTM! Proper dialog cleanup.The method correctly hides the dialog and cleans up the reference.
front_end/panels/ai_chat/ui/EvaluationDialog.ts (1)
11-16: HTML-to-Markdown tool tests are correctly integrated into the evaluation UI

The new `html_to_markdown` entry cleanly plugs the HTML-to-Markdown test suite into the existing `TOOL_TEST_MAPPING`, and the key matches `HTMLToMarkdownTool.name`, so the tool selector and runners will pick these up without additional changes.

Also applies to: 27-43
front_end/panels/ai_chat/tools/FetcherTool.ts (1)
39-52: FetcherTool documentation now correctly delegates chunking to HTMLToMarkdownTool

The updated comments and description make it explicit that chunking of large pages is handled inside `HTMLToMarkdownTool`, while `FetcherTool` remains a coordinator that returns merged markdown per source. The reflowed `fetchContentFromUrl` signature and final return comment align with the new HTML-to-Markdown behavior and do not change runtime semantics.

Also applies to: 127-135, 211-217
front_end/panels/ai_chat/evaluation/test-cases/html-to-markdown-tests.ts (1)
5-348: HTML-to-Markdown evaluation tests are well-structured and consistent with the tool API

The test cases comprehensively cover baseline, chunking (including threshold and stress tests), complex layouts, and technical documentation, and the `input` objects match `HTMLToMarkdownArgs` (`instruction?`, `reasoning`). Grouped exports (`htmlToMarkdownTests`, `chunkingTests`, `stableHtmlToMarkdownTests`, etc.) and helpers (`getHtmlToMarkdownTestById`, tag/duration filters) are internally consistent and align with the evaluation framework's `TestCase` shape and the `html_to_markdown` tool name.
I'll analyze this and get back to you.
Actionable comments posted: 1
🧹 Nitpick comments (11)
front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (2)
13-47: Token estimation & chunking thresholds are reasonable; consider centralizing and aligning with `ContentChunker`

The new token-based branching and chunking flow looks solid and should prevent single-shot prompts from blowing up on very large pages. One thing to tighten up:

`estimatedTokens` uses `this.CHARS_PER_TOKEN`, but `ContentChunker.chunk` currently relies on its own `charsPerToken` default. If that default ever diverges from `CHARS_PER_TOKEN`, your "should I chunk?" gate and the actual per-chunk token estimates will drift out of sync.

To keep behavior coherent, consider either:

- Passing `charsPerToken: this.CHARS_PER_TOKEN` into `this.contentChunker.chunk(...)`, or
- Deriving `CHARS_PER_TOKEN` from a shared constant exported by `ContentChunker` (or a shared config) instead of duplicating the value here.

You might also later want these limits (`TOKEN_LIMIT_FOR_CHUNKING`/`CHUNK_TOKEN_LIMIT`) to come from a shared config keyed by provider/model context, but that's more of a future tuning concern than a blocker.

Also applies to: 117-120, 131-154, 365-369
373-425: Batching strategy is sensible; consider reusing the system prompt and making batch size configurable

The batched `Promise.all` loop with `BATCH_SIZE = 4` is a good way to bound parallelism against the provider. Two small optional refinements:

- `const systemPrompt = this.createSystemPrompt();` is recomputed for every chunk. Since it's static, you can compute it once per `processWithChunking` call (outside the loops) and reuse it in all chunk requests.
- If you expect to tune throughput vs. rate limits across environments/providers, making `BATCH_SIZE` a private field or config value (rather than an inline constant) would make this easier to adjust without touching logic.

These are micro-optimizations / ergonomics improvements; the current implementation is functionally fine.
front_end/panels/ai_chat/tools/Tools.ts (2)
529-543: Lifecycle-based `waitForPageLoad` looks solid; consider main-frame filtering and disabling lifecycle events afterward

The migration from a load-event wait to a `networkAlmostIdle`/`networkIdle` + LCP + timeout race is well-structured, with proper timeout handling and listener cleanup. Two things to consider:

- You currently ignore `frameId` in the lifecycle event and resolve on the first `networkAlmostIdle`/`networkIdle` from any frame. If DevTools reports these per-frame, a subframe could prematurely satisfy the wait while the main frame is still busy. You may want to filter on the main frame's `frameId` (e.g., from `resourceTreeModel.mainFrame()?.id`) before resolving.
- `setLifecycleEventsEnabled(true)` is never reverted. If this helper is called repeatedly, explicitly disabling lifecycle events afterward (`setLifecycleEventsEnabled(false)`) would keep the target's state closer to its original configuration, though this is more of a hygiene improvement than a correctness bug.

Also applies to: 544-557, 625-629, 640-643
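The race-based readiness wait discussed here reduces to a small self-contained sketch. The Node `EventEmitter` below stands in for the DevTools resource tree model, and the event and phase names are assumptions mirroring the comment, not the real DevTools API:

```typescript
// Self-contained sketch of a race-based readiness wait: resolve on the
// first qualifying lifecycle event or on a hard timeout, and always
// detach the listener and clear the timer before resolving.
import { EventEmitter } from 'node:events';

function waitForPageLoad(model: EventEmitter, timeoutMs: number): Promise<string> {
  return new Promise(resolve => {
    let timer: ReturnType<typeof setTimeout> | undefined;
    const onLifecycle = (phase: string) => {
      if (phase === 'networkIdle' || phase === 'networkAlmostIdle') {
        cleanup();
        resolve(phase);
      }
    };
    const cleanup = () => {
      if (timer !== undefined) { clearTimeout(timer); }
      model.removeListener('LifecycleEvent', onLifecycle);
    };
    timer = setTimeout(() => { cleanup(); resolve('timeout'); }, timeoutMs);
    model.on('LifecycleEvent', onLifecycle);
  });
}
```

The main-frame filter suggested above would slot into `onLifecycle` as an extra guard on the event's frame identifier before resolving.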
3585-3590: ReadabilityExtractorTool re-exports are correct but not included in `getTools`

The new exports for `ReadabilityExtractorTool`, `ReadabilityExtractorArgs`, and `ReadabilityExtractorResult` are wired correctly for consumers importing from `Tools.ts`. Note that `getTools()` still does not instantiate `ReadabilityExtractorTool`, so this tool is currently only available via direct construction/ToolRegistry, not via the generic `getTools` list. If the intent is to expose it to all generic tool callers, consider adding it to `getTools()` as well; otherwise this is fine as-is.

front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1)
215-219: Parallel chunk extraction and merge logging look good; tighten filtering and empty‑result handling

The batched parallelization with `BATCH_SIZE = 4` and per-index storage preserves order while improving throughput and preventing a single failed chunk from aborting the whole run, which is a nice improvement. Two minor robustness points:

- `validChunkResults = chunkResults.filter(result => result !== null)` will keep `undefined` entries if any chunk happens to produce `undefined`. If you want to treat all non-values as failures, prefer `result != null` (loose inequality) or an explicit check for both `null` and `undefined`.
- When all chunks fail, `validChunkResults.length === 0` and you still call `callMergeLLM` with an empty array. Consider short‑circuiting in that case with a clearer error like "all chunk extractions failed" instead of relying on the merge LLM's behavior.

These are minor, but would make the failure modes more explicit.
front_end/panels/ai_chat/agent_framework/implementation/agents/ResearchAgent.ts (1)
177-178: Title-awaresanitizeUrlToFilenameis a solid usability improvementThe new
sanitizeUrlToFilename(url, title?):
- Prefers a cleaned, length‑bounded slug from the page title when present, improving file readability.
- Falls back to a URL‑derived slug (domain + first two path segments) when the title is missing or sanitizes to empty.
- Appends an 8‑char hash of the full URL and keeps the original
.mdsuffix, preserving collision resistance.The call site in
afterExecuteis updated to passsource.title, and the helper remains file-local, so signature changes don’t affect external callers. This looks correct and robust across typical edge cases.Also applies to: 235-237, 238-275
front_end/panels/ai_chat/tools/FetcherTool.ts (2)
130-201: Propagate aborts instead of converting them into generic fetch failures

Right now, any `AbortError` thrown inside `fetchContentFromUrl` (e.g., from `throwIfAborted`, navigation, or the extractor) is caught and turned into a generic "Error fetching content" result. That means upstream code never sees a real cancellation signal and may treat a user-initiated cancel as an ordinary tool failure.

Consider special-casing aborts in the catch block so they bubble out unchanged, while still wrapping other errors:

```diff
 } catch (error: any) {
-  logger.error('Error processing URL', { url, error: error.message, stack: error.stack });
-  return {
-    url,
-    title: '',
-    markdownContent: '',
-    success: false,
-    error: `Error fetching content: ${error.message}`
-  };
+  if (error?.name === 'AbortError') {
+    // Let caller handle cancellation semantics consistently.
+    throw error;
+  }
+
+  logger.error('Error processing URL', { url, error: error.message, stack: error.stack });
+  return {
+    url,
+    title: '',
+    markdownContent: '',
+    success: false,
+    error: `Error fetching content: ${error.message}`
+  };
 }
```

This keeps cancellation behavior consistent with the `execute`‑level `throwIfAborted` while still shielding callers from non-abort failures.
165-191: Clarify content format expectations when Readability falls back to LLM/Markdown
FetcherToolnow documents returning “plain text” while still using themarkdownContentfield. ViaReadabilityExtractorTool, the primary path returns plain text, but the LLM fallback path feeds markdown intotextContent, which then flows intomarkdownContenthere.Downstream this will work (markdown is a superset of plain text), but the mixed-format nature isn’t obvious from the types/docs. Consider either:
- Renaming the field in a future compatible change (e.g.
content+format), or- Explicitly documenting that
markdownContentmay be either plain text or markdown depending on the extraction path.This is mostly about avoiding surprises for future callers.
front_end/panels/ai_chat/tools/ReadabilityExtractorTool.ts (2)
64-82: Add abort-signal awareness to avoid uninterruptible waits and fallbacks
executedoesn’t currently consultctx?.abortSignal, so a cancellation request won’t interrupt the 15swaitForPageLoador a long LLM fallback. Other tools (e.g.FetcherTool) explicitly integrate aborts, so this could be surprising in long-running flows.Consider wiring in simple checks, e.g.:
async execute(args: ReadabilityExtractorArgs, ctx?: LLMContext): Promise<ReadabilityExtractorResult> { logger.info('Executing with args', { args }); const READINESS_TIMEOUT_MS = 15000; + const signal = ctx?.abortSignal; + const throwIfAborted = () => { + if (signal?.aborted) { + throw new DOMException('The operation was aborted', 'AbortError'); + } + }; try { - // Wait for page load + throwIfAborted(); + // Wait for page load const target = SDK.TargetManager.TargetManager.instance().primaryPageTarget(); ... try { logger.debug('Waiting for page to be ready...'); await waitForPageLoad(target, READINESS_TIMEOUT_MS); logger.debug('Page is ready'); } catch (error) { logger.warn('Page readiness timeout, proceeding anyway', { error }); } + throwIfAborted(); // Run extraction in the page context using bundled Readability ... const readabilityResult = await target.runtimeAgent().invoke_evaluate({ ... }); + throwIfAborted(); ... if (!isContentSufficient) { + throwIfAborted(); ... }This keeps the tool responsive to user cancellation while preserving the existing error handling.
Also applies to: 158-191
13-15: Align “plain text” contract with LLM/Markdown fallback behaviorThe interface and description advertise “plain text” output, but when Readability content is below
MIN_CONTENT_LENGTH, the tool falls back toHTMLToMarkdownTooland returnsllmResult.markdownContentastextContent. Callers likeFetcherToolthen pass that through as “plain text” even though it’s markdown in this branch.To avoid confusion for future consumers, consider either:
- Normalizing the LLM fallback output to true plain text (e.g. stripping markdown), or
- Relaxing/clarifying the docs to state that content may be markdown when Readability deems the page insufficient, while still being structurally similar (article text, headings, etc.).
Functionally this works; this is mostly a contract/documentation clarity issue.
Also applies to: 38-40, 158-191
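One possible shape for making the mixed formats explicit is a discriminated union that carries a `format` tag next to the content. The type and field names below are hypothetical, sketching the `content` + `format` idea rather than the tool's real API:

```typescript
// Hypothetical future result shape: a `format` discriminant alongside the
// content, instead of an ambiguous field that may hold either format.
type ExtractionResult =
  | { format: 'plain-text'; content: string } // Readability path
  | { format: 'markdown'; content: string };  // LLM fallback path

function describeResult(result: ExtractionResult): string {
  // The discriminant lets callers branch safely without sniffing the text.
  return `${result.format}:${result.content.length}`;
}

console.log(describeResult({ format: 'markdown', content: '# Title' }));
```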
front_end/panels/ai_chat/BUILD.gn (1)
18-24: Keep_ai_chat_sourcesin sync withsourcesfor new UI/persistence modulesThe new files are correctly added to the main
sourceslist, and most of them (providers, tools, vendor assets, tests) are also reflected in_ai_chat_sources. However, these appear only insources:
- `ui/ConversationHistoryList.ts`
- `ui/conversationHistoryStyles.ts`
- `ui/CustomProviderDialog.ts`
- `ui/customProviderStyles.ts`
- `persistence/ConversationTypes.ts`
- `persistence/ConversationStorageManager.ts`
- `persistence/ConversationManager.ts`
_ai_chat_sourcesis used to derive JS outputs for release metadata, you likely want these in that list as well so they’re covered by the same metadata checks as the rest of the module. If their omission is intentional (e.g., they’re handled differently), a brief comment noting that would help future maintainers.Also applies to: 54-58, 69-77, 93-94, 111-115, 124-125, 179-180, 189-190, 226-227, 263-267, 296-297, 314-317, 327-328, 382-383, 392-393
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- `front_end/panels/ai_chat/BUILD.gn` (13 hunks)
- `front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts` (2 hunks)
- `front_end/panels/ai_chat/agent_framework/implementation/agents/ResearchAgent.ts` (4 hunks)
- `front_end/panels/ai_chat/agent_framework/implementation/agents/SearchAgent.ts` (4 hunks)
- `front_end/panels/ai_chat/evaluation/test-cases/html-to-markdown-tests.ts` (1 hunks)
- `front_end/panels/ai_chat/tools/FetcherTool.ts` (6 hunks)
- `front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts` (4 hunks)
- `front_end/panels/ai_chat/tools/ReadabilityExtractorTool.ts` (1 hunks)
- `front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts` (1 hunks)
- `front_end/panels/ai_chat/tools/Tools.ts` (5 hunks)
- `front_end/panels/ai_chat/utils/ContentChunker.ts` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- front_end/panels/ai_chat/vendor/readability-source.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- front_end/panels/ai_chat/evaluation/test-cases/html-to-markdown-tests.ts
🧰 Additional context used
🧬 Code graph analysis (4)
front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts (3)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (1)
ToolRegistry(219-266)front_end/panels/ai_chat/tools/ReadabilityExtractorTool.ts (1)
ReadabilityExtractorTool(42-227)front_end/panels/ai_chat/tools/Tools.ts (1)
ReadabilityExtractorTool(3588-3588)
front_end/panels/ai_chat/tools/FetcherTool.ts (2)
front_end/panels/ai_chat/tools/Tools.ts (3)
Tool(43-52)ReadabilityExtractorTool(3588-3588)LLMContext(57-65)front_end/panels/ai_chat/tools/ReadabilityExtractorTool.ts (1)
ReadabilityExtractorTool(42-227)
front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1)
front_end/panels/ai_chat/utils/ContentChunker.ts (2)
ContentChunker(43-469)chunk(51-90)
front_end/panels/ai_chat/tools/ReadabilityExtractorTool.ts (4)
front_end/panels/ai_chat/core/Logger.ts (1)
createLogger(316-318)front_end/panels/ai_chat/tools/Tools.ts (6)
ReadabilityExtractorResult(3589-3589)ReadabilityExtractorArgs(3589-3589)ReadabilityExtractorTool(3588-3588)Tool(43-52)LLMContext(57-65)waitForPageLoad(519-646)front_end/panels/ai_chat/vendor/readability-source.ts (1)
READABILITY_SOURCE(13-2799)front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1)
HTMLToMarkdownTool(37-481)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: claude-review
🔇 Additional comments (5)
front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts (1)

15-15: ReadabilityExtractorTool is correctly wired into the ToolRegistry

Importing ReadabilityExtractorTool and registering it under 'readability_extractor' matches the tool's name and the usages in agent configs, so agents will be able to call it via the registry without additional plumbing.

Also applies to: 52-52

front_end/panels/ai_chat/agent_framework/implementation/agents/SearchAgent.ts (1)

35-36: Consistent integration of readability_extractor into SearchAgent tooling

The narrative guidance, tools list, and recovery next_actions all consistently reference readability_extractor, and the name matches the registered tool and class. This cleanly swaps in the new readability pathway for long-document context checks alongside extract_data/fetcher_tool without breaking existing flows.

Also applies to: 61-61, 135-135, 276-276

front_end/panels/ai_chat/agent_framework/implementation/agents/ResearchAgent.ts (1)

48-48: ResearchAgent now correctly prefers readability_extractor as a key tool

Switching the Key Tools section and tools list to use readability_extractor aligns this agent with the new Readability-based extraction path and keeps the tool surface consistent with the rest of the PR. No behavioral issues here.

Also applies to: 100-100

front_end/panels/ai_chat/tools/FetcherTool.ts (1)

39-52: Readability-based fetch flow and tool wiring look coherent

The switch to ReadabilityExtractorTool (including the updated description and keeping markdownContent for compatibility) fits well with the new plain-text-centric extraction path and keeps the public result shape stable.

Also applies to: 72-74

front_end/panels/ai_chat/tools/ReadabilityExtractorTool.ts (1)

61-147: Solid extraction pipeline and error handling

The combination of waitForPageLoad, in-page Readability evaluation, structured error checks (exception vs result.error), and final guarded return values (success + textContent / error) looks robust and well-instrumented. The tool integrates cleanly with the existing Tool/LLMContext abstractions.

Also applies to: 193-225
```typescript
private async processWithChunking(
  content: string,
  instruction: string | undefined,
  apiKey: string,
  provider: LLMProvider,
  model: string
): Promise<string> {
  // Chunk the raw accessibility tree content
  logger.info('Chunking raw accessibility tree content');
  const chunks = this.contentChunker.chunk(content, {
    maxTokensPerChunk: this.CHUNK_TOKEN_LIMIT,
    strategy: 'accessibility-tree', // Split on [nodeId] boundaries
    preserveContext: false
  });
```
Chunk-level failures currently yield silent content loss while overall result reports success: true
In processWithChunking, failed chunk LLM calls are logged and replaced with an empty string, but execute still returns { success: true, markdownContent } with no indication that portions of the page were dropped.
For large or critical documents this can be misleading to callers: they get a syntactically valid markdown string that may be missing whole sections with no explicit signal.
Consider:

- Tracking whether any chunk failed (e.g., a hadFailures flag or failedChunkIds list), and either:
  - Returning success: false with an aggregated error when any chunk fails, or
  - Extending HTMLToMarkdownResult with a flag/metadata such as partial: boolean and maybe failedChunkCount so callers can decide how to handle partial results.

That would preserve the "best-effort" behavior while making partial failures observable instead of silent.
Also applies to: 401-410, 427-435
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts around lines 356-369
(and similarly at 401-410 and 427-435), the current processWithChunking swallows
failed per-chunk LLM calls by logging and inserting empty strings while still
returning success: true; change it to track chunk failures (e.g., hadFailures
boolean and/or failedChunkIds array), update the HTMLToMarkdownResult type to
include either a partial flag and failedChunkCount/failedChunkIds or return
success: false with an aggregated error when any chunk fails, and ensure execute
and any callers propagate that metadata/false result so callers can detect
partial results instead of silent content loss.
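The suggested bookkeeping can be sketched as a small pure function. All names below (processChunks, ChunkOutcome, callLLM) are illustrative, not the actual HTMLToMarkdownTool internals:

```typescript
// Sketch of chunk processing that records failures instead of silently
// dropping content. Names are illustrative, not the real tool's API.
interface ChunkOutcome {
  markdown: string;
  partial: boolean;         // true when any chunk was dropped
  failedChunkIds: number[]; // indices of chunks whose LLM call failed
}

async function processChunks(
  chunks: string[],
  callLLM: (chunk: string) => Promise<string>,
): Promise<ChunkOutcome> {
  const failedChunkIds: number[] = [];
  const parts: string[] = [];
  for (let i = 0; i < chunks.length; i++) {
    try {
      parts.push(await callLLM(chunks[i]));
    } catch {
      failedChunkIds.push(i); // record the gap instead of hiding it
      parts.push('');         // keep best-effort output for surviving chunks
    }
  }
  return {
    markdown: parts.join('\n\n'),
    partial: failedChunkIds.length > 0,
    failedChunkIds,
  };
}
```

A caller can then surface partial and failedChunkIds in the tool result instead of reporting an unconditional success: true.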
Claude finished @tysonthomas9's task — View job

PR Review: Add support for more providers and conversation history

I've conducted a comprehensive review of this substantial PR focusing on code quality, potential bugs, performance considerations, security concerns, and test coverage.

✅ Strengths

Code Organization & Architecture
Conversation History Implementation
Security Practices
Actionable comments posted: 4
♻️ Duplicate comments (4)

front_end/panels/ai_chat/ui/CustomProviderDialog.ts (1)

513-513: Native confirm() is inconsistent with DevTools UI patterns.

Using native browser dialogs breaks DevTools UI consistency. Consider using the DevTools dialog system (UI.Dialog.Dialog) with custom buttons.

front_end/panels/ai_chat/LLM/AnthropicProvider.ts (3)

534-550: Remove non-null assertion and validate against instance API key.

Line 536 uses a non-null assertion (storageKeys.apiKey!) which is unnecessary since getCredentialStorageKeys() always returns an object with an apiKey field.

More importantly, the method reads from localStorage but API calls use this.apiKey from the constructor. If an API key is passed to the constructor but missing from localStorage, validation will incorrectly fail.

Apply this diff:

```diff
 validateCredentials(): {isValid: boolean, message: string, missingItems?: string[]} {
-  const storageKeys = this.getCredentialStorageKeys();
-  const apiKey = localStorage.getItem(storageKeys.apiKey!);
+  const apiKey = this.apiKey || localStorage.getItem(this.getCredentialStorageKeys().apiKey);
   if (!apiKey) {
     return {
       isValid: false,
       message: 'Anthropic API key is required. Please add your API key in Settings.',
       missingItems: ['API Key']
     };
   }
   return {
     isValid: true,
     message: 'Anthropic credentials are configured correctly.'
   };
 }
```

112-125: Wrap JSON.parse in try-catch to handle malformed tool arguments gracefully.

Lines 118-120 parse tc.function.arguments without error handling. If the arguments string is malformed JSON, this will throw and prevent the entire message conversion from completing.

While Anthropic's API requires tool_use.input to be a JSON object (not a raw string), you should still handle parse failures gracefully by logging the error and either skipping the malformed tool call or throwing a descriptive error.

Apply this diff:

```diff
 const toolUseBlocks = msg.tool_calls.map(tc => ({
   type: 'tool_use',
   id: tc.id,
   name: tc.function.name,
-  input: typeof tc.function.arguments === 'string'
-    ? JSON.parse(tc.function.arguments)
-    : tc.function.arguments
+  input: typeof tc.function.arguments === 'string'
+    ? (() => {
+        try {
+          return JSON.parse(tc.function.arguments);
+        } catch (error) {
+          logger.error(`Failed to parse tool arguments for ${tc.function.name}:`, error);
+          throw new Error(`Invalid tool arguments for ${tc.function.name}: ${error instanceof Error ? error.message : 'Unknown error'}`);
+        }
+      })()
+    : tc.function.arguments
 }));
```

59-84: Add guard to prevent runtime error on malformed image_url input.

Line 61 accesses item.image_url.url directly without checking if item.image_url exists. If the input is malformed (missing image_url or url properties), this will throw a runtime error.

Apply this diff:

```diff
-      } else if (item.type === 'image_url') {
+      } else if (item.type === 'image_url' && item.image_url?.url) {
         // Anthropic uses a different image format
         const url = item.image_url.url;
         if (url.startsWith('data:')) {
```

If the guard fails, either skip the item or fall back to a text representation.
🧹 Nitpick comments (4)
front_end/panels/ai_chat/ui/CustomProviderDialog.ts (2)

280-313: Prefer replaceChildren() over innerHTML = '' for clearing DOM.

While innerHTML = '' is safe for clearing content, using replaceChildren() (or textContent = '') more clearly expresses the intent and avoids any potential confusion with XSS concerns.

Apply this diff:

```diff
- modelsListContainer.innerHTML = '';
+ modelsListContainer.replaceChildren();
```

481-501: Prefer replaceChildren() over innerHTML = '' for clearing DOM.

Same as the previous comment - use replaceChildren() for clearer intent.

Apply this diff:

```diff
- element.innerHTML = '';
+ element.replaceChildren();
```

front_end/panels/ai_chat/LLM/GoogleAIProvider.ts (2)

43-47: Extract hardcoded 'models/' prefix as a class constant.

The prefix 'models/' appears here and at line 395. Extracting it as a constant improves maintainability.

Apply this diff:

```diff
 export class GoogleAIProvider extends LLMBaseProvider {
   private static readonly API_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta';
+  private static readonly MODEL_PREFIX = 'models/';
   readonly name: LLMProvider = 'googleai';
   // ...
   private getGenerateContentEndpoint(modelName: string): string {
     // Model name format: "models/gemini-2.5-pro" or just "gemini-2.5-pro"
-    const normalizedModel = modelName.startsWith('models/') ? modelName : `models/${modelName}`;
+    const normalizedModel = modelName.startsWith(GoogleAIProvider.MODEL_PREFIX) ? modelName : `${GoogleAIProvider.MODEL_PREFIX}${modelName}`;
     return `${GoogleAIProvider.API_BASE_URL}/${normalizedModel}:generateContent?key=${this.apiKey}`;
   }
```

Also update line 395:

```diff
- const modelId = model.name.replace('models/', '');
+ const modelId = model.name.replace(GoogleAIProvider.MODEL_PREFIX, '');
```

194-228: Replace any types with TypeScript interfaces for better type safety.

The payloadBody parameter and return type use any, which reduces type safety. Introduce lightweight interfaces for the Google AI API request/response structure.

Add these interfaces near the top of the file:

```typescript
interface GoogleAIUsageMetadata {
  promptTokenCount: number;
  candidatesTokenCount: number;
  totalTokenCount: number;
}

interface GoogleAIPart {
  text?: string;
  functionCall?: { name: string; args: any };
  inline_data?: { mime_type: string; data: string };
}

interface GoogleAIContent {
  role: string;
  parts: GoogleAIPart[];
}

interface GoogleAICandidate {
  content: GoogleAIContent;
}

interface GoogleAIGenerateResponse {
  candidates: GoogleAICandidate[];
  usageMetadata?: GoogleAIUsageMetadata;
}
```

Then update the method signatures:

```diff
- private async makeAPIRequest(endpoint: string, payloadBody: any): Promise<any> {
+ private async makeAPIRequest(endpoint: string, payloadBody: any): Promise<GoogleAIGenerateResponse> {
```

Also update processGoogleAIResponse:

```diff
- private processGoogleAIResponse(data: any): LLMResponse {
+ private processGoogleAIResponse(data: GoogleAIGenerateResponse): LLMResponse {
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- front_end/panels/ai_chat/LLM/AnthropicProvider.ts (1 hunks)
- front_end/panels/ai_chat/LLM/GoogleAIProvider.ts (1 hunks)
- front_end/panels/ai_chat/ui/CustomProviderDialog.ts (1 hunks)
- front_end/panels/ai_chat/vendor/readability-source.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- front_end/panels/ai_chat/vendor/readability-source.ts
🧰 Additional context used
🧬 Code graph analysis (3)
front_end/panels/ai_chat/LLM/GoogleAIProvider.ts (4)
- front_end/panels/ai_chat/core/Logger.ts (1): createLogger (316-318)
- front_end/panels/ai_chat/LLM/LLMTypes.ts (6): LLMProvider (145-145), MessageContent (150-152), LLMMessage (179-192), LLMResponse (209-220), LLMCallOptions (197-204), ModelInfo (235-240)
- front_end/panels/ai_chat/LLM/LLMErrorHandler.ts (1): LLMRetryManager (156-351)
- front_end/panels/ai_chat/LLM/LLMResponseParser.ts (1): LLMResponseParser (13-306)

front_end/panels/ai_chat/LLM/AnthropicProvider.ts (4)
- front_end/panels/ai_chat/core/Logger.ts (1): createLogger (316-318)
- front_end/panels/ai_chat/LLM/LLMTypes.ts (6): LLMProvider (145-145), MessageContent (150-152), LLMMessage (179-192), LLMResponse (209-220), LLMCallOptions (197-204), ModelInfo (235-240)
- front_end/panels/ai_chat/LLM/LLMErrorHandler.ts (1): LLMRetryManager (156-351)
- front_end/panels/ai_chat/LLM/LLMResponseParser.ts (1): LLMResponseParser (13-306)

front_end/panels/ai_chat/ui/CustomProviderDialog.ts (5)
- front_end/panels/ai_chat/core/Logger.ts (1): createLogger (316-318)
- front_end/panels/ai_chat/core/CustomProviderManager.ts (2): CustomProviderConfig (12-20), CustomProviderManager (26-320)
- front_end/panels/ai_chat/ui/customProviderStyles.ts (1): applyCustomProviderStyles (486-490)
- front_end/panels/ai_chat/LLM/LLMClient.ts (1): LLMClient (57-605)
- front_end/panels/ai_chat/ui/settings/constants.ts (1): PROVIDER_SELECTION_KEY (14-14)
🪛 ast-grep (0.40.0)
front_end/panels/ai_chat/ui/CustomProviderDialog.ts
[warning] 280-280: Direct modification of innerHTML or outerHTML properties detected. Modifying these properties with unsanitized user input can lead to XSS vulnerabilities. Use safe alternatives or sanitize content first.
Context: modelsListContainer.innerHTML = ''
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://owasp.org/www-community/xss-filter-evasion-cheatsheet
- https://cwe.mitre.org/data/definitions/79.html
(dom-content-modification)
[warning] 482-482: Direct modification of innerHTML or outerHTML properties detected. Modifying these properties with unsanitized user input can lead to XSS vulnerabilities. Use safe alternatives or sanitize content first.
Context: element.innerHTML = ''
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://owasp.org/www-community/xss-filter-evasion-cheatsheet
- https://cwe.mitre.org/data/definitions/79.html
(dom-content-modification)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: claude-review
🔇 Additional comments (6)
front_end/panels/ai_chat/ui/CustomProviderDialog.ts (3)

23-33: LGTM: Clean initialization pattern.

The constructor and provider loading logic are straightforward and correct.

318-360: LGTM: Robust connection testing with proper error handling.

The test connection flow correctly handles success/failure cases, merges fetched models, and skips duplicates.

389-448: LGTM: Well-structured save handler with proper validation.

The save logic correctly validates inputs, handles both add and update scenarios, and properly notifies listeners.

front_end/panels/ai_chat/LLM/AnthropicProvider.ts (2)

168-211: LGTM: Robust API request handling with proper error logging.

The request/response flow correctly handles errors, logs usage metrics, and supports beta headers.

284-288: No action required — beta header is current.

The beta header interleaved-thinking-2025-05-14 is the current header for Anthropic's extended thinking feature, matching the code at line 287. No changes needed.

front_end/panels/ai_chat/LLM/GoogleAIProvider.ts (1)

268-309: LGTM: Clean implementation with proper retry handling.

The main call flow correctly converts messages, constructs payloads, and wraps operations with retry logic.
```typescript
} else if (item.type === 'image_url') {
  // Google AI uses inline_data for images
  const url = item.image_url.url;
  if (url.startsWith('data:')) {
    // Extract mime type and base64 data
    const matches = url.match(/^data:([^;]+);base64,(.+)$/);
    if (matches) {
      return {
        inline_data: {
          mime_type: matches[1],
          data: matches[2]
        }
      };
    }
  }
  // For URLs, Google AI expects blob_uri (not supported in all cases)
  logger.warn('Image URLs are not fully supported, use base64 data URLs instead');
  return { text: '[Image URL not supported in this format]' };
}
```
Add guard to prevent runtime error on malformed image_url input.
Lines 73-75 access item.image_url.url directly without checking if item.image_url exists. This will throw a runtime error if the input is malformed.
Apply this diff:
```diff
-      } else if (item.type === 'image_url') {
+      } else if (item.type === 'image_url' && item.image_url?.url) {
         // Google AI uses inline_data for images
         const url = item.image_url.url;
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
} else if (item.type === 'image_url' && item.image_url?.url) {
  // Google AI uses inline_data for images
  const url = item.image_url.url;
  if (url.startsWith('data:')) {
    // Extract mime type and base64 data
    const matches = url.match(/^data:([^;]+);base64,(.+)$/);
    if (matches) {
      return {
        inline_data: {
          mime_type: matches[1],
          data: matches[2]
        }
      };
    }
  }
  // For URLs, Google AI expects blob_uri (not supported in all cases)
  logger.warn('Image URLs are not fully supported, use base64 data URLs instead');
  return { text: '[Image URL not supported in this format]' };
}
```
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/LLM/GoogleAIProvider.ts around lines 73 to 91, the
code accesses item.image_url.url without verifying item.image_url exists which
can throw on malformed input; add a guard that checks item.image_url is an
object and item.image_url.url is a string before using it, and if the check
fails log a warning and return the existing fallback ({ text: '[Image URL not
supported in this format]' }) so the function safely handles missing or invalid
image_url values.
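The data-URL handling in question hinges on one regex. As a sanity check, the parsing step can be exercised in isolation — parseDataUrl is a hypothetical helper name, not part of GoogleAIProvider:

```typescript
// Illustrative helper applying the same regex the provider uses to split a
// base64 data URL into its mime type and payload. Not the provider's API.
function parseDataUrl(url: string): {mimeType: string, data: string} | null {
  const matches = url.match(/^data:([^;]+);base64,(.+)$/);
  if (!matches) {
    return null; // not a base64 data URL (e.g. a plain https:// image URL)
  }
  return {mimeType: matches[1], data: matches[2]};
}
```

A null result corresponds to the fallback branch above, where the provider logs a warning and substitutes placeholder text.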
```typescript
if (msg.tool_calls && msg.tool_calls.length > 0) {
  // Convert tool calls
  const functionCalls = msg.tool_calls.map(tc => ({
    functionCall: {
      name: tc.function.name,
      args: typeof tc.function.arguments === 'string'
        ? JSON.parse(tc.function.arguments)
        : tc.function.arguments
    }
  }));
  contents.push({
    role: 'model',
    parts: functionCalls
  });
```
Wrap JSON.parse in try-catch to handle malformed tool arguments gracefully.
Lines 126-128 parse tc.function.arguments without error handling. If the arguments string is malformed JSON, this will throw and prevent message conversion.
Apply this diff:
const functionCalls = msg.tool_calls.map(tc => ({
functionCall: {
name: tc.function.name,
- args: typeof tc.function.arguments === 'string'
- ? JSON.parse(tc.function.arguments)
- : tc.function.arguments
+ args: typeof tc.function.arguments === 'string'
+ ? (() => {
+ try {
+ return JSON.parse(tc.function.arguments);
+ } catch (error) {
+ logger.error(`Failed to parse tool arguments for ${tc.function.name}:`, error);
+ throw new Error(`Invalid tool arguments for ${tc.function.name}: ${error instanceof Error ? error.message : 'Unknown error'}`);
+ }
+ })()
+ : tc.function.arguments
}
}));
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
if (msg.tool_calls && msg.tool_calls.length > 0) {
  // Convert tool calls
  const functionCalls = msg.tool_calls.map(tc => ({
    functionCall: {
      name: tc.function.name,
      args: typeof tc.function.arguments === 'string'
        ? (() => {
            try {
              return JSON.parse(tc.function.arguments);
            } catch (error) {
              logger.error(`Failed to parse tool arguments for ${tc.function.name}:`, error);
              throw new Error(`Invalid tool arguments for ${tc.function.name}: ${error instanceof Error ? error.message : 'Unknown error'}`);
            }
          })()
        : tc.function.arguments
    }
  }));
  contents.push({
    role: 'model',
    parts: functionCalls
  });
```
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/LLM/GoogleAIProvider.ts around lines 121 to 134, the
code calls JSON.parse on tc.function.arguments without error handling; wrap the
parse in a try-catch so malformed JSON doesn't throw and break conversion.
Modify the mapping so you attempt JSON.parse when typeof arguments === 'string'
inside a try block, on parse failure catch the error and fall back to the
original string (or null) for args and optionally call a debug/warn logger with
the error and the offending string; ensure the functionCall object still gets
created and pushed so message conversion continues.
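The try-catch pattern both review notes converge on can be factored into a tiny helper. safeParseArgs is a hypothetical name, and this sketch takes the agent prompt's tolerant route (keep the raw string and flag the failure) rather than throwing:

```typescript
// Hypothetical helper for tolerant tool-argument parsing: parse JSON when
// possible, otherwise keep the raw string and flag the failure so the
// caller can log it. Not part of GoogleAIProvider or AnthropicProvider.
function safeParseArgs(raw: string | object): {args: unknown, ok: boolean} {
  if (typeof raw !== 'string') {
    return {args: raw, ok: true}; // already structured
  }
  try {
    return {args: JSON.parse(raw), ok: true};
  } catch {
    return {args: raw, ok: false}; // preserve the offending string for logging
  }
}
```

Conversion then always completes, and callers can decide whether a failed parse is fatal or merely worth a warning.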
```typescript
validateCredentials(): {isValid: boolean, message: string, missingItems?: string[]} {
  const storageKeys = this.getCredentialStorageKeys();
  const apiKey = localStorage.getItem(storageKeys.apiKey!);

  if (!apiKey) {
    return {
      isValid: false,
      message: 'Google AI API key is required. Please add your API key in Settings.',
      missingItems: ['API Key']
    };
  }

  return {
    isValid: true,
    message: 'Google AI credentials are configured correctly.'
  };
}
```
Remove non-null assertion and validate against instance API key.
Line 530 uses a non-null assertion (storageKeys.apiKey!) which is unnecessary. More importantly, the method should check this.apiKey first before falling back to localStorage, as API calls use the instance property.
Apply this diff:
validateCredentials(): {isValid: boolean, message: string, missingItems?: string[]} {
- const storageKeys = this.getCredentialStorageKeys();
- const apiKey = localStorage.getItem(storageKeys.apiKey!);
+ const apiKey = this.apiKey || localStorage.getItem(this.getCredentialStorageKeys().apiKey);
if (!apiKey) {
return {
isValid: false,
message: 'Google AI API key is required. Please add your API key in Settings.',
missingItems: ['API Key']
};
}
return {
isValid: true,
message: 'Google AI credentials are configured correctly.'
};
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
validateCredentials(): {isValid: boolean, message: string, missingItems?: string[]} {
  const apiKey = this.apiKey || localStorage.getItem(this.getCredentialStorageKeys().apiKey);
  if (!apiKey) {
    return {
      isValid: false,
      message: 'Google AI API key is required. Please add your API key in Settings.',
      missingItems: ['API Key']
    };
  }
  return {
    isValid: true,
    message: 'Google AI credentials are configured correctly.'
  };
}
```
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/LLM/GoogleAIProvider.ts around lines 528 to 544,
remove the non-null assertion on storageKeys.apiKey and change validation to
prefer the instance property this.apiKey before falling back to localStorage;
specifically, check if this.apiKey is set and use that as the API key, otherwise
read localStorage with the safe storage key (no "!"), and return the same
isValid/message/missingItems structure when neither exists. Ensure you handle
the case where storageKeys.apiKey may be undefined by only accessing
localStorage if the key exists.
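The suggested precedence (instance key first, then stored credentials) is easiest to see with the storage dependency injected. KeyStore and resolveApiKey are illustrative names; the real code would pass localStorage and the provider's storage key:

```typescript
// Minimal sketch of the suggested credential precedence. KeyStore is a
// stand-in for the localStorage surface so the logic is testable outside
// a browser; none of these names are part of the actual provider.
interface KeyStore { getItem(key: string): string | null; }

function resolveApiKey(
  instanceKey: string | undefined,
  store: KeyStore,
  storageKey: string,
): string | null {
  // Prefer the key the provider was constructed with, matching how API
  // calls use this.apiKey; fall back to persisted settings.
  return instanceKey || store.getItem(storageKey);
}
```

With this shape, validateCredentials agrees with what the request path will actually use, instead of failing when the key was supplied only via the constructor.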
```typescript
const currentProvider = localStorage.getItem(PROVIDER_SELECTION_KEY);
if (currentProvider === providerId) {
  // Switch to default provider (openai)
  localStorage.setItem(PROVIDER_SELECTION_KEY, 'openai');
  logger.info(`Switched from deleted provider ${providerId} to openai`);
}
```
🧩 Analysis chain
Hardcoded fallback to 'openai' may fail if OpenAI is not configured.
When the deleted provider is currently selected, the code falls back to 'openai'. If OpenAI credentials are not configured or the provider is unavailable, users may encounter errors. Consider falling back to the first available configured provider or displaying a prompt to select a new provider.
Verify fallback behavior:
🏁 Script executed:
#!/bin/bash
# Find where PROVIDER_SELECTION_KEY is used and how providers are validated
rg -n "PROVIDER_SELECTION_KEY" --type ts -C 3
# Check if there's validation when a provider is selected
rg -n "localStorage.setItem.*provider" --type ts -C 3
Length of output: 20013
Based on the script output provided, I can now complete the verification. Let me analyze the evidence:
Based on my verification through the codebase search results, I can now provide the rewritten review comment:
Hardcoded fallback to 'openai' may fail if OpenAI is not configured.
When the deleted provider is currently selected, the code falls back to 'openai' at line 521. However, there is no validation to ensure OpenAI credentials are configured before using it. If OpenAI is unavailable, users may encounter errors. Consider falling back to the first available configured provider instead, or displaying a prompt to select a new provider.
🏁 Script executed:
#!/bin/bash
# Look at SettingsDialog.ts around the provider selection to see if validation occurs
rg -n "selectedProvider.*providerSelect" --type ts -B 5 -A 10
# Check if there's any validation of provider credentials before setting
rg -n "hasCredentials|isConfigured|validateProvider" --type ts -C 3
# Look for available providers list
rg -n "availableProviders|configuredProviders|PROVIDER" --type ts -C 2 | head -100
Length of output: 16085
🏁 Script executed:
#!/bin/bash
# Search for where provider credentials are checked
rg -n "hasCredentials|hasValidCredentials|isConfigured|availableProviders" --type ts -C 3
# Search for OpenAI credential check
rg -n "openai.*credential|credential.*openai|getOpenAIKey" --type ts -C 2
# Look for provider setup/validation in SettingsDialog
rg -n "class SettingsDialog" --type ts -A 100 | head -150
Length of output: 9621
Based on the verification results, I can now confirm the issue and provide the rewritten review comment:
Hardcoded fallback to 'openai' without credential validation.
When a custom provider is deleted and currently selected, the code falls back to 'openai' (line 521) without validating that OpenAI credentials exist. The system provides validateProviderCredentials() for runtime validation but does not check provider configuration before saving provider selection in SettingsDialog (line 590). Consider falling back to the first available configured provider instead, using LLMClient.validateProviderCredentials() to verify the fallback provider has valid credentials before storing it.
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/ui/CustomProviderDialog.ts around lines 518-523, the
code currently hardcodes a fallback to 'openai' when a deleted provider was
selected; instead, query the provider registry for candidates and pick the first
provider whose credentials validate (use LLMClient.validateProviderCredentials()
or LLMProviderRegistry.validateProviderCredentials()), store that provider id in
localStorage (or clear/unset the selection if none validate), and update the log
to include which provider was chosen; also add/ensure a corresponding validation
call in SettingsDialog before saving a user-selected provider so only configured
providers can be persisted.
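The fallback selection described above reduces to a small search. pickFallbackProvider and its validate callback are hypothetical stand-ins for the registry lookup plus LLMClient.validateProviderCredentials():

```typescript
// Illustrative sketch: walk the candidate providers and keep the first one
// whose credentials validate, instead of hardcoding 'openai'. Names are
// placeholders, not the actual LLMClient/registry API.
function pickFallbackProvider(
  candidates: string[],
  validate: (id: string) => boolean,
): string | null {
  for (const id of candidates) {
    if (validate(id)) {
      return id;
    }
  }
  return null; // nothing configured: caller should prompt for a selection
}
```

A null return maps to the "clear the selection and prompt the user" branch, so an unconfigured provider is never silently persisted.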

Summary by CodeRabbit
New Features
UI/UX Improvements
Tests