Gemini vercel ai sdk adapter #56
Conversation
Greptile Overview

Greptile Summary

This PR adds a comprehensive Vercel AI SDK adapter that enables BrowserOS to use multiple LLM providers (OpenAI, Anthropic, Google, Azure, Bedrock, OpenRouter, Ollama, LMStudio) through a unified interface. The implementation uses a clean strategy pattern for converting between Gemini and Vercel AI SDK formats. Key additions:
Issues found:
Confidence Score: 4/5
Important Files Changed

File Analysis
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client/CLI
    participant Adapter as VercelAIContentGenerator
    participant MsgStrategy as MessageConversionStrategy
    participant ToolStrategy as ToolConversionStrategy
    participant RespStrategy as ResponseConversionStrategy
    participant Provider as AI Provider (OpenAI/Anthropic/etc)
    participant Stream as HonoSSEStream

    Client->>Adapter: generateContentStream(request)
    Adapter->>MsgStrategy: geminiToVercel(contents)
    MsgStrategy-->>Adapter: CoreMessage[]
    Adapter->>ToolStrategy: geminiToVercel(tools)
    ToolStrategy->>ToolStrategy: normalizeForOpenAI(schema)
    ToolStrategy-->>Adapter: VercelTool[]
    Adapter->>Provider: streamText({messages, tools, system})
    Provider-->>Adapter: fullStream (AsyncIterable)

    loop For each stream chunk
        Provider->>RespStrategy: chunk (text-delta/tool-call/finish)
        alt text-delta chunk
            RespStrategy->>Stream: write SSE (text-delta)
            RespStrategy-->>Adapter: yield GenerateContentResponse
        else tool-call chunk
            RespStrategy->>Stream: write SSE (tool-call)
            RespStrategy->>RespStrategy: accumulate in toolCallsMap
        else finish chunk
            RespStrategy->>RespStrategy: store finishReason
        end
    end

    RespStrategy->>Provider: getUsage()
    Provider-->>RespStrategy: usage metadata
    RespStrategy->>Stream: write SSE (finish event)
    RespStrategy->>ToolStrategy: vercelToGemini(toolCalls)
    ToolStrategy-->>RespStrategy: FunctionCall[]
    RespStrategy-->>Adapter: yield final GenerateContentResponse
    Adapter-->>Client: AsyncGenerator<GenerateContentResponse>
```
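The per-chunk branching in the diagram can be sketched as a small dispatch loop. This is a minimal sketch with hypothetical chunk and result shapes (loosely mirroring the Vercel AI SDK's `fullStream` events), not the adapter's actual types:

```typescript
// Hypothetical chunk shapes, loosely mirroring Vercel AI SDK fullStream events
type StreamChunk =
  | { type: 'text-delta'; text: string }
  | { type: 'tool-call'; toolCallId: string; toolName: string; args: unknown }
  | { type: 'finish'; finishReason: string };

// Sketch of the loop in the diagram: text deltas are yielded immediately,
// tool calls are accumulated in a map, and the finish reason is stored and
// emitted with the final result.
function* dispatchChunks(chunks: Iterable<StreamChunk>) {
  const toolCallsMap = new Map<string, { toolName: string; args: unknown }>();
  let finishReason: string | undefined;
  for (const chunk of chunks) {
    if (chunk.type === 'text-delta') {
      yield { text: chunk.text };
    } else if (chunk.type === 'tool-call') {
      toolCallsMap.set(chunk.toolCallId, { toolName: chunk.toolName, args: chunk.args });
    } else {
      finishReason = chunk.finishReason;
    }
  }
  // Final chunk carries the accumulated tool calls and the finish reason
  yield { toolCalls: Array.from(toolCallsMap.values()), finishReason };
}
```

Keying the accumulator on `toolCallId` is what lets parallel tool calls from the same turn coexist without clobbering each other.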
14 files reviewed, 1 comment
```typescript
const propertyKeys = Object.keys(result.properties);
if (propertyKeys.length > 0) {
  // Merge with existing required array (if any) and ensure all keys are included
  const existingRequired = Array.isArray(result.required) ? result.required : [];
  const allRequired = Array.from(new Set([...existingRequired, ...propertyKeys]));
  result.required = allRequired;
}
```
**logic:** making all properties required may break tools with optional parameters

The normalization adds all property keys to the `required` array, making every parameter mandatory. Tools with genuinely optional params will fail if those params aren't provided.
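The failure mode can be illustrated with a small self-contained sketch. The helper names (`forceAllRequired`, `keepDeclaredRequired`) and the `JsonSchema` shape are hypothetical, for comparison only, not the adapter's actual code:

```typescript
interface JsonSchema {
  properties?: Record<string, unknown>;
  required?: string[];
}

// Current PR behavior: every property key is forced into `required`.
function forceAllRequired(schema: JsonSchema): JsonSchema {
  const result: JsonSchema = { ...schema };
  if (result.properties && typeof result.properties === 'object') {
    const propertyKeys = Object.keys(result.properties);
    if (propertyKeys.length > 0) {
      const existingRequired = Array.isArray(result.required) ? result.required : [];
      result.required = Array.from(new Set([...existingRequired, ...propertyKeys]));
    }
  }
  return result;
}

// Suggested behavior: only deduplicate what the tool author already declared.
function keepDeclaredRequired(schema: JsonSchema): JsonSchema {
  const result: JsonSchema = { ...schema };
  if (result.properties && typeof result.properties === 'object') {
    if (Array.isArray(result.required) && result.required.length > 0) {
      result.required = Array.from(new Set(result.required));
    }
  }
  return result;
}

// A tool schema where `limit` is genuinely optional
const toolSchema: JsonSchema = {
  properties: { query: {}, limit: {} },
  required: ['query'],
};
```

With this schema, `forceAllRequired` promotes `limit` to required, so a model call that omits it would be rejected; `keepDeclaredRequired` leaves `required` as `['query']`.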
Prompt To Fix With AI
This is a comment left during a code review.
Path: packages/agent/src/agent/gemini-vercel-sdk-adapter/strategies/tool.ts
Line: 40:46
Comment:
**logic:** making all properties required may break tools with optional parameters
the normalization adds all property keys to `required` array, making every parameter mandatory. tools with genuinely optional params will fail if those params aren't provided.
```suggestion
// 2. Add required array with ALL property keys
if (result.properties && typeof result.properties === 'object') {
const propertyKeys = Object.keys(result.properties);
if (propertyKeys.length > 0) {
// Only merge with existing required array, don't force all properties to be required
if (Array.isArray(result.required) && result.required.length > 0) {
result.required = Array.from(new Set(result.required));
}
}
}
```
How can I resolve this? If you propose a fix, please make it concise.

```typescript
/**
 * Configuration for Vercel AI adapter
 */
export interface VercelAIConfig {
```
@greptileai what is a better way to structure this config? Can we have anthropic, openai, google as enums, and a different structure for the config?
Looking at the current VercelAIConfig interface, I can suggest a better structure that uses enums for providers and organizes the configuration more logically:
```typescript
/**
 * Supported AI providers
 */
export enum AIProvider {
  ANTHROPIC = 'anthropic',
  OPENAI = 'openai',
  GOOGLE = 'google',
  OPENROUTER = 'openrouter',
  AZURE = 'azure',
  OLLAMA = 'ollama',
  LMSTUDIO = 'lmstudio',
  AWS_BEDROCK = 'aws-bedrock',
}

/**
 * Provider-specific configuration
 */
export interface ProviderConfig {
  apiKey?: string;
  baseUrl?: string;
  region?: string;
  resourceName?: string;
  accessKeyId?: string;
  secretAccessKey?: string;
  sessionToken?: string;
}

/**
 * Configuration for Vercel AI adapter
 */
export interface VercelAIConfig {
  model: string;
  provider: AIProvider;
  providerConfig?: ProviderConfig;
  honoStream?: HonoSSEStream;
}
```
This structure provides several benefits:
- Type safety: using enums prevents typos and provides autocomplete
- Cleaner organization: all provider-specific settings are grouped under `providerConfig`
- Extensibility: easy to add new providers without cluttering the main interface
- Explicit provider selection: forces users to specify which provider they're using
- Flexible configuration: the `ProviderConfig` can accommodate different providers' needs without optional fields for every possible provider

This way, instead of having separate optional fields for each provider's configuration scattered in the main interface, everything is properly organized and type-safe.
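A usage sketch under this proposed design. The enum and interfaces are repeated inline (trimmed to a few providers and fields, `honoStream` omitted) so the example is self-contained; the model names and URLs are placeholders:

```typescript
enum AIProvider {
  OPENAI = 'openai',
  ANTHROPIC = 'anthropic',
  OLLAMA = 'ollama',
}

interface ProviderConfig {
  apiKey?: string;
  baseUrl?: string;
}

interface VercelAIConfig {
  model: string;
  provider: AIProvider;
  providerConfig?: ProviderConfig;
}

// Local Ollama needs no API key, only a base URL override (placeholder URL)
const ollamaConfig: VercelAIConfig = {
  model: 'llama3.1',
  provider: AIProvider.OLLAMA,
  providerConfig: { baseUrl: 'http://localhost:11434/v1' },
};

// A hosted provider needs only an API key (placeholder value)
const openaiConfig: VercelAIConfig = {
  model: 'gpt-4o-mini',
  provider: AIProvider.OPENAI,
  providerConfig: { apiKey: 'sk-placeholder' },
};
```

The enum means a typo like `provider: 'oepnai'` becomes a compile error instead of a runtime provider-lookup failure.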
@shivammittal274 could you update the code to use something similar to the above design?
LGTM!
No description provided.