# feat: Add Gemini 3 Pro model support #389
## Conversation
- Added Gemini 3 Pro as a provider option in the `getModel()` function
- Positioned as secondary choice after xAI, before AWS Bedrock
- Uses the `gemini-3-pro-preview` model identifier
- Added `GEMINI_3_PRO_API_KEY` environment variable documentation
- Includes error handling with fallback to the next available provider
- Added comprehensive integration documentation
**Walkthrough**

Gemini 3 Pro is integrated as a new AI provider by adding environment variable configuration, implementing provider logic with fallback error handling, and documenting the integration. The provider is prioritized after xAI and before AWS Bedrock in the model selection order.
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant Client as Application
    participant Model as getModel()
    participant Google as Google Generative AI
    participant Fallback as Next Provider
    Client->>Model: Request model
    Model->>Model: Check GEMINI_3_PRO_API_KEY
    alt API Key Present
        Model->>Google: Create & call Gemini 3 Pro
        alt Success
            Google-->>Model: Response
            Model-->>Client: Gemini 3 Pro model
        else Error/Unavailable
            Note over Model: Log warning
            Model->>Fallback: Fall back to next provider
            Fallback-->>Model: Next provider response
            Model-->>Client: Fallback provider model
        end
    else API Key Not Present
        Model->>Fallback: Skip to next provider
        Fallback-->>Model: Next provider response
        Model-->>Client: Next provider model
    end
```
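For orientation, the branch the diagram describes looks roughly like this sketch (based on the `getModel()` excerpts later in this thread; `tryGemini3Pro` is a hypothetical helper name, not the PR's code):

```ts
import { createGoogleGenerativeAI } from '@ai-sdk/google'

// If the key is present, construct the Gemini 3 Pro model; on any error,
// log a warning and return null so selection continues to the next provider.
function tryGemini3Pro() {
  const apiKey = process.env.GEMINI_3_PRO_API_KEY
  if (!apiKey) return null // skip straight to the next provider
  try {
    const google = createGoogleGenerativeAI({ apiKey })
    return google('gemini-3-pro-preview')
  } catch (error) {
    console.warn('Gemini 3 Pro unavailable, falling back to next provider:', error)
    return null
  }
}
```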
**Estimated code review effort**: 🎯 2 (Simple) | ⏱️ ~12 minutes
**Pre-merge checks**: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
**PR Compliance Guide 🔍**

Below is a summary of compliance checks for this PR:

Compliance status legend: 🟢 Fully Compliant | 🟡 Partially Compliant | 🔴 Not Compliant | ⚪ Requires Further Human Verification | 🏷️ Compliance label
**PR Code Suggestions ✨**

Explore these optional code suggestions:
Actionable comments posted: 2
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
- `.env.local.example`
- `GEMINI_3_PRO_INTEGRATION.md`
- `lib/utils/index.ts`
🧰 Additional context used
🪛 dotenv-linter (4.0.0)
.env.local.example
[warning] 12-12: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
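For context, the flagged entry has this shape (the value is a placeholder; dotenv-linter flags any quoted value, which is expected here):

```bash
# .env.local.example (illustrative excerpt)
GEMINI_3_PRO_API_KEY="your_actual_api_key_here"
```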
🪛 markdownlint-cli2 (0.18.1)
GEMINI_3_PRO_INTEGRATION.md
40-40: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
42-42: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
60-60: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (5)
`.env.local.example` (1)

10-13: **LGTM! Environment variable configuration is correct.** The new `GEMINI_3_PRO_API_KEY` variable follows the existing pattern and is properly documented. The static analysis warning about quotes is a false positive; this is standard practice for `.env.example` files.

`lib/utils/index.ts` (2)

21-21: **LGTM! Environment variable read is correctly implemented.** The environment variable handling follows the established pattern used by other providers.

40-50: **No issue identified.** The model identifier `gemini-3-pro-preview` is valid and supported by the Vercel AI SDK `@ai-sdk/google` provider as of December 2025. Gemini 3 Pro is an official Google model release. The implementation correctly handles API failures with proper try-catch and fallback logic.

`GEMINI_3_PRO_INTEGRATION.md` (2)

1-23: **Documentation references potentially invalid model.** The documentation describes "Gemini 3 Pro" and uses the model identifier `gemini-3-pro-preview`, but this model may not exist. This issue is tied to the verification request in `lib/utils/index.ts`. Once the correct model identifier is confirmed, this documentation will need to be updated accordingly.

70-81: **All reference documentation URLs are valid and accessible, and the technical details are accurate.** Gemini 3 was announced on November 18, 2025 and is available in developer preview on the Gemini API/AI Studio as of December 2025. The model ID `gemini-3-pro-preview` is the correct format used across the Gemini API and Vertex AI.
Excerpt from `GEMINI_3_PRO_INTEGRATION.md`:

> ## Configuration
>
> To use Gemini 3 Pro in your QCX deployment:
>
> 1. Obtain a Google AI API key from [Google AI Studio](https://aistudio.google.com/)
> 2. Add the API key to your `.env.local` file:
>    ```bash
>    GEMINI_3_PRO_API_KEY="your_actual_api_key_here"
>    ```
> 3. Restart your development server or redeploy your application
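As a quick, hedged sanity check after step 3 (a hypothetical snippet; run in any server-side entry point of the app):

```ts
// Warn early if the key never reached the process; without it,
// getModel() will silently skip the Gemini 3 Pro branch.
if (!process.env.GEMINI_3_PRO_API_KEY) {
  console.warn('GEMINI_3_PRO_API_KEY is not set; Gemini 3 Pro will be skipped.')
}
```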
🧩 Analysis chain

🏁 Script executed:

```bash
# First, find and examine the GEMINI_3_PRO_INTEGRATION.md file
find . -name "GEMINI_3_PRO_INTEGRATION.md" -type f
```

Repository: QueueLab/QCX · Length of output: 85

🏁 Script executed:

```bash
# Once found, read lines 34-43 to verify the formatting issue
sed -n '34,43p' GEMINI_3_PRO_INTEGRATION.md | cat -n
```

Repository: QueueLab/QCX · Length of output: 453

🏁 Script executed:

```bash
# Also check a broader context to understand the markdown structure
sed -n '30,50p' GEMINI_3_PRO_INTEGRATION.md | cat -n
```

Repository: QueueLab/QCX · Length of output: 856
🌐 Web query: Google AI Studio access and API key configuration

💡 Result:

Summary — how to get access to Google AI Studio and configure an API key

- **Access / projects**: Sign in to Google AI Studio (studio or ai.google.dev). If new, accept the Terms — Studio may create a default Google Cloud project and API key for you. To use your own Cloud projects, import them into AI Studio (Dashboard → Projects → Import projects). [1]
- **Create an API key**: You can create a key either from AI Studio (Dashboard → API Keys for an imported project) or in Google Cloud Console (APIs & Services → Credentials → Create credentials → API key). AI Studio also shows keys with permission to use the Gemini/Generative Language API. [1][2]
- **Restrict and secure the key (best practice)**: In Cloud Console, add API restrictions (allow only the Generative Language / Gemini API) and application restrictions (IP, HTTP referrers, apps) to reduce exposure. Rotate and monitor keys periodically. [2]
- **Configure the key for use**: Set an environment variable locally or in your service: GEMINI_API_KEY or GOOGLE_API_KEY (Google libraries pick them up; GOOGLE_API_KEY takes precedence if both are set). You can also pass the key explicitly (header x-goog-api-key or ?key=) for REST/JS calls. [1][3]
- **Verify the key (example curl)**: Replace YOUR_KEY and the model as needed:

  ```bash
  curl -H 'Content-Type: application/json' \
       -H "x-goog-api-key: YOUR_KEY" \
       -d '{"prompt":{"text":"Hello"}}' \
       "https://generativelanguage.googleapis.com/v1/models/text-bison-001:generateText"
  ```

  (Or use ?key=YOUR_KEY.) [3]
- **Additional notes**: Only keys with no restrictions or restricted to the Generative Language API may be displayed in AI Studio; Studio can create a default project/key for new users; you may need to import projects you want to use. Some tools (Android Studio, MakerSuite) have direct "Get a Gemini API key" flows that open AI Studio. Monitor usage/limits in AI Studio. [1][3][4]

Sources:
[1] Google AI Studio — setup & Gemini API keys (import projects, default project, Studio API Keys).
[2] Google Cloud — Manage API keys (create, restrict, rotate).
[3] Google Developers — Get an API key / Generative Language (curl examples; GEMINI_API_KEY / GOOGLE_API_KEY).
[4] Android Studio docs — add Gemini API key via AI Studio.
**Add blank lines around the fenced code block in the ordered list.**

Lines 37-39: The bash code block should be separated from list items by blank lines for proper markdown formatting and linting compliance.

````diff
 1. Obtain a Google AI API key from [Google AI Studio](https://aistudio.google.com/)
 2. Add the API key to your `.env.local` file:
+
    ```bash
    GEMINI_3_PRO_API_KEY="your_actual_api_key_here"
    ```
+
 3. Restart your development server or redeploy your application
````

🧰 Tools

🪛 markdownlint-cli2 (0.18.1)

40-40: Fenced code blocks should be surrounded by blank lines (MD031, blanks-around-fences)

42-42: Fenced code blocks should be surrounded by blank lines (MD031, blanks-around-fences)
🤖 Prompt for AI Agents

> GEMINI_3_PRO_INTEGRATION.md around lines 34 to 43: the fenced bash code block inside the ordered list lacks blank lines before and after it, which breaks markdown linting; add a blank line above the opening ```bash and a blank line after the closing ``` so the code block is separated from the list items, preserving existing indentation and list numbering.
Excerpt from `GEMINI_3_PRO_INTEGRATION.md`:

> ## Model Capabilities
>
> Gemini 3 Pro (`gemini-3-pro-preview`) supports:
>
> - **Advanced Reasoning**: State-of-the-art reasoning capabilities with optional thinking modes
> - **Multimodal Understanding**: Text, image, and file inputs
> - **Tool Usage**: Function calling and tool integration
> - **Large Context Window**: 1M token context window
> - **Agentic Capabilities**: Excellent for complex multi-step tasks
> - **Coding**: Exceptional coding and technical capabilities
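As a hedged illustration of the multimodal claim (a sketch using the Vercel AI SDK style already present in this repo; the prompt and image URL are invented placeholders):

```ts
import { generateText } from 'ai'
import { createGoogleGenerativeAI } from '@ai-sdk/google'

// Sketch: send one text part and one image part to Gemini 3 Pro.
const google = createGoogleGenerativeAI({
  apiKey: process.env.GEMINI_3_PRO_API_KEY,
})

const { text } = await generateText({
  model: google('gemini-3-pro-preview'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this map tile in one sentence.' },
        { type: 'image', image: new URL('https://example.com/tile.png') },
      ],
    },
  ],
})
console.log(text)
```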
The excerpt continues:

> ## Provider Priority
>
> The provider selection follows this priority order:
>
> ```
> XAI_API_KEY exists? → Use Grok
>   ↓ No
> GEMINI_3_PRO_API_KEY exists? → Use Gemini 3 Pro
>   ↓ No
> AWS credentials exist? → Use AWS Bedrock
>   ↓ No
> OPENAI_API_KEY exists? → Use OpenAI (default)
> ```
🧹 Nitpick | 🔵 Trivial

**Verify capability claims and fix code block formatting.**

The documentation makes specific capability claims (1M token context window, advanced reasoning modes, etc.) that should be verified against official documentation once the correct model identifier is confirmed.

Additionally, the fenced code block at line 60 should specify a language for proper rendering.

🔎 Proposed fix for code block language specification:

````diff
 The provider selection follows this priority order:

-```
+```text
 XAI_API_KEY exists? → Use Grok
   ↓ No
 GEMINI_3_PRO_API_KEY exists? → Use Gemini 3 Pro
````

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
60-60: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
@coderabbitai How to make it such that user decides what model to use in the settings |
The new Gemini branch is straightforward, but the current try/catch-based fallback in `getModel()` is likely non-functional because provider model construction usually doesn't perform network I/O, so errors won't be caught there. Logging in the xAI branch is now inaccurate relative to the updated provider order. Bedrock selection should validate `AWS_REGION` to avoid selecting a misconfigured provider and failing later.

**Additional notes (3)**

- **Maintainability** | `lib/utils/index.ts:19-19`

  The try/catch around `return xai('grok-4-fast-non-reasoning')` (and similarly `return google('gemini-3-pro-preview')`) is very likely ineffective. In the Vercel AI SDK provider pattern, calling `xai('...')` / `google('...')` typically constructs a model object and does not make a network request at that moment, so it won't throw for "API unavailable/credits exhausted." That means this fallback logic probably never triggers, and failures will happen later during actual model usage (where you can't switch providers anymore). This introduces a correctness issue: the code claims to provide runtime fallback, but in practice it may silently select a provider that will fail later, without trying the next one.

- **Readability** | `lib/utils/index.ts:37-37`

  `console.warn('xAI API unavailable, falling back to OpenAI:')` is now misleading given the new provider ordering (Gemini, then Bedrock, then OpenAI). If this log is relied on operationally, it will point responders to the wrong next step and complicate debugging.

- **Readability** | `lib/utils/index.ts:41-54`

  AWS Bedrock selection only checks `awsAccessKeyId && awsSecretAccessKey`, but the code passes `region: awsRegion`. If `AWS_REGION` is unset, you'll still enter the Bedrock branch and may fail later with a confusing runtime error. Given this function is about provider selection, it should validate the minimal required configuration for a provider before selecting it.
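A hedged sketch of the guard the last note suggests (it mirrors the `createAmazonBedrock` options shape used elsewhere in this thread; `tryBedrock` is a hypothetical helper, not the PR's code):

```ts
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock'

// Select Bedrock only when the complete configuration, including
// AWS_REGION, is present; otherwise return null so provider selection
// can fall through to the next option instead of failing at request time.
function tryBedrock() {
  const accessKeyId = process.env.AWS_ACCESS_KEY_ID
  const secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY
  const region = process.env.AWS_REGION
  if (!accessKeyId || !secretAccessKey || !region) return null
  const bedrock = createAmazonBedrock({
    bedrockOptions: {
      region,
      credentials: { accessKeyId, secretAccessKey },
    },
  })
  return bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0')
}
```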
**Summary of changes**
- Added a new environment variable example for Google Gemini 3 Pro: `GEMINI_3_PRO_API_KEY` in `.env.local.example`.
- Introduced a new documentation file: `GEMINI_3_PRO_INTEGRATION.md` describing configuration, provider priority, and model capabilities.
- Updated `getModel()` in `lib/utils/index.ts` to attempt providers in this order:
  1. xAI (`XAI_API_KEY`)
  2. Gemini 3 Pro (`GEMINI_3_PRO_API_KEY`) using `google('gemini-3-pro-preview')`
  3. AWS Bedrock (AWS credentials)
  4. (implied) OpenAI fallback

**Notable behavior changes**

- `getModel()` now conditionally creates a Google provider client via `createGoogleGenerativeAI()` and returns the Gemini model when the new env var is set.
- Added try/catch blocks intended to fall back when a provider is “unavailable.”
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain: 🏁 8 scripts executed (output lengths: 6661, 2304, 694, 13875, 422, 591, 7508, 8129).

@ngoiyaeric Great question! I can see you already have a settings UI with a Model Selection form, but it's not currently connected to the actual model selection logic. Here's a comprehensive guide to implement user-controlled model selection:

### Implementation Steps

#### 1. Update Database Schema

First, add a user preferences table to store model selections:

```ts
// lib/db/schema.ts - Add this table
export const userPreferences = pgTable('user_preferences', {
  id: uuid('id').primaryKey().defaultRandom(),
  userId: uuid('user_id').notNull().references(() => users.id, { onDelete: 'cascade' }).unique(),
  selectedModel: varchar('selected_model', { length: 100 }),
  createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
  updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
});

// Add relation
export const userPreferencesRelations = relations(userPreferences, ({ one }) => ({
  user: one(users, {
    fields: [userPreferences.userId],
    references: [users.id],
  }),
}));
```

Then create and run a migration:

```bash
npx drizzle-kit generate
npx drizzle-kit migrate
```

#### 2. Create Server Actions for Model Preferences

Create a new file for preference actions:

```ts
// lib/actions/preferences.ts
'use server'

import { db } from '@/lib/db'
import { userPreferences } from '@/lib/db/schema'
import { eq } from 'drizzle-orm'
import { revalidatePath } from 'next/cache'

export async function saveModelPreference(userId: string, modelId: string) {
  try {
    const existing = await db.query.userPreferences.findFirst({
      where: eq(userPreferences.userId, userId),
    })
    if (existing) {
      await db
        .update(userPreferences)
        .set({
          selectedModel: modelId,
          updatedAt: new Date()
        })
        .where(eq(userPreferences.userId, userId))
    } else {
      await db.insert(userPreferences).values({
        userId,
        selectedModel: modelId,
      })
    }
    revalidatePath('/settings')
    return { success: true }
  } catch (error) {
    console.error('Error saving model preference:', error)
    return { error: 'Failed to save model preference' }
  }
}

export async function getModelPreference(userId: string) {
  try {
    const preference = await db.query.userPreferences.findFirst({
      where: eq(userPreferences.userId, userId),
    })
    return preference?.selectedModel || null
  } catch (error) {
    console.error('Error fetching model preference:', error)
    return null
  }
}
```

#### 3. Update ModelSelectionForm with Real Models

Update the form to show actual available models:

```tsx
// components/settings/components/model-selection-form.tsx
"use client";
import type { UseFormReturn } from "react-hook-form";
import {
FormField,
FormItem,
FormLabel,
FormControl,
FormDescription,
FormMessage,
} from "@/components/ui/form";
import { RadioGroup, RadioGroupItem } from "@/components/ui/radio-group";
import { Card, CardContent } from "@/components/ui/card";
import { Badge } from "@/components/ui/badge";
import { Sparkles, Zap, Rocket, Cpu, Cloud } from "lucide-react";
interface ModelSelectionFormProps {
form: UseFormReturn<any>;
}
const models = [
{
id: "grok-4-fast-non-reasoning",
name: "xAI Grok",
description: "Fast reasoning model from xAI with real-time capabilities.",
icon: Zap,
badge: "Fast",
badgeVariant: "default" as const,
requiresEnv: "XAI_API_KEY",
},
{
id: "gemini-3-pro-preview",
name: "Gemini 3 Pro",
description: "Advanced multimodal model with 1M token context window.",
icon: Sparkles,
badge: "New",
badgeVariant: "secondary" as const,
requiresEnv: "GEMINI_3_PRO_API_KEY",
},
{
id: "bedrock-claude",
name: "AWS Bedrock Claude",
description: "Enterprise-grade Claude via AWS Bedrock infrastructure.",
icon: Cloud,
badge: "Enterprise",
badgeVariant: "outline" as const,
requiresEnv: "AWS_ACCESS_KEY_ID",
},
{
id: "gpt-4o",
name: "OpenAI GPT-4o",
description: "Reliable and versatile model from OpenAI. Default fallback.",
icon: Cpu,
badge: "Default",
badgeVariant: "outline" as const,
requiresEnv: "OPENAI_API_KEY",
},
];
export function ModelSelectionForm({ form }: ModelSelectionFormProps) {
return (
<FormField
control={form.control}
name="selectedModel"
render={({ field }) => (
<FormItem className="space-y-4">
<FormLabel>AI Model</FormLabel>
<FormControl>
<RadioGroup
onValueChange={field.onChange}
value={field.value}
className="space-y-3"
>
{models.map((model) => {
const Icon = model.icon;
return (
<FormItem key={model.id} className="space-y-0">
<FormControl>
<RadioGroupItem
value={model.id}
id={model.id}
className="peer sr-only"
/>
</FormControl>
<FormLabel htmlFor={model.id} className="cursor-pointer">
<Card className="border-2 transition-all peer-data-[state=checked]:border-primary">
<CardContent className="p-4 flex items-start gap-4">
<div className="h-10 w-10 rounded-full bg-primary/10 flex items-center justify-center text-primary">
<Icon className="h-5 w-5" />
</div>
<div className="flex-1 space-y-1">
<div className="flex items-center gap-2">
<h4 className="font-medium">{model.name}</h4>
<Badge variant={model.badgeVariant}>
{model.badge}
</Badge>
</div>
<p className="text-sm text-muted-foreground">
{model.description}
</p>
<p className="text-xs text-muted-foreground">
Requires: {model.requiresEnv}
</p>
</div>
</CardContent>
</Card>
</FormLabel>
</FormItem>
);
})}
</RadioGroup>
</FormControl>
<FormDescription>
Select your preferred AI model. Note: The selected model must have valid API credentials configured in your environment.
</FormDescription>
<FormMessage />
</FormItem>
)}
/>
);
}4. Update Settings Component to Save Model PreferenceModify the settings form to save the model preference: // components/settings/components/settings.tsx
// Add this import at the top
import { saveModelPreference, getModelPreference } from '@/lib/actions/preferences'

// Update the onSubmit function
async function onSubmit(data: SettingsFormValues) {
  setIsLoading(true)
  try {
    // Save the system prompt
    const saveResult = await saveSystemPrompt(userId, data.systemPrompt);
    if (saveResult?.error) {
      throw new Error(saveResult.error);
    }

    // Save the model preference
    const modelResult = await saveModelPreference(userId, data.selectedModel);
    if (modelResult?.error) {
      throw new Error(modelResult.error);
    }

    toast({
      title: "Settings updated",
      description: "Your settings have been saved successfully.",
    })
  } catch (error: any) {
    toast({
      title: "Something went wrong",
      description: error.message || "Your settings could not be saved. Please try again.",
      variant: "destructive",
    })
  } finally {
    setIsLoading(false)
  }
}

// Update the useEffect to load model preference
useEffect(() => {
  async function fetchSettings() {
    const existingPrompt = await getSystemPrompt(userId);
    if (existingPrompt) {
      form.setValue("systemPrompt", existingPrompt, { shouldValidate: true, shouldDirty: false });
    }
    const modelPreference = await getModelPreference(userId);
    if (modelPreference) {
      form.setValue("selectedModel", modelPreference, { shouldValidate: true, shouldDirty: false });
    }
  }
  fetchSettings();
}, [form, userId]);
```

#### 5. Update getModel() Function

Modify the `getModel()` function to accept the user's preferred model:

```ts
// lib/utils/index.ts
export function getModel(preferredModelId?: string | null) {
  const xaiApiKey = process.env.XAI_API_KEY
  const gemini3ProApiKey = process.env.GEMINI_3_PRO_API_KEY
  const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID
  const awsSecretAccessKey = process.env.AWS_SECRET_ACCESS_KEY
  const awsRegion = process.env.AWS_REGION

  // If user has a preference, try to use it first
  if (preferredModelId) {
    try {
      switch (preferredModelId) {
        case 'grok-4-fast-non-reasoning':
          if (xaiApiKey) {
            const xai = createXai({
              apiKey: xaiApiKey,
              baseURL: 'https://api.x.ai/v1',
            })
            return xai('grok-4-fast-non-reasoning')
          }
          break
        case 'gemini-3-pro-preview':
          if (gemini3ProApiKey) {
            const google = createGoogleGenerativeAI({
              apiKey: gemini3ProApiKey,
            })
            return google('gemini-3-pro-preview')
          }
          break
        case 'bedrock-claude':
          if (awsAccessKeyId && awsSecretAccessKey) {
            const bedrock = createAmazonBedrock({
              bedrockOptions: {
                region: awsRegion,
                credentials: {
                  accessKeyId: awsAccessKeyId,
                  secretAccessKey: awsSecretAccessKey,
                },
              },
            })
            return bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0')
          }
          break
        case 'gpt-4o': {
          const openaiPreferred = createOpenAI({
            apiKey: process.env.OPENAI_API_KEY,
          })
          return openaiPreferred('gpt-4o')
        }
      }
    } catch (error) {
      console.warn(`Preferred model ${preferredModelId} unavailable, falling back to default priority`)
    }
  }

  // Fallback to original priority-based selection
  if (xaiApiKey) {
    try {
      const xai = createXai({
        apiKey: xaiApiKey,
        baseURL: 'https://api.x.ai/v1',
      })
      return xai('grok-4-fast-non-reasoning')
    } catch (error) {
      console.warn('xAI API unavailable, falling back')
    }
  }

  if (gemini3ProApiKey) {
    try {
      const google = createGoogleGenerativeAI({
        apiKey: gemini3ProApiKey,
      })
      return google('gemini-3-pro-preview')
    } catch (error) {
      console.warn('Gemini 3 Pro API unavailable, falling back')
    }
  }

  if (awsAccessKeyId && awsSecretAccessKey) {
    const bedrock = createAmazonBedrock({
      bedrockOptions: {
        region: awsRegion,
        credentials: {
          accessKeyId: awsAccessKeyId,
          secretAccessKey: awsSecretAccessKey,
        },
      },
    })
    return bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0')
  }

  // Default fallback (OpenAI)
  const openai = createOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  })
  return openai('gpt-4o')
}
```

#### 6. Update Chat API Route

Update the chat route to use the user's model preference:

```ts
// app/api/chat/route.ts
import { getModelPreference } from '@/lib/actions/preferences'
import { getModel } from '@/lib/utils'

export async function POST(request: NextRequest) {
  try {
    const userId = await getCurrentUserIdOnServer();
    if (!userId) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
    }

    // Get user's model preference
    const modelPreference = await getModelPreference(userId);

    // Get the model based on user preference
    const model = getModel(modelPreference);

    // ... rest of your chat logic using the model
  } catch (error) {
    console.error('Error in POST /api/chat:', error);
    return NextResponse.json({ error: 'Internal Server Error' }, { status: 500 });
  }
}
```

#### 7. Update Default Values

Fix the default value in settings to match actual model IDs:

```tsx
// components/settings/components/settings.tsx
const defaultValues: Partial<SettingsFormValues> = {
  systemPrompt: "You are a planetary copilot...",
  selectedModel: "gpt-4o", // Changed from "gpt-4o" to match actual model ID
  users: [
    { id: "1", email: "admin@example.com", role: "admin" },
    { id: "2", email: "user@example.com", role: "editor" },
  ],
}
```

### Summary

The user can now select their preferred model in settings, and the application will use that model for their chats while still falling back gracefully if the selected model becomes unavailable.
This commit documents that the branch has been synchronized with the latest changes from the main branch. All recent updates have been merged, including:

- Gemini 3 Pro model support (PR #389)
- Image attachment token fixes (PR #388)
- Comprehensive E2E test suite (PR #350)
- Playwright GitHub Actions CI/CD
- All dependency updates and bug fixes

The Supabase backend implementation and collaboration features from this PR have been preserved and are compatible with the latest main branch changes.
### User description

**Overview**
This PR adds support for Google's Gemini 3 Pro model to the QCX application, allowing users to leverage Google's most advanced reasoning model with state-of-the-art multimodal understanding and agentic capabilities.
**Changes**

1. Updated `lib/utils/index.ts`: added Gemini 3 Pro to the `getModel()` function, using the `gemini-3-pro-preview` model identifier and the `GEMINI_3_PRO_API_KEY` environment variable.
2. Updated `.env.local.example`: documented the `GEMINI_3_PRO_API_KEY` environment variable.
3. Added `GEMINI_3_PRO_INTEGRATION.md`.

**Provider Priority Order**

After this change, the provider selection follows this order:

1. xAI (Grok) if `XAI_API_KEY` is set
2. Gemini 3 Pro if `GEMINI_3_PRO_API_KEY` is set (NEW)
3. AWS Bedrock if AWS credentials are set
4. OpenAI (default fallback)

**Model Capabilities**

Gemini 3 Pro (`gemini-3-pro-preview`) provides advanced reasoning, multimodal understanding, tool usage, a 1M token context window, agentic capabilities, and strong coding performance.

**Configuration**

Users can enable Gemini 3 Pro by adding to their `.env.local`:

```bash
GEMINI_3_PRO_API_KEY="your_api_key_here"
```

API keys can be obtained from Google AI Studio.
**Testing**

- The integration uses the Vercel AI SDK Google provider (`@ai-sdk/google`)
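A hedged smoke-test sketch for this path (not from the PR; assumes `GEMINI_3_PRO_API_KEY` is set and uses the AI SDK's `generateText`):

```ts
import { generateText } from 'ai'
import { createGoogleGenerativeAI } from '@ai-sdk/google'

// Minimal end-to-end check: construct the provider and make one real call.
// Fails loudly if the key is missing/invalid or the model ID is rejected.
async function smokeTestGemini3Pro() {
  const google = createGoogleGenerativeAI({
    apiKey: process.env.GEMINI_3_PRO_API_KEY,
  })
  const { text } = await generateText({
    model: google('gemini-3-pro-preview'),
    prompt: 'Reply with the single word: ok',
  })
  console.log('Gemini 3 Pro responded:', text)
}

smokeTestGemini3Pro().catch((err) => {
  console.error('Gemini 3 Pro smoke test failed:', err)
  process.exit(1)
})
```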
**References**

**Related Issues**
Resolves the request to add Gemini 3 Pro support with the `GEMINI_3_PRO_API_KEY` environment variable.

**PR Type**

Enhancement

**Description**
- Adds Gemini 3 Pro as secondary AI provider option
- Implements provider fallback chain with proper error handling
- Configures environment variable for API key management
- Includes comprehensive integration documentation
**Diagram Walkthrough**

```mermaid
flowchart LR
  A["getModel() function"] --> B{"XAI_API_KEY set?"}
  B -->|Yes| C["Use Grok"]
  B -->|No| D{"GEMINI_3_PRO_API_KEY set?"}
  D -->|Yes| E["Use Gemini 3 Pro"]
  D -->|No| F{"AWS credentials set?"}
  F -->|Yes| G["Use AWS Bedrock"]
  F -->|No| H["Use OpenAI"]
```

**File Walkthrough**
| File | Change |
| --- | --- |
| `lib/utils/index.ts` | Integrate Gemini 3 Pro into provider selection logic: `GEMINI_3_PRO_API_KEY` environment variable check and `createGoogleGenerativeAI()` provider setup |
| `.env.local.example` | Document Gemini 3 Pro API key configuration: `GEMINI_3_PRO_API_KEY` environment variable documentation |
| `GEMINI_3_PRO_INTEGRATION.md` | Add Gemini 3 Pro integration documentation |
**Summary by CodeRabbit**

- **New Features**
- **Documentation**