Conversation
Summary of Changes
Hello @kapicic, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new feature that provides intelligent prompt suggestions to users based on their selected data source. It includes a new backend API endpoint, a service for generating LLM-powered suggestions from database schemas, and front-end UI integration to display and interact with these suggestions, significantly enhancing the user's ability to quickly formulate queries.
Highlights
- New API Endpoint for Suggestions: I've added a new Next.js API route (`/api/suggestions`) that handles requests for prompt suggestions. This endpoint dynamically provides suggestions based on the selected data source, offering hardcoded prompts for a sample database and leveraging an LLM for schema-based suggestions for other databases.
- Integrated UI for Prompt Suggestions: The `HomepageTextarea` component has been updated to include a new 'Suggestions' button. Clicking this button fetches suggestions from the new API. The UI now displays these suggestions as clickable buttons below the chat input, allowing users to quickly populate the textarea with a relevant prompt.
- LLM-Powered Schema-Based Suggestions: A new service (`suggestionService.ts`) has been introduced to generate intelligent prompt suggestions. This service connects to the user's selected database, extracts its schema, and then uses an LLM to generate three diverse and actionable prompts tailored to the database's structure, focusing on potential applications like web apps, dashboards, or analytics platforms.
- Dedicated Suggestions UI Component: A new reusable React component, `Suggestions.tsx`, has been created. This component renders the fetched prompt suggestions as a series of styled, clickable buttons, improving the modularity and reusability of the UI.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This PR introduces a new feature to provide prompt suggestions based on a selected data source. It adds a new API endpoint, a service to generate suggestions using an LLM, and integrates this into the UI. The implementation is solid, but I have a few suggestions to improve maintainability, robustness, and correctness, particularly around a contradiction in the LLM prompt, logging, and error handling.
```ts
model: llm.instance,
maxTokens: llm.maxOutputTokens,
system: systemPrompt,
messages: [{ role: 'user', content: 'Generate 3 dashboard suggestions based on this schema.' }],
```
The user message provided to the LLM, Generate 3 dashboard suggestions based on this schema., directly contradicts the instructions in the system prompt. The system prompt asks for a diverse set of suggestions (web apps, dashboards, etc.) and explicitly states "don't suggest 3 dashboards". This contradiction will likely cause the LLM to ignore the diversity requirement and only generate dashboard-related suggestions. The user message should be made more generic to align with the system prompt's goal.
```diff
- messages: [{ role: 'user', content: 'Generate 3 dashboard suggestions based on this schema.' }],
+ messages: [{ role: 'user', content: 'Generate 3 suggestions based on this schema.' }],
```
app/api/suggestions/route.ts
Outdated
```ts
}

// Check if it's the sample database
const isSampleDatabase = dataSource.connectionString === 'sqlite://sample.db';
```
app/api/suggestions/route.ts
Outdated
```ts
    suggestions,
  });
} catch (error) {
  console.error('Error generating suggestions:', error);
```
For consistency with other services like suggestionService.ts, please use the structured logger instead of console.error. This helps standardize logging across the application, which is beneficial for debugging and monitoring.
```ts
import { logger } from '~/utils/logger';

logger.error('Error generating suggestions:', error);
```

```ts
      console.error('Failed to fetch suggestions:', data.error);
    }
  } catch (error) {
    console.error('Error fetching suggestions:', error);
```
The catch block currently only logs an error to the console. This means if a network error occurs or the server returns an invalid JSON response, the user won't receive any feedback that the operation failed. It would be a better user experience to display a toast notification here, similar to how API errors are handled in the else block.
```ts
toast.error('Failed to fetch suggestions. Please try again.');
console.error('Error fetching suggestions:', error);
```
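To make the point concrete, here is a minimal, hedged sketch of the pattern being suggested: a fetch helper that surfaces both API errors and network/parse errors to the user. The function name, endpoint path, and the injected `toastError` callback are illustrative assumptions, not the PR's actual code.

```typescript
// Illustrative sketch only: `fetchSuggestions` and the injected `toastError`
// callback are assumptions, not the PR's actual code. The key point is that
// BOTH the !res.ok branch and the catch block notify the user, so network
// failures and invalid JSON are not silently swallowed.
async function fetchSuggestions(
  fetchImpl: typeof fetch,
  toastError: (msg: string) => void,
): Promise<string[]> {
  try {
    const res = await fetchImpl('/api/suggestions', { method: 'POST' });
    const data = await res.json();
    if (!res.ok) {
      toastError('Failed to fetch suggestions. Please try again.');
      console.error('Failed to fetch suggestions:', data.error);
      return [];
    }
    return data.suggestions as string[];
  } catch (error) {
    // Network failures and JSON parse errors land here.
    toastError('Failed to fetch suggestions. Please try again.');
    console.error('Error fetching suggestions:', error);
    return [];
  }
}
```

Injecting `fetchImpl` and `toastError` is only done here to keep the sketch self-contained; in the component they would just be the global `fetch` and the app's toast library.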
app/components/chat/Suggestions.tsx
Outdated
```tsx
<div className="flex flex-wrap gap-2 mt-3">
  {suggestions.map((suggestion, index) => (
    <button
      key={index}
```
Using the array index as a key for list items is an anti-pattern in React, as it can lead to incorrect behavior if the list is reordered or items are added or removed. It's better to use a unique and stable identifier from the data itself. Since the suggestion strings are short and likely to be unique within a single API response, you can use the suggestion string as the key.
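As a small aside to the suggested `key={suggestion}` change: React keys must be unique among siblings, so if the API could ever return duplicate suggestion strings, a defensive dedupe before rendering keeps the keys safe. The sample values below are made up for illustration.

```typescript
// Hypothetical example values: duplicate suggestion strings would collide
// as React keys, so dedupe them before rendering. Set preserves insertion
// order while dropping repeats.
const fetched = ['build a sales table', 'create a dashboard', 'build a sales table'];
const suggestions = Array.from(new Set(fetched));
console.log(suggestions); // ['build a sales table', 'create a dashboard']
```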
```tsx
key={suggestion}
```

```ts
  throw new Error('Failed to create suggestions');
}

logger.info(`Generated schema-based suggestions for data source ${dataSource}`);
```
Logging the entire `dataSource` object will result in `[object Object]` in the log output, which is not useful for debugging. Please log a specific, meaningful identifier like `dataSource.id` instead.
```diff
- logger.info(`Generated schema-based suggestions for data source ${dataSource}`);
+ logger.info(`Generated schema-based suggestions for data source: ${dataSource.id}`);
```
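For illustration, a tiny self-contained example of the pitfall (the `dataSource` object here is a made-up stand-in):

```typescript
// Interpolating an object into a template literal calls its default
// toString(), which produces "[object Object]" rather than anything useful.
const dataSource = { id: 'ds_123', connectionString: 'postgres://example' };

const bad = `Generated suggestions for data source ${dataSource}`;
const good = `Generated suggestions for data source: ${dataSource.id}`;

console.log(bad);  // ends with "[object Object]"
console.log(good); // ends with "ds_123"
```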
I think it would look better if the bubbles were centre-aligned.
```ts
  suggestions = ['create a revenue dashboard', 'build a sales table'];
} else {
  // Generate schema-based suggestions for non-sample databases
  suggestions = await generateSchemaBasedSuggestions(dataSource);
```
I think we'd want to cache these suggestions along with the schema, to avoid calling LLM every time a user opens the homepage. Also they would load instantly.
app/api/suggestions/route.ts
Outdated
```ts
let suggestions: string[];

if (isSampleDatabase) {
  suggestions = ['create a revenue dashboard', 'build a sales table'];
```
```diff
- suggestions = ['create a revenue dashboard', 'build a sales table'];
+ suggestions = ['create a revenue dashboard', 'make user management app', 'build a sales overview page'];
```