Add the attachments feature to LLMSession #31
Conversation
…s and improve message handling
… and updating API response structure
Summary of Changes
Hello @tattn, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the LocalLLMClient by integrating multi-modal attachment support directly into the LLMSession class. This refactoring centralizes message history and interaction logic within LLMSession, streamlining the API for sending prompts with attachments and improving the example application's architecture. Additionally, it includes improvements to file downloading robustness and verbosity control for Llama models.
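As a rough illustration of how a prompt plus an image attachment might flow through the session, here is a minimal sketch; the module name, the `respond(to:attachments:)` method, and `LLMAttachment.image(_:)` are assumptions for illustration, not the library's confirmed API.

```swift
import CoreGraphics
import LocalLLMClient // module name assumed

// Hypothetical usage: the session keeps the conversation history itself,
// so the caller only supplies the new prompt and its attachments.
func askAboutImage(session: LLMSession, image: CGImage) async throws -> String {
    let attachment = LLMAttachment.image(image)   // assumed factory/case name
    return try await session.respond(             // assumed method signature
        to: "What is in this picture?",
        attachments: [attachment]
    )
}
```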
Highlights
- **Core `LLMSession` Enhancements:** The `LLMSession` is now the central component for managing LLM interactions, including maintaining message history and handling multi-modal inputs (attachments). It has been made `@Observable` for seamless integration with SwiftUI, and its internal `Generator` class now uses `Locked` wrappers for thread-safe access to the client and messages.
- **Multi-modal Attachment Support:** A new `LLMAttachment` struct has been introduced, allowing generic attachment types (currently images) to be passed alongside text prompts to the LLM. The `LLMInput.Message` struct has also been updated to conform to `Identifiable` and include an `id` property for better SwiftUI integration (see the SwiftUI sketch after this list).
- **Example Application Refactoring:** The `LocalLLMClientExample` app's UI components (`BottomBar`, `ChatView`) and view model (`ChatViewModel`) have been updated to leverage the new `LLMSession` and `LLMAttachment` capabilities, simplifying the message flow, enabling image input, and centralizing message history management within the `LLMSession`.
- **Robust File Downloading:** The underlying file downloading mechanism has been improved. The `FileDownloader` now verifies downloaded files by size in addition to existence, and the `Downloader` cleans up partially downloaded files on error, enhancing reliability. The `HuggingFaceAPI` has been updated to retrieve file sizes for this purpose (a generic sketch of the size check follows this list).
- **Centralized Verbosity Control:** Verbosity settings for Llama models are now managed through `LlamaClient.Parameter.Options`, providing a more consistent and centralized approach to debugging output across the Llama client and its multimodal context.
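A sketch of how the `@Observable` session and `Identifiable` messages could drive a SwiftUI list is shown below; the `messages`, `role`, and `content` properties are assumed names, not verified against the library.

```swift
import SwiftUI
import LocalLLMClient // module name assumed

struct ChatHistoryView: View {
    // Because LLMSession is @Observable, SwiftUI re-renders whenever its
    // message history changes; no manual publishers are needed.
    let session: LLMSession

    var body: some View {
        // LLMInput.Message conforming to Identifiable lets List iterate directly.
        List(session.messages) { message in
            VStack(alignment: .leading) {
                Text(String(describing: message.role))
                    .font(.caption)
                Text(message.content)
            }
        }
    }
}
```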
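The size-based download verification can also be sketched generically; this is an illustrative helper, not the library's actual `FileDownloader` code, and `expectedSize` would come from something like the remote file metadata.

```swift
import Foundation

// Illustrative check: a download only counts as complete if the file exists
// AND its on-disk size matches the size reported by the remote API.
func isDownloadComplete(at fileURL: URL, expectedSize: Int64) -> Bool {
    let fileManager = FileManager.default
    guard fileManager.fileExists(atPath: fileURL.path) else { return false }
    let attributes = try? fileManager.attributesOfItem(atPath: fileURL.path)
    let actualSize = (attributes?[.size] as? NSNumber)?.int64Value ?? -1
    return actualSize == expectedSize
}
```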
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
The pull request introduces the attachments feature to LLMSession, enhancing the example app and the underlying library. Several improvements are suggested, including safer unwrapping of optionals, allowing messages with only attachments, improving the efficiency of file download checks, and using more type-safe access to attachment content.
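As one way to read the "type-safe access to attachment content" suggestion, an enum payload with exhaustive pattern matching avoids force-casts and force-unwraps; the case names below are hypothetical, not the library's actual types.

```swift
import CoreGraphics

// Hypothetical payload shape used only to illustrate type-safe access;
// the real LLMAttachment content may be modeled differently.
enum AttachmentContent {
    case image(CGImage)
    case text(String)
}

func describe(_ content: AttachmentContent) -> String {
    // Exhaustive switch: adding a new case becomes a compile-time error here,
    // and no force-unwrapping or force-casting is needed.
    switch content {
    case .image(let image):
        return "image \(image.width)×\(image.height)"
    case .text(let text):
        return "text (\(text.count) characters)"
    }
}
```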
… and update model handling