MVP Plans #2

Open · 6 of 18 tasks
Dax911 opened this issue Nov 5, 2023 · 3 comments

Dax911 commented Nov 5, 2023

Putting this here for communication and because if I don't take notes I will lose my train of thought. Thanks ChatGPT for helping me organize this.

Creating a macOS application that integrates with a local Ollama model and is triggered by a hotkey involves several steps. Here's a high-level overview of the tasks you'd need to accomplish to create a minimum viable product (MVP):

  1. Set Up a Local Server for the Ollama Model: We will need a local server that acts as a "parser" in front of the Ollama service (which would just be started as a brew service) and handles requests. This server would receive text and return the model's response; see the request sketch after this list. (It could eventually support plugins for prompt-engineering features.)

  2. Develop a Tauri Application: Tauri is a framework for building desktop applications using web technologies. You can use it to create a lightweight and secure window for your chat interface.

  3. Implement Hotkey Functionality: We will need to use a library that can register global hotkeys on macOS. This library would listen for your specific hotkey combination and trigger the Tauri window to open.

  4. Clipboard and Selection Integration: The application should be able to grab the current selection or clipboard content when the hotkey is pressed.

  5. Create a User Interface: The Tauri window should have a user-friendly interface for chatting with the Ollama model, including a model selector and a chat display.

  6. Communication Between Tauri and the Local Server: Implement the logic to pass messages back and forth between the Tauri application and the local Ollama server.

  7. Packaging and Distribution: Once your application is ready, you'll need to package it for distribution so that others can easily install and use it on their macOS systems.
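As a rough sketch of the request/response shape that server would be proxying, here's a minimal Rust example against Ollama's default endpoint. The model name, prompt, and crate choices (reqwest, tokio, serde/serde_json) are placeholders, not decisions:

```rust
// Minimal sketch of a request to a locally running Ollama instance.
// Assumes Ollama is serving on its default port (11434) and that a
// `llama2` model has already been pulled; adjust to taste.
// Crates assumed: reqwest (with the `json` feature), tokio, serde, serde_json.
use serde::Deserialize;
use serde_json::json;

#[derive(Deserialize)]
struct GenerateResponse {
    response: String,
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let res: GenerateResponse = reqwest::Client::new()
        .post("http://localhost:11434/api/generate")
        .json(&json!({
            "model": "llama2",
            "prompt": "Summarize the selected text: ...",
            "stream": false // one JSON object back instead of a stream
        }))
        .send()
        .await?
        .json()
        .await?;
    println!("{}", res.response);
    Ok(())
}
```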

Now, let's create a task list for a GitHub issue to organize the development of this MVP:


Title: Develop a macOS Application for Local Ollama Model Interaction with Global Hotkey

Body:

Objective

Create a macOS application that allows users to interact with a local Ollama model using a global hotkey. The application will present a chat interface where users can send and receive messages from the Ollama model.

MVP Features

  • Global Hotkey Activation: The application should be triggered by a hotkey (e.g., Shift + Space); see the sketch after this list.
  • Clipboard/Selection Integration: Automatically use the selected text or clipboard content when the application is activated.
  • Chat Interface: A simple and clean chat window for sending and receiving messages.
  • Model Selector: A dropdown to select the Ollama model if multiple models are available.
  • Local Server Communication: The ability to send requests to and receive responses from a local Ollama server.
  • Tauri Window: Use Tauri for creating the application window to ensure a lightweight and secure application.
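Here's a minimal sketch of how the hotkey and clipboard features might wire together, assuming Tauri 1.x with the `global-shortcut` and `clipboard` Cargo features enabled; the "main" window label and "hotkey-text" event name are made up for illustration:

```rust
// Sketch: register Shift+Space, grab the clipboard, and surface the window.
// Assumes Tauri 1.x with the `global-shortcut` and `clipboard` features
// enabled in Cargo.toml. The "main" window label and "hotkey-text" event
// name are placeholders.
use tauri::{ClipboardManager, GlobalShortcutManager, Manager};

fn main() {
    tauri::Builder::default()
        .setup(|app| {
            let handle = app.handle();
            let mut shortcuts = app.handle().global_shortcut_manager();
            shortcuts.register("Shift+Space", move || {
                // Read whatever is on the clipboard; true "current selection"
                // capture would need extra macOS accessibility work.
                let text = handle
                    .clipboard_manager()
                    .read_text()
                    .ok()
                    .flatten()
                    .unwrap_or_default();
                if let Some(window) = handle.get_window("main") {
                    let _ = window.emit("hotkey-text", text); // frontend listens for this
                    let _ = window.show();
                    let _ = window.set_focus();
                }
            })?;
            Ok(())
        })
        .run(tauri::generate_context!())
        .expect("error while running the Tauri application");
}
```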

Tasks

Potential Libraries/Tools

  • Tauri for the application framework (see the IPC sketch after this list).
  • Rust or Go for the backend server. (I don't know Go and am learning Rust; open to PRs for either.)
  • A macOS hotkey library for global hotkey registration.
  • Frontend technologies (HTML, CSS, JS) for the chat interface. (My specialty)
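For the Tauri side of the communication, something like a `#[tauri::command]` that the frontend calls via `invoke` could forward requests to the local server. The URL and JSON shape below are hypothetical:

```rust
// Sketch of the Tauri <-> backend bridge: the webview calls
// `invoke("ask_model", { prompt, model })` and the command forwards the
// request to the local "parser" server from step 1. The URL and JSON
// shape are hypothetical.
#[tauri::command]
async fn ask_model(prompt: String, model: String) -> Result<String, String> {
    reqwest::Client::new()
        .post("http://localhost:3000/chat") // hypothetical parser-server endpoint
        .json(&serde_json::json!({ "model": model, "prompt": prompt }))
        .send()
        .await
        .map_err(|e| e.to_string())?
        .text()
        .await
        .map_err(|e| e.to_string())
}

fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![ask_model])
        .run(tauri::generate_context!())
        .expect("error while running the Tauri application");
}
```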

Testing & Validation

  • Ensure the hotkey consistently activates the application.
  • Verify the application correctly handles text selection and clipboard content.
  • Test communication with the local Ollama server.
  • Validate the user interface is intuitive and responsive.

This issue outlines the basic requirements and tasks for the project. You can add more details or break down the tasks further as needed. Once you have this issue created, you can start organizing the work into milestones, assigning tasks to contributors, and tracking progress.

Dax911 commented Nov 6, 2023

Note to self: Ollama doesn't support conversation storage. We will have to write our own memory and context provider to pass back to the API.
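One note: `/api/generate` does return an opaque `context` array that can be sent back on the next request to continue a conversation, so a first cut of the memory provider might just store that per thread. A rough sketch (model name and types are assumptions):

```rust
// Sketch of a minimal per-thread memory provider. With `stream: false`,
// /api/generate returns an opaque `context` array; sending it back on the
// next request continues the conversation. The model name is a placeholder.
use serde::Deserialize;
use serde_json::json;

#[derive(Deserialize)]
struct GenerateResponse {
    response: String,
    context: Option<Vec<i64>>, // opaque conversation state from Ollama
}

#[derive(Default)]
struct Conversation {
    context: Option<Vec<i64>>,
}

impl Conversation {
    async fn send(
        &mut self,
        client: &reqwest::Client,
        prompt: &str,
    ) -> Result<String, reqwest::Error> {
        let mut body = json!({ "model": "llama2", "prompt": prompt, "stream": false });
        if let Some(ctx) = &self.context {
            body["context"] = json!(ctx); // continue the previous turn
        }
        let reply: GenerateResponse = client
            .post("http://localhost:11434/api/generate")
            .json(&body)
            .send()
            .await?
            .json()
            .await?;
        self.context = reply.context; // remember state for the next turn
        Ok(reply.response)
    }
}
```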

simoncollins commented

Yes, we'll probably need something like the OpenAI Assistants API that provides:

  • management of separate conversation threads
  • logic to smartly pack as much of the thread history as possible into the input context
  • interaction with tools/functions/some sort of knowledge store

Later on, there are lots of possibilities for extracting knowledge from threads for long-term memory, etc.
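A rough sketch of the packing logic, walking the thread newest-first until a budget is hit (the chars-per-token ratio is a crude stand-in for a real tokenizer):

```rust
// Sketch: walk the thread newest-first and keep messages until a rough
// budget is exhausted, then restore chronological order. The 4-chars-per-
// token estimate is a crude assumption, not a real tokenizer.
struct Message {
    role: String, // "user" or "assistant"
    text: String,
}

fn pack_history(thread: &[Message], max_tokens: usize) -> String {
    let budget_chars = max_tokens * 4; // crude chars-per-token heuristic
    let mut picked: Vec<String> = Vec::new();
    let mut used = 0;
    for msg in thread.iter().rev() {
        let line = format!("{}: {}", msg.role, msg.text);
        if used + line.len() > budget_chars {
            break; // older messages no longer fit
        }
        used += line.len();
        picked.push(line);
    }
    picked.reverse();
    picked.join("\n")
}
```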

sammcj commented Nov 16, 2023

Re: Assistants API - https://github.com/transitive-bullshit/OpenOpenAI
