ConvoVault is an AI-powered desktop application built with Python and PyQt5. It provides a beautiful interface to parse, archive, and query your conversation histories from platforms like ChatGPT and Claude.
ConvoVault allows users to convert their exported chat histories into visually appealing, searchable local HTML archives. Furthermore, it features an integrated Multi-LLM Chat system powered by LangChain, allowing you to interact intelligently with your local chat archives using models from OpenAI, Anthropic, Gemini, DeepSeek, and locally via Ollama.
- Chat Parsing & Archiving: Convert exported ChatGPT and Claude conversation JSON/HTML into beautiful, responsive local HTML archives (a parsing sketch follows this list).
- Multi-LLM Question Answering: Chat with your document context using LangChain! Supports OpenAI, Anthropic, Google Gemini, DeepSeek, and local Ollama integrations.
- Seamless HTML Appending: AI responses seamlessly integrate into the beautiful Markdown UI of the loaded Chat Archive.
- Local Storage: Preferences and API keys are securely stored locally via SQLite (`convovault.db`), ensuring total privacy.
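To give a sense of the parsing step, here is a minimal sketch of extracting messages from a ChatGPT `conversations.json` export. The `mapping` layout shown matches common exports, but field names can vary between export versions, so treat this as an illustration rather than ConvoVault's actual parser:

```python
# Hedged sketch: pull (role, text) pairs out of a ChatGPT conversations.json export.
# The "mapping" structure below reflects common exports and may differ in yours.
import json

def extract_messages(path: str):
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    for convo in conversations:
        for node in convo.get("mapping", {}).values():
            msg = node.get("message") or {}
            parts = (msg.get("content") or {}).get("parts") or []
            text = " ".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                yield msg["author"]["role"], text
```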
ConvoVault natively supports 100% offline, private inference using Ollama. To configure this:
- Download and install Ollama.
- Open your terminal and run a model (e.g. `ollama run llama3`).
- In ConvoVault, open the ⚙️ Settings tab, select Ollama as your provider, and enter `llama3` as your model name. No API key is required!
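Under the hood, a LangChain Ollama chat model can reach the local server without any key. Here is a minimal sketch (using the `langchain-ollama` package, which is an assumption about how ConvoVault wires this up):

```python
# Minimal sketch: talking to a local llama3 model through LangChain.
# Requires `pip install langchain-ollama` and a model started via `ollama run llama3`.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3")
reply = llm.invoke("Summarize the key points of this conversation archive.")
print(reply.content)  # plain text; no API key or cloud access needed
```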
Ensure you have Python 3 installed. You can install the required dependencies using pip.
```
pip install -r requirements.txt
pip install PyQt5 PyQtWebEngine langchain
```

Note: If you choose to use cloud APIs, enter your API keys in the app's Settings screen; they are stored strictly on your local disk in the SQLite DB.
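For a sense of what that local storage looks like, here is a hypothetical sketch of key storage in `convovault.db`; the table and column names are assumptions, not the app's actual schema:

```python
# Hypothetical illustration of local API-key storage in convovault.db.
# Table and column names are assumptions, not ConvoVault's real schema.
import sqlite3

conn = sqlite3.connect("convovault.db")
conn.execute("CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")
conn.execute(
    "INSERT OR REPLACE INTO settings VALUES (?, ?)",
    ("openai_api_key", "sk-your-key-here"),  # placeholder key, never sent anywhere
)
conn.commit()
conn.close()
```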
To run ConvoVault locally:
```
python ConvoVault.py
```

The project includes a PyInstaller configuration (`ConvoVault.spec`) for building a standalone executable:
```
pip install pyinstaller
pyinstaller ConvoVault.spec
```

Developed by Mohamed Alaaeldin.
- Removed: Local AI question answering built on heavy PyTorch and HuggingFace dependencies (deleted `QuestionAnswering.py`).
- Added: Full LangChain integration via `LLMManager.py`, supporting OpenAI, Anthropic, DeepSeek, Google Gemini, and fully offline models via Ollama (a provider-factory sketch follows this changelog).
- Added: Persistent SQLite-backed Settings tab inside the AI Chat interface to store API keys and manage provider preferences securely on your local machine.
- Improved: AI Chat responses are now injected seamlessly into the exported Markdown and syntax-highlighted HTML views.
- Note: Cloud LLM interactions have so far been tested primarily with the Google Gemini model.
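As a rough picture of what a LangChain provider factory like `LLMManager.py` might look like, here is a hedged sketch; the function name and package choices are assumptions rather than the app's actual code, and DeepSeek is reached through its OpenAI-compatible endpoint:

```python
# Hedged sketch of a multi-provider factory in the spirit of LLMManager.py.
# Package and parameter choices are assumptions, not the app's actual code.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_ollama import ChatOllama

def make_llm(provider: str, model: str, api_key: str | None = None):
    if provider == "openai":
        return ChatOpenAI(model=model, api_key=api_key)
    if provider == "anthropic":
        return ChatAnthropic(model=model, api_key=api_key)
    if provider == "gemini":
        return ChatGoogleGenerativeAI(model=model, google_api_key=api_key)
    if provider == "deepseek":
        # DeepSeek exposes an OpenAI-compatible API endpoint.
        return ChatOpenAI(model=model, api_key=api_key, base_url="https://api.deepseek.com")
    if provider == "ollama":
        return ChatOllama(model=model)  # local inference, no key needed
    raise ValueError(f"Unknown provider: {provider}")
```

With a factory of this shape, the Settings tab only needs to persist a (provider, model, api_key) triple in SQLite to reconstruct the active model on startup.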
