A desktop application that lets you chat with your Zotero library using large language models. Ask questions about your research and get answers grounded in your own documents, with citations and page numbers.
This tool indexes the PDFs in your Zotero library and uses retrieval-augmented generation (RAG) to answer questions based on their content. Every answer includes citations to the specific sources and page numbers used, making it easy to verify claims and follow up on interesting findings.
The application can run entirely on your local machine. Your documents and queries stay private.
- Semantic search across your library: Find relevant passages using natural language queries, not just keyword matching
- Cited answers: Responses include references to specific documents and page numbers
- Source transparency: View the exact text passages used to generate each answer
- Multiple LLM providers: Use local models via Ollama, or connect to OpenAI, Anthropic, Google, or other providers
- Profile support: Maintain separate workspaces with different settings and chat histories
- Automatic updates: Stay up to date with the latest features and improvements
- Cross-platform: Available for macOS, Windows, and Linux
To use this application locally, you need:
Install Zotero and sync your library:
- The app reads your local Zotero database to access PDFs and metadata
- Make sure Zotero is installed and your library is synced before first launch
- Default database locations:
  - macOS: `~/Zotero/zotero.sqlite`
  - Windows: `C:\Users\{username}\Zotero\zotero.sqlite`
  - Linux: `~/Zotero/zotero.sqlite`
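As a quick sanity check before first launch, you can confirm that the database file exists and opens as a valid SQLite database. This is a minimal sketch (the path in the usage line assumes the default macOS/Linux location); it opens the file read-only so Zotero's live database is never modified:

```python
import sqlite3
from pathlib import Path

def check_zotero_db(db_path):
    """Return True if db_path exists and opens as a SQLite database."""
    path = Path(db_path).expanduser()
    if not path.is_file():
        return False
    try:
        # mode=ro opens the database read-only, so Zotero is never touched.
        conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
        conn.execute("SELECT 1")
        conn.close()
        return True
    except sqlite3.Error:
        return False

# Example (default macOS/Linux location):
# check_zotero_db("~/Zotero/zotero.sqlite")
```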
Install Ollama to run language models locally:
# macOS/Linux - Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Or download from: https://ollama.com/download

Download models:
# Recommended chat models (choose one based on your hardware)
ollama pull llama3.2 # Lightweight, fast (1B or 3B)
ollama pull llama3.1:8b # Good balance of speed and quality
ollama pull qwen2.5:7b  # Strong alternative to Llama

Note: The app uses SentenceTransformers for embeddings (downloaded automatically on first use), not Ollama embedding models.
Verify installation:
ollama list
# Should show downloaded models

Alternative: You can also use cloud providers (OpenAI, Anthropic, Google) by configuring API keys in the app settings.
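You can also check programmatically which models a running Ollama server has pulled: Ollama's local API exposes a `GET /api/tags` endpoint (by default at `http://localhost:11434`) that returns the installed models as JSON. A minimal sketch, assuming the server is running:

```python
import json
import urllib.request

def model_names(tags_json):
    """Extract model names from an /api/tags response payload."""
    return [m["name"] for m in tags_json.get("models", [])]

def list_ollama_models(base_url="http://localhost:11434"):
    """Query a running Ollama server for its installed models."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return model_names(json.load(resp))
```

After pulling a model, `list_ollama_models()` should include an entry such as `llama3.2:latest`.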
Download the latest installer from Releases:
- Download `Zotero-RAG-Assistant-{version}-mac-arm64.dmg` (Apple Silicon) or `-mac-x64.dmg` (Intel)
- Open the DMG file
- Drag the app icon to the Applications folder
- Since the app isn't signed by Apple, you will need to run this command in your Terminal first:
  xattr -dr com.apple.quarantine "/Applications/ZoteroRAG.app"
- Launch from Applications or Spotlight
System Requirements: macOS 10.13 (High Sierra) or later
- Download `Zotero-RAG-Assistant-{version}-win-x64.exe`
- Run the installer (it may show a SmartScreen warning on first run; click "More info", then "Run anyway")
- Choose an installation location (default: `C:\Users\{username}\AppData\Local\Programs\`)
- Launch from the Start Menu or desktop shortcut
System Requirements: Windows 10 or later (64-bit)
Note: Windows builds are now available but may show SmartScreen warnings since the app is not code-signed. See docs/WINDOWS_BUILD_GUIDE.md for build instructions and troubleshooting.
Download the .deb package from Releases and install:
# Download the latest .deb package (amd64 or arm64)
wget https://github.com/aahepburn/Zotero-RAG-Assistant/releases/latest/download/ZoteroRAG-{version}-linux-amd64.deb
# Install (automatically handles dependencies)
sudo apt install ./ZoteroRAG-{version}-linux-amd64.deb
# Launch from application menu or terminal
zotero-rag-assistant

System Requirements: Debian/Ubuntu-based distributions (Ubuntu 18.04+, Debian 10+, or equivalent)
Note: The installer includes Python and all dependencies. No additional setup required. For other distributions, see docs/LINUX_PACKAGING.md.
For development or customization:
Prerequisites:
- Python 3.8+
- Node.js 16+
- Zotero with local library
Setup:
git clone https://github.com/aahepburn/Zotero-RAG-Assistant.git
cd Zotero-RAG-Assistant
# Python environment
python3 -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -r requirements.txt
# Node dependencies
npm install
cd frontend && npm install && cd ..
# Run development mode (all services)
npm run dev

See docs/DESKTOP_APP.md for detailed development instructions.
- Zotero Database Location: The app needs to know where your Zotero database is located. Set this in the Settings panel or in a `.env` file:
  - macOS: `/Users/YOUR_USERNAME/Zotero/zotero.sqlite`
  - Windows: `C:\Users\YOUR_USERNAME\Zotero\zotero.sqlite`
  - Linux: `~/Zotero/zotero.sqlite`
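A `.env` file for this step might look like the fragment below. Note that the `ZOTERO_DB_PATH` variable name here is illustrative; check the Settings panel or the project's sample configuration for the exact key the app reads:

```ini
# Path to your Zotero SQLite database (macOS/Linux example;
# the variable name is a placeholder -- confirm it against the app's docs)
ZOTERO_DB_PATH=/Users/YOUR_USERNAME/Zotero/zotero.sqlite
```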
- Choose an LLM Provider:
  - Ollama (recommended for local use): Install from ollama.com, then run `ollama pull llama3.2` or another model
  - OpenAI, Anthropic, etc.: Add your API key in Settings
- Index Your Library: Click "Index Library" to process your PDFs. This creates embeddings for semantic search. Initial indexing may take a while depending on library size.
- Type questions in natural language: "What methods are used to study X?" or "Compare the arguments about Y in my readings"
- View citations in the Evidence Panel to see which documents and pages were used
- Click document titles to open them in Zotero or your PDF reader
- Create multiple profiles if you work with different document collections
Architecture:
- Backend: FastAPI (Python) with ChromaDB for vector storage
- Frontend: React with TypeScript
- Desktop: Electron wrapper with auto-updates
- Embeddings: BGE-base (768-dimensional) with hybrid BM25 keyword search
- Retrieval: Cross-encoder re-ranking for improved precision
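Hybrid retrieval of this kind has to merge two ranked lists: one from BM25 keyword search and one from vector similarity. A common recipe is reciprocal rank fusion (RRF); the sketch below is illustrative of the general technique and not necessarily the exact fusion method this app uses:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge ranked lists of doc IDs with reciprocal rank fusion.

    rankings: list of lists, each ordered best-first.
    Each document scores sum(1 / (k + rank)) over the lists it
    appears in; higher fused score ranks earlier.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a keyword ranking with a vector-search ranking.
bm25 = ["doc_a", "doc_b", "doc_c"]
vector = ["doc_b", "doc_c", "doc_a"]
fused = reciprocal_rank_fusion([bm25, vector])
```

Documents ranked highly by both retrievers bubble to the top of the fused list, which is then typically passed to the cross-encoder for re-ranking.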
Privacy: All processing happens locally. If you use a cloud LLM provider (OpenAI, Anthropic, etc.), your queries and retrieved document chunks are sent to their API, but your full library never leaves your machine.
To create distribution packages:
npm run package:mac # macOS .dmg and .zip
npm run package:win # Windows .exe installer
npm run package:linux # Linux .AppImage and .deb
npm run package:all     # All platforms

Built packages appear in the release/ directory.
For complete build instructions, see docs/BUILD_CHECKLIST.md.
**Complete Documentation Index**
Quick Links:
- Users: Prompting Guide · Provider Setup
- Developers: Build Checklist · Desktop App Guide
- Platform-Specific: Windows Build · Linux Packaging
MIT
Contributions are welcome. Please open an issue to discuss significant changes before submitting a pull request.