PrivyBot is a private, local-first chatbot interface powered by Ollama and LangChain. It runs in your browser and stores all conversations locally on your machine. No internet connection required after setup! For up-to-date answers, you can optionally turn on internet search, which allows PrivyBot to retrieve relevant results using DuckDuckGo and include source links in its responses. When the feature is off, the chatbot relies solely on local knowledge and memory.
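For a sense of how such a search toggle can be wired, here is a minimal sketch using the DuckDuckGo tool that ships with `langchain-community` (installed in the steps below). The function name and prompt format are illustrative assumptions, not PrivyBot's actual code, and the tool needs the `duckduckgo-search` backend installed:

```python
# Illustrative sketch only -- not PrivyBot's actual code.
# Assumes the duckduckgo-search backend is installed:
#   pip install duckduckgo-search
from langchain_community.tools import DuckDuckGoSearchResults

search_tool = DuckDuckGoSearchResults()  # returns snippets with source links

def build_prompt(question: str, use_internet: bool) -> str:
    """Prepend web snippets to the prompt only when search is enabled."""
    if not use_internet:
        return question  # offline: rely on local knowledge and memory
    snippets = search_tool.run(question)
    return f"Use these search results and cite the links:\n{snippets}\n\nQuestion: {question}"
```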
You need to install:

- Python 3.10+
- Ollama (to run local LLMs like LLaMA or Mistral)
- pip (Python's package manager)
- A terminal or Bash shell (e.g., PowerShell, Terminal.app, or Git Bash)
### Windows Setup

1. **Install Python 3.10+**
   Download from: https://www.python.org/downloads/windows
   ✅ Make sure to check the box **Add Python to PATH** during installation.
2. **Install Git Bash (optional but recommended)**
   Download from: https://gitforwindows.org
   This gives you a Linux-like terminal.
3. **Install Ollama**
   Download and run the installer: https://ollama.com/download
   Then open a terminal (or PowerShell) and run:
   ```bash
   ollama run llama3
   ```
4. **Clone or unzip this repository**
5. **Install the required Python libraries**
   In your terminal:
   ```bash
   pip install fastapi uvicorn langchain-community jinja2 requests beautifulsoup4 serpapi faiss-cpu python-multipart
   ```
6. **Start the server**
   ```bash
   python -m uvicorn main:app --reload
   ```
   (What `main:app` refers to is sketched just after these steps.)
7. **Open your browser**
   Go to: http://127.0.0.1:8000
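`main:app` tells uvicorn to import `main.py` and serve the object named `app` inside it. The repository ships its own `main.py`; purely as a hedged sketch, a minimal skeleton that this command could load looks like:

```python
# main.py -- minimal skeleton for the `uvicorn main:app` target.
# Illustration only; the real main.py in this repo carries the full chat logic.
from fastapi import FastAPI, Request
from fastapi.templating import Jinja2Templates

app = FastAPI()
templates = Jinja2Templates(directory="templates")  # assumed template folder

@app.get("/")
async def index(request: Request):
    # Render the chat UI at http://127.0.0.1:8000
    return templates.TemplateResponse("index.html", {"request": request})
```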
### macOS Setup

1. **Install Homebrew (if not installed)**
   ```bash
   /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
   ```
2. **Install Python 3**
   ```bash
   brew install python
   ```
3. **Install Ollama**
   Download from: https://ollama.com/download, or install via Homebrew:
   ```bash
   brew install ollama
   ```
4. **Run a model**
   ```bash
   ollama run llama3
   ```
   (A quick way to verify the model server is up is sketched after these steps.)
5. **Navigate to your project directory**
6. **Install dependencies**
   ```bash
   pip3 install fastapi uvicorn langchain-community jinja2 requests beautifulsoup4 serpapi faiss-cpu python-multipart
   ```
7. **Start the server**
   ```bash
   python3 -m uvicorn main:app --reload
   ```
8. **Open your browser**
   Visit: http://127.0.0.1:8000
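If the page loads but responses fail, you can confirm Ollama is actually serving and see which models are pulled. Ollama exposes a local REST API on port 11434; this check uses the `requests` library already in the dependency list:

```python
# Quick sanity check: is Ollama running, and which models are pulled?
# Queries Ollama's local REST API (default port 11434).
import requests

try:
    resp = requests.get("http://localhost:11434/api/tags", timeout=5)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama is up. Installed models:", models or "none -- run `ollama run llama3` first")
except requests.ConnectionError:
    print("Ollama is not running -- start it and try again.")
```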
### Features

- All chats are stored locally in the `chats/` folder (one possible storage layout is sketched below this list).
- You can select from available Ollama models.
- You can delete or reload old chats from the sidebar.
- Nothing is sent to the cloud.
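The on-disk format is up to the app; as one hedged example of what "stored locally" can mean, each conversation could live as a JSON file under `chats/`. The file names and message schema below are assumptions for illustration, not PrivyBot's actual layout:

```python
# Sketch of local chat persistence -- the file layout and schema are assumed,
# not taken from PrivyBot's actual implementation.
import json
from pathlib import Path

CHATS_DIR = Path("chats")
CHATS_DIR.mkdir(exist_ok=True)

def save_chat(chat_id: str, messages: list[dict]) -> None:
    """Write one conversation to chats/<chat_id>.json; nothing leaves the machine."""
    (CHATS_DIR / f"{chat_id}.json").write_text(json.dumps(messages, indent=2))

def load_chat(chat_id: str) -> list[dict]:
    """Reload a saved conversation, e.g., for the sidebar."""
    return json.loads((CHATS_DIR / f"{chat_id}.json").read_text())
```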
Make sure you’ve pulled a model via Ollama before running the app:

```bash
ollama run mistral
ollama run llama3
ollama run gemma
```

You can check installed models with:

```bash
ollama list
```

This project is for personal and educational use. Modify and extend it as needed!