Local AI Assistant That Respects Your Privacy!
Website: llama-assistant.nrl.ai
AI-powered assistant to help you with your daily tasks, powered by Llama 3.2. It can recognize your voice, process natural language, and perform various actions based on your commands: summarizing text, rephrasing sentences, answering questions, writing emails, and more.
This assistant can run offline on your local machine, and it respects your privacy by not sending any data to external servers.
- Text-only models:

  - Llama 3.2 - 1B, 3B (4/8-bit quantized).
  - Qwen2.5-0.5B-Instruct (4-bit quantized).
  - Qwen2.5-1.5B-Instruct (4-bit quantized).
  - gemma-2-2b-it (4-bit quantized).
  - And other models that LlamaCPP supports via custom models. See the list.
- Multimodal models:

  - Moondream2.
  - MiniCPM-v2.6.
  - LLaVA 1.5/1.6.
  - Besides supported models, you can try other variants via custom models.
- Support multimodal model: moondream2.
- Add wake word detection: "Hey Llama!".
- Custom models: Add support for custom models.
- Support 5 other text models.
- Support 5 other multimodal models.
- Streaming support for responses.
- Add offline STT support: WhisperCPP.
- Knowledge database: Langchain or LlamaIndex?
- Plugin system for extensibility.
- News and weather updates.
- Email integration with Gmail and Outlook.
- Note-taking and task management.
- Music player and podcast integration.
- Workflow with multiple agents.
- Multi-language support: English, Spanish, French, German, etc.
- Package for Windows, Linux, and macOS.
- Automated tests and CI/CD pipeline.
- Voice recognition for hands-free interaction.
- Natural language processing with Llama 3.2.
- Image analysis capabilities (TODO).
- Global hotkey for quick access (Cmd+Shift+Space on macOS).
- Customizable UI with adjustable transparency.
Note: This project is a work in progress, and new features are being added regularly.
Recommended Python Version: 3.10.
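Before installing, you can confirm your interpreter meets the recommendation. A minimal sketch (the 3.10 floor comes from the recommendation above; the helper name is illustrative):

```python
import sys

# Recommended minimum interpreter version, per the note above.
RECOMMENDED = (3, 10)

def python_ok(version=None):
    """Return True when the (major, minor) version meets the recommendation."""
    if version is None:
        version = sys.version_info
    return tuple(version[:2]) >= RECOMMENDED

if __name__ == "__main__":
    if not python_ok():
        print(f"Python {sys.version_info.major}.{sys.version_info.minor} found; "
              "3.10+ is recommended for llama-assistant.")
```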
Install PortAudio:

- For macOS, you can use Homebrew_::

      brew install portaudio

  Note: if you encounter an error when running ``pip install`` that indicates it can't find ``portaudio.h``, try running ``pip install`` with the following flags::

      pip install --global-option='build_ext' \
          --global-option='-I/usr/local/include' \
          --global-option='-L/usr/local/lib' \
          pyaudio

- For Debian / Ubuntu Linux::

      apt-get install portaudio19-dev python3-all-dev

- Windows may work without having to install PortAudio explicitly (it will get installed with PyAudio).

For more details, see the `PyAudio installation`_ page.

.. _PyAudio: https://people.csail.mit.edu/hubert/pyaudio/
.. _PortAudio: http://www.portaudio.com/
.. _PyAudio installation: https://people.csail.mit.edu/hubert/pyaudio/#downloads
.. _Homebrew: http://brew.sh
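Once PortAudio is in place, you can sanity-check the PyAudio install. A small sketch (the helper name is illustrative) that degrades gracefully when PyAudio is missing:

```python
def pyaudio_status():
    """Return a short human-readable status for the PyAudio installation."""
    try:
        import pyaudio  # fails if PortAudio/PyAudio are not installed correctly
    except ImportError as exc:
        return f"PyAudio unavailable: {exc}"
    pa = pyaudio.PyAudio()
    try:
        count = pa.get_device_count()  # number of audio devices PortAudio sees
    finally:
        pa.terminate()
    return f"PyAudio OK, {count} audio device(s) found"

if __name__ == "__main__":
    print(pyaudio_status())
```

If this reports an import error after following the steps above, the PortAudio headers were likely not found during the ``pip install pyaudio`` build.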
On Windows: install the MinGW-w64 toolchain.
Install from PyPI::

    pip install pyaudio
    pip install git+https://github.com/stlukey/whispercpp.py
    pip install llama-assistant
Or install from source:

- Clone the repository::

      git clone https://github.com/vietanhdev/llama-assistant.git
      cd llama-assistant

- Install the required dependencies and install the package::

      pip install pyaudio
      pip install git+https://github.com/stlukey/whispercpp.py
      pip install -r requirements.txt
      pip install .
Speed Hack for Apple Silicon (M1, M2, M3) users:
- Install Xcode::

      # check the path of your xcode install
      xcode-select -p
      # xcode installed returns
      # /Applications/Xcode-beta.app/Contents/Developer
      # if xcode is missing then install it... it takes ages
      xcode-select --install

- Build ``llama-cpp-python`` with METAL support::

      pip uninstall llama-cpp-python -y
      CMAKE_ARGS="-DGGML_METAL=on" pip install -U llama-cpp-python --no-cache-dir

      # You should now have llama-cpp-python v0.1.62 or higher installed
      # llama-cpp-python 0.1.68
Run the assistant using the following command::

    llama-assistant

Or run it as a Python module::

    python -m llama_assistant.main
Use the global hotkey (default: ``Cmd+Shift+Space``) to quickly access the assistant from anywhere on your system.

The assistant's settings can be customized by editing the ``settings.json`` file located in your home directory: ``~/llama_assistant/settings.json``.
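A small sketch of reading that settings file over a set of defaults. The path is the one documented above, but the keys shown (``hotkey``, ``model``) are illustrative assumptions, not a documented schema; check the ``settings.json`` the assistant generates for the real keys:

```python
import json
from pathlib import Path

# Path documented above; keys used with it are assumptions, not a real schema.
SETTINGS_PATH = Path.home() / "llama_assistant" / "settings.json"

def load_settings(path=SETTINGS_PATH, defaults=None):
    """Merge settings read from disk over a dict of defaults."""
    settings = dict(defaults or {})
    path = Path(path)
    if path.exists():
        settings.update(json.loads(path.read_text()))
    return settings
```

Missing files simply fall back to the defaults, so the helper is safe to call before the assistant has ever written its settings.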
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the GPLv3 License - see the LICENSE file for details.
- This project uses llama.cpp and llama-cpp-python for running large language models. The default model is Llama 3.2 by Meta AI Research.
- Speech recognition is powered by whisper.cpp and whispercpp.py.
- Viet-Anh Nguyen - vietanhdev, contact form.
- Project Link: https://github.com/vietanhdev/llama-assistant, https://llama-assistant.nrl.ai/.