SHANA is a fully local AI assistant built for AMD GPU systems running Linux. No NVIDIA, no CUDA, no cloud. Everything runs on your machine.
- Understands voice input (Whisper STT)
- Responds with voice output (Piper TTS)
- Runs a local LLM via Ollama
- Web GUI with status orb, chat log, mic button
- Persistent memory saved between sessions
- Linux (tested on Linux Mint)
- AMD GPU (tested on gfx1151 / Strix Halo)
- Ollama installed and running
- Python 3.10+
- PipeWire audio system
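Before installing, it can help to sanity-check the environment. This is a minimal sketch (the helper name is mine, not part of SHANA) that verifies the Python version and whether the ollama binary is on your PATH:

```python
import shutil
import sys

def meets_min_python(version_info, minimum=(3, 10)):
    """Return True if the interpreter version satisfies the minimum."""
    return tuple(version_info[:2]) >= minimum

if __name__ == "__main__":
    print("Python 3.10+:", meets_min_python(sys.version_info))
    # shutil.which returns None when the command is not on PATH
    print("ollama on PATH:", shutil.which("ollama") is not None)
```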
SHANA uses qwen2.5:7b-instruct by default.
Pull it with:
ollama pull qwen2.5:7b-instruct
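Under the hood, SHANA talks to the model through Ollama's local HTTP API. A minimal sketch of such a call, assuming Ollama's default port (11434) and its documented /api/generate endpoint; the network request itself is shown commented out so the snippet stands alone:

```python
import json

# Ollama's default local endpoint
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(prompt, model="qwen2.5:7b-instruct"):
    """Build the JSON body for a non-streaming Ollama generate call."""
    return {"model": model, "prompt": prompt, "stream": False}

body = build_request("Say hello in one sentence.")
print(json.dumps(body))

# With Ollama running, the actual call would look like:
# import requests
# reply = requests.post(OLLAMA_URL, json=body, timeout=120).json()["response"]
```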
python3 -m venv venv
source venv/bin/activate
pip install flask flask-socketio openai-whisper requests psutil piper-tts
mkdir piper_models
wget -O piper_models/en_GB-alba-medium.onnx https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/alba/medium/en_GB-alba-medium.onnx
wget -O piper_models/en_GB-alba-medium.onnx.json https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/alba/medium/en_GB-alba-medium.onnx.json
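It's worth confirming the two files downloaded cleanly. The .onnx.json sidecar is a small JSON config; in the published rhasspy voices it carries an audio.sample_rate field, which this sketch reads (the field layout is assumed from those published configs, not from SHANA's code):

```python
import json

def voice_sample_rate(config_path):
    """Read the sample rate from a Piper voice's JSON sidecar config."""
    with open(config_path, "r", encoding="utf-8") as f:
        config = json.load(f)
    return config["audio"]["sample_rate"]

# Example (requires the file fetched by the wget step above):
# print(voice_sample_rate("piper_models/en_GB-alba-medium.onnx.json"))
```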
Run this to find your mic device name:

pactl list sources short

Then edit gui.py and update the MIC_DEVICE variable at the top.
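If the pactl listing is long, a small helper can narrow it down. This is an illustrative sketch (the function is mine, not part of gui.py): pactl list sources short emits tab-separated lines whose second column is the source name, and we match a keyword against it:

```python
def find_mic(pactl_output, keyword):
    """Return source names from `pactl list sources short` containing keyword."""
    names = []
    for line in pactl_output.strip().splitlines():
        fields = line.split("\t")
        if len(fields) >= 2 and keyword.lower() in fields[1].lower():
            names.append(fields[1])
    return names

sample = "48\talsa_input.usb-Blue_Yeti.analog-stereo\tPipeWire\ts16le 2ch 48000Hz\tRUNNING"
print(find_mic(sample, "yeti"))  # → ['alsa_input.usb-Blue_Yeti.analog-stereo']
```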
By default, SHANA saves session memory to ~/SHANA_MEMORY/memory.txt. You can change this path in brain.py to point to a USB drive or any other location.
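Because the memory file is plain text, it's easy to inspect or script against. A hedged sketch of the append/read pattern (brain.py's real format isn't shown here; the function names below are illustrative):

```python
from pathlib import Path

# Default location from the docs; brain.py lets you point this elsewhere
MEMORY_PATH = Path.home() / "SHANA_MEMORY" / "memory.txt"

def remember(line, path=MEMORY_PATH):
    """Append one memory line, creating the directory on first use."""
    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, "a", encoding="utf-8") as f:
        f.write(line.rstrip("\n") + "\n")

def recall(path=MEMORY_PATH):
    """Return all saved memory lines, or an empty list if none exist yet."""
    if not path.exists():
        return []
    return path.read_text(encoding="utf-8").splitlines()
```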
source venv/bin/activate
python gui.py

Then open your browser at http://127.0.0.1:5000
Built and tested by Brett (Practical-Cupcake259) on Linux Mint with AMD Strix Halo. Assisted by Claude on Abacus.AI ChatLLM.