Clone this repository and llama.cpp, then build llama.cpp:
git clone https://github.com/mangiucugna/local_multimodal_ai
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake ..
cmake --build . --config Release
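If the build succeeds, the server binary should land under bin/ inside the build directory; the exact name and path vary by platform and llama.cpp version, so a quick sanity check (a sketch, run from the build directory):

```shell
# Look for the server binary in the usual output locations.
# (Newer llama.cpp releases may name it llama-server instead.)
if [ -x bin/server ] || [ -x bin/Release/server.exe ]; then
  echo "server binary found"
else
  echo "server binary not found; check the cmake output"
fi
```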
Download the following BakLLaVA model files and place them in llama.cpp/models/ggml-bakllava-1/:
- https://huggingface.co/mys/ggml_bakllava-1/resolve/main/ggml-model-q4_k.gguf
- https://huggingface.co/mys/ggml_bakllava-1/resolve/main/mmproj-model-f16.gguf
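If you prefer the command line, the two files can be fetched with curl directly into the expected folder (assuming curl is installed; run this from the directory that contains llama.cpp):

```shell
# Download both BakLLaVA files into llama.cpp/models/ggml-bakllava-1/.
MODEL_DIR=llama.cpp/models/ggml-bakllava-1
mkdir -p "$MODEL_DIR"
curl -L -o "$MODEL_DIR/ggml-model-q4_k.gguf" \
  https://huggingface.co/mys/ggml_bakllava-1/resolve/main/ggml-model-q4_k.gguf
curl -L -o "$MODEL_DIR/mmproj-model-f16.gguf" \
  https://huggingface.co/mys/ggml_bakllava-1/resolve/main/mmproj-model-f16.gguf
```

The -L flag follows Hugging Face's redirect to the actual file host.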
From the local_multimodal_ai folder, create a virtual environment and install the requirements:
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
Install FFmpeg (https://ffmpeg.org/download.html). On macOS, with Homebrew:
brew install ffmpeg
First, start the llama.cpp server.

On Windows:
cd llama.cpp\build\bin
Release\server.exe -m ..\..\models\ggml-bakllava-1\ggml-model-q4_k.gguf --mmproj ..\..\models\ggml-bakllava-1\mmproj-model-f16.gguf -ngl 1

On macOS/Linux:
cd llama.cpp/build/bin
./server -m ../../models/ggml-bakllava-1/ggml-model-q4_k.gguf --mmproj ../../models/ggml-bakllava-1/mmproj-model-f16.gguf -ngl 1
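Before launching the app, you can confirm the server is responding with a minimal completion request (this assumes the server is listening on its default port, 8080; /completion is the llama.cpp server's plain HTTP API):

```shell
# Smoke-test the llama.cpp server; prints a note if it is not up yet.
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello", "n_predict": 8}' \
  || echo "server not reachable yet"
```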
Then, in another terminal window (from the local_multimodal_ai folder):
source .venv/bin/activate
python app.py
Credits:
- Forked from https://github.com/cocktailpeanut/mirror/
- llama.cpp
- BakLLaVA
- Built with Gradio