Alpaca-Turbo is a frontend for running large language models locally with minimal setup. It is a user-friendly web UI for llama.cpp, with unique features that set it apart from other implementations. The goal is to provide a seamless chat experience that is easy to configure and use, without sacrificing speed or functionality.
demo.mp4
- ToDo
- ToDo
- ToDo
For Windows users we provide a one-click standalone launcher, Alpaca-Turbo.exe.
- Links for installing Miniconda:
- Download the latest alpaca-turbo.zip from the release page.
- Extract alpaca-turbo.zip to Alpaca-Turbo.
  Make sure you have enough space for the models in the extracted location.
- Copy your alpaca models to the alpaca-turbo/models/ directory.
- Open cmd as Admin and type:

  ```
  conda init
  ```

- Close that window.
- Open a new cmd window in your Alpaca-Turbo dir and type:

  ```
  conda create -n alpaca_turbo python=3.10 -y
  conda activate alpaca_turbo
  pip install -r requirements.txt
  python app.py
  ```

- Ready to interact.
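Before launching the app, it can help to sanity-check that the models directory actually contains model files. A minimal sketch, assuming ggml-style `.bin` files (the `list_models` helper and the extension pattern are illustrative, not part of Alpaca-Turbo):

```python
from pathlib import Path


def list_models(models_dir="models"):
    """Return sorted names of model files found in models_dir.

    Assumes ggml-style .bin files; adjust the glob pattern if your
    models use a different extension.
    """
    path = Path(models_dir)
    if not path.is_dir():
        raise FileNotFoundError(f"models directory not found: {path}")
    return sorted(p.name for p in path.glob("*.bin"))
```

Running `list_models("models")` from the Alpaca-Turbo directory should print the model files the app will be able to see; an empty list means the copy step above was missed.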
Just get the latest release, unzip it, and then run:

```
pip install -r requirements.txt
python app.py
```
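On Linux or macOS, the release-based setup above amounts to a short shell session. The archive name, extraction directory, and `.bin` model extension below are assumptions based on the release naming; adjust them to match the actual download:

```shell
# Unpack the release downloaded from the release page (name assumed).
unzip alpaca-turbo.zip -d Alpaca-Turbo
cd Alpaca-Turbo

# Put your alpaca model files where the app expects them.
cp /path/to/your/models/*.bin models/

# Install dependencies and launch the web UI.
pip install -r requirements.txt
python app.py
```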
As an open source project in a rapidly developing field, I am open to contributions, whether in the form of a new feature, improved infrastructure, or better documentation. See the project's contribution guidelines for detailed information on how to contribute.
- ggerganov/llama.cpp for their amazing C++ library
- antimatter15/alpaca.cpp for the initial versions of their chat app
- cocktailpeanut/dalai for the inspiration
- MetaAI for the LLaMA models
- Stanford for the Alpaca models