
llama.cpp

llama-cpp is a Python binding for llama.cpp.

This allows you to run supported models on your own machine!
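As a rough sketch of what this binding does, the example below loads a GGUF model with the llama-cpp-python package and generates a short completion. The model path and generation parameters here are placeholders, not ownAI settings; ownAI performs this loading for you.

```python
# Minimal sketch of llama-cpp-python usage (ownAI handles this for you).
# The model path is a placeholder; adjust it to where your GGUF file lives.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,  # context window size
)

output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
```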

Set up

  1. Download the model (this needs about 4 GB of disk space; an optional check of the downloaded file is sketched after this list):

     flask download-model --repo "TheBloke/Llama-2-7B-Chat-GGUF" --filename "llama-2-7b-chat.Q4_K_M.gguf"

  2. Download the aifile and load it with ownAI (in ownAI, click on the logo in the upper left corner to open the menu, then select "AI Workshop", then "New AI", and "Load Aifile").
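If you want to verify the downloaded model outside of ownAI, a quick check with llama-cpp-python's chat API could look like the sketch below. The models/ path is an assumption; point model_path at wherever the file was actually saved on your machine.

```python
# Hypothetical sanity check for the downloaded chat model.
# Adjust model_path to wherever the GGUF file was stored.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(reply["choices"][0]["message"]["content"])
```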

Privacy

These AIs run entirely on your own machine.