ggify

A small tool that downloads models from the Hugging Face Hub and converts them into GGML/GGUF for use with llama.cpp.

Usage

  • Download and compile llama.cpp.
  • Set up a virtualenv using llama.cpp's requirements.
  • Install this repo's requirements into the same virtualenv.
  • Run e.g. python ggify.py databricks/dolly-v2-12b (N.B.: I haven't tried that particular repo).
  • You'll end up with GGML models under models/....
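The steps above might look like the following in a terminal. The paths, the build command, and the requirements file locations are assumptions about a typical llama.cpp checkout; adjust them to your setup.

```shell
# One-time setup (paths and build invocation are examples, not ggify defaults):
git clone https://github.com/ggerganov/llama.cpp
make -C llama.cpp                           # compile llama.cpp

python -m venv venv                         # create a virtualenv
. venv/bin/activate
pip install -r llama.cpp/requirements.txt   # llama.cpp's conversion dependencies
pip install -r requirements.txt             # this repo's dependencies

# Download and convert a model (the repo name is only an example):
python ggify.py databricks/dolly-v2-12b
```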

You can set --llama-cpp-dir (or the LLAMA_CPP_DIR environment variable) to point to the directory where you've compiled llama.cpp.
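Either form points ggify at the same directory; the environment variable is convenient when running several conversions in one session. The path used below is an example, not a built-in default.

```shell
# Persistent form: export the environment variable once for the shell session.
export LLAMA_CPP_DIR="$HOME/llama.cpp"
echo "ggify will look for llama.cpp in: ${LLAMA_CPP_DIR}"

# Equivalent one-off form, passing the directory per invocation:
# python ggify.py --llama-cpp-dir "$HOME/llama.cpp" databricks/dolly-v2-12b
```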
