llamacpp-for-kobold-1.0.5

@LostRuins LostRuins released this 25 Mar 03:29
· 4932 commits to concedo since this release

  • Merged the upstream fixes for 65b
  • Clamped the maximum thread count to 4; this actually gives better results, since inference is memory-bandwidth bound.
  • Added support for selecting the KV data type, now defaulting to f32 instead of f16
  • Added more default build flags
  • Added softprompts endpoint

To use, download and run llamacpp_for_kobold.exe.
Alternatively, drag and drop a compatible llama.cpp quantized model onto the .exe, or run it and select the model manually in the popup dialog.

Once the model is loaded, you can connect in your browser (or with the full KoboldAI client) at:
http://localhost:5001
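
Beyond the browser UI, the local server can also be driven programmatically. The snippet below is a minimal sketch of that: the `/api/v1/generate` path, the payload fields, and the response shape are assumptions based on the KoboldAI-compatible HTTP API, so check them against your build before relying on them.

```python
import json
import urllib.request

# Assumed endpoint: KoboldAI-compatible generate route on the default port.
URL = "http://localhost:5001/api/v1/generate"

# Assumed payload fields; names follow the KoboldAI API convention.
payload = {
    "prompt": "Once upon a time,",
    "max_length": 80,       # number of tokens to generate
    "temperature": 0.7,     # sampling temperature
}

def generate(url=URL, data=payload):
    """POST the prompt to the local server and return the generated text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response shape: {"results": [{"text": "..."}]}
    return body["results"][0]["text"]

# Usage (requires the server to be running):
#   text = generate()
#   print(text)
```

The same request can be made with any HTTP client (e.g. curl); only the standard library is used here so the script runs without extra dependencies.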