Static builds of llama.cpp (Currently only amd64 server builds are available)
Ask LLaMA about the image in your clipboard
Presentation on Artificial Intelligence for the Free Drawing and Print Graphics class of the Muthesius Academy of Art.
A repo to download, save, and run quantised LLM models using llama.cpp and benchmark the results (private use)
A custom framework for easy use of LLMs, VLMs, etc., supporting various modes and settings via a web UI
Some useful apps containerized.
Genshin Impact character chat models fine-tuned with LoRA on LLMs
A Genshin Impact question-answering project powered by Qwen1.5-14B-Chat
A chatbot that can respond vocally (TTS) using LLaMA
LLM content classification with only prompt engineering
Llama-2 on an Apple Mac using the GPU
Unofficial Gradio repo for the ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
PowerShell automation to download large language models (LLMs) from Git repositories and quantize them with llama.cpp into the GGUF format.