llamacpp
Here are 16 public repositories matching this topic...
Text-To-Speech, RAG, and LLMs. All local!
Updated May 28, 2024 - JavaScript
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
Updated Nov 4, 2023 - JavaScript
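For rough intuition on the kind of estimate such a calculator produces: a quantized model's weight memory is approximately parameter count times bytes per weight, plus some overhead for activations and the KV cache. A minimal sketch, assuming a flat overhead constant (the helper name and numbers are illustrative, not this repository's API):

```javascript
// Rough VRAM estimate for a quantized LLM (hypothetical helper).
// Assumes weights dominate; KV cache and runtime overhead are lumped
// into a single flat constant, which is a simplification.
function estimateVramGB(paramsBillions, bitsPerWeight, overheadGB = 1) {
  const weightBytes = paramsBillions * 1e9 * (bitsPerWeight / 8);
  return weightBytes / 1e9 + overheadGB; // decimal gigabytes
}

// e.g. a 7B model at 4-bit quantization:
console.log(estimateVramGB(7, 4).toFixed(1)); // "4.5"
```

Real calculators also account for context length, since the KV cache grows linearly with it.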
A simple "Be My Eyes" web app with a llama.cpp/llava backend
Updated Nov 28, 2023 - JavaScript
A llama.cpp GGUF file parser for JavaScript
Updated Apr 22, 2024 - JavaScript
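A GGUF file opens with a fixed little-endian header: a 4-byte `GGUF` magic, a uint32 version, then uint64 tensor and metadata key-value counts. A minimal Node.js sketch of reading that header (the function and field names are mine, not necessarily this parser's API):

```javascript
// Parse the fixed-size GGUF header from a Buffer (sketch; layout per
// the GGUF spec, all fields little-endian).
function parseGgufHeader(buf) {
  const magic = buf.toString("ascii", 0, 4);
  if (magic !== "GGUF") throw new Error("not a GGUF file");
  return {
    version: buf.readUInt32LE(4),          // format version
    tensorCount: buf.readBigUInt64LE(8),   // number of tensors
    metadataKvCount: buf.readBigUInt64LE(16), // metadata key-value pairs
  };
}
```

The metadata key-value section that follows the header carries model details such as architecture and tokenizer data, each with its own typed encoding.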
A frontend for large language models like 🐨 Koala or 🦙 Vicuna running on CPU with llama.cpp, using the API server library provided by llama-cpp-python. NOTE: This project has been discontinued because maintaining it takes more time than I can and want to invest. Feel free to fork :)
Updated May 30, 2023 - JavaScript
Function Calling LLMs that run locally on device.
Updated Mar 1, 2024 - JavaScript
Metatron is a project that brings together whisper.cpp, llama.cpp, and piper into a deployable stack with an awesome Node.js API wrapper for each of them.
Updated Jul 17, 2023 - JavaScript
Self-hosted chat UI for running Alpaca models locally, built with MERN stack and based on llama.cpp
Updated Apr 16, 2023 - JavaScript
An open-source AI app | running Mixtral 8x7B / llama.cpp | single-layer threads interface | multi-user | private | offline capable
Updated Mar 16, 2024 - JavaScript
Messenger-like AI chat app that can run locally using llama.cpp and Stable Diffusion.
Updated Apr 29, 2024 - JavaScript