Self-hosted chat UI for running Alpaca models locally, built with the MERN stack and based on llama.cpp
Updated Apr 16, 2023 - JavaScript
A frontend for large language models like 🐨 Koala or 🦙 Vicuna running on CPU with llama.cpp, using the API server library provided by llama-cpp-python. NOTE: I had to discontinue this project because maintaining it takes more time than I can or want to invest. Feel free to fork :)
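For orientation, here is a minimal sketch of how a frontend like this can talk to the llama-cpp-python API server, which exposes OpenAI-compatible endpoints. The port 8000 (the server's default) and the prompt are assumptions about a local setup where the server has already been started with a model.

```js
// A sketch of querying a locally running llama-cpp-python server
// (started separately, e.g. `python -m llama_cpp.server --model <path>`).
// Requires Node 18+ for the built-in fetch.
async function ask(prompt) {
  const res = await fetch("http://localhost:8000/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: prompt }],
      max_tokens: 128,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

ask("Hello, who are you?").then(console.log);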
Metatron is a project that brings together whisper.cpp, llama.cpp, and piper into a deployable stack with an awesome Node.js API wrapper for each of them.
Calculates tokens/s and GPU memory requirements for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
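As a rough illustration of what such a calculator estimates (a back-of-envelope sketch, not this project's actual method): quantized weight memory is roughly parameter count times bits per weight, and the KV cache adds two values per layer per context position per embedding dimension.

```js
// Back-of-envelope LLM memory estimate: quantized weights plus an fp16
// KV cache (2 bytes per value, K and V per layer per position).
function estimateMemoryGiB({ params, bitsPerWeight, nLayers, nCtx, nEmbd }) {
  const weights = (params * bitsPerWeight) / 8;   // bytes
  const kvCache = 2 * nLayers * nCtx * nEmbd * 2; // bytes
  return (weights + kvCache) / 1024 ** 3;
}

// Example: a 7B model at 4 bits with a 2048-token context, using
// LLaMA-7B-like architecture numbers (32 layers, 4096-dim embeddings).
console.log(
  estimateMemoryGiB({
    params: 7e9, bitsPerWeight: 4, nLayers: 32, nCtx: 2048, nEmbd: 4096,
  }).toFixed(2), "GiB"
); // ~4.26 GiB
```

Real runtimes add further overhead (compute buffers, activations), so a tool like this treats such figures as a lower bound.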
A simple "Be My Eyes" web app with a llama.cpp/llava backend
Function-calling LLMs that run locally on device.
An open-source AI app | running Mixtral 8x7B / llama.cpp | single-layer threads interface | multi-user | private | offline-capable
llama.cpp GGUF file parser for JavaScript
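For a sense of what such a parser starts with, here is a minimal sketch of reading just the GGUF header in Node.js: the format opens with the magic bytes "GGUF", then a little-endian uint32 version and two uint64 counts. The file path is a placeholder; a real parser goes on to decode the metadata key-value pairs and tensor descriptors.

```js
// A sketch of reading the fixed-size GGUF header (24 bytes).
const fs = require("node:fs");

function readGgufHeader(path) {
  const buf = Buffer.alloc(24);
  const fd = fs.openSync(path, "r");
  fs.readSync(fd, buf, 0, 24, 0);
  fs.closeSync(fd);

  if (buf.toString("ascii", 0, 4) !== "GGUF") {
    throw new Error("Not a GGUF file");
  }
  return {
    version: buf.readUInt32LE(4),          // GGUF format version
    tensorCount: buf.readBigUInt64LE(8),   // number of tensors in the file
    metadataKvCount: buf.readBigUInt64LE(16), // number of metadata entries
  };
}

console.log(readGgufHeader("./model.gguf")); // path is a placeholder
```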
Messenger-like AI chat app that can run locally using llama.cpp and Stable Diffusion.
Text-To-Speech, RAG, and LLMs. All local!