Control what LLMs can, and can't, say
A desktop tool to install stable diffusion webui and chat with it
Interactive LLM-based chat application in TypeScript, wrapped with Electron and utilizing Vite + React.
An open source user experience for AI models.
Filling in the gaps with LangChain, and creating OO wrappers to simplify some workloads.
A small Next.js app which utilises search results to feed context to the LLM.
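The pattern behind such search-augmented apps is straightforward: fetch search hits, then splice the snippets into the prompt before calling the model. A minimal sketch of that pattern, assuming a hypothetical SearchResult shape and buildPrompt helper (neither is taken from the repo itself):

```ts
// Hypothetical types and helper, illustrating the search-to-context pattern.
interface SearchResult {
  title: string;
  snippet: string;
  url: string;
}

// Concatenate search snippets into a numbered context block, then append
// the user's question so the LLM can answer with citations.
function buildPrompt(question: string, results: SearchResult[]): string {
  const context = results
    .map((r, i) => `[${i + 1}] ${r.title}\n${r.snippet}\nSource: ${r.url}`)
    .join("\n\n");
  return `Answer the question using only the context below. Cite sources as [n].

Context:
${context}

Question: ${question}
Answer:`;
}
```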
AGI for creators, professionals, entrepreneurs and organizations
A minimalist Docker project to get started with Node, llama-node and Express. Ready to be used in a Hugging Face Space.
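A sketch of what the Express side of such a starter might look like. The runCompletion stub is a hypothetical stand-in for the llama-node inference call, not llama-node's actual API; the route and port handling are assumptions as well.

```ts
import express from "express";

// Hypothetical wrapper around the local model; the real llama-node
// load-and-infer calls would live here.
async function runCompletion(prompt: string): Promise<string> {
  // ... load the GGML/GGUF model once and run inference ...
  return `(completion for: ${prompt})`;
}

const app = express();
app.use(express.json());

// Simple JSON completion endpoint.
app.post("/api/complete", async (req, res) => {
  const { prompt } = req.body as { prompt?: string };
  if (!prompt) {
    res.status(400).json({ error: "missing prompt" });
    return;
  }
  res.json({ completion: await runCompletion(prompt) });
});

// Hugging Face Docker Spaces conventionally serve on port 7860.
app.listen(Number(process.env.PORT ?? 7860));
```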
Empower Your Productivity with Local AI Assistants
Starter examples for using Next.js and the Vercel AI SDK with Llama.cpp and ModelFusion.
Empower your LLM to do more than you ever thought possible with these state-of-the-art prompt templates.
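As one concrete example of what such templates look like: Llama 2 chat models expect the system prompt and user turn wrapped in special tokens. A sketch of that widely documented format (the helper function itself is illustrative, not from this repo):

```ts
// Llama 2 chat prompt format: system prompt inside <<SYS>> tags, user
// message inside [INST] ... [/INST]. The format is Meta's documented
// convention; this helper is an illustrative wrapper around it.
function llama2ChatPrompt(system: string, user: string): string {
  return `<s>[INST] <<SYS>>\n${system}\n<</SYS>>\n\n${user} [/INST]`;
}

const prompt = llama2ChatPrompt(
  "You are a concise assistant.",
  "Explain GGUF in one sentence."
);
```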
A JavaScript library (with TypeScript types) to parse metadata of GGML-based GGUF files.
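Parsing that metadata starts with the fixed GGUF header. A sketch per the public GGUF spec (little-endian magic, version, tensor count, metadata key/value count), not this library's actual API:

```ts
import { readFileSync } from "node:fs";

interface GgufHeader {
  version: number;
  tensorCount: bigint;
  metadataKvCount: bigint;
}

// Read the fixed-size GGUF header; all integers are little-endian.
function readGgufHeader(path: string): GgufHeader {
  const buf = readFileSync(path);
  // Magic: the 4 ASCII bytes "GGUF".
  if (buf.toString("ascii", 0, 4) !== "GGUF") {
    throw new Error("not a GGUF file");
  }
  return {
    version: buf.readUInt32LE(4),             // uint32 format version
    tensorCount: buf.readBigUInt64LE(8),      // uint64 number of tensors
    metadataKvCount: buf.readBigUInt64LE(16), // uint64 metadata KV pairs
  };
}
```

The metadata key/value pairs (architecture, context length, tokenizer vocab, and so on) follow immediately after this header.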
Wingman is the fastest and easiest way to run Llama models on your PC or Mac.
LocalChat is a ChatGPT-like chat that runs on your computer
VSCode AI coding assistant powered by self-hosted llama.cpp endpoint.
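Talking to a self-hosted llama.cpp server reduces to a single HTTP call. The /completion endpoint and the prompt/n_predict fields are part of llama.cpp's example server API; the localhost URL, port, and helper name below are assumptions:

```ts
// Minimal client for a local llama.cpp server (default port 8080).
async function complete(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/completion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, n_predict: 128, temperature: 0.2 }),
  });
  if (!res.ok) throw new Error(`llama.cpp server error: ${res.status}`);
  const data = (await res.json()) as { content: string };
  return data.content;
}
```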