llamacpp
Here are 302 public repositories matching this topic...
Private & local AI personal knowledge management app.
Updated May 22, 2024 - TypeScript
Your AI second brain. A copilot that answers your questions, whether from your own notes or from the internet. Use powerful online LLMs (e.g. GPT-4) or private local ones (e.g. Mistral). Self-host locally or use our web app. Access from Obsidian, Emacs, the desktop app, the web, or WhatsApp.
Updated May 23, 2024 - Python
Unified framework for building enterprise RAG pipelines with small, specialized models
Updated May 23, 2024 - Python
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need: run inference with any open-source language, speech-recognition, or multimodal model, whether in the cloud, on-premises, or on your laptop.
Updated May 23, 2024 - Python
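The "single line of code" swap in entries like the one above typically means pointing an OpenAI-compatible client at a local server instead of api.openai.com. A minimal standard-library sketch of that idea, assuming a hypothetical local endpoint and model name (neither is taken from the listing):

```python
import json
import urllib.request

# Assumption: an OpenAI-compatible server is running locally.
# Swapping this one line is the whole migration away from the OpenAI stack.
BASE_URL = "http://localhost:9997/v1"  # hypothetical local endpoint

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("my-local-model", "Hello!")
print(req.full_url)  # the only thing that changed is the host
```

Because the request shape stays identical, existing client code keeps working; only the base URL (and model name) differs.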
AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
Updated May 23, 2024 - Python
The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
Updated May 23, 2024 - TypeScript
Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan
Updated May 23, 2024 - C++
The TypeScript library for building AI applications.
Updated May 22, 2024 - TypeScript
Text-To-Speech, RAG, and LLMs. All local!
Updated May 21, 2024 - JavaScript
A FastAPI service for semantic text search using precomputed embeddings and advanced similarity measures, with built-in support for various file types through textract.
Updated May 22, 2024 - Python
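Semantic search over precomputed embeddings, as in the entry above, usually reduces to ranking stored vectors by a similarity measure such as cosine similarity. A pure-Python sketch of that core step (the function names and toy vectors are illustrative, not from the service itself):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec: list[float], index: dict) -> list[tuple[str, float]]:
    """Rank precomputed document embeddings by similarity to the query."""
    scored = [(doc_id, cosine_similarity(query_vec, vec))
              for doc_id, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy index of precomputed embeddings; real vectors come from an embedding model.
index = {"doc1": [1.0, 0.0], "doc2": [0.7, 0.7], "doc3": [0.0, 1.0]}
print(search([1.0, 0.1], index))
```

Production services typically replace this linear scan with an approximate nearest-neighbour index, but the ranking logic is the same.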
Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.
Updated May 21, 2024 - Dart
Believes in AI democratization: llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp. Works locally on your laptop CPU and supports LLaMA/Alpaca/GPT4All/Vicuna/RWKV models.
Updated Aug 3, 2023 - Rust