BabyAGI-🦙: enhanced for Llama models (running 100% locally), with persistent memory, smart internet search based on BabyCatAGI, and LangChain document embedding based on privateGPT
Updated Jun 4, 2023 - Python
LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI.
A chatbot that can respond vocally (text-to-speech) using Llama
Email Auto-ReplAI is a Python tool that uses AI to automate drafting responses to unread Gmail messages, streamlining email management tasks.
Transcribes videos and describes them with OpenAI APIs or local models.
A simple AI chat using FastAPI, Langchain and llama.cpp
✨ Your Custom Offline Role Play with LLM and Stable Diffusion on Mac and Linux (for now) 🧙‍♂️
Autocomplete anything using a GGUF model
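A GGUF-backed autocompleter like the one above can be sketched as a small wrapper that appends the model's continuation to the user's prefix. This is a minimal illustration, not code from the listed repo; the `llama_cpp` wiring in the comments and the model path are assumptions.

```python
# Hypothetical sketch: autocomplete with a local GGUF model.
# `complete_fn` is dependency-injected so any backend (e.g. a thin
# wrapper around llama-cpp-python) can supply the continuation text.
def autocomplete(complete_fn, prefix: str, max_tokens: int = 32) -> str:
    """Return `prefix` extended by the model's generated continuation.

    `complete_fn(prompt, max_tokens)` must return only the newly
    generated text, which is concatenated onto the original prefix.
    """
    continuation = complete_fn(prefix, max_tokens)
    return prefix + continuation

# With llama-cpp-python installed, it might be wired up like this
# (model path is a placeholder, not from the repo):
#   from llama_cpp import Llama
#   llm = Llama(model_path="model.gguf")
#   fn = lambda p, n: llm(p, max_tokens=n)["choices"][0]["text"]
#   autocomplete(fn, "def fibonacci(n):")
```

Injecting the completion callable keeps the autocomplete logic testable without loading a multi-gigabyte model.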
YouTube API integration with Meta's Llama 2 to analyze comments and their sentiment
Lightweight implementation of the OpenAI API on top of local models
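The core of an OpenAI-compatible layer like this is returning local-model output in the response shape OpenAI clients expect. A minimal stdlib-only sketch (not from the listed repo) of building a chat-completion response:

```python
import json
import time
import uuid

def make_chat_completion(model: str, text: str) -> dict:
    """Wrap local-model output in the OpenAI chat-completions response
    shape, so existing OpenAI client code can consume it unchanged."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",  # synthetic request id
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": text},
                "finish_reason": "stop",
            }
        ],
    }

resp = make_chat_completion("local-llama", "Hello!")
print(json.dumps(resp, indent=2))
```

A real server would expose this from a `/v1/chat/completions` route and fill `content` from the local model's generation.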
LLM content classification using only prompt engineering
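Prompt-only classification works by constraining the model to answer with exactly one label, then matching the reply back to the label set. A minimal sketch of that pattern (the prompt wording is an assumption, not taken from the listed repo):

```python
def classification_prompt(text: str, labels: list[str]) -> str:
    """Build a zero-shot classification prompt that instructs the model
    to answer with exactly one label, making the reply easy to parse."""
    label_list = ", ".join(labels)
    return (
        f"Classify the text into exactly one of: {label_list}.\n"
        "Answer with the label only.\n\n"
        f"Text: {text}\n"
        "Label:"
    )

def parse_label(reply: str, labels: list[str]):
    """Map a raw model reply back to a known label, or None if the
    reply does not start with any expected label."""
    cleaned = reply.strip().lower()
    for label in labels:
        if cleaned.startswith(label.lower()):
            return label
    return None
```

The prompt would be sent to any chat or completion endpoint; `parse_label` then makes the pipeline robust to trailing whitespace or extra tokens in the reply.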
This repo showcases how to run a model locally and offline, free of OpenAI dependencies.
📚 Local PDF-Integrated Chat Bot: Secure Conversations and Document Assistance with LLM-Powered Privacy
llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deployment, such as a UI, a RESTful API, auto-scaling, compute resource management, monitoring, and more.
AgentX is an open-source library that helps people run LLMs on their own computers, or serve LLMs as easily as possible, with support for multiple backends such as PyTorch, llama.cpp, Ollama, and EasyDeL