Genshin Impact Character Chat Models tuned with LoRA on LLMs
A Genshin Impact question-answering project powered by Qwen1.5-14B-Chat
LLM content classification with only prompt engineering
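A minimal sketch of what prompt-engineering-only classification can look like: the label set, the prompt builder, and the reply parser below are all hypothetical illustrations (the actual repository's prompts are not shown here), and the model call itself is left to any backend of your choice, such as a local llama.cpp server exposing an OpenAI-compatible endpoint.

```python
# Sketch: zero-shot content classification via prompt engineering alone.
# LABELS and both helpers are illustrative assumptions, not the repo's code.

LABELS = ["spam", "news", "question", "other"]  # hypothetical label set

def build_prompt(text: str) -> str:
    """Compose a zero-shot classification prompt for any chat LLM."""
    return (
        "Classify the following text into exactly one of these labels: "
        + ", ".join(LABELS)
        + ".\nReply with the label only.\n\nText: "
        + text
    )

def parse_label(reply: str) -> str:
    """Map a raw model reply back onto the label set, defaulting to 'other'."""
    reply = reply.strip().lower()
    for label in LABELS:
        if label in reply:
            return label
    return "other"

prompt = build_prompt("Breaking: markets rallied sharply today.")
# A well-behaved model would reply with a bare label, e.g. "News":
print(parse_label("  News"))
```

The key design point is that all task knowledge lives in the prompt string, so swapping the label set or the instructions requires no model changes; the lenient `parse_label` guards against models that answer with extra words.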
Lightweight implementation of the OpenAI API on top of local models
Transcribes videos and describes them with OpenAI APIs or local models.
YouTube API integration with Meta's Llama 2 to analyze comments and their sentiment
A simple AI chat using FastAPI, Langchain and llama.cpp
Email Auto-ReplAI is a Python tool that uses AI to automate drafting responses to unread Gmail messages, streamlining email management tasks.
Autocomplete anything using a GGUF model
AgentX is an open-source library that helps people run LLMs on their own computers, or serve LLMs as easily as possible, with support for multiple backends such as PyTorch, llama.cpp, Ollama, and EasyDeL
Local character-AI chatbot with Chroma vector-store memory, plus scripts to process documents for Chroma
✨ Your Custom Offline Role Play with LLM and Stable Diffusion on Mac and Linux (for now) 🧙‍♂️
llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deployment, such as a UI, RESTful API, auto-scaling, computing-resource management, monitoring, and more.