Harness LLMs with Multi-Agent Programming
Updated Jun 6, 2024 · Python
Instruct and validate structured outputs from LLMs with Ollama.
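A minimal sketch of the validation side of structured output: parse an LLM reply as JSON and check it against an expected schema before using it. The field names and types here are illustrative assumptions, not taken from any project above; real setups often pass a JSON schema to Ollama's `format` parameter and validate with a library such as Pydantic.

```python
import json

# Illustrative schema — assumed for this sketch, not from the project.
REQUIRED_FIELDS = {"name": str, "age": int}

def validate_reply(raw: str) -> dict:
    """Parse a model reply and verify it matches the expected fields."""
    data = json.loads(raw)  # raises ValueError/JSONDecodeError on malformed JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"field {field!r} should be {expected_type.__name__}")
    return data

print(validate_reply('{"name": "Ada", "age": 36}'))  # passes validation
```

Rejecting malformed replies early like this lets the caller re-prompt the model instead of propagating bad data downstream.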
A comprehensive AI companion leveraging advanced semantic analysis, sentiment detection, and voice processing to provide personalized and context-aware interactions using Autogen, semantic-router, and VoiceProcessingToolkit.
A python package for developing AI applications with local LLMs.
Your fully proficient, AI-powered, local chatbot assistant 🤖
CrewAI Local LLM is a GitHub repository for a locally hosted large language model (LLM) designed to enable private, offline AI model usage and experimentation.
Automate the batching and execution of prompts.
Use your open-source local model from the terminal.
Huginn Hears is a local app that transcribes and summarizes your meetings in Norwegian and English, using state-of-the-art models and open-source libraries. No cloud needed, run everything offline.
Project Jarvis is a versatile AI assistant that integrates various functionalities.
A constrained generation filter for local LLMs that makes them quote properly from a source document
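The idea behind such a filter can be sketched in a few lines: if the output must be a contiguous quote from the source, then after any generated prefix, only the tokens that actually follow that prefix somewhere in the source are allowed. This is a toy word-level illustration under my own assumptions; a real filter would mask logits over the model's full token vocabulary.

```python
def allowed_next_tokens(prefix: tuple[str, ...], source_tokens: list[str]) -> set[str]:
    """Tokens that may follow `prefix` if output must quote the source verbatim."""
    if not prefix:
        return set(source_tokens)  # any source token can start a quote
    n = len(prefix)
    allowed = set()
    # Find every occurrence of the prefix in the source and collect
    # the token that immediately follows it.
    for i in range(len(source_tokens) - n):
        if tuple(source_tokens[i:i + n]) == prefix:
            allowed.add(source_tokens[i + n])
    return allowed

source = "the quick brown fox jumps over the lazy dog".split()
print(allowed_next_tokens(("the",), source))  # {'quick', 'lazy'}
```

At each decoding step the model's candidates would be intersected with this set, so the generated text can never drift away from the document it is quoting.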
Implemented vector similarity algorithms from scratch to understand their inner workings, using local embedding models.
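Cosine similarity is the most common of these vector similarity measures; a from-scratch version makes the inner workings explicit. The vectors below are hand-picked stand-ins — in practice they would come from a local embedding model.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal → 0.0
```

Because the dot product is divided by both norms, the score depends only on direction, not magnitude — which is why it works well for comparing embeddings of different lengths of text.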
Infinite Craft, but built in PySide6 and Python with a local LLM (Llama 2 and others) via Ollama.
Data Whisperer is a chatbot that allows you to analyze your local files using Large Language Models (LLMs). You can upload PDFs, Markdown, Text, and Document files to the chatbot and then ask questions related to the content of these files.
LLM prompt augmentation with RAG (retrieval-augmented generation): integrate external custom data from a variety of sources and chat with those documents.
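The core RAG loop — retrieve the most relevant documents, then prepend them to the prompt — can be sketched in pure Python. Naive word overlap stands in here for embedding similarity, and the document snippets and prompt wording are my own assumptions for illustration.

```python
def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def augment_prompt(question: str, docs: list[str]) -> str:
    """Build a prompt that grounds the LLM's answer in retrieved context."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

docs = [
    "The capital of Norway is Oslo.",
    "Python packaging uses pyproject.toml.",
]
print(augment_prompt("What is the capital of Norway?", docs))
```

The augmented prompt is then sent to the local model; swapping the overlap score for embedding cosine similarity and a vector index turns this toy into the usual production pipeline.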