No-code multi-agent framework to build LLM Agents, workflows and applications with your data
A lightweight vector database implemented in pure Python: server-optional, compatible with multiple clients, and deployable locally or remotely.
Experiments with question answering over tabular data using LLMs and SQL.
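The repository behind this entry isn't reproduced here; a minimal sketch of the underlying text-to-SQL pattern, assuming a SQLite database and a hypothetical `question_to_sql` helper that wraps the LLM call, could look like this:

```python
import sqlite3

def question_to_sql(question: str, schema: str) -> str:
    """Hypothetical helper: prompt an LLM with the schema and question,
    returning a single SQL SELECT statement."""
    raise NotImplementedError

def answer_question(db_path: str, question: str) -> list:
    conn = sqlite3.connect(db_path)
    # Collect the CREATE TABLE statements so the model sees the available columns.
    schema = "\n".join(
        row[0] for row in conn.execute(
            "SELECT sql FROM sqlite_master WHERE type = 'table'"
        )
    )
    # Let the LLM translate the question into SQL, then run it against the data.
    sql = question_to_sql(question, schema)
    return conn.execute(sql).fetchall()
```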
Uses OpenAI, Redis, and Streamlit to recommend hotels with Large Language Models.
Your own GPT-powered personal assistant that you can order or instruct, by voice command, to perform tasks or search for information.
Recursively summarizes text, video, and audio. Allows custom prompts.
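As a rough illustration of recursive summarization (not the project's actual code), the sketch below splits long input into chunks, summarizes each, and recurses on the concatenated partial summaries; `llm_summarize` and the chunk size are hypothetical stand-ins:

```python
CHUNK_CHARS = 4000  # illustrative chunk size, not the project's actual value

def llm_summarize(text: str, prompt: str) -> str:
    """Hypothetical stand-in for a single LLM summarization call with a custom prompt."""
    raise NotImplementedError

def recursive_summary(text: str, prompt: str = "Summarize the following text:") -> str:
    # Base case: the text fits in a single LLM call.
    if len(text) <= CHUNK_CHARS:
        return llm_summarize(text, prompt)
    # Map step: summarize each chunk independently.
    chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]
    partials = [llm_summarize(chunk, prompt) for chunk in chunks]
    # Reduce step: recurse on the concatenated partial summaries.
    return recursive_summary("\n".join(partials), prompt)
```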
This project integrates LangChain v0.2.6, the HuggingFace Serverless Inference API, and Meta-Llama-3-8B-Instruct. It provides a chat-style web interface for interacting with the language model and maintains conversation history using the Runnable interface, the replacement for LLMChain, which has been deprecated since LangChain 0.1.17.
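The project's source isn't shown here, but in LangChain 0.2 the Runnable-based replacement for LLMChain with per-session conversation history typically looks like the sketch below; the model id matches the entry, while the session handling and parameters are illustrative:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

# Model served through the HuggingFace Serverless Inference API
# (reads HUGGINGFACEHUB_API_TOKEN from the environment).
llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    max_new_tokens=512,
)
chat_model = ChatHuggingFace(llm=llm)

# Prompt with a placeholder for prior turns; `prompt | chat_model` is a Runnable,
# replacing the deprecated LLMChain.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
chain = prompt | chat_model

# Keep one in-memory message history per session id.
store = {}
def get_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chat = RunnableWithMessageHistory(
    chain,
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)

reply = chat.invoke(
    {"input": "Hello!"},
    config={"configurable": {"session_id": "demo"}},
)
```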
A basic proof-of-concept implementation of https://python.langchain.com/docs/use_cases/question_answering/
A web-based chat application for querying process execution data using natural language.
A payload compression toolkit that makes it easy to create ideal data structures for LLMs, from training data to chain payloads.
Retrieval-Augmented Generation (RAG) using the LangChain framework, a FAISS vector store, and the FastEmbed text embedding model.
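A minimal sketch of that RAG pipeline with LangChain, FAISS, and FastEmbed might look like the following; the example texts and the choice of ChatOpenAI as the answering model are assumptions, since the entry doesn't name one:

```python
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI  # assumption: any LangChain chat model works here

# Index a few example texts with FastEmbed embeddings in a FAISS store.
embeddings = FastEmbedEmbeddings()
texts = [
    "LLM chains compose prompts, models, and output parsers.",
    "FAISS performs fast similarity search over dense vectors.",
]
vectorstore = FAISS.from_texts(texts, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

def format_docs(docs):
    # Join the retrieved documents into one context string for the prompt.
    return "\n\n".join(doc.page_content for doc in docs)

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

print(rag_chain.invoke("What does FAISS do?"))
```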