Production-ready Chainlit RAG application with a Pinecone pipeline, offering all Groq and OpenAI models for chatting with your documents.
An intelligent customer support system powered by LangGraph and LangChain that uses Retrieval-Augmented Generation (RAG) to provide accurate, context-aware responses to customer queries. Built with FastAPI, FAISS, and multi-stage validation for production-ready deployment.
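For orientation, here is a minimal sketch of the FAISS retrieval step such a support assistant typically builds on; the toy hashing-trick embedder, the example documents, and all names are illustrative assumptions, not this project's code.

```python
import numpy as np
import faiss

EMBED_DIM = 256  # assumed embedding width for the toy embedder below

def embed(texts: list[str]) -> np.ndarray:
    """Toy hashing-trick bag-of-words embedder; a real system would use a neural model or API."""
    vecs = np.zeros((len(texts), EMBED_DIM), dtype=np.float32)
    for i, text in enumerate(texts):
        for word in text.lower().split():
            vecs[i, hash(word) % EMBED_DIM] += 1.0
    return vecs

docs = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via live chat.",
]
index = faiss.IndexFlatL2(EMBED_DIM)  # exact L2 search over the document vectors
index.add(embed(docs))

query = "How long do refunds take?"
_, ids = index.search(embed([query]), 1)
print(docs[ids[0][0]])  # retrieved context that a validation stage would check before generation
```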
When retrieval outperforms generation: Dense evidence retrieval for scalable fake news detection - LDK 2025
🛡️ Web3 Guardian is a comprehensive security suite for Web3 that combines a browser extension with backend services to provide real-time transaction analysis, smart contract auditing, and risk assessment for decentralized applications (dApps).
This repo is for advanced RAG systems; each branch represents a project based on RAG.
Demo LLM (RAG pipeline) web app running locally using docker-compose. LLM and embedding models are consumed as services from OpenAI.
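As a sketch of what "consumed as services" usually looks like in code, the snippet below calls OpenAI's embedding and chat endpoints; the model names and prompt wording are assumptions, not this repo's configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> list[list[float]]:
    """Fetch embeddings from the OpenAI embeddings service."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

def answer(question: str, context: str) -> str:
    """Ask a chat model to answer using only the retrieved context."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```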
Advanced RAG Pipelines optimized with DSPy
Project Agora: An expert system for the Google ADK, powered by a hierarchical multi-agent framework to automate code generation, architecture, and Q&A.
AI-driven prompt generation and evaluation system, designed to optimize the use of large language models (LLMs) in various industries. The project consists of both frontend and backend components, facilitating prompt generation, automatic evaluation data generation, and prompt testing.
This research aims to develop an AI Legal Chain Resolver using Mixture-of-Experts and a multi-agent system to provide tailored legal guidance. It simplifies legal navigation across statutes, precedents, and regulations while offering legal assistance through a conversational system to enhance accessibility and understanding.
A comprehensive framework for enhancing Retrieval-Augmented Generation (RAG) systems through metadata enrichment, advanced chunking strategies, and neural retrieval techniques.
A simple NaiveRAG pipeline.
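A naive RAG pipeline is a single retrieve-then-generate pass with no reranking or query rewriting. The sketch below uses TF-IDF as a stand-in retriever and stops at prompt construction; it illustrates the pattern, not this repo's implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "RAG pipelines retrieve relevant chunks before generation.",
    "Naive RAG uses a single retrieval step and no reranking.",
    "Vector databases store document embeddings for similarity search.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)  # index the corpus once

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most similar documents by cosine similarity."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

question = "What makes a RAG pipeline naive?"
prompt = "Context:\n" + "\n".join(retrieve(question)) + f"\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to an LLM for generation
```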
The Memory of your Agent
A chatbot for answering questions about "Portal da transparência"; it uses a RAG + LLM pipeline.
A comprehensive Streamlit chatbot built with the Gemini API and RAG capabilities, supporting a full set of features.
Use MLflow to deploy your RAG pipeline, built with LlamaIndex, LangChain, and Ollama / Hugging Face LLMs / Groq.
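One generic way to make a RAG chain deployable through MLflow is to wrap it as a pyfunc model and log it; the `rag_answer` stub and all names below are assumptions standing in for the LlamaIndex/LangChain chain, not this repo's actual code.

```python
import mlflow
import mlflow.pyfunc

def rag_answer(question: str) -> str:
    # Placeholder for retrieval + generation (LlamaIndex/LangChain with Ollama, Hugging Face, or Groq).
    return f"(stub answer for: {question})"

class RagModel(mlflow.pyfunc.PythonModel):
    def predict(self, context, model_input):
        # When served, model_input arrives as a DataFrame holding the request payload.
        return [rag_answer(q) for q in model_input["question"]]

with mlflow.start_run():
    mlflow.pyfunc.log_model(artifact_path="rag_pipeline", python_model=RagModel())
# The logged model can then be exposed behind a REST endpoint with `mlflow models serve`.
```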
A customized Retrieval-Augmented Generation (RAG) pipeline designed to compare Maximum Residue Limit (MRL) values from Chinese food safety standard (GB) PDFs with those from the EU DataLake.
The backend for project Hozie; see the README for more details. For voice interaction, see the Hozie V2 branch.
This project implements a Retrieval-Augmented Generation (RAG) pipeline for PDF documents. It extracts information, generates embeddings, and uses LLMs to provide intelligent responses via an interactive Streamlit UI. Ideal for building Q&A systems on custom knowledge bases.
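The extract-and-chunk stage such a pipeline usually starts with might look like the sketch below; pypdf, the chunk sizes, and `manual.pdf` are illustrative choices, not necessarily what this project uses.

```python
from pypdf import PdfReader

def load_pdf_text(path: str) -> str:
    """Concatenate the extracted text of every page in the PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Fixed-size character chunks with overlap so context is not cut too abruptly."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

if __name__ == "__main__":
    pieces = chunk(load_pdf_text("manual.pdf"))  # hypothetical input document
    print(f"{len(pieces)} chunks ready to be embedded and indexed")
```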