Client-side retrieval firewall for RAG systems — blocks prompt injection and secret leaks, re-ranks stale or untrusted content, and keeps all data inside your environment.
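As an illustration of how such a firewall can sit between the retriever and the prompt, here is a minimal Python sketch. The chunk schema (`text`, `source`, `updated_at`), the regex patterns, and the penalty weights are assumptions for the example, not the project's actual implementation.

```python
# Minimal sketch of a client-side retrieval firewall, assuming retrieved chunks
# arrive as dicts with "text", "source", and "updated_at" fields (hypothetical schema).
import re
from datetime import datetime, timezone

INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal (the )?system prompt",
]
SECRET_PATTERNS = [
    r"AKIA[0-9A-Z]{16}",                              # AWS access key id
    r"-----BEGIN (RSA |EC )?PRIVATE KEY-----",        # private key material
]

def filter_and_rerank(chunks, trusted_sources=(), max_age_days=365):
    """Drop chunks that look like injections or contain secrets; down-rank stale or untrusted ones."""
    kept = []
    for chunk in chunks:
        text = chunk["text"]
        if any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS):
            continue  # block likely prompt injection
        if any(re.search(p, text) for p in SECRET_PATTERNS):
            continue  # block secret leaks
        age_days = (datetime.now(timezone.utc) - chunk["updated_at"]).days
        penalty = (0.5 if chunk["source"] not in trusted_sources else 0.0) \
                  + (0.3 if age_days > max_age_days else 0.0)
        kept.append((penalty, chunk))
    kept.sort(key=lambda pair: pair[0])  # lower penalty ranks higher
    return [chunk for _, chunk in kept]
```

Everything runs locally in the calling process, which is what keeps the retrieved data inside your environment.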
AI-Rag-ChatBot is a complete example project built with RAGChat and Next.js 14, using Upstash Vector Database, Upstash Qstash, and Upstash Redis, a dynamic webpage folder, middleware, TypeScript, the Vercel AI SDK for the client-side hook, Lucide-React for icons, Shadcn-UI, and the Next-UI plugin to extend TailwindCSS, deployed on Vercel.
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
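For readers unfamiliar with RAG Fusion, its core step is reciprocal rank fusion over the result lists of several query variants. The plain-Python sketch below is a generic illustration of that step only, not this project's LangChain/Hydra wiring; the document ids are made up.

```python
# Minimal sketch of reciprocal rank fusion (RRF), the merging step behind RAG Fusion.
def reciprocal_rank_fusion(ranked_lists, k: int = 60):
    """Merge several ranked lists of document ids into one fused ranking."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking):
            # Documents that rank highly in many lists accumulate the largest scores.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse the retrieval results of three rewritten variants of one user question.
fused = reciprocal_rank_fusion([
    ["doc3", "doc1", "doc7"],
    ["doc1", "doc4", "doc3"],
    ["doc3", "doc9", "doc1"],
])
print(fused[:3])  # doc3 and doc1 rise to the top because they recur across variants
```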
A RAG-based retrieval system for air pollution topics using LangChain and ChromaDB.
RAG Mini Project — Retrieval‑Augmented Generation chatbot with FastAPI backend (Docker on Hugging Face Spaces) and Streamlit frontend (Render), featuring document ingestion, vector search, and LLM‑powered answers
Production-grade Retrieval-Augmented Generation (RAG) backend in TypeScript with Express.js, PostgreSQL, and Sequelize — featuring OpenAI-powered embeddings, LLM orchestration, and a complete data-to-answer pipeline.
📄 QuestRAG: an AI-powered PDF question-answering and summarizer bot built with LangChain, Flan-T5, and Streamlit. A GenAI mini-project that lets users upload research PDFs, ask questions, and get intelligent summaries using Retrieval-Augmented Generation (RAG) with locally hosted Hugging Face models.
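A rough sketch of what the locally hosted generation step might look like with a Flan-T5 pipeline from `transformers`; the model name, prompt template, and `answer_from_pdf_chunks` helper are illustrative assumptions, and retrieval of the PDF chunks is stubbed out.

```python
# Hedged sketch of answering from retrieved PDF chunks with a local Flan-T5 model.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

def answer_from_pdf_chunks(question: str, chunks: list[str]) -> str:
    # Concatenate the retrieved PDF chunks and ask the local model to answer from them.
    context = "\n".join(chunks)
    prompt = f"Answer the question using the context.\nContext: {context}\nQuestion: {question}"
    return generator(prompt, max_new_tokens=128)[0]["generated_text"]
```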
A RAG-based AI model that leverages Mistral-7B to generate a learning roadmap from the student information provided.
Documentation assistant for developers who want to quickly understand and query large documentation sites. Built with a modern tech stack including Firecrawl for LLM-ready web crawling, Unstructured for document processing, MongoDB Atlas for vector search, and OpenAI for embeddings and generation.
RAG-PDF Assistant — A simple Retrieval-Augmented Generation (RAG) chatbot that answers questions using custom PDF documents. It uses HuggingFace embeddings for text representation, stores them in a Chroma vector database, and generates natural language answers with Google Gemini. In this example, the assistant is powered by a few school policy documents.
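A minimal sketch of that embed-store-generate flow, assuming the `langchain-huggingface`, `langchain-chroma`, and `langchain-google-genai` packages are installed and `GOOGLE_API_KEY` is set; the collection name, model names, and `ask` helper are placeholders rather than the repository's actual code.

```python
# Hedged sketch: HuggingFace embeddings -> Chroma vector store -> Gemini answer.
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_chroma import Chroma
from langchain_google_genai import ChatGoogleGenerativeAI

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma(collection_name="school_policies", embedding_function=embeddings)
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

def ask(question: str) -> str:
    # Retrieve the most relevant policy passages, then let Gemini phrase the answer.
    docs = db.similarity_search(question, k=3)
    context = "\n\n".join(d.page_content for d in docs)
    return llm.invoke(f"Context:\n{context}\n\nQuestion: {question}").content
```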
Research FlowStream — multi‑agent research assistant with Streamlit frontend and FastAPI backend, leveraging LLMs and Qdrant for retrieval, deployed on Render (UI) and Hugging Face Spaces (API)
A comprehensive, hands-on tutorial repository for learning and mastering LangChain - the powerful framework for building applications with Large Language Models (LLMs). This codebase provides a structured learning path with practical examples covering everything from basic chat models to advanced AI agents, organized in a progressive curriculum.