This project is a Retrieval-Augmented Generation (RAG) powered AI assistant that answers questions based on a text file (sample.txt) using embeddings and a large language model (LLM).
- Uses FAISS for vector search on text chunks.
- Integrates Sentence Transformers for embeddings.
- Uses FLAN-T5 as the LLM for question answering.
- Provides a Gradio web interface for interactive QA.
- Example questions included in sidebar for quick testing.
- Clone/download this repository.
- Ensure Python 3.10+ is installed and added to PATH.
- Install dependencies:
pip install -r requirements.txt
- Open a terminal/command prompt in the project folder.
- Run: python main.py (or python3 main.py, if your system uses python3).
- The Gradio web interface will start, and a local URL will be printed in the console.
- Open this URL in your browser to interact with the assistant.
Note: The FAISS index (faiss_index.bin) is automatically created when you run main.py for the first time.