Unlock Knowledge Instantly with AI-Powered Insights
✨ Powered by LangChain, Ollama, vector embeddings, Streamlit, and local LLMs for private, powerful, and customizable AI without relying on external APIs.
Tired of manually digging through long documents?
SIMPLE_RAG_CHAT transforms static PDFs into dynamic knowledge bases. Whether you're researching, studying, or analyzing reports, this tool helps you:
- 🔍 Find relevant info fast using semantic search powered by vector embeddings.
- 💬 Chat naturally with your documents like talking to a smart assistant — backed by Ollama's local LLMs.
- 📄 Process PDFs on the fly with automatic text extraction and chunking.
- 🔄 Maintain conversation history across messages for coherent follow-ups.
- ⚙️ Easy to customize and extend, perfect for developers building privacy-first, AI-powered tools.
- 🛡️ Run entirely offline using Ollama — no data leaves your machine!
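Semantic search starts with splitting each PDF into overlapping chunks before embedding. A minimal, framework-free sketch of that chunking step (illustrative only; the app itself uses LangChain's text splitters, and the `chunk_size`/`chunk_overlap` values here are assumptions, not the project's defaults):

```python
# Illustrative chunking sketch: split text into overlapping
# character windows so no sentence is lost at a chunk boundary.
def chunk_text(text: str, chunk_size: int = 500, chunk_overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for embedding."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # advance by size minus overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

chunks = chunk_text("a" * 1200, chunk_size=500, chunk_overlap=50)
print(len(chunks))  # → 3
```

Each chunk is then embedded into a vector, and a query retrieves the chunks whose vectors are most similar to it.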
Ideal for:
- Researchers 🎓
- Developers 👨‍💻
- Students 📚
- Analysts 📊
- Privacy-conscious users 🔐
Before you begin, ensure you have:
- Python 3.9+
- pip – Python's package installer
- git – for cloning the repository
- Ollama – installed and running locally
💡 Make sure Ollama is running! Start it and pull a model like:
ollama pull deepseek-r1:1.5b
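If the app can't reach the model, a quick way to confirm the Ollama server is actually up is to probe its API (per Ollama's documentation, the server listens on port 11434 by default; this helper is an illustration, not part of the project):

```python
# Health check for a local Ollama server (default port 11434).
# Returns False instead of raising if the server is not running.
import urllib.request
import urllib.error

def ollama_running(url: str = "http://localhost:11434") -> bool:
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(ollama_running())
```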
- Clone the repository:
  ❯ git clone https://github.com/ashishkumar0724/simple_rag_chat
- Navigate to the project directory:
  ❯ cd simple_rag_chat
- Install the dependencies (using pip):
  ❯ pip install -r requirements.txt
- Run the Streamlit app:
  ❯ streamlit run main.py
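Coherent follow-up questions work because each turn of the conversation is stored and replayed as context for the next query. A minimal, framework-free sketch of that history handling (illustrative only; the app itself keeps history in Streamlit session state and passes it through its LangChain pipeline, and the class name here is hypothetical):

```python
# Minimal sketch of conversation-history handling: each turn is
# appended so follow-up questions carry the earlier context.
class ChatHistory:
    def __init__(self) -> None:
        self.turns: list[dict[str, str]] = []

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def as_prompt(self) -> str:
        # Flatten history into a prefix prepended to the next query.
        return "\n".join(f"{t['role']}: {t['content']}" for t in self.turns)

history = ChatHistory()
history.add("user", "What is RAG?")
history.add("assistant", "Retrieval-augmented generation.")
print(history.as_prompt())
```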
Maintained by Ashish Kumar