A question-answering chatbot that provides answers based on internal documents.
- Create a virtual environment:

  ```shell
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```
- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Set up environment variables:
  - Copy `.env.example` to `.env`
  - Update the values in `.env` with your configuration
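A minimal `.env` covering the variables this project reads might look like the following; the values shown are illustrative placeholders, not defaults shipped with the repo:

```shell
OPENAI_API_KEY=sk-...
OPENAI_MODEL_NAME=gpt-4.1-mini
APP_ENV=development
DEBUG=True
CHROMA_PERSIST_DIRECTORY=data/vector_store
API_HOST=0.0.0.0
API_PORT=8000
```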
- Run tests:

  ```shell
  pytest tests/
  ```
- Start the servers:

  ```shell
  bash scripts/run_api.sh   # Start the API
  bash scripts/run_app.sh   # Start the unified web interface (chat + admin)
  ```
Project structure:

```
qna_bot/
├── data/
│   ├── raw/           # Place source documents here
│   └── vector_store/  # Vector database storage
├── scripts/
│   └── ingest_data.sh # Data ingestion script
├── src/
│   ├── app/           # FastAPI application
│   ├── core/          # Core functionality
│   ├── ingestion/     # Document processing
│   └── ui/            # Streamlit interface (unified chat & admin)
└── tests/             # Test files
```
- The unified Streamlit web interface provides both the chat (Q&A) and document management (admin) features.
- Use the sidebar navigation to switch between Chat, Dashboard, Upload Documents, Document List, and Settings.
- All document upload, listing, and management features are now accessible from the same interface as the chat.
The project uses:
- FastAPI for the backend API
- Streamlit for the web interface
- LangChain for the RAG implementation
- ChromaDB for vector storage
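To illustrate the retrieval step of the RAG pipeline, here is a toy, dependency-free sketch that ranks documents by keyword overlap. The real project uses ChromaDB embeddings via LangChain; this stand-in only conveys the idea of "retrieve the most relevant chunks before generating an answer":

```python
# Toy stand-in for vector retrieval: rank documents by keyword overlap
# with the query. The actual project uses embedding similarity (ChromaDB).
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a text."""
    return Counter(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most tokens with the query."""
    q = tokenize(query)
    scored = sorted(docs, key=lambda d: -sum((tokenize(d) & q).values()))
    return scored[:k]

docs = [
    "The API listens on the host and port set in the environment.",
    "Documents placed in data/raw are ingested into the vector store.",
    "Streamlit provides the chat interface.",
]
print(retrieve("where are documents ingested", docs, k=1))
```

The retrieved chunks would then be passed to the LLM as context alongside the user's question.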
Run the test suite:

```shell
pytest tests/
```

Required environment variables:

- `OPENAI_API_KEY`: Your OpenAI API key
- `OPENAI_MODEL_NAME`: Model to use (gpt-4.1-mini or gpt-4.1-nano)
- `APP_ENV`: Environment (development, testing, production)
- `DEBUG`: Debug mode (True/False)
- `CHROMA_PERSIST_DIRECTORY`: Vector store directory
- `API_HOST`: API host address
- `API_PORT`: API port number
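A simple startup check can catch missing configuration early. This helper is hypothetical (not part of the project source) but uses only the variable names listed above:

```python
import os

# Required variables from the list above; check_env itself is a
# hypothetical helper, not part of the project source.
REQUIRED = [
    "OPENAI_API_KEY", "OPENAI_MODEL_NAME", "APP_ENV", "DEBUG",
    "CHROMA_PERSIST_DIRECTORY", "API_HOST", "API_PORT",
]

def check_env() -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not os.environ.get(name)]

if __name__ == "__main__":
    missing = check_env()
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```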