This is a fullstack document management and Q&A system powered by Retrieval-Augmented Generation (RAG). Users upload `.pdf` or `.txt` files; the system converts them into vector embeddings with OpenAI, stores them in a FAISS index, and answers user questions with LLM-generated responses grounded in the retrieved documents.
- Backend: FastAPI (Python), SQLAlchemy
- Embeddings + RAG: OpenAI + LangChain + FAISS
- Frontend: React (with Axios and React Router)
- Database: PostgreSQL
- Authentication: JWT-based (Login + Protected Routes)
- Containerization: Docker, Docker Compose
```
fullstack-rag-app/
├── backend/                # FastAPI backend
│   ├── routers/            # API routers: auth, documents, qa
│   ├── services/           # Document ingestion and QA engine
│   ├── db/                 # Models and DB init
│   ├── core/               # Pydantic settings config
│   └── main.py             # Entry point
├── frontend/               # React frontend
│   ├── pages/              # Login, Dashboard, QA
│   ├── components/         # Navbar, Uploader
│   └── App.js / index.js
├── docker-compose.yml      # Docker multi-service orchestration
├── .env                    # Environment variables
└── README.md
```
Create a .env file in the root with:
```
DATABASE_URL=postgresql+asyncpg://postgres:postgres@db:5432/ragdb
JWT_SECRET_KEY=your-secret-key
JWT_ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=60
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxx
```
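The backend's `core/` module loads these values via Pydantic settings. As an illustration of what that loading amounts to, here is a dependency-free sketch of a `.env` parser; the `load_env` helper and the `sample.env` file name are invented for this example:

```python
def load_env(path):
    """Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments skipped."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Write a small sample file, then load it back.
with open("sample.env", "w") as fh:
    fh.write("# comment\nJWT_ALGORITHM=HS256\nACCESS_TOKEN_EXPIRE_MINUTES=60\n")

cfg = load_env("sample.env")
print(cfg["JWT_ALGORITHM"])                # HS256
print(cfg["ACCESS_TOKEN_EXPIRE_MINUTES"])  # 60
```

In the real app, Pydantic additionally validates types (e.g. coercing `ACCESS_TOKEN_EXPIRE_MINUTES` to an `int`), which a plain parser like this does not.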
Step-by-step setup:

1. Clone the project:

   ```bash
   git clone https://github.com/your-org/fullstack-rag-app.git
   cd fullstack-rag-app
   ```

2. Run the application:

   ```bash
   docker-compose up --build
   ```

3. Visit the following:
- Frontend UI: http://localhost:3000
- FastAPI docs: http://localhost:8000/docs
- JWT-based login and authentication
- Secure document upload (.pdf and .txt)
- Embedding generation using OpenAI
- FAISS vector index for semantic search
- Question answering via LangChain + LLM
- React-based UI with login, upload, and Q&A panels
- User logs in via `/auth/login`
- Backend issues a JWT token
- Token is stored in browser `localStorage`
- Protected routes (upload, QA) validate the token via middleware
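The token-handling code itself isn't shown in this README, but the settings above (HS256, expiry in minutes) imply a flow like the stdlib-only sketch below. A real FastAPI backend would normally use a library such as python-jose or PyJWT rather than hand-rolling this; `SECRET`, `issue_token`, and `verify_token` are names invented for the example:

```python
import base64, hashlib, hmac, json, time

SECRET = "your-secret-key"  # stands in for JWT_SECRET_KEY from .env

def _b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(username, expires_minutes=60):
    """Build an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(
        {"sub": username, "exp": int(time.time()) + expires_minutes * 60}
    ).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(SECRET.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token):
    """Return the claims if the signature and expiry check out, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(SECRET.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():
        return None
    return claims

token = issue_token("alice")
print(verify_token(token)["sub"])  # alice
print(verify_token(token + "x"))   # None (tampered signature)
```

The middleware on protected routes does the equivalent of `verify_token` on the `Authorization: Bearer <token>` header and rejects the request when it returns `None`.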
- User uploads a `.pdf` or `.txt` file
- File is saved and parsed
- Embeddings are generated using `OpenAIEmbeddings`
- Embeddings are stored in a local `FAISS` index
- Metadata is saved in PostgreSQL
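Between parsing and embedding, the ingestor has to split the document text into chunks. The repository's exact splitter isn't shown in this README; the sketch below (with an invented `chunk_text` helper) illustrates the overlapping-window idea that splitters such as LangChain's `CharacterTextSplitter` implement:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character windows; the overlap keeps
    sentences that straddle a boundary retrievable from either chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

doc = "word " * 100  # stand-in for parsed .pdf/.txt content (500 chars)
chunks = chunk_text(doc, chunk_size=120, overlap=20)
print(len(chunks))                        # 5
print(chunks[0][100:] == chunks[1][:20])  # True: consecutive chunks overlap
```

Each chunk is then embedded individually, so chunk size trades off retrieval granularity against the number of embedding calls.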
- User types a question
- Backend loads FAISS index and retrieves top-k documents
- LLM (OpenAI) is invoked via `load_qa_chain`
- Answer is returned based on the retrieved context
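The "retrieves top-k documents" step boils down to nearest-neighbor search over embedding vectors. As a toy stand-in for the FAISS lookup (the 3-d vectors and chunk texts below are invented examples, not real OpenAI embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=2):
    """Rank stored (chunk, vector) pairs by similarity to the query,
    mimicking what a FAISS similarity search returns."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# Toy "index": text chunks with pretend embeddings
index = [
    ("Invoices are due in 30 days.", [0.9, 0.1, 0.0]),
    ("The cafeteria opens at 8am.",  [0.0, 0.2, 0.9]),
    ("Late invoices incur a fee.",   [0.8, 0.3, 0.1]),
]
query = [1.0, 0.2, 0.0]  # pretend embedding of "When are invoices due?"
print(top_k(query, index, k=2))
```

The retrieved chunks are what `load_qa_chain` stuffs into the LLM prompt as context, so retrieval quality directly bounds answer quality.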
To run backend unit tests:
```bash
cd backend
pytest tests/
```
To debug FAISS or OpenAI failures, logs are printed inside:

- `services/qa_engine.py`
- `services/doc_ingestor.py`
- Username: `postgres`
- Password: `postgres`
- DB Name: `ragdb`
- Port: `5432`
A Docker volume named `postgres_data` persists database data across container restarts.
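The compose file itself isn't reproduced in this README. A `db` service matching the credentials and volume described in this section might look like the following sketch (the values come from this section; the rest is a typical Postgres service definition, not necessarily the project's exact one):

```yaml
db:
  image: postgres:15
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    POSTGRES_DB: ragdb
  ports:
    - "5432:5432"
  volumes:
    - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
```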
- `backend`: FastAPI with volume mount and `.env` support
- `frontend`: React served via `npm start`
- `db`: Postgres 15 with an initialized volume
| Endpoint | Method | Description |
|---|---|---|
| `/auth/login` | POST | Login with username/password |
| `/documents/upload` | POST | Upload document (JWT required) |
| `/qa/` | POST | Ask question (JWT required) |
| `/auth/test-token` | GET | Test JWT token validity |
A fullstack application that implements Retrieval-Augmented Generation (RAG) for document processing and question answering.
- FastAPI
- PostgreSQL
- SQLAlchemy
- LangChain
- OpenAI
- FAISS
- Alembic
- Pytest
- React
- TypeScript
- Vite
- React Router
- Axios
- TailwindCSS
- Document processing and embedding generation
- Vector similarity search
- Question answering with context
- Document summarization
- User authentication
- Secure API endpoints
- Modern UI/UX
- Python 3.9+
- Node.js 18+
- Docker and Docker Compose
- OpenAI API key
Create `.env` files in both the backend and frontend directories.

Backend `.env`:

```
DATABASE_URL=postgresql+asyncpg://postgres:postgres@localhost:5432/rag_db
OPENAI_API_KEY=your_openai_api_key
EMBEDDING_MODEL=text-embedding-ada-002
LLM_MODEL=gpt-3.5-turbo
JWT_SECRET=your_jwt_secret
JWT_ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
```

Frontend `.env`:

```
VITE_API_URL=http://localhost:8000
VITE_APP_TITLE=RAG Application
```

1. Clone the repository:
   ```bash
   git clone https://github.com/yourusername/fullstack-rag-app.git
   cd fullstack-rag-app
   ```

2. Start the backend:
   ```bash
   cd backend
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   alembic upgrade head
   uvicorn main:app --reload
   ```

3. Start the frontend:

   ```bash
   cd frontend
   npm install
   npm run dev
   ```

4. Run tests:
   ```bash
   # Backend tests
   cd backend
   pytest

   # Frontend tests
   cd frontend
   npm test
   ```

Docker setup:

1. Build and start the containers:

   ```bash
   docker-compose up --build
   ```

2. Run migrations:

   ```bash
   docker-compose exec backend alembic upgrade head
   ```

Once the backend is running, visit:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
```
fullstack-rag-app/
├── backend/
│   ├── alembic/
│   ├── api/
│   │   └── v1/
│   ├── core/
│   ├── db/
│   │   ├── models/
│   │   └── repositories/
│   ├── services/
│   └── tests/
├── frontend/
│   ├── src/
│   │   ├── components/
│   │   ├── hooks/
│   │   ├── services/
│   │   └── types/
│   └── tests/
└── docker-compose.yml
```
- Follow PEP 8 style guide
- Write tests for new features
- Update API documentation
- Use type hints
- Handle errors properly
- Follow TypeScript best practices
- Write component tests
- Use proper state management
- Follow React best practices
- Implement proper error handling
1. Build the frontend:

   ```bash
   cd frontend
   npm run build
   ```

2. Set up production environment variables

3. Deploy using Docker:

   ```bash
   docker-compose -f docker-compose.prod.yml up -d
   ```

To contribute:

- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request