A RAG-powered technical interview prep assistant. Ask questions about system design, databases, DSA, and AI — get precise answers with citations, suggested follow-ups, and conversational context.
Invite-only. Sign in with Google.
- RAG-powered answers — hybrid search (BM25 + semantic) over a curated knowledge base; returns structured answers with source citations
- Adaptive responses — classifies the question type (concept, specific aspect, deeper reasoning, system design) and adjusts response depth in a single LLM call
- Cited sources — every answer includes citation chips linking back to the source document and section
- Follow-up suggestions — each response includes 2–3 suggested follow-up questions rendered as clickable pills
- Conversation continuity — last 10 messages passed as context to the LLM for multi-turn conversations
- Context limit enforcement — conversation is capped; banner prompts the user to clear and continue
- Message feedback — thumbs up/down on each AI response; ratings sent to Arize Phoenix for analysis
- Prompt injection defense — input blocklist before LLM call; output guardrail strips hallucinated citations
- Spend tracking — every OpenAI API call is logged with its cost; a daily email alert fires once, at the moment cumulative spend crosses the threshold
- Rate limiting — per-user limit on chat, per-IP limit on auth initiation
- Observability — full RAG trace (query → retrieval → prompt → response) in self-hosted Arize Phoenix
- Session management — single active session per user; Clear Chat deactivates and creates a new one; messages retained in DB
- Invite-only access — Google OAuth with an allowlist; unauthorized attempts are logged
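The hybrid search feature blends keyword (BM25) and vector-similarity rankings. The actual retrieval is delegated to Weaviate, which performs this fusion internally; the sketch below only illustrates the idea of relative-score fusion, and the function name and `alpha` default are illustrative rather than the app's real code.

```python
def hybrid_scores(bm25, semantic, alpha=0.5):
    """Blend min-max-normalized BM25 and vector scores per document.

    bm25, semantic: dicts mapping doc id -> raw score from each retriever.
    alpha: weight of the semantic component (0 = pure BM25, 1 = pure vector).
    Returns doc ids sorted by fused score, best first.
    """
    def normalize(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero on uniform scores
        return {doc: (s - lo) / span for doc, s in scores.items()}

    nb, ns = normalize(bm25), normalize(semantic)
    docs = set(nb) | set(ns)
    fused = {d: (1 - alpha) * nb.get(d, 0.0) + alpha * ns.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

A document that ranks well in both retrievers (like `"b"` below, strong in semantic and mid-pack in BM25) can beat one that tops only a single list — which is the point of hybrid retrieval.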
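The spend alert's fire-once behavior amounts to detecting the crossing point of the daily cumulative total. A minimal sketch of that logic follows; the class and method names are illustrative, not the app's actual implementation.

```python
class DailySpendAlert:
    """Fire a single alert per day: the first time cumulative spend
    crosses the threshold. Illustrative sketch only."""

    def __init__(self, threshold_usd: float):
        self.threshold = threshold_usd
        self.total = 0.0
        self.alerted = False

    def record(self, cost_usd: float) -> bool:
        """Log one API call's cost; return True iff this call crosses
        the threshold for the first time today (i.e. send the email now)."""
        self.total += cost_usd
        if not self.alerted and self.total >= self.threshold:
            self.alerted = True
            return True
        return False

    def reset_day(self) -> None:
        """Called at midnight to start a fresh daily window."""
        self.total = 0.0
        self.alerted = False
```

With a $1.00 threshold, three calls costing $0.40 each trigger exactly one alert, on the third call; subsequent calls that day stay silent.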
| Concern | Choice |
|---|---|
| Backend | FastAPI |
| Frontend | Alpine.js + Bootstrap 5 |
| Auth | Google OAuth 2.0 + JWT (HttpOnly cookie) |
| LLM | OpenAI GPT-4o |
| Embeddings | OpenAI text-embedding-3-small |
| Vector DB | Weaviate (self-hosted) |
| Relational DB | PostgreSQL + SQLAlchemy + Alembic |
| Observability | Arize Phoenix (self-hosted) |
- Docker + Docker Compose
- Python 3.11+
- A Google OAuth 2.0 app (Client ID + Secret)
- An OpenAI API key
1. Clone and install dependencies

   ```bash
   git clone https://github.com/<your-username>/prepwise.git
   cd prepwise
   python -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   ```

2. Configure environment

   ```bash
   cp .env.example .env.local
   ```

   Fill in all values in .env.local. See .env.example for the full list of required keys.

3. Run database migrations

   ```bash
   alembic upgrade head
   ```

4. Ingest knowledge base documents

   ```bash
   python scripts/ingest.py
   ```

5. Start the app

   ```bash
   docker compose up -d
   ```

   Visit http://localhost:8000.
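Ingestion typically splits each Markdown document into overlapping chunks before embedding them into Weaviate. The sketch below shows one common splitting strategy; the function name, chunk size, and overlap are illustrative assumptions, not necessarily what scripts/ingest.py does.

```python
def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows for embedding.

    Overlap preserves context across chunk boundaries so a sentence cut
    in half is still fully present in one of the two adjacent chunks.
    Real splitters often also respect sentence or heading boundaries.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

A 2000-character document with these defaults yields three chunks (two full 800-character windows stepping by 700, plus a 600-character tail).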
Access is invite-only. Add a user's email to the allowed_users table before they can sign in.
Phoenix UI is available at http://localhost:6006. All OpenAI calls (embeddings + LLM) are traced automatically.
See .env.example for the full list. Key variables:
| Variable | Description |
|---|---|
| GOOGLE_CLIENT_ID | Google OAuth client ID |
| GOOGLE_CLIENT_SECRET | Google OAuth client secret |
| GOOGLE_REDIRECT_URI | OAuth callback URL |
| OPENAI_API_KEY | OpenAI API key |
| DATABASE_URL | PostgreSQL connection string |
| JWT_SECRET | Secret for signing JWT tokens |
| SPEND_ALERT_THRESHOLD | Daily spend limit in USD before email alert |
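Since config.py centralises environment reads, a variable like SPEND_ALERT_THRESHOLD is typically read once with a fallback. This is a hypothetical sketch of that pattern; the function name and the 5.0 default are assumptions, not values taken from the project.

```python
import os

def spend_alert_threshold(default: float = 5.0) -> float:
    """Read SPEND_ALERT_THRESHOLD (USD) from the environment.

    Falls back to `default` when unset; raises ValueError if the value
    is set but not parseable as a number, surfacing misconfiguration early.
    """
    raw = os.getenv("SPEND_ALERT_THRESHOLD")
    return float(raw) if raw is not None else default
```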
```
prepwise/
├── app.py         # FastAPI entry point
├── config.py      # Centralised config (reads from .env)
├── constants.py   # Business constants
├── auth/          # Google OAuth, JWT, allowlist
├── chat/          # Sessions, message history, context limits
├── rag/           # Retrieval, confidence check, LLM response
├── spend/         # Cost logging and spend alerts
├── feedback/      # Thumbs up/down on AI responses
├── documents/     # KB document listing
├── infra/         # PostgreSQL + Weaviate client setup
├── scripts/       # ingest.py — one-time KB ingestion
├── docs/          # Raw knowledge base documents (Markdown)
├── templates/     # landing.html, chat.html
└── static/        # Alpine.js, CSS, images
```