An AI-powered mock interview app that reads your resume and asks you personalized technical questions based on your actual projects, skills, and experience — just like a real recruiter would.
- Upload your resume (PDF)
- The app parses it into sections (Skills, Projects, Experience, Education, etc.)
- Chunks are embedded and stored in an in-memory vector database (ChromaDB)
- A Groq-powered LLM generates 8 targeted interview questions referencing your specific work
- For each question, it generates a follow-up based on your answer
- Questions vary between sessions — different interviewer personas and shuffled context mean no two sessions are identical
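The section-based parsing step above can be sketched roughly like this. The header list and the `chunk_by_section` helper are illustrative assumptions, not the app's actual parser:

```python
import re

# Assumed set of section headers; the real parser may detect more.
SECTION_HEADERS = ["Skills", "Projects", "Experience", "Education"]

def chunk_by_section(text: str) -> dict[str, str]:
    """Return {section_name: section_body} for each detected header line."""
    pattern = r"^(%s)\s*$" % "|".join(SECTION_HEADERS)
    parts = re.split(pattern, text, flags=re.MULTILINE)
    # re.split yields [preamble, header1, body1, header2, body2, ...]
    return {h: body.strip() for h, body in zip(parts[1::2], parts[2::2])}

resume = "Skills\nPython, SQL\nProjects\nBuilt a RAG app\n"
print(chunk_by_section(resume))
# → {'Skills': 'Python, SQL', 'Projects': 'Built a RAG app'}
```

Each section then becomes one chunk handed to the embedder, so a question about "Projects" retrieves only project text.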
| Layer | Tool |
|---|---|
| UI | Streamlit |
| LLM | Groq (llama-3.3-70b-versatile) |
| Vector DB | ChromaDB (in-memory) |
| PDF Parsing | pdfplumber |
| Embeddings | ChromaDB default (MiniLM) |
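The question-generation call might look roughly like the sketch below, assuming the official `groq` Python SDK and the model from the table. `build_question_prompt`, the persona list, and the prompt wording are illustrative assumptions, not the app's actual code:

```python
import os
import random

# Hypothetical persona list; the app's actual styles may differ.
PERSONAS = ["startup CTO", "staff engineer", "hiring manager"]

def build_question_prompt(chunks: list[str], n_questions: int = 8) -> str:
    persona = random.choice(PERSONAS)   # randomized interviewer style
    random.shuffle(chunks)              # shuffled context between sessions
    context = "\n\n".join(chunks)
    return (
        f"You are a {persona} running a technical interview.\n"
        f"Using only the resume excerpts below, write {n_questions} questions "
        f"that reference the candidate's specific work.\n\n{context}"
    )

def generate_questions(chunks: list[str]) -> str:
    from groq import Groq               # pip install groq
    client = Groq(api_key=os.environ["RAG_API"])
    resp = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": build_question_prompt(chunks)}],
    )
    return resp.choices[0].message.content
```

The persona choice and chunk shuffle are what make two sessions on the same resume produce different questions.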
```bash
git clone https://github.com/FaresIbrahim32/smart_interview.git
cd smart_interview
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
pip install -r requirements.txt
```

Create a `.env` file in the project root:

```
RAG_API=your_groq_api_key_here
```
Get a free API key at console.groq.com.
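A minimal sketch of reading the key at startup; `python-dotenv` and the `require_api_key` helper are assumptions based on the `.env` instructions above, not necessarily how the app loads it:

```python
import os

try:
    from dotenv import load_dotenv  # pip install python-dotenv
    load_dotenv()                   # copies RAG_API from .env into os.environ
except ImportError:
    pass

def require_api_key() -> str:
    """Fail fast with a clear message if the Groq key is missing."""
    key = os.getenv("RAG_API")
    if not key:
        raise SystemExit("RAG_API is not set; add it to .env or the environment")
    return key
```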
```bash
streamlit run app.py
```

- Section-aware chunking — resume is split by detected sections so questions are targeted
- Randomized personas — each session picks a different interviewer style (startup CTO, staff engineer, etc.)
- Follow-up questions — after each answer, a contextual follow-up is generated
- Progress tracking — sidebar shows all questions with completion status
- No data stored — everything is in-memory, cleared when you close the session
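The follow-up feature can be sketched as a second prompt built from the question and the candidate's answer, then sent through the same chat-completion call shown earlier. `build_followup_prompt` and its wording are hypothetical:

```python
def build_followup_prompt(question: str, answer: str) -> str:
    """Ask the model for one contextual follow-up to the candidate's answer."""
    return (
        "You are mid-interview. The candidate was asked:\n"
        f"{question}\n\nThey answered:\n{answer}\n\n"
        "Ask one short follow-up question that digs into a detail of the answer."
    )
```

Because the follow-up is conditioned on the answer text, a vague answer tends to draw a probing follow-up while a detailed one draws a deeper technical question.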
```
smart_interview/
├── app.py             # Streamlit UI and interview loop
├── requirements.txt
├── rag/
│   ├── parser.py      # PDF extraction and section-based chunking
│   ├── vectorstore.py # ChromaDB setup and querying
│   └── interviewer.py # Groq LLM calls for question and follow-up generation
```