An AI interview system that does more than generate one-off reports.
It turns each interview into a long-term candidate growth graph.
Most AI interview tools stop at "generate a report."
This project focuses on continuous candidate intelligence:
- persistent candidate profiles
- session-over-session issue tracking
- incremental learning board updates
- visual analytics dashboard
- Explicit state-machine interview flow: intro -> technical -> personality -> report.
- Candidate library keyed by `name + email` for cross-session continuity.
- Dashboard visualization: score trend, issue mix, board status, experience heat.
- Automatic post-session review: repeated issues, new issues, fixed issues.
- Profile backfill from raw Q/A: progressively refines the independent library profile.
- Split storage architecture: decomposes the heavy `candidate.json` into maintainable files.
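The explicit state-machine flow listed above (intro -> technical -> personality -> report) can be sketched as a linear transition table. Names here are illustrative; the actual contract lives in STATE_MACHINE.md:

```python
from enum import Enum

class Stage(Enum):
    INTRO = "intro"
    TECHNICAL = "technical"
    PERSONALITY = "personality"
    REPORT = "report"

# Linear flow: each stage advances to exactly one successor.
TRANSITIONS = {
    Stage.INTRO: Stage.TECHNICAL,
    Stage.TECHNICAL: Stage.PERSONALITY,
    Stage.PERSONALITY: Stage.REPORT,
}

def advance(stage: Stage) -> Stage:
    """Move to the next interview stage; REPORT is terminal."""
    if stage not in TRANSITIONS:
        raise ValueError(f"{stage} is terminal")
    return TRANSITIONS[stage]
```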
- Create or link a candidate
- Run interview session (text-first, voice optional)
- Generate structured report
- Auto-merge analysis + update learning board
- Refresh independent candidate library profile
- Track progress in dashboard
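The "create or link a candidate" step relies on the `name + email` key mentioned in the feature list. One possible scheme (the helper name and hashing choice are illustrative, not the project's actual implementation) is a normalized, hashed lookup key:

```python
import hashlib

def candidate_key(name: str, email: str) -> str:
    """Derive a stable cross-session key from normalized name + email (illustrative scheme)."""
    normalized = f"{name.strip().lower()}|{email.strip().lower()}"
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]
```

Normalizing before hashing means the same candidate links to the same profile even if casing or whitespace differs between sessions.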
Screenshots are presented in the order of the usage flow.
| Report Head | Personal Assessment | Potential & Experience Evaluation |
|---|---|---|
| ![]() | ![]() | ![]() |
Tip: all images are clickable for full-size preview on GitHub.
```bash
git clone https://github.com/GoDiao/ai-interview-agent.git
cd ai-interview-agent
python -m venv .venv

# Windows
.venv\Scripts\activate
# macOS / Linux
# source .venv/bin/activate

pip install -r requirements.txt
cp .env.example .env   # Windows: copy .env.example .env
python run.py
```

Open: http://127.0.0.1:8765
```
POST /api/candidates/{candidate_id}/library_profile/rebuild
```

Parameters:
- `session_id`: optional, backfill from one specific session
- `all_sessions=true`: replay all completed sessions sequentially
- `clear_existing=true`: optional reset before a full rebuild
Per-candidate directory: data/candidates/{candidate_id}/
- `candidate.meta.json`
- `profile.snapshot.json`
- `profile.library.json`
- `analysis.store.json`
- `learning.board.json`
- `advice.registry.json`
- `candidate.json` (legacy compatibility)
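A hypothetical loader illustrating the split-storage idea: merge the per-candidate files listed above into one dict, falling back to the legacy `candidate.json` when no split files exist (helper name and merge shape are assumptions, not the project's actual code):

```python
import json
from pathlib import Path

# Split-storage file names from the per-candidate directory layout above.
SPLIT_FILES = [
    "candidate.meta.json",
    "profile.snapshot.json",
    "profile.library.json",
    "analysis.store.json",
    "learning.board.json",
    "advice.registry.json",
]

def load_candidate(candidate_dir: Path) -> dict:
    """Merge the split files into one dict, falling back to legacy candidate.json."""
    merged = {}
    found = False
    for name in SPLIT_FILES:
        path = candidate_dir / name
        if path.exists():
            merged[name.removesuffix(".json")] = json.loads(path.read_text(encoding="utf-8"))
            found = True
    if not found:
        legacy = candidate_dir / "candidate.json"
        if legacy.exists():
            merged = json.loads(legacy.read_text(encoding="utf-8"))
    return merged
```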
Migration:
```bash
python scripts/migrate_candidate_split_storage.py
```

Related .env flags:

```
CANDIDATE_SPLIT_STORAGE_ENABLED=true
CANDIDATE_LEGACY_MIRROR_WRITE=false
```
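The .env flags above are booleans; a small parsing helper shows how such flags are typically read (the helper is illustrative, only the flag names come from this README):

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Parse a boolean .env-style flag; accepts 1/true/yes (case-insensitive)."""
    return os.getenv(name, str(default)).strip().lower() in {"1", "true", "yes"}

# Flag names from this README; defaults here are assumptions.
split_enabled = env_flag("CANDIDATE_SPLIT_STORAGE_ENABLED", default=True)
legacy_mirror = env_flag("CANDIDATE_LEGACY_MIRROR_WRITE", default=False)
```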
- STT: `sensevoice_http` / `fake` / `local_whisper` / iFlytek
- TTS: `browser` (default) / `kokoro`
See .env.example and docs/SENSEVOICE.md.
- Backend: FastAPI, Pydantic, async orchestration
- Frontend: HTML/CSS/JS + Chart.js
- Storage: JSON-based local persistence with split-candidate schema
- LLM: OpenAI-compatible APIs
| File | Description |
|---|---|
| SPEC.md | Product scope and API |
| STATE_MACHINE.md | State machine contract |
| RESEARCH.md | Technical notes |
| docs/SENSEVOICE.md | SenseVoice integration |
- Core interview flow
- Candidate library + dashboard
- Auto review + incremental learning board
- Session-by-session profile backfill
- Split storage migration
- Multi-role interview templates and weighted scoring
- Team collaboration view for multiple interviewers
If this project is useful, feel free to star it and open issues/PRs.