GitHub description: LangGraph-powered AI interviewer agent – generates role-specific questions, evaluates answers, decides follow-ups, and streams responses via FastAPI SSE.
Python microservice that implements the AI interviewer brain. It exposes two FastAPI endpoints consumed by interview-levelup-backend and drives the conversation through a LangGraph stateful graph.
| Layer | Tech |
|---|---|
| Language | Python 3.11+ |
| API | FastAPI + Uvicorn |
| AI orchestration | LangGraph + LangChain Core |
| LLM | OpenAI-compatible (configurable base URL) |
| State model | Pydantic v2 |
- Role-agnostic – works for any job role; the LLM infers domain, language, and appropriate question style from the role name
- Adaptive difficulty – questions get progressively harder each round
- Answer evaluation – scores answers 0–100 with a structured detail breakdown
- Follow-up logic – if a score is below threshold, the agent asks one targeted follow-up instead of moving on
- Candidate sub-questions – detects when the candidate redirects a question back at the interviewer and handles it gracefully
- Abort detection – terminates the session if hostile or off-topic behaviour is detected
- Final report – generates a structured debrief after all rounds complete
- Language detection – responds in the same language the role or candidate implies (Chinese, Japanese, English, etc.)
- Token streaming – `/chat/stream` emits SSE tokens as the LLM generates them
```
            ┌─────────────┐
            │ route_entry │  (start / answer path)
            └──────┬──────┘
         ┌─────────┴──────────┐
         ▼                    ▼
 generate_question        check_sub
         │                    │
        END      ┌────────────┼──────────────┐
                 ▼            ▼              ▼
            handle_sub   (user_end)   evaluate_answer
                 │            │              │
                END           │      decide_next_step
                              │    ┌──────┬──┴─────┬───────────┐
                              │    ▼      ▼        ▼           ▼
                              │  (next)  (fu)  (finished)  (aborted)
                              │    │      │        │           │
                              │    ▼      ▼        └─────┬─────┘
                              │  gen_q  gen_fu           ▼
                              │    │      │       generate_report
                              │   END    END             │
                              └──────────────────────────END
```
| Node | Responsibility |
|---|---|
| `generate_question` | Produce the next interview question (avoids repeating covered topics) |
| `check_sub` | Classify the reply as: sub-question (`SUB`), voluntary exit (`END`), or answer (`ANSWER`) |
| `handle_sub` | Answer the candidate's sub-question and return the turn to them |
| `evaluate_answer` | Score the answer 0–100 and produce evaluation detail |
| `decide_next_step` | Two-layer abort check (immediate hostile + cumulative), then route to follow-up / next question / report |
| `generate_followup` | Ask a targeted follow-up for a weak answer |
| `generate_report` | Produce the final structured debrief (triggered by: all rounds done, user exit, or abort) |
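The routing performed by `decide_next_step` can be sketched as a plain function returning one of the edge labels from the graph. This is an illustration only: the score threshold, the single-follow-up limit, and the state field names for the abort checks are assumptions, not the service's actual constants.

```python
def decide_next_step(state: dict) -> str:
    """Route to the next graph node after an answer is evaluated.

    Returns one of the edge labels: "aborted", "finished",
    "fu" (follow-up), or "next". Threshold (60), strike limit (3),
    and the "hostile"/"abort_strikes" fields are illustrative assumptions.
    """
    # Two-layer abort check: immediate hostility, or accumulated strikes.
    if state.get("hostile") or state.get("abort_strikes", 0) >= 3:
        return "aborted"
    # All rounds completed -> generate the final report.
    if state["current_round"] >= state["max_rounds"]:
        return "finished"
    # Weak answer with no follow-up asked yet -> one targeted follow-up.
    if state["evaluation_score"] < 60 and state["followup_count"] == 0:
        return "fu"
    # Otherwise move on to the next question.
    return "next"
```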
Blocking endpoint. Sends one complete LLM response.
Request
```json
{
  "role": "product manager",
  "level": "junior",
  "style": "standard",
  "max_rounds": 5,
  "current_round": 0,
  "current_question": null,
  "answer": null,
  "interview_history": []
}
```

Set `current_question` and `answer` to `null` to start a new session (generates the first question).
Populate both to submit an answer and receive the next step.
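A client therefore loops between the two request shapes, carrying fields from each response into the next request. The helper below is a hypothetical sketch, not part of the service; in particular the shape of the `interview_history` entries is an assumption.

```python
def build_next_request(prev_request: dict, response: dict, answer: str) -> dict:
    """Build the next request from the previous request/response pair.

    Carries the question and round counter forward and attaches the
    candidate's answer. The history entry shape is an assumption.
    """
    next_req = dict(prev_request)
    next_req["current_question"] = response["question"]
    next_req["current_round"] = response["current_round"]
    next_req["answer"] = answer
    next_req["interview_history"] = prev_request["interview_history"] + [
        {"question": response["question"], "answer": answer}
    ]
    return next_req

# Start-of-session request, as in the JSON example above.
start = {
    "role": "product manager", "level": "junior", "style": "standard",
    "max_rounds": 5, "current_round": 0, "current_question": None,
    "answer": None, "interview_history": [],
}
resp = {"question": "Tell me about...", "current_round": 1}
nxt = build_next_request(start, resp, "I prioritized by impact.")
```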
Response
```json
{
  "question": "Tell me about a time you handled competing stakeholder priorities.",
  "evaluation_score": null,
  "evaluation_detail": null,
  "finished": false,
  "is_followup": false,
  "is_sub": false,
  "current_round": 1,
  "followup_count": 0,
  "report": null
}
```

Same contract, but streams the interviewer's question token-by-token via SSE.
```
data: {"type": "token", "content": "Tell"}
data: {"type": "token", "content": " me"}
...
data: {"type": "done", "question": "...", "finished": false, ...}
```
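Consuming the stream amounts to parsing each `data:` line as JSON and concatenating `token` events until the `done` event arrives. A minimal parser, standard library only (a real client would iterate over lines of the HTTP response rather than a list):

```python
import json

def collect_sse_tokens(lines):
    """Accumulate token events from SSE lines until the 'done' event.

    Returns (streamed_text, done_event). Non-data lines (comments,
    keep-alives, blanks) are skipped; done_event is None if the stream
    ends without a 'done' event.
    """
    tokens = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        event = json.loads(line[len("data: "):])
        if event["type"] == "token":
            tokens.append(event["content"])
        elif event["type"] == "done":
            return "".join(tokens), event
    return "".join(tokens), None

text, done = collect_sse_tokens([
    'data: {"type": "token", "content": "Tell"}',
    'data: {"type": "token", "content": " me"}',
    'data: {"type": "done", "question": "Tell me", "finished": false}',
])
```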
```bash
cp .env.example .env
# Set: LLM_API_KEY, LLM_BASE_URL (defaults to OpenAI), LLM_MODEL
pip install -r requirements.txt
uvicorn main:app --reload --port 8000
```

```bash
# Export required variables (or put them in your shell profile)
export LLM_API_KEY=your_key
export LLM_BASE_URL=https://api.siliconflow.cn/v1
export LLM_MODEL=deepseek-ai/DeepSeek-V3.2

# Start (default external port 8000)
docker compose up --build -d

# Start with a custom external port
AGENT_PORT=9090 docker compose up --build -d

# Logs
docker compose logs -f

# Stop
docker compose down
```

| Variable | Default | Description |
|---|---|---|
| `LLM_API_KEY` | – | API key for the LLM provider |
| `LLM_BASE_URL` | `https://api.openai.com/v1` | OpenAI-compatible base URL |
| `LLM_MODEL` | `gpt-4o` | Model name |
| `LOG_LEVEL` | `INFO` | Logging verbosity: DEBUG / INFO / WARNING / ERROR |
| `AGENT_PORT` | `8000` | Host port mapped to the container (Docker only) |