rackumar21/pm-interview-coach
PM Interview Coach

A CLI tool for practicing PM interviews — Product Sense and Analytical Thinking / Setting a Goal questions. Built around the Ben Erez interview frameworks.

Coach mode is now an AI agent with persistent memory: it reads your past sessions, identifies your weak sections, focuses coaching there, and tracks your improvement over time.

What it does

Three modes, two frameworks:

Coach mode (agentic) — The coach interviews you section by section, but now it knows you. Before the session starts, it pulls your history, identifies which sections you've been scoring below 3/5, and focuses extra pressure there. At the end, it scores your sections, rewrites the weakest one, and saves the session so it can track your progress across weeks of practice.

Demo mode — Watch the coach answer a question as a strong PM candidate would. Every reasoning step narrated. Good for seeing what a 5/5 answer looks like before you practice yourself.

Score mode — Paste your full answer. The coach scores each section 1-5 using the rubric, identifies your weakest section, and rewrites it.

Frameworks covered

  • Product Sense (ps) — 7 sections: Assumptions, Plan Statement, Product Motivation, Ecosystem Players, Segmentation, Problem Identification, Solution Development
  • Analytical Thinking / Setting a Goal (at) — 7 sections + Tradeoff: Assumptions, Plan Statement, Product Rationale, Ecosystem Players, NSM (North Star Metric), Guardrails, Team Goals

Setup

pip install -r requirements.txt
export ANTHROPIC_API_KEY=your_key_here

Usage

Interactive menu:

python coach.py

With flags:

python coach.py --mode coach --type ps    # agentic coach, product sense
python coach.py --mode demo  --type at    # demo mode, analytical thinking
python coach.py --mode score --type ps    # score mode, product sense
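The flag surface above can be sketched with argparse. The flag names and values match this README, but the actual parsing in coach.py may differ — treat this as a hypothetical reconstruction:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of coach.py's CLI; the real script
    # may use different defaults or fall back to the interactive menu.
    parser = argparse.ArgumentParser(description="PM Interview Coach")
    parser.add_argument("--mode", choices=["coach", "demo", "score"],
                        help="coach (agentic), demo (watch), or score (paste an answer)")
    parser.add_argument("--type", choices=["ps", "at"], dest="framework",
                        help="ps = Product Sense, at = Analytical Thinking / Setting a Goal")
    return parser

args = build_parser().parse_args(["--mode", "coach", "--type", "ps"])
print(args.mode, args.framework)  # → coach ps
```

When both flags are omitted, `args.mode` is `None`, which is where the interactive menu would take over.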

How the agent works

In coach mode, the agent uses Claude's tool_use API to call five tools autonomously:

Tool                  When called    What it does
get_session_history   Session start  Reads past sessions for this framework from sessions.json
get_weak_areas        Session start  Finds sections averaging below 3.0 or never practiced
select_question       Session start  Builds a priority list of sections to emphasize
log_session           Session end    Saves scores for each section to sessions.json
get_progress_summary  Session end    Shows per-section improvement vs historical averages
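The get_weak_areas tool can be sketched as below. The 3.0 threshold and the "never practiced" rule come from the description above; the tool schema follows Anthropic's tool_use convention, but the session record shape and function signature are assumptions about this repo's internals:

```python
# Hypothetical sketch of the weak-areas tool: an Anthropic-style tool
# schema plus the scoring logic the agent would call.
GET_WEAK_AREAS_TOOL = {
    "name": "get_weak_areas",
    "description": "Find sections averaging below 3.0, or never practiced.",
    "input_schema": {
        "type": "object",
        "properties": {"framework": {"type": "string", "enum": ["ps", "at"]}},
        "required": ["framework"],
    },
}

PS_SECTIONS = ["Assumptions", "Plan Statement", "Product Motivation",
               "Ecosystem Players", "Segmentation",
               "Problem Identification", "Solution Development"]

def get_weak_areas(sessions: list[dict], sections: list[str]) -> list[str]:
    """Return sections averaging below 3.0 across past sessions,
    plus any section with no recorded scores at all."""
    totals: dict[str, list[int]] = {s: [] for s in sections}
    for session in sessions:
        for section, score in session.get("scores", {}).items():
            if section in totals:
                totals[section].append(score)
    return [s for s, scores in totals.items()
            if not scores or sum(scores) / len(scores) < 3.0]

history = [{"scores": {"Assumptions": 4, "Segmentation": 2}},
           {"scores": {"Assumptions": 5, "Segmentation": 3}}]
print(get_weak_areas(history, PS_SECTIONS))
```

Here Segmentation averages 2.5 and is flagged, Assumptions averages 4.5 and is not, and every never-practiced section is flagged — which is what pushes the coach toward your gaps.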

Session data is stored in sessions.json (gitignored). The file builds up as you practice. After 3-4 sessions, the coach has a meaningful picture of your strengths and gaps.

To end a session and trigger scoring + saving, say: "done", "finish", "end session", or "wrap up".

Example questions to practice

Product Sense:

  • Design a product for elderly people to manage medications
  • Build an AI agent for enterprise customer onboarding
  • Design an AI feature for a B2B SaaS company to reduce churn

Analytical Thinking / Setting a Goal:

  • Set a goal for Instagram Reels
  • Set a goal for a new AI customer support agent
  • Set a goal for an AI research assistant used by enterprise teams

Model

Uses Claude Opus 4.6 via the Anthropic API. Requires anthropic >= 0.40.0.


What I learned

  • Frameworks only work if you internalise them. Reading a framework and practicing it are completely different. Building a tool that forces you to go section by section — and pushes back when you skip steps — surfaced gaps that passive reading never would.
  • The hardest part of PM interviews is specificity. The coach mode revealed that most weak answers aren't wrong, they're just vague. "Improve engagement" vs "increase creator posts per week among users with fewer than 100 followers" — the difference is specificity, not insight.
  • System prompts are product specs. Writing the coach, demo, and score mode prompts was the same work as writing a PRD: define the behaviour, anticipate edge cases, test against real inputs, iterate. Prompt engineering is product work.
  • Streaming makes CLI tools feel alive. Switching to streamed API responses transformed the tool from "waiting for an answer" to "watching someone think." Small UX change, significant difference in feel.
  • Tool use is agent design, not just API wiring. Deciding when the agent should call tools, what each tool returns, and how those results should influence behavior — that is product thinking applied to AI systems. The tool schema is a miniature data model; the dispatch logic is a miniature backend.
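The streaming point above boils down to one pattern: print each text chunk as it arrives instead of waiting for the full response. This sketch uses a stand-in generator in place of the real API stream; with the Anthropic SDK the chunks would come from `client.messages.stream(...).text_stream`:

```python
import sys
from typing import Iterator

def stream_to_terminal(chunks: Iterator[str]) -> str:
    """Print chunks as they arrive so the answer appears to be thought
    out live; returns the full text for later use (e.g. scoring)."""
    parts = []
    for chunk in chunks:
        sys.stdout.write(chunk)
        sys.stdout.flush()  # show partial output immediately
        parts.append(chunk)
    print()
    return "".join(parts)

fake_stream = iter(["Let's start ", "with assumptions: ", "the user is..."])
answer = stream_to_terminal(fake_stream)
```

The buffering difference is the whole UX change: flushing per chunk turns "waiting for an answer" into "watching someone think."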
