# Roger: AI Business Development Agent for Claude Code
Roger is an AI business development assistant built on Claude Code. Give it a company name and it researches the prospect end-to-end across 9 sections (business model, market, audience, competitive landscape, paid media posture, benchmarks, and more), passes the research through a 4-expert review panel, and produces an approved research document ready for pitching. The whole pipeline runs inside your terminal. No dashboards, no SaaS, no waiting on a junior analyst.
## How It Works

Say "research [company]" and Roger runs a three-stage pipeline:
- **Scoping** -- sizes the prospect (SMB / Mid-market / Enterprise), asks you 2-3 targeting questions, and sets budget caps
- **Research** -- executes a structured 9-section investigation with hard time and tool-call limits per tier
- **Expert Panel** -- 4 independent reviewers score the draft in parallel; all 4 must score >= 8/10 or Roger revises and resubmits (up to 5 rounds)
```mermaid
flowchart TD
    A["User: research [company]"] --> B["Roger: Scoping + Intake"]
    B --> C["Roger: 9-Section Research Draft"]
    C --> D1["CMO Review"]
    C --> D2["Strategist Review"]
    C --> D3["Data Analyst Review"]
    C --> D4["Creative Strategist Review"]
    D1 --> E{"All 4 score >= 8/10?"}
    D2 --> E
    D3 --> E
    D4 --> E
    E -- Yes --> F["Approved Research Document"]
    E -- No --> G["Roger Revises Draft"]
    G --> C
    G -. "Max 5 rounds" .-> H["Escalate to Owner"]
```
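The pass/revise gate in the pipeline can be sketched as a short loop. This is a hypothetical illustration, not Roger's actual code: `review` stands in for dispatching one reviewer subagent, and the real logic lives in `skills/expert-panel-review.md`.

```python
# Sketch of the expert-panel gate: all 4 reviewers must score >= 8/10,
# with up to 5 revision rounds before escalating to the owner.
PASS_THRESHOLD = 8
MAX_ROUNDS = 5
REVIEWERS = ["cmo", "strategist", "data_analyst", "creative_strategist"]

def run_panel(draft, review):
    """review(reviewer, draft) -> score out of 10. Returns (status, rounds)."""
    for round_no in range(1, MAX_ROUNDS + 1):
        # Reviewers are scored independently (in the real agent, in parallel).
        scores = {r: review(r, draft) for r in REVIEWERS}
        if all(s >= PASS_THRESHOLD for s in scores.values()):
            return "approved", round_no
        draft = draft + " (revised)"  # placeholder for Roger's revision step
    return "escalate_to_owner", MAX_ROUNDS
```

The threshold is a hard gate: a single 7/10 from any reviewer triggers another revision round.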
## Repository Layout

```
roger-agent/
├── CLAUDE.md                      # Agent brain -- persona, protocols, rules
├── skills/
│   ├── scoping-intake.md          # Prospect sizing + scoping questions
│   ├── research-opportunity.md    # 9-section research with tier-based budgets
│   ├── expert-panel-review.md     # Parallel 4-expert review + pass/revise logic
│   ├── competitor-ad-scan.md      # Standalone Meta Ad Library deep-dive
│   └── internal-benchmarks.md     # Pull benchmarks from your own client portfolio
├── experts/
│   ├── cmo.md                     # Ex-brand-side CMO reviewer
│   ├── strategist.md              # Performance marketing strategist reviewer
│   ├── data-analyst.md            # Ex-Google/Meta data analyst reviewer
│   └── creative-strategist.md     # Ex-Ogilvy creative strategist reviewer
├── _templates/
│   ├── research-template.md       # 9-section research format (all drafts use this)
│   ├── scoping-intake-template.md # Intake brief format
│   └── panel-review-template.md   # Expert review scorecard format
├── _knowledge/
│   ├── index.md                   # Institutional memory index
│   ├── past-prospects.md          # Prospect log with outcomes
│   └── industry-benchmarks.md     # Cached vertical benchmarks (CPC, CPM, CVR, AOV)
└── prospects/                     # Your prospect data lives here (gitignored)
```
## Requirements

Required:

- Claude Code installed and configured
- A Claude API key with access to Claude Opus or Sonnet

Optional (but recommended):

- Email integration (Gmail MCP) for searching prior correspondence
- Slack integration for approval notifications
- Web search tools for the research phase
- Internal benchmarking data or tools (Supermetrics, Databox, your own BigQuery, etc.)
## Setup

1. Clone the repo:

   ```sh
   git clone https://github.com/YOUR-USERNAME/roger-agent.git
   cd roger-agent
   ```

2. Open `CLAUDE.md` and find-replace these 5 placeholders:

   | Placeholder | Replace with | Example |
   |---|---|---|
   | `[YOUR NAME]` | Your name | Sarah Chen |
   | `[YOUR AGENCY]` | Your agency/company | Apex Digital |
   | `[YOUR SLACK CHANNEL]` | Slack channel for notifications | #bd-agent |
   | `[YOUR EMAIL]` | Your email | sarah@apexdigital.com |
   | `[YOUR BENCHMARKING TOOL]` | Your data tool, or none | Supermetrics |
3. Also replace `[OWNER]` in the `skills/` folder with your name. Every skill file references `[OWNER]` as the human-in-the-loop.
4. Open the folder in Claude Code:

   ```sh
   claude
   ```
5. Say "research [company]" and Roger takes it from there.
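If you prefer scripting the find-replace over manual editing, a throwaway script along these lines works. The values in `REPLACEMENTS` are examples from the table above; substitute your own.

```python
from pathlib import Path

# Example personalization script for the setup placeholders.
# All values below are illustrative -- replace with your own details.
REPLACEMENTS = {
    "[YOUR NAME]": "Sarah Chen",
    "[YOUR AGENCY]": "Apex Digital",
    "[YOUR SLACK CHANNEL]": "#bd-agent",
    "[YOUR EMAIL]": "sarah@apexdigital.com",
    "[YOUR BENCHMARKING TOOL]": "Supermetrics",
    "[OWNER]": "Sarah Chen",
}

def personalize(text: str) -> str:
    """Apply every placeholder substitution to one file's contents."""
    for placeholder, value in REPLACEMENTS.items():
        text = text.replace(placeholder, value)
    return text

if __name__ == "__main__":
    # Rewrite CLAUDE.md plus every skill file in place.
    for path in [Path("CLAUDE.md"), *Path("skills").glob("*.md")]:
        if path.exists():
            path.write_text(personalize(path.read_text()))
```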
## Customization

**Expert panel.** The panel is designed for exactly 4 reviewers running in parallel. To swap a reviewer, replace the persona file in `experts/` and update the reviewer table in `skills/expert-panel-review.md`. Keep the 4-reviewer structure: fewer reviewers create blind spots; more add latency without a proportional quality gain.
**Research template.** Edit `_templates/research-template.md`. All 9 sections should remain present in every draft (filled or stubbed with TODO). The expert reviewers grade against the template, so if you add sections, update the reviewer persona files to cover them.
**Tier budgets.** Edit the budget table in `skills/research-opportunity.md`. The three tiers (SMB, Mid-market, Enterprise) each have hard caps on wall-clock time, tool calls, and approximate spend. Increase them for deeper round-1 drafts; decrease them for faster iteration with more panel rounds.
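As an illustration of the shape of that table, tier budgets amount to a small lookup plus a check. The numbers below are placeholders, not the shipped caps; the real values live in `skills/research-opportunity.md`.

```python
# Illustrative tier budget table -- numbers are made-up placeholders.
TIER_BUDGETS = {
    "SMB":        {"minutes": 10, "tool_calls": 30},
    "Mid-market": {"minutes": 20, "tool_calls": 60},
    "Enterprise": {"minutes": 35, "tool_calls": 100},
}

def within_budget(tier: str, minutes_used: float, calls_made: int) -> bool:
    """True while the research phase is inside this tier's hard caps."""
    cap = TIER_BUDGETS[tier]
    return minutes_used <= cap["minutes"] and calls_made <= cap["tool_calls"]
```

Raising a tier's caps buys deeper first drafts; lowering them trades draft depth for more (cheaper) panel rounds.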
**Internal benchmarks.** Edit `skills/internal-benchmarks.md` and the "Internal Data" section of `CLAUDE.md`. Replace the example queries with whatever your tool supports. Roger is read-only on internal data by design.
**Slack notifications.** Roger posts one-liners to your Slack channel on panel approval or escalation. To disable, remove the Slack references in `CLAUDE.md` and `skills/expert-panel-review.md`. To change the channel, update the `[YOUR SLACK CHANNEL]` placeholder.
## Design Notes

A few design decisions that make Roger work:

**`CLAUDE.md` as agent definition.** The entire agent persona, protocols, and rules live in one file that Claude Code reads on session start. This is the brain; everything else is a skill or reference file that `CLAUDE.md` orchestrates.

**Skills as reusable, composable workflows.** Each skill has defined inputs, steps, outputs, and hard budget caps. Skills can call each other (scoping triggers research, research triggers panel review), but each is independently runnable. The budget caps prevent runaway token spend on any single skill invocation.

**Expert panel as quality guarantee.** The 4 reviewers run as independent parallel subagents. They don't see each other's reviews until synthesis, which eliminates groupthink and dominant-voice effects. The >= 8/10 threshold from all 4 reviewers is a hard gate, not a suggestion.

**`_knowledge/` as institutional memory.** Industry benchmarks get cached after each prospect so Roger doesn't re-research verticals you've already analyzed. Past prospect outcomes feed back into future research. The knowledge base compounds over time.

**Persist-before-return pattern.** Any subagent Roger dispatches writes its findings to disk before returning control. This protects against Claude Code's context compaction destroying research that only existed in the conversation. If a subagent crashes or gets compacted, the work on disk survives.
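A minimal sketch of the persist-before-return pattern, with `do_research` as a hypothetical stand-in for the subagent's actual work:

```python
import json
from pathlib import Path

def run_subagent(prospect: str, section: str, do_research):
    """Write findings to disk *before* returning them, so context
    compaction or a crash can't destroy work that only existed in
    the conversation. `do_research` is an illustrative placeholder."""
    findings = do_research(prospect, section)
    out = Path("prospects") / prospect / f"{section}.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(findings, indent=2))  # durable copy first...
    return findings                                  # ...then return control
```

The ordering is the whole point: the disk write happens before control returns, so the file under `prospects/` is the source of truth even if the in-conversation copy is lost.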
## Roadmap

- **Phase 2:** Website audit mockups via Vercel. Roger produces a "here's what we'd change" visual for the prospect's site.
- **Phase 3:** Proposals and pitch decks. Roger drafts a full proposal grounded in the approved research, in your voice.
Built by Patrick Gilbert at AdVenture Media Group.
MIT. See LICENSE.