Growth Machine

A Claude-powered system for managing SaaS growth experiments using hypothesis-driven development and the ICE prioritization framework.

What is Growth Machine?

Growth Machine helps you systematically discover, prioritize, execute, and learn from growth experiments for your SaaS product. It combines industry-standard frameworks (hypothesis-driven development, ICE scoring) with Claude's intelligence to guide you through the entire experiment lifecycle.

Key Features

  • Hypothesis-Driven Framework: Structure experiments with clear hypotheses, target audiences, expected outcomes, and success criteria
  • ICE Prioritization: Score experiments on Impact, Confidence, and Ease to focus on high-value opportunities
  • Experiment Lifecycle Management: Track experiments from ideation through execution to analysis
  • Context-Aware Ideation: Generate experiment ideas based on your product documentation
  • Automated Analysis: Get insights and follow-up suggestions from completed experiments
  • Multi-Format Export: Export to CSV (Google Sheets), JSON, or Markdown
  • SaaS Metrics Calculator: Track key metrics like MRR, churn, LTV, CAC with health checks

Quick Start

1. Add Your Product Documentation

Add files describing your SaaS product to the product_docs/ folder:

product_docs/
├── overview.md      # What your product does, target audience
├── features.md      # Key features and capabilities
├── metrics.json     # Current metrics (MRR, churn, etc.)
└── challenges.md    # Known pain points and opportunities
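As a rough illustration of what metrics.json might hold and how a script could read it, here is a minimal Python sketch. The field names (mrr, monthly_churn_rate, ltv, cac) are assumptions for illustration, not a schema this project requires:

```python
import json
from pathlib import Path

# Hypothetical field names -- adapt them to whatever your product tracks.
sample = {
    "mrr": 42000,               # Monthly Recurring Revenue (USD)
    "monthly_churn_rate": 0.032,
    "ltv": 3900,
    "cac": 1200,
}

path = Path("product_docs") / "metrics.json"
path.parent.mkdir(exist_ok=True)
path.write_text(json.dumps(sample, indent=2))

# Reading the file back is a plain json.loads call.
metrics = json.loads(path.read_text())
print(metrics["mrr"])
```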

See product_docs/README.md for detailed guidance.

2. Generate Experiment Ideas

Use Claude to generate experiment ideas based on your product context:

/hypothesis-generate

Claude will analyze your product docs and suggest 3-5 high-potential experiment ideas following the hypothesis template.

3. Create an Experiment

Create a structured experiment:

/experiment-create "Improve onboarding completion"

Claude will guide you through:

  • Defining the hypothesis
  • Identifying target audience
  • Setting expected outcomes
  • Providing rationale
  • Defining success criteria

4. Score with ICE Framework

Prioritize experiments using ICE (Impact × Confidence × Ease):

/experiment-score exp-001

Claude analyzes your experiment and suggests scores:

  • Impact (1-10): How much will this move the key metric?
  • Confidence (1-10): How certain are we this will work?
  • Ease (1-10): How easy is this to implement?

Experiments with ICE score ≥ 300 automatically move to your prioritized pipeline.
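The scoring gate above can be sketched in a few lines of Python. The 300 threshold comes from this README; the function names are illustrative, not the actual src/scoring/ API:

```python
PIPELINE_THRESHOLD = 300  # From this README: ICE score >= 300 moves to pipeline


def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Total ICE score = Impact x Confidence x Ease, each component 1-10."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("ICE components must be between 1 and 10")
    return impact * confidence * ease


def promote_to_pipeline(impact: int, confidence: int, ease: int) -> bool:
    """True when the experiment clears the pipeline threshold."""
    return ice_score(impact, confidence, ease) >= PIPELINE_THRESHOLD


print(ice_score(7, 6, 9))            # 378
print(promote_to_pipeline(7, 6, 9))  # True
```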

5. Execute and Track

Update experiment status as you progress:

/experiment-update exp-001 active      # Start experiment
/experiment-update exp-001 completed   # Add results

6. Analyze Results

Get insights and follow-up suggestions:

/experiment-analyze exp-001

Claude will:

  • Classify the outcome (Win/Loss/Inconclusive)
  • Validate your hypothesis
  • Compare predicted vs actual impact
  • Generate 2-3 follow-up experiment ideas
  • Create a detailed analysis report

7. Export and Share

Export experiments to share with your team:

/export csv              # All experiments to CSV
/export markdown pipeline # Prioritized view
/export json completed   # Completed experiments

Import the CSV into Google Sheets for collaborative tracking and planning.

Experiment Lifecycle

BACKLOG          PIPELINE         ACTIVE           COMPLETED        ARCHIVED
   │                │                │                │                │
   │ Create new     │ ICE score      │ Experiment     │ Results        │ Analysis
   │ hypothesis     │ ≥ 300          │ running        │ captured       │ complete
   │                │                │                │                │
   └────────────────┴────────────────┴────────────────┴────────────────┘

ICE Framework

The ICE framework helps you prioritize experiments objectively:

Component    Question                                   Scale
Impact       How much will this move the key metric?    1-10
Confidence   How certain are we this will work?         1-10
Ease         How easy is this to implement?             1-10

Total Score = Impact × Confidence × Ease

Priority Levels

  • 700+: Critical Priority - Implement immediately
  • 500-699: High Priority - Strong candidate
  • 300-499: Medium Priority - Good experiment
  • 150-299: Low Priority - Consider if higher priority exhausted
  • <150: Very Low Priority - Deprioritize
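The priority bands above map directly onto a small lookup function. This is a sketch of the mapping as documented here, not the repo's own implementation:

```python
def priority_level(score: int) -> str:
    """Map a total ICE score to the priority bands from this README."""
    if score >= 700:
        return "Critical"
    if score >= 500:
        return "High"
    if score >= 300:
        return "Medium"
    if score >= 150:
        return "Low"
    return "Very Low"


print(priority_level(378))  # Medium
```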

Example

Experiment: Add progress indicators to onboarding

  • Impact: 7 (Activation is important, expecting 15% increase)
  • Confidence: 6 (User research supports it)
  • Ease: 9 (Simple UI change)
  • Total: 7 × 6 × 9 = 378 (Medium Priority)

Hypothesis Template

All experiments follow this structured format:

We believe that [proposed change/solution]
for [target audience/segment]
will result in [expected outcome with metrics]
because [rationale/evidence].

We will have confidence to proceed when we see [success criteria]
by [testing method] for [timeframe].

Example:

We believe that adding an interactive product tour highlighting the 3 core features for new trial users within their first session will result in a 25% increase in activation rate because user interviews revealed confusion about core capabilities.

We will have confidence to proceed when we see increased completion of the first core action by A/B testing with a 50/50 split for 2 weeks.
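Because the template has fixed slots, filling it programmatically is straightforward. The sketch below uses Python's str.format with slot names chosen to match the template; the names are illustrative, not part of this project's code:

```python
# Slots mirror the hypothesis template above; names are illustrative.
HYPOTHESIS_TEMPLATE = (
    "We believe that {change}\n"
    "for {audience}\n"
    "will result in {outcome}\n"
    "because {rationale}.\n\n"
    "We will have confidence to proceed when we see {criteria}\n"
    "by {method} for {timeframe}."
)

hypothesis = HYPOTHESIS_TEMPLATE.format(
    change="adding an interactive product tour highlighting the 3 core features",
    audience="new trial users within their first session",
    outcome="a 25% increase in activation rate",
    rationale="user interviews revealed confusion about core capabilities",
    criteria="increased completion of the first core action",
    method="A/B testing with a 50/50 split",
    timeframe="2 weeks",
)
print(hypothesis)
```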

Available Commands

Experiment Management

  • /experiment-create [title] - Create new experiment with hypothesis
  • /experiment-score [id] - Score with ICE framework
  • /experiment-update [id] [status] - Update status or add results
  • /experiment-analyze [id] - Analyze results and generate insights
  • /hypothesis-generate [category] - Generate experiment ideas

Export & Reporting

  • /export [format] [filter] - Export experiments
    • Formats: csv, json, markdown, pipeline, summary
    • Filters: all, backlog, pipeline, active, completed, category:acquisition

Project Structure

growth-machine/
├── .claude/            # Claude commands and skills
│   ├── commands/       # Slash commands (/experiment-create, etc.)
│   └── skills/         # Autonomous skills (ice-scorer, experiment-analyzer)
├── src/                # Python core modules
│   ├── models/         # Experiment, Hypothesis, Metrics data models
│   ├── scoring/        # ICE framework logic
│   ├── exporters/      # CSV, JSON, Markdown exporters
│   └── utils/          # Validation utilities
├── experiments/        # Experiment storage (JSON files)
│   ├── backlog/        # Ideas (not yet scored)
│   ├── pipeline/       # Prioritized (ICE ≥ 300)
│   ├── active/         # Currently running
│   └── archive/        # Completed with results
├── product_docs/       # Your product documentation (add your files here)
├── templates/          # Experiment and hypothesis templates
└── exports/            # Generated export files
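Since experiments live as JSON files under per-status folders, storage can be as simple as the sketch below. The field names and save helper are assumptions for illustration; the real schema lives in src/models/:

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path
from typing import Optional

# Illustrative fields only -- the actual data model is defined in src/models/.
@dataclass
class Experiment:
    id: str
    title: str
    status: str = "backlog"       # backlog | pipeline | active | archive
    ice_score: Optional[int] = None


def save(exp: Experiment, root: Path = Path("experiments")) -> Path:
    """Write an experiment as JSON into the folder matching its status."""
    folder = root / exp.status
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{exp.id}.json"
    path.write_text(json.dumps(asdict(exp), indent=2))
    return path


saved = save(Experiment(id="exp-001", title="Improve onboarding completion"))
print(saved)
```

Moving an experiment between stages then amounts to changing its status and re-saving it into the new folder.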

SaaS Metrics Tracking

Track key SaaS metrics with built-in calculations and health checks:

Revenue Metrics

  • MRR (Monthly Recurring Revenue)
  • ARR (Annual Recurring Revenue)
  • NRR (Net Revenue Retention)

Customer Metrics

  • Churn Rate (Benchmark: 3.5% monthly for B2B)
  • LTV (Customer Lifetime Value)
  • CAC (Customer Acquisition Cost)
  • LTV:CAC Ratio (Target: 3:1 or higher)

Conversion Metrics

  • Visitor → Signup rate
  • Signup → Activation rate
  • Activation → Paying rate

Each metric includes:

  • Industry benchmarks (2025)
  • Health status (Good/Warning/Critical)
  • Improvement recommendations
  • Relationship to other metrics
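A few of these calculations can be sketched with standard SaaS formulas (ARR = MRR × 12, LTV ≈ ARPU / monthly churn) and the 3:1 LTV:CAC target stated in this README. The function names and health bands below are illustrative, not the actual src/ API:

```python
def arr(mrr: float) -> float:
    """Annual Recurring Revenue from monthly recurring revenue."""
    return mrr * 12


def ltv(arpu: float, monthly_churn: float) -> float:
    """Simple LTV approximation: average revenue per user / churn rate."""
    return arpu / monthly_churn


def ltv_cac_health(ltv_value: float, cac: float) -> str:
    """Health check against the 3:1 LTV:CAC target; bands are illustrative."""
    ratio = ltv_value / cac
    if ratio >= 3:
        return "Good"
    if ratio >= 2:
        return "Warning"
    return "Critical"


print(arr(42_000))                 # 504000
print(round(ltv(120, 0.035)))      # 3429
print(ltv_cac_health(3429, 1000))  # Good
```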

Growth Experiment Categories

Organize experiments by funnel stage:

  • Acquisition: How users discover your product
  • Activation: Users reaching "aha moment"
  • Retention: What brings users back
  • Revenue: How users upgrade or expand
  • Referral: What motivates users to refer others

Best Practices

  1. Start with context: Add product documentation before generating ideas
  2. Be specific: Use concrete metrics in expected outcomes (e.g., "15% increase")
  3. Provide evidence: Support hypotheses with user research, data, or case studies
  4. Keep experiments small: Target <2 weeks execution time
  5. Score consistently: Use ICE guidelines for comparable prioritization
  6. Document learnings: Even "failed" experiments provide valuable insights
  7. Follow up on wins: Scale successful experiments and test variations
  8. Export regularly: Keep stakeholders informed with exported reports

Industry Benchmarks (2025)

SaaS Metrics

  • Churn Rate: 3.5% monthly (B2B average)
  • NRR: 120%+ is excellent
  • LTV:CAC: 3:1 minimum
  • CAC Payback: <12 months ideal
  • MRR Growth: 10-20% monthly is strong

Experiment Success

  • Typical Win Rate: 10-30% of experiments
  • High ICE Scores: Better success rates for scores >500
  • Statistical Significance: Need 95%+ confidence
  • Sample Size: Varies by metric, typically 2+ weeks
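The 95% significance check can be illustrated with a standard two-proportion z-test using only the standard library. This is a sketch of the statistical idea, not code from this repo, and the example counts are made up:

```python
import math


def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic comparing two conversion rates with a pooled proportion."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (rate_b - rate_a) / se


def is_significant(z: float, alpha: float = 0.05) -> bool:
    """Two-sided p-value from the normal CDF; 95% confidence = alpha 0.05."""
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value < alpha


# Made-up example: control 200/2000 (10%), variant 260/2000 (13%).
z = two_proportion_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
print(round(z, 2), is_significant(z))
```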

Example Workflow

Day 1: Planning

  1. Add product docs to product_docs/
  2. Run /hypothesis-generate to get 5 experiment ideas
  3. Select top 3 ideas and run /experiment-create for each
  4. Run /experiment-score to prioritize with ICE
  5. Export pipeline: /export markdown pipeline

Week 1-2: Execution

  1. Pick top-scored experiment
  2. Update to active: /experiment-update exp-001 active
  3. Implement and run experiment
  4. Track metrics

Week 3: Analysis

  1. Capture results: /experiment-update exp-001 completed
  2. Analyze: /experiment-analyze exp-001
  3. Review follow-up suggestions
  4. Export results: /export csv completed
  5. Create follow-up experiments based on learnings

Ongoing: Iteration

  • Review pipeline weekly
  • Maintain 2-3 active experiments
  • Document learnings consistently
  • Refine ICE scoring based on outcomes
  • Build experiment knowledge base

Requirements

  • Python 3.7+
  • No external dependencies (uses standard library)
  • Claude Code for AI-powered assistance

Getting Started Tips

If you're new to growth experiments:

  1. Start by reading templates/hypothesis_template.md
  2. Review example hypotheses for each category
  3. Run /hypothesis-generate to see AI-generated examples
  4. Create 1-2 small experiments to learn the workflow

If you have existing experiments:

  1. Add them manually to experiments/backlog/ as JSON files
  2. Use /experiment-score to prioritize them
  3. Export to review: /export csv

If you're tracking experiments elsewhere:

  1. Export from your current tool
  2. Create experiments using /experiment-create
  3. Gradually transition to Growth Machine
  4. Export regularly to keep both systems in sync

Support & Documentation

  • Full Documentation: See CLAUDE.md for technical details
  • Templates: Check templates/ for hypothesis examples
  • Product Context: See product_docs/README.md for guidance
  • Commands: Type any command in Claude Code to get started

Philosophy

Growth Machine is built on these principles:

  1. Hypothesis-Driven: Every experiment starts with a clear, testable hypothesis
  2. Data-Informed: Use evidence to prioritize and validate
  3. Iterative Learning: Even "failures" provide valuable insights
  4. Systematic Approach: Consistent framework enables comparison and learning
  5. Focus on Impact: Prioritize experiments that move key metrics
  6. Quick Execution: Keep experiments small and fast
  7. Continuous Improvement: Build institutional knowledge over time

License

This project is for your internal use in building and growing your SaaS product.

Acknowledgments

Built on industry-standard growth frameworks:

  • ICE Framework by Sean Ellis
  • Hypothesis-Driven Development
  • SaaS metrics and benchmarks from industry research

Ready to start experimenting?

  1. Add your product docs to product_docs/
  2. Run /hypothesis-generate to get your first ideas
  3. Start building your growth experiment pipeline!
