A production-grade agentic architecture for automating growth and lifecycle workflows. Built on the WAT pattern (Workflows / Agents / Tools) where AI handles reasoning and orchestration while deterministic Python scripts handle execution.
When AI tries to handle every step directly, errors compound quickly: if each step is 90% accurate, a five-step chain is only 0.9^5 ≈ 59% reliable end to end. WAT solves this by keeping AI in the coordination layer and offloading execution to deterministic, testable scripts.
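The compounding arithmetic above is worth seeing directly; a minimal sketch (the function name is ours, not the repo's):

```python
# Reliability of a chained agent pipeline: per-step accuracy compounds
# multiplicatively, so even "pretty good" steps degrade fast in sequence.
def end_to_end_reliability(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

print(round(end_to_end_reliability(0.90, 5), 2))  # → 0.59
```

This is exactly why WAT pushes execution into deterministic scripts: a tested Python step is effectively 100% accurate, so it doesn't contribute to the decay.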
Workflows → define what to do (Markdown SOPs)
Agents → decide how to do it (Claude / LLM)
Tools → actually do it (Python scripts)
Two fully implemented pipelines for a PLG SaaS product:
Monitors product usage data daily, scores users on upgrade readiness (PQL scoring), generates AI-personalized upgrade messages, and outputs to Google Sheets.
Run:

```bash
python3 tools/run_pipeline.py --dry-run                      # synthetic data
python3 tools/run_pipeline.py --csv your_data.csv --dry-run  # your own CSV
```

Pipeline stages:

```
generate_synthetic_users.py → score_pqls.py → generate_messages.py
→ detect_anomalies.py → build_daily_report.py → sheets_write.py → export_workbook.py
```
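A hypothetical sketch of how an orchestrator like `run_pipeline.py` might chain these stages (the real script may pass flags and share state via `.tmp/`; this just illustrates the fail-fast sequencing):

```python
import subprocess
import sys

# Stage order from the pipeline diagram above.
STAGES = [
    "generate_synthetic_users.py",
    "score_pqls.py",
    "generate_messages.py",
    "detect_anomalies.py",
    "build_daily_report.py",
    "sheets_write.py",
    "export_workbook.py",
]

def run_stages(stages: list[str]) -> None:
    for stage in stages:
        # Each stage is a deterministic script; check=True aborts the
        # whole pipeline on the first non-zero exit code.
        subprocess.run([sys.executable, f"tools/{stage}"], check=True)
```

Keeping each stage a standalone script means every step can be run, tested, and debugged in isolation.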
PQL Scoring tiers:
| Tier | Score | Criteria |
|---|---|---|
| On Fire | ≥70 | High engagement, strong buying signals |
| Hot | ≥50 | Active, multiple signals |
| Warming | ≥30 | Some activity |
| Cold | ≥0 | Low activity |
| Inactive | — | 0 sessions |
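The thresholds in the table map to a simple cascade; a hypothetical sketch of the tier logic in `score_pqls.py` (function and field names are assumptions):

```python
# Tier cascade from the PQL scoring table: inactivity trumps score,
# then thresholds are checked highest-first.
def tier_for(score: float, sessions: int) -> str:
    if sessions == 0:
        return "Inactive"
    if score >= 70:
        return "On Fire"
    if score >= 50:
        return "Hot"
    if score >= 30:
        return "Warming"
    return "Cold"
```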
Identifies highly engaged users as referral candidates, generates personalized referral emails, sends via SendGrid, and tracks outcomes.
Run:

```bash
python3 tools/run_referral_pipeline.py --dry-run
```

Requires Pipeline 1 to have run first; it consumes `.tmp/pql_scores.json`.

Pipeline stages:

```
filter_referral_candidates.py → generate_referral_emails.py
→ send_emails_sendgrid.py → track_referral_log.py → build_growth_report.py
```
Candidate criteria:
- On Fire or Hot tier + streak ≥14 days + last session ≤3 days ago + not contacted in 30 days
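The criteria above combine as a single predicate; a hypothetical sketch of the filter in `filter_referral_candidates.py` (field names are assumptions):

```python
from datetime import date

# Candidate filter from the criteria above: tier, streak, recency,
# and a 30-day contact cooldown must all hold.
def is_referral_candidate(user: dict, today: date) -> bool:
    in_tier = user["tier"] in {"On Fire", "Hot"}
    streak_ok = user["streak_days"] >= 14
    recently_active = (today - user["last_session"]).days <= 3
    last_contacted = user.get("last_contacted")
    cooldown_ok = last_contacted is None or (today - last_contacted).days > 30
    return in_tier and streak_ok and recently_active and cooldown_ok
```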
```bash
# 1. Clone and install dependencies
git clone https://github.com/yourusername/agentic-workflows.git
cd agentic-workflows
pip install -r requirements.txt

# 2. Configure environment
cp .env.example .env
# Fill in your API keys in .env

# 3. (Optional) Add Google Sheets credentials
# Place credentials.json in root for Sheets integration

# 4. Run a demo
python3 tools/demo_single_user.py
python3 tools/demo_scenarios.py
```

```
tools/                            # Python scripts — deterministic execution
  run_pipeline.py                 # Orchestrator: full conversion pipeline
  run_referral_pipeline.py        # Orchestrator: full referral pipeline
  generate_synthetic_users.py     # Synthetic user data (swap for real API)
  score_pqls.py                   # PQL scoring engine
  generate_messages.py            # AI message generation (Claude)
  generate_referral_emails.py     # AI referral email generation
  filter_referral_candidates.py   # Candidate identification
  detect_anomalies.py             # Anomaly detection
  build_daily_report.py           # Conversion report builder
  build_growth_report.py          # Referral report builder
  sheets_read.py                  # Google Sheets reader
  sheets_write.py                 # Google Sheets writer
  export_workbook.py              # Excel workbook export
  send_emails_sendgrid.py         # SendGrid email delivery
  track_referral_log.py           # Referral outcome tracker
  ingest_csv.py                   # CSV data ingestion
config/
  specialty_templates.json        # Message templates by user segment
workflows/                        # Markdown SOPs — agent instructions
  conversion_accelerator_workflow.md
  referral_growth_loop.md
  testing_demo_guide.md
  example_workflow.md
.tmp/                             # Intermediate files (gitignored, auto-generated)
.env.example                      # Environment variable template
CLAUDE.md                         # Agent operating instructions
requirements.txt
```
| Key | Purpose | Required |
|---|---|---|
| `ANTHROPIC_API_KEY` | AI message generation | Yes (templates work without credits) |
| `OPENAI_API_KEY` | Alternative LLM | Optional |
| `GOOGLE_SHEET_ID` + `credentials.json` | Google Sheets output | Optional |
| `SENDGRID_API_KEY` + `SENDGRID_FROM_EMAIL` | Email delivery | Optional (dry-run works without) |
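A minimal sketch of how the optional integrations in the table could be gated at startup (the function name is ours; variable names come from the table):

```python
import os

# Only an LLM key is required; Sheets and email degrade gracefully
# when their credentials are absent, matching the table above.
def available_integrations(env: dict = os.environ) -> dict:
    return {
        "llm": bool(env.get("ANTHROPIC_API_KEY") or env.get("OPENAI_API_KEY")),
        "sheets": bool(env.get("GOOGLE_SHEET_ID")),
        "email": bool(env.get("SENDGRID_API_KEY") and env.get("SENDGRID_FROM_EMAIL")),
    }
```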
All pipelines run in `--dry-run` mode without any external credentials. Results are written to `.tmp/` and optionally exported as `.xlsx`.
This framework is designed to be swapped out at the data layer:
- Replace synthetic data — swap `generate_synthetic_users.py` for a real API pull from Mixpanel, Amplitude, Segment, or your database
- Update scoring signals — edit `score_pqls.py` to match your product's engagement events
- Update brand files — create `brand.md` and `voice.md` in the root (gitignored) for AI-generated content to reference
- No workflow logic changes needed — the pipeline orchestration stays the same
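A hypothetical sketch of the swap: any replacement for `generate_synthetic_users.py` just needs to write the same intermediate file the downstream stages read. The output path and schema here are assumptions, not the repo's actual contract:

```python
import json
from pathlib import Path

# Drop-in shape for a real data source: fetch users however you like
# (Mixpanel, Amplitude, your database), then persist them where the
# pipeline expects its intermediate file (path assumed for illustration).
def write_users(users: list[dict], out_path: str = ".tmp/users.json") -> Path:
    path = Path(out_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(users, indent=2))
    return path
```

Because stages communicate through files rather than in-process state, the orchestration logic never has to know where the data came from.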
```bash
python3 tools/demo_single_user.py  # Single user walkthrough
python3 tools/demo_scenarios.py    # Multiple tier scenarios
python3 tools/demo_messages.py     # Message generation
python3 tools/demo_referral.py     # Full referral flow
```

- Anthropic Claude — message generation
- SendGrid — email delivery
- gspread — Google Sheets integration
- Faker — synthetic data generation