A gallery of production-ready AI workflows built with Output.ai — an open-source framework for durable, LLM-powered workflows orchestrated by Temporal.
Each workflow is a self-contained example you can run locally, learn from, and fork.
| Workflow | Description | APIs |
|---|---|---|
| blog_evaluator | Evaluate blog post signal-to-noise quality | Jina Reader |
| call_scorer | Score sales call transcripts against MEDDIC, BANT, or SPIN | LLM only |
| changelog_generator | Generate categorized changelogs from GitHub commits and PRs | GitHub |
| dependency_audit | Audit npm dependencies for vulnerabilities, licenses, and abandonment | GitHub, OSV, npm |
| recipe_extractor | Extract structured recipes from blog URLs | Jina Reader |
| url_summarizer | Summarize any webpage into TLDR, key points, and FAQ | Jina Reader |
| youtube_summarizer | Summarize YouTube videos with key moments and takeaways | YouTube |
| ai_hn_digest | Personalized Hacker News digest published to Beehiiv newsletter | HN, Jina Reader, Beehiiv |
| sales_call_processor | Process sales call transcripts into notes + parallel recipe analyses | LLM only |
- Node.js >= 24.3
- Docker and Docker Compose (for local development)
```
npm install
```

Output.ai uses encrypted credentials to manage API keys. To set up your own:
```
# Initialize a new credentials file and encryption key
npx output credentials init

# Edit credentials (opens in your $EDITOR)
npx output credentials edit
```

See config/credentials.yml.template for the full list of available credentials. At minimum, you need:
```yaml
anthropic:
  api_key: "<your-anthropic-api-key>"
```

Some workflows require additional credentials — check each workflow's README for details.
| Credential | Where to get it | Used by |
|---|---|---|
| anthropic.api_key | console.anthropic.com | All workflows |
| github.token | github.com/settings/tokens | changelog_generator, dependency_audit |
| beehiiv.api_key | app.beehiiv.com | ai_hn_digest |
```
npm run dev
```

This starts:
- Temporal server and UI (http://localhost:8080)
- PostgreSQL and Redis databases
- Output.ai API server (http://localhost:3001)
- Worker process for executing workflows
In a new terminal:
```
npx output workflow run blog_evaluator paulgraham_hwh
```

Each workflow has scenario files in its scenarios/ folder for quick testing.
Press Ctrl+C in the terminal running npm run dev to stop all services.
```
src/
├── clients/              # Shared API clients (GitHub, Jina, YouTube, etc.)
├── shared/               # Shared utilities across workflows
│   └── utils/
└── workflows/            # Workflow implementations
    └── <workflow_name>/
        ├── workflow.ts   # Orchestration logic (deterministic, no I/O)
        ├── steps.ts      # Step functions (all I/O happens here)
        ├── types.ts      # Zod schemas and TypeScript types
        ├── evaluators.ts # Quality evaluators (optional)
        ├── utils.ts      # Local utilities (optional)
        ├── prompts/      # LLM prompt templates
        └── scenarios/    # Test input scenarios
```
Workflows can import from: local steps, evaluators, utilities, and shared clients/utilities.
Steps and Evaluators can import from: local utilities and shared clients/utilities.
Steps and Evaluators cannot import from other steps or evaluators (Temporal activity isolation).
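The workflow/steps split above can be sketched as follows. All names here (fetchPage, summarize) are hypothetical and the step is called directly for the sake of a self-contained example; in the real framework, workflow code reaches steps through Temporal activity proxies so that orchestration stays deterministic and replayable.

```typescript
// steps.ts-style code — all I/O lives here (hypothetical example, not the framework's API).
// In the real framework this runs as a Temporal activity.
export async function fetchPage(url: string): Promise<string> {
  // A real step would call an external API (e.g. Jina Reader); stubbed for illustration.
  return `contents of ${url}`;
}

// workflow.ts-style code — deterministic orchestration only:
// no network calls, no clocks, no randomness.
export async function summarize(
  url: string
): Promise<{ url: string; summary: string }> {
  const text = await fetchPage(url);
  return { url, summary: text.slice(0, 40) };
}
```

The point of the boundary is replay safety: because workflow.ts contains no I/O, Temporal can re-execute it from history after a crash, while side effects stay confined to steps.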