Local-first AI workflow orchestration. Run multi-phase AI tasks with intelligent model routing — use local models for most work, cloud only when needed.
Cut your AI API costs by 70-90%.
```bash
# macOS/Linux
brew install jbctechsolutions/tap/skillrunner

# Or download the binary
curl -sSL https://github.com/jbctechsolutions/skillrunner/releases/latest/download/skillrunner_Linux_x86_64.tar.gz | tar xz
sudo mv skillrunner /usr/local/bin/
sudo ln -sf /usr/local/bin/skillrunner /usr/local/bin/sr
```

The CLI is available as `skillrunner` or `sr` for short:
```bash
sr run code-review "$(cat main.go)"
# or with a simple prompt
sr run code-review "Review this code for issues"
```

```bash
# 1. Initialize config
sr init

# 2. (Optional) Install Ollama for local models
brew install ollama
ollama serve
ollama pull qwen2.5:14b

# 3. Run a skill
sr run code-review "func add(a, b int) int { return a + b }"

# 4. See usage metrics
sr metrics
```

- Multi-phase workflows — Break complex tasks into steps with dependencies
- Intelligent routing — Cheap models for simple work, premium for complex
- Local-first — Ollama support means most work never hits the cloud
- Cost tracking — See exactly what you spend per task
- Marketplace — Import skills from GitHub, npm, or HuggingFace
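The routing idea can be sketched generically. The function below is an illustration of length-based routing only — it is not skillrunner's actual selection logic, and the threshold and model names are placeholders:

```shell
# Illustrative only: send short prompts to a local model, long ones to the cloud.
# The 200-character threshold and model names are placeholders, not skillrunner internals.
route_model() {
  prompt="$1"
  if [ "${#prompt}" -lt 200 ]; then
    echo "ollama/qwen2.5:14b"   # local: effectively free
  else
    echo "anthropic/cloud"      # cloud: higher capability, metered cost
  fi
}

route_model "Review this code for issues"   # prints ollama/qwen2.5:14b
```

Because most day-to-day prompts are short, a rule like this keeps the bulk of traffic on the local model and reserves paid API calls for the hard cases.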
```bash
sr run <skill> <request>    # Run a skill
sr ask <skill> <question>   # Quick single-phase query
sr list                     # Show available skills
sr status                   # System health check
sr metrics                  # Usage and cost metrics
sr init                     # Initialize configuration
```

| Provider | Type | Status |
|---|---|---|
| Ollama | Local | Ready |
| Anthropic | Cloud | Ready |
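Before enabling the Ollama provider, you can confirm the local server is reachable. The check below uses `/api/tags`, Ollama's standard endpoint for listing pulled models, at Ollama's default URL:

```shell
# Query the local Ollama server; fall back to a message if it is not running.
curl -s http://localhost:11434/api/tags || echo "Ollama is not running"
```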
Config lives at `~/.skillrunner/config.yaml`:

```yaml
# Provider configuration with auto model discovery
providers:
  ollama:
    url: http://localhost:11434
    enabled: true
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    enabled: false  # Set to true when you have an API key
```

See config.example.yaml for all options.
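Since the config interpolates `${ANTHROPIC_API_KEY}` from the environment, the key must be exported in your shell before you enable the provider. A minimal sketch — the key value here is a placeholder:

```shell
# Export the key so ${ANTHROPIC_API_KEY} in config.yaml resolves at runtime.
# Add this line to ~/.zshrc or ~/.bashrc to make it persist across sessions.
export ANTHROPIC_API_KEY="sk-ant-your-key-here"   # placeholder, use your real key
```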
MIT — see LICENSE