
Skillrunner

Local-first AI workflow orchestration. Run multi-phase AI tasks with intelligent model routing — use local models for most work, cloud only when needed.

Cut your AI API costs by 70-90%.

Install

# macOS/Linux
brew install jbctechsolutions/tap/skillrunner

# Or download a release binary (Linux x86_64 shown)
curl -sSL https://github.com/jbctechsolutions/skillrunner/releases/latest/download/skillrunner_Linux_x86_64.tar.gz | tar xz
sudo mv skillrunner /usr/local/bin/
sudo ln -sf /usr/local/bin/skillrunner /usr/local/bin/sr
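
Once installed, sr status (described under Commands below) runs a system health check you can use to confirm the binary is on your PATH:

sr status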

Usage

The CLI is available as skillrunner or sr for short:

sr run code-review "$(cat main.go)"
# or with a simple prompt
sr run code-review "Review this code for issues"

Quick Start

# 1. Initialize config
sr init

# 2. (Optional) Install Ollama for local models
brew install ollama
ollama serve
ollama pull qwen2.5:14b

# 3. Run a skill
sr run code-review "func add(a, b int) { return a + b }"

# 4. See usage metrics
sr metrics

Features

  • Multi-phase workflows — Break complex tasks into steps with dependencies (see the sketch after this list)
  • Intelligent routing — Cheap models for simple work, premium for complex
  • Local-first — Ollama support means most work never hits the cloud
  • Cost tracking — See exactly what you spend per task
  • Marketplace — Import skills from GitHub, npm, or HuggingFace
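
To make the multi-phase idea concrete, here is a sketch of what a skill with dependent phases and routing hints could look like. The field names (name, phases, depends_on, routing) are illustrative assumptions, not Skillrunner's documented skill schema:

# Hypothetical skill definition; the field names below are assumptions for illustration
name: code-review
phases:
  - id: triage
    routing: local          # simple classification, cheap local model
    prompt: "List potential issues in the input code."
  - id: deep-review
    depends_on: [triage]    # runs only after triage completes
    routing: cloud          # complex reasoning, premium cloud model
    prompt: "Analyze the flagged issues in depth and suggest fixes."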

Commands

sr run <skill> <request>   # Run a skill
sr ask <skill> <question>  # Quick single-phase query
sr list                    # Show available skills
sr status                  # System health check
sr metrics                 # Usage and cost metrics
sr init                    # Initialize configuration

Supported Providers

Provider    Type     Status
---------   ------   ------
Ollama      Local    Ready
Anthropic   Cloud    Ready

Configuration

Config lives at ~/.skillrunner/config.yaml:

# Provider configuration with auto model discovery
providers:
  ollama:
    url: http://localhost:11434
    enabled: true

  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    enabled: false  # Set to true when you have an API key

See config.example.yaml for all options.
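
For example, to switch on the Anthropic provider, export the key and flip the flag. The ${ANTHROPIC_API_KEY} reference above suggests the config expands environment variables; treat that as an assumption and confirm against config.example.yaml:

export ANTHROPIC_API_KEY="sk-ant-your-key"   # placeholder value
# then in ~/.skillrunner/config.yaml set:
#   anthropic:
#     enabled: true
sr status   # system health check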


License

MIT — see LICENSE
