A minimal, lightweight task orchestrator with cron-like scheduling, written in Rust.
I wanted something to orchestrate simple tasks on embedded hardware (a Turing Pi cluster with RK1 and Raspberry Pi boards). Low memory usage and efficiency were important, but I also wanted something usable and friendly.
Importantly, I needed:
- Tasks and Jobs that can form dependencies
- Simple observability
- Concurrency
- YAML configuration
And so I built this.

No! I built this to see what it would take to build an orchestrator with Claude Code using Opus 4.5. It works for me, but please don't use it yourself.

## Features
- DAG-based execution — Define task dependencies; independent tasks run in parallel
- Cron scheduling — Standard cron expressions with timezone support
- YAML configuration — Define jobs declaratively
- Retry policies — Configurable retries with fixed delays
- Conditional execution — Run tasks based on upstream success/failure
- Cross-job dependencies — Jobs can depend on other jobs
- Event system — Subscribe to lifecycle events (task started, completed, failed, etc.)
- HTTP REST API — Optional API server for monitoring and control (enabled by default)
- Pluggable storage — In-memory (default) or SQLite for persistence
- Concurrency control — Limit parallel tasks and concurrent job runs
## Installation

Cargo features:

- `api` — Enables the HTTP REST API and related CLI flags (`--no-api`, `--api-port`, `--api-host`). Enabled by default.
- `sqlite` — Enables the SQLite storage backend and the `--db` CLI flag.
- `tui` — Enables the terminal UI dashboard (includes `sqlite`).
```sh
# Default installation (includes API feature)
cargo install --path .

# With SQLite support
cargo install --path . --features sqlite

# With TUI support
cargo install --path . --features tui

# Minimal installation (no API, no SQLite)
cargo install --path . --no-default-features
```

## Quick Start

There are example jobs in `./examples`. You can use those or follow these instructions:
```yaml
# jobs/hello.yaml
id: hello_world
name: Hello World Job
tasks:
  - id: greet
    type: command
    command: echo
    args: ["Hello, World from Petit!"]
```

```sh
pt run jobs/                   # run the scheduler, or...
pt trigger jobs/ hello_world   # ...trigger the job once
```

By default, this runs in-memory with no persistence!
To persist run state, pass a SQLite database path (requires the `sqlite` feature):

```sh
pt trigger jobs/ hello_world --db petit.db
```

## CLI

```sh
pt run <jobs-dir>               # Run scheduler with jobs from directory
pt validate <jobs-dir>          # Validate job configurations
pt list <jobs-dir>              # List all jobs
pt trigger <jobs-dir> <job-id>  # Trigger a job manually
```
| Flag | Description |
|---|---|
| `-j, --max-jobs <N>` | Maximum concurrent jobs (default: unlimited) |
| `-t, --max-tasks <N>` | Maximum concurrent tasks per job (default: 4) |
| `--tick-interval <N>` | Scheduler tick interval in seconds (default: 1) |
| `--db <PATH>` | Path to SQLite database file (requires `sqlite` feature) |
| `--no-api` | Disable HTTP API server (requires `api` feature, enabled by default) |
| `--api-port <PORT>` | API server port (default: 8565; requires `api` feature) |
| `--api-host <HOST>` | API server host (default: 127.0.0.1; requires `api` feature) |
Note: API-related flags (`--no-api`, `--api-port`, `--api-host`) are only available when the `api` feature is enabled (which is the default). If you compile with `--no-default-features`, these flags will not be available and no API server will start.
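For instance, to cap concurrency and slow the scheduler down, you can combine the flags above (the values here are arbitrary, and `jobs/` is whatever directory holds your job files):

```sh
# At most 2 jobs in parallel, 8 tasks per job, one scheduler tick every 5 s
pt run jobs/ -j 2 -t 8 --tick-interval 5
```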
## Job Configuration

Jobs are defined in YAML files:

```yaml
id: data_pipeline
name: Data Pipeline
schedule: "0 0 2 * * *"   # 2 AM daily (6-field cron: sec min hour day month weekday)
enabled: true
max_concurrency: 1
config:
  batch_size: 1000
  output_dir: /tmp/data
tasks:
  - id: extract
    type: command
    command: python
    args: ["scripts/extract.py"]
    environment:
      STAGE: extract
  - id: transform
    type: command
    command: python
    args: ["scripts/transform.py"]
    depends_on: [extract]
    condition: all_success
    retry:
      max_attempts: 3
      delay_secs: 10
      condition: always
  - id: notify
    type: command
    command: echo
    args: ["Pipeline done"]
    depends_on: [transform]
    condition: all_done   # Runs regardless of upstream success/failure
```

### Task Conditions

| Condition | Description |
|---|---|
| `always` | Run if dependencies completed (default) |
| `all_success` | Run only if all upstream tasks succeeded |
| `on_failure` | Run only if at least one upstream task failed |
| `all_done` | Run after dependencies complete, regardless of status |
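For example, an alerting task gated on `on_failure` could be sketched like this (the `alert.sh` script is illustrative, not part of this repo):

```yaml
- id: alert
  type: command
  command: ./scripts/alert.sh   # hypothetical script, shown for illustration
  depends_on: [extract, transform]
  condition: on_failure         # runs only if an upstream task failed
```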
### Retry Policy

```yaml
retry:
  max_attempts: 3     # Number of retries (0 = no retries)
  delay_secs: 5       # Fixed delay between attempts
  condition: always   # always | transient_only | never
```

### Cross-Job Dependencies

```yaml
id: downstream_job
name: Downstream Job
depends_on:
  - job_id: upstream_job
    condition: last_success   # last_success | last_complete | within_window
```
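Here, `upstream_job` is simply the id of another job defined in the same jobs directory; a minimal sketch of what it could look like (the schedule and script are illustrative):

```yaml
id: upstream_job
name: Upstream Job
schedule: "0 0 1 * * *"   # illustrative: 1 AM daily
tasks:
  - id: produce
    type: command
    command: ./scripts/produce.sh   # hypothetical script
```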
## Architecture

```text
┌─────────────────────────────────────────────────────────┐
│                           CLI                           │
│             run | validate | list | trigger             │
└─────────────────────────┬───────────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────────┐
│                        Scheduler                        │
│   • Cron scheduling         • Job dependencies          │
│   • Pause/resume            • Concurrency control       │
└─────────────────────────┬───────────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────────┐
│                       DagExecutor                       │
│   • Topological ordering    • Parallel execution        │
│   • Task conditions         • Context propagation       │
└─────────────────────────┬───────────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────────┐
│                      TaskExecutor                       │
│   • Retry logic             • Timeout handling          │
│   • CommandTask             • Custom Task trait         │
└─────────────────────────────────────────────────────────┘
                          │
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
     ┌─────────┐    ┌──────────┐    ┌───────────┐
     │ Storage │    │ EventBus │    │  Context  │
     └─────────┘    └──────────┘    └───────────┘
```
## Development

```sh
# Run tests
cargo test

# Run tests with SQLite
cargo test --features sqlite

# Build release
cargo build --release

# Run with debug logging
RUST_LOG=debug cargo run -- run examples/jobs/
```

## License

MIT