Problem
Multi-step patterns that repeat across many flows have no abstraction mechanism. Two specific cases:
1. Caching (6-step boilerplate per flow)
The recommended caching pattern requires 6 steps every time:
- buildCacheKey — deterministic key from inputs
- readCacheFile — file-read with onError: continue
- checkCache — evaluate hit/miss/expired
- buildResult — actual computation
- buildCacheEntry — quality gate + serialize
- writeCacheFile — persist to disk with null guard
In a 90+ flow codebase, ~20 flows implement this. That's ~120 steps of identical boilerplate, each with the same bug surface: step ordering (buildResult must precede buildCacheEntry), null guards on writeCacheFile, TTL math, and quality gates. We maintain a regression test specifically to catch cache ordering bugs.
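For concreteness, a flow implementing the pattern today carries six steps shaped roughly like this (a sketch only — the step-type fields beyond id/type/bash follow the doWork example below and are assumptions about the schema, not verbatim engine syntax):

```json
[
  { "id": "buildCacheKey", "type": "code",
    "code": "return input.domain + '-' + input.depth" },
  { "id": "readCacheFile", "type": "file-read",
    "path": ".cache/research/{{$.steps.buildCacheKey.output}}.json",
    "onError": "continue" },
  { "id": "checkCache", "type": "code",
    "code": "/* hit if entry exists and now - entry.cachedAt < ttl */" },
  { "id": "buildResult", "type": "bash",
    "bash": { "command": "..." } },
  { "id": "buildCacheEntry", "type": "code",
    "code": "/* quality gate; null means do not persist */" },
  { "id": "writeCacheFile", "type": "file-write",
    "path": ".cache/research/{{$.steps.buildCacheKey.output}}.json",
    "skipIf": "{{$.steps.buildCacheEntry.output == null}}" }
]
```

Every one of the bug classes above lives in this sketch: swap buildResult and buildCacheEntry and the gate serializes stale data; drop the skipIf guard and a failed quality gate writes null to disk.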
Proposed solution — native cache step:
{
  "id": "cachedResearch",
  "type": "cache",
  "cache": {
    "namespace": "research",
    "key": "{{$.input.domain}}-{{$.input.depth}}",
    "ttlDays": 30,
    "qualityCheck": "output && output.length > 100",
    "compute": [
      { "id": "doWork", "type": "bash", "bash": { "command": "..." } }
    ]
  }
}
The CLI handles file I/O, TTL evaluation, and null guarding, and runs the compute steps only on a cache miss. One step replaces six.
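A hypothetical on-disk entry under the namespace might look like the following (field names are assumptions) — enough for the CLI to do the TTL math and quality gate without any per-flow code:

```json
{
  "key": "acme.com-deep",
  "cachedAt": "2025-01-15T09:30:00Z",
  "ttlDays": 30,
  "output": "..."
}
```

On read, the entry counts as expired when now minus cachedAt exceeds ttlDays; an entry whose output fails qualityCheck is discarded as a miss rather than returned.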
2. Repeated multi-step patterns (3-step boilerplate per invocation)
Beyond caching, several 3-step patterns repeat across 30+ flows:
- LLM call pattern: file-write prompt → bash claude --print → code parse envelope/fences
- HTTP POST + persist: code build payload → bash POST to server → bash upload result
- API lookup: code build params → bash curl → code parse response
We extracted shared utility flows for these, but sub-flow output wrapping (#21, #47) makes them fragile. A template/macro system would let you define reusable step groups that expand inline (no sub-flow overhead):
{
  "id": "analyzeCompany",
  "type": "template",
  "template": "llm-call",
  "params": {
    "prompt": "{{$.steps.buildPrompt.output}}",
    "model": "haiku",
    "outputFormat": "json"
  }
}
Templates would be defined once (e.g., in a .one/templates/ directory) and expanded at flow load time — avoiding the sub-flow output path issues entirely.
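As a sketch, a .one/templates/llm-call.json definition could declare its parameters and the step group they splice into (the {{params.*}} placeholder syntax is an assumption, mirroring the existing {{$.…}} interpolation; field names are illustrative):

```json
{
  "name": "llm-call",
  "params": ["prompt", "model", "outputFormat"],
  "steps": [
    { "id": "writePrompt", "type": "file-write",
      "path": "/tmp/prompt.txt",
      "content": "{{params.prompt}}" },
    { "id": "callModel", "type": "bash",
      "bash": { "command": "claude --print --model {{params.model}} < /tmp/prompt.txt" } },
    { "id": "parseOutput", "type": "code",
      "code": "/* strip envelope/fences, parse per {{params.outputFormat}} */" }
  ]
}
```

At load time the engine would expand these steps inline in place of the template step, presumably prefixing ids (e.g. analyzeCompany.writePrompt) to avoid collisions — so downstream references resolve through the normal step output path rather than a sub-flow wrapper.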
Impact
Alternatives considered