Description
Feature hasn't been suggested before.
- I have verified this feature I'm about to request hasn't been suggested before.
TL;DR
Let the LLM write bash scripts that call back into the running agent — deterministic control flow where code excels, AI judgment where LLMs excel.
More detail
Recently I stumbled across an article about why, in the end, Claude Code is so successful (among other things): because it leans into the power of the Unix CLI:
- CLI is a native language of LLMs — they have read millions of Bash examples
- The "everything is text" principle fits very well with LLMs
The second thing I stumbled across: Fabric — AI as a Unix pipe:

```
yt --transcript https://youtu.be/... | fabric -sp extract_wisdom
```
This is where I had…
The idea
Give the LLM the ability to write scripts that call back into the agent to:
- Create a deterministic execution umbrella like a Ralph Loop — the LLM writes the retry loop, bash runs it reliably, each callback gets fresh context
- Unleash the beauty of something like Fabric in openCode — pipe file contents through AI judgment, map-reduce across a codebase
- Let scripts do what they do better: run more reliably, run faster, and save tokens
-> Deterministic work (loops, pipes) goes into bash, where it's reliable. Non-deterministic judgment ("is this buggy?", "what's the fix?") calls back to the LLM, where it excels. The LLM writes the script that orchestrates both.
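A minimal sketch of what such an LLM-written script could look like. One big assumption: `agent` stands in for the callback into the running agent — the real invocation (e.g. something like `opencode run "<prompt>"`) is hypothetical and stubbed here so the sketch runs:

```shell
#!/usr/bin/env bash
# `agent` is a stub for the proposed callback into the running agent;
# a real version would shell out to the agent CLI with fresh context.
agent() { echo "agent answered: $1"; }

# The LLM writes this retry loop once; bash then executes it reliably.
max_attempts=3
for attempt in $(seq 1 "$max_attempts"); do
  agent "run the tests and fix any failure (attempt $attempt of $max_attempts)"
  # A real script would inspect the test results here and `break` on
  # success; the stub just loops to completion.
done
```

Bash owns the loop counter and termination condition, so the retry logic cannot be "forgotten" mid-run; only the judgment inside each iteration is delegated.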
The problem
- LLMs are good at figuring out things and generating code
- LLMs are bad at:
  - Iterations — they lose count, skip steps, forget where they were
  - Following through — they lose track of the control flow once they grind too long on one aspect
  - Deterministic control flow — loops, conditionals, and batch operations are exactly where they slip
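The Fabric-style map-reduce mentioned above illustrates the split: bash iterates, the agent judges. This hypothetical sketch stubs the callback as `judge` (the file names are made up; the real callback mechanism is exactly what this feature request asks for):

```shell
#!/usr/bin/env bash
# `judge` is a stand-in for piping content through the agent's judgment.
judge() { echo "verdict for $1"; }

# Map: bash iterates over the files deterministically — no lost count,
# no skipped steps; the agent only supplies per-file judgment.
verdicts=$(for f in main.c util.c parser.c; do judge "$f"; done)

# Reduce: one final callback combines the individual judgments.
judge "summarize these verdicts: $verdicts"
```

Every file is visited exactly once regardless of how long each judgment takes, which is precisely the property the LLM alone cannot guarantee.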