
tolibear/souls-cli


souls

Diagnose and fix OpenClaw AI agent workspaces. One command tells you what's working, what's broken, and fixes it for you.

npm i -g souls

Quick Start

# Auto-discover workspaces and pick one
souls

# Diagnose a specific workspace
souls doctor ~/.openclaw/workspace-gary

Auto-discovery checks ~/.openclaw, ~/.clawdbot, and ~/.moltbook.

The CLI reads your SOUL.md, AGENTS.md, TOOLS.md, USER.md, and MEMORY.md, loads quality rules from souls.zip, and runs an interactive diagnostic with per-category scores, specific findings, and actionable fixes.

Why

The soul that was sharp on day one becomes bloated by week two. Instructions contradict each other. Delegation boundaries blur. Anti-patterns creep in as generic filler. Nobody notices until the agent starts producing mediocre output and no one can explain why.

souls catches drift before it becomes a problem. The rules are distilled from 160+ arXiv papers across agent persona design, multi-agent coordination, autonomous systems, and production reliability, plus 500+ hours of production OpenClaw usage.

When the research says detailed experiential identities outperform generic role labels (Xu 2023, EMNLP 2024), that becomes a rule that flags "You are a helpful assistant" as a failure. When "Lost in the Middle" (Liu et al., TACL 2024) shows models ignore mid-context instructions, that becomes a rule about where to place hard rules in your soul. When scaling law research shows coordination overhead grows quadratically with team size, that becomes a rule about agent span of control.
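As a minimal sketch of how a finding like that might be expressed as a rule, here is a hypothetical persona check. The rule name, patterns, and finding shape are illustrative assumptions, not the actual souls.zip rule set:

```javascript
// Hypothetical persona-quality rule: flag generic role labels.
// Patterns and rule names are illustrative, not the shipped rules.
const GENERIC_PERSONA_PATTERNS = [
  /you are an? (helpful|friendly|capable) (assistant|ai|agent)/i,
  /as an ai (language )?model/i,
];

function checkPersona(soulText) {
  const findings = [];
  for (const pattern of GENERIC_PERSONA_PATTERNS) {
    const match = soulText.match(pattern);
    if (match) {
      findings.push({
        rule: "identity/generic-role-label",
        severity: "fail",
        excerpt: match[0],
        fix: "Replace the generic role label with a specific, experiential identity.",
      });
    }
  }
  return findings;
}
```

A soul that opens with "You are a helpful assistant" would trip the first pattern; one that describes a specific, lived-in identity would pass.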

Every rule traces back to either a published finding or a production failure we hit firsthand and wrote a lesson about.

The rules are updated continuously as new research, findings, and best practices emerge.

What It Checks

Identity quality. Is your SOUL.md a lived-in partner or a generic job description? Are beliefs encoded as experience or compliance? Can someone predict the agent's behavior just from reading the soul?

Delegation boundaries. Does every agent know what's not its domain? Are owners named? Are fallback rules explicit? Agents without boundaries drift into out-of-domain work the moment something seems urgent.

Security. Credentials in workspace files. Secrets in markdown. Exposed connection strings. The stuff that gets committed accidentally and lives there forever.
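A secrets check of this kind might look like the sketch below. The patterns are common examples (AWS access key IDs, connection strings with inline passwords, PEM private keys), assumed for illustration rather than taken from the actual rule set:

```javascript
// Illustrative secrets scan over workspace markdown text.
// Patterns are examples, not the rules shipped in souls.zip.
const SECRET_PATTERNS = [
  { name: "aws-access-key", regex: /\bAKIA[0-9A-Z]{16}\b/ },
  { name: "connection-string", regex: /\b\w+:\/\/[^\s:]+:[^\s@]+@[^\s]+/ },
  { name: "private-key", regex: /-----BEGIN [A-Z ]*PRIVATE KEY-----/ },
];

function scanForSecrets(text) {
  return SECRET_PATTERNS
    .filter(({ regex }) => regex.test(text))
    .map(({ name }) => name);
}
```

A line like `postgres://admin:hunter2@db.internal/prod` in a markdown file would surface as a `connection-string` finding.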

Structural integrity. Missing sections, oversized files, contradicting rules, dead weight that wastes context tokens without affecting behavior.
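The missing-section and oversized-file parts of that check could be sketched as below. The file names come from this README; the size budget and finding shape are assumptions for illustration:

```javascript
// Hedged sketch of a structural check: required files plus a size budget.
// The character budget is an assumed example, not the real threshold.
const REQUIRED_FILES = ["SOUL.md", "AGENTS.md", "TOOLS.md", "USER.md", "MEMORY.md"];
const MAX_CHARS = 20000; // illustrative budget

function checkStructure(files) {
  // files: map of file name -> contents
  const findings = [];
  for (const name of REQUIRED_FILES) {
    if (!(name in files)) {
      findings.push({ rule: "structure/missing-section", file: name });
    } else if (files[name].length > MAX_CHARS) {
      findings.push({ rule: "structure/oversized-file", file: name });
    }
  }
  return findings;
}
```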

Auto-Fix

Every finding comes with a proposed fix. Select a finding and apply the fix directly. The fix engine carries soul engineering judgment: it preserves voice, protects identity content, and refuses destructive changes.

How It Works

Rules are maintained at souls.zip and fetched at runtime. Your workspace content is sent for analysis to whichever model provider your OpenClaw config selects. Neither souls.zip nor the souls CLI collects telemetry.

Requirements

  • Node.js 18+
  • OpenClaw setup under ~/.openclaw, ~/.clawdbot, or ~/.moltbook
  • At least one configured model provider

Keyboard Shortcuts

Key     Action
↑ ↓     Navigate
Enter   Select
r       Run another workspace
q       Quit
Esc     Back

Troubleshooting

If npm i -g souls fails with EACCES on macOS/Linux:

sudo npm i -g souls

sudo is only needed when npm's global directory lives in a protected location such as /usr/local/lib/node_modules. Version managers like nvm, fnm, and volta install globals under your home directory, so they don't need it.
