Warp-style command prediction and command palette for any terminal.
A Rust-powered command prediction engine with ghost-text autosuggestions, an interactive command palette TUI, and AI-powered command assistance — works in any terminal emulator (Ghostty, Alacritty, Kitty, iTerm2, WezTerm, etc.).
Inspired by Warp's multi-tier prediction approach.
Waz uses a multi-tier prediction system (same approach as Warp terminal):
| Tier | Strategy | Confidence | Description |
|---|---|---|---|
| 0 | Output Hint | Highest | Parses command output for suggested follow-up commands (like Warp's ghost text from npm install → npm start). |
| 1 | Sequence | High | Predicts based on command patterns. If you always run git push after git commit, it learns that. |
| 2 | CWD History | Medium | Falls back to your most recently used commands in the current directory. |
| 3 | LLM | Low | Uses an LLM to predict based on shell context. Supports multiple providers with key rotation. |
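The cascade above can be sketched as: try each tier from highest to lowest confidence and return the first suggestion. A minimal sketch — the predictor bodies are hard-coded stand-ins for illustration, not waz's real logic:

```rust
// Tier 0: output hint — nothing captured in this sketch.
fn output_hint(_prefix: &str) -> Option<String> { None }

// Tier 1: learned sequence, e.g. "git commit → git push" (hard-coded here).
fn sequence(prefix: &str) -> Option<String> {
    if prefix.is_empty() { Some("git push".into()) } else { None }
}

// Tier 2: most recent command in the current directory (stubbed).
fn cwd_history(_prefix: &str) -> Option<String> { Some("cargo build".into()) }

// Tier 3: LLM fallback (stubbed).
fn llm(_prefix: &str) -> Option<String> { Some("ls".into()) }

/// Walk the tiers in confidence order; the first hit wins.
fn predict(prefix: &str) -> Option<String> {
    [output_hint as fn(&str) -> Option<String>, sequence, cwd_history, llm]
        .iter()
        .find_map(|tier| tier(prefix))
}
```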
Proactive prediction: Ghost text appears even on an empty prompt — right after a command finishes, waz suggests what you'll probably run next.
```shell
make install
```

This builds the release binary, installs it to `~/.cargo/bin` and `~/.local/bin`, and reminds you to reload your shell.
Or manually:
```shell
cargo build --release
cp target/release/waz ~/.local/bin/
```

Add to your shell config (quotes are required for Zsh/Bash):
Zsh (`~/.zshrc`):

```shell
eval "$(waz init zsh)"
```

Bash (`~/.bashrc`):

```shell
eval "$(waz init bash)"
```

Fish (`~/.config/fish/config.fish`):

```shell
waz init fish | source
```

For Ghostty users, add this to your config to launch the TUI with Cmd+I:
```
keybind = super+i=text:\x1b[119;97;122~
```
This sends a custom escape sequence that the waz shell integration picks up.
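The parameters 119, 97, 122 in the sequence are the ASCII codes for `w`, `a`, `z`, which is how the integration can recognize its own trigger. A hypothetical decoder sketch (not waz's actual parser):

```rust
/// Decode a CSI sequence of the form "\x1b[119;97;122~" by treating each
/// ';'-separated parameter as an ASCII byte value. Returns None for input
/// that is not a well-formed sequence of this shape.
fn decode_waz_trigger(seq: &str) -> Option<String> {
    // Strip the "\x1b[" prefix and the trailing '~'.
    let body = seq.strip_prefix("\x1b[")?.strip_suffix('~')?;
    body.split(';')
        .map(|p| p.parse::<u8>().ok().map(|b| b as char))
        .collect() // Option<String>: None if any parameter fails to parse
}
```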
Bootstrap predictions from your existing shell history:
```shell
waz import              # auto-detect all shells
waz import --shell zsh  # import from specific shell
```

Supports custom `$HISTFILE` locations (e.g., `~/.config/zsh/.zsh_history`).
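Importing Zsh history means handling the `EXTENDED_HISTORY` format, where each entry looks like `: 1699999999:0;git status`. A sketch of the parsing involved (waz's actual importer may differ):

```rust
/// Extract the command from a Zsh history line. Extended-history lines are
/// ": <start>:<elapsed>;<command>"; plain lines are the command itself.
fn parse_zsh_history_line(line: &str) -> &str {
    match line.strip_prefix(": ") {
        // Take everything after the first ';' separator.
        Some(rest) => rest.split_once(';').map(|(_, cmd)| cmd).unwrap_or(line),
        None => line,
    }
}
```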
Once installed, waz works automatically:
- Ghost text appears as you type — dim gray suggestions
- Ghost text appears on empty prompt — proactive next-command prediction
- Right arrow → Accept full suggestion
- Alt+F → Accept next word
- Ctrl+Space → Fill in the predicted command
```shell
waz predict --cwd .                        # Proactive prediction (no prefix)
waz predict --prefix "git" --format json   # Prediction with prefix
waz record -- "git push"                   # Manually record a command
waz stats                                  # Show database statistics
waz generate brew                          # Generate TMP schema for a CLI tool
waz generate brew --verify                 # Review & approve schema in TUI
waz generate brew --history                # Show schema version history
waz schema list                            # List all installed schemas
waz schema share cargo                     # Export shareable schema
waz schema import ./brew-schema.json       # Install shared schema
waz session-id                             # Generate a new session ID
```

A Warp-like unified command palette built with ratatui. Launch it with:
| Trigger | Where |
|---|---|
| Cmd+I | Ghostty (via custom escape sequence) |
| Ctrl+T | Any terminal |
| `waz tui` | Manual launch |
The TUI starts in an Empty state showing mode hints. Type a prefix to enter a mode:
| Mode | Prefix | What it does |
|---|---|---|
| TMP | `/` | Context-aware command palette with token forms |
| Shell | `!` | Direct shell command input |
| AI | (any text) | Natural language → AI suggests runnable commands |
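The dispatch boils down to looking at the first character of the input; a sketch (the types are illustrative, not waz's actual ones):

```rust
#[derive(Debug, PartialEq)]
enum Mode { Empty, Tmp, Shell, Ai }

/// Pick the TUI mode from the raw input line.
fn mode_for(input: &str) -> Mode {
    match input.chars().next() {
        None => Mode::Empty,      // empty prompt → mode hints
        Some('/') => Mode::Tmp,   // "/" → TMP command palette
        Some('!') => Mode::Shell, // "!" → direct shell input
        Some(_) => Mode::Ai,      // anything else → natural-language AI mode
    }
}
```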
| Key | Action |
|---|---|
| Esc | Go back one layer (see below) |
| Backspace (empty input) | Return to Empty mode |
| Cmd+Backspace / Ctrl+U | Clear entire input line |
| ↑ / ↓ | Navigate command list |
| Tab | Select command / next token field |
| Shift+Tab | Previous token field |
| Enter | Run selected command |
| 1-9 | Quick-select AI command by number |
Each Esc press peels back one layer — never dumps you out unexpectedly:
Placeholder editing → Command selection → AI conversation → Empty mode → Quit
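The layering can be modeled as a state machine where each Esc steps one state toward Quit; a hypothetical sketch mirroring the chain above:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum State { PlaceholderEdit, CommandSelect, AiConversation, Empty, Quit }

/// One Esc press peels back exactly one layer.
fn on_esc(state: State) -> State {
    match state {
        State::PlaceholderEdit => State::CommandSelect,
        State::CommandSelect => State::AiConversation,
        State::AiConversation => State::Empty,
        State::Empty => State::Quit,
        State::Quit => State::Quit,
    }
}
```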
Context-aware command palette powered by unified JSON schemas. On first launch, curated schemas are auto-installed. Additional schemas can be AI-generated or imported.
| Schema | Commands | Dynamic Data Sources |
|---|---|---|
| `cargo` | 12 | bins, features, packages, profiles, tests, benches from Cargo.toml |
| `git` | 12 | branches, remotes from local repo |
| `npm` | 8 | scripts from package.json |
| `bun` | 8 | scripts from package.json |
| `npx` | 1 | — |
| `bunx` | 1 | — |
Schemas are contextual — cargo commands only appear when Cargo.toml exists, npm/bun when package.json exists. Git is always available.
| Source | Commands Loaded |
|---|---|
| `~/.config/waz/schemas/*.json` | Any CLI tool — AI-generated or imported schemas (see Schema Generation) |
Type to filter — uses score-based ranking that prioritizes subcommand names:
```
/commit    → git commit    (exact match, ranked first)
/git com   → git commit    (full command match)
/build     → cargo build   (subcommand match)
/install   → brew install  (from generated schema)
```
Description-only matches are excluded to avoid false positives.
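One way to implement this ranking: score subcommand matches by strength and return no score at all for description-only hits, so they never enter the list. The weights below are invented for illustration:

```rust
/// Score a candidate subcommand against the typed query.
/// None means "excluded" — only the subcommand name is consulted, so a
/// query that merely appears in a description never matches.
fn score(query: &str, subcommand: &str) -> Option<u32> {
    if subcommand == query {
        Some(100) // exact match, ranked first
    } else if subcommand.starts_with(query) {
        Some(50)  // prefix match
    } else if subcommand.contains(query) {
        Some(25)  // substring match
    } else {
        None      // no subcommand match → filtered out
    }
}
```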
When selecting a command with arguments:
- Boolean tokens → toggle with `Space`/`y`/`n`, `Tab` cycles
- Enum tokens → `Tab` cycles through values (packages, scripts, branches)
- String tokens → free-text input
- Live preview → shows the resolved command as you fill tokens
Just start typing a question — waz auto-detects natural language:
```
how to create a new database in psql
find large files over 100mb
```
The AI responds with an explanation and numbered command suggestions.
- Press 1-9 to quick-select by number
- Use ↑/↓ and Enter to select
If the AI suggests a command with placeholders (e.g., psql -U <username> -c "CREATE DATABASE <db_name>"), waz detects them and shows an inline editing form:
```
⌨ Fill in placeholders:
  → psql -U postgres -c "CREATE DATABASE <db_name>"
  username: postgres█
  db_name:
```
- Tab / Shift+Tab → navigate between fields
- Live preview updates as you type
- Enter → run the resolved command
- Esc → back to command selection (pick a different command)
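Detecting placeholders amounts to scanning the suggested command for `<name>` spans; a sketch (waz's real parser may be more robust):

```rust
/// Collect the names of all <...> placeholders in a command string,
/// left to right.
fn placeholders(cmd: &str) -> Vec<String> {
    let mut out = Vec::new();
    let mut rest = cmd;
    while let Some(start) = rest.find('<') {
        // Find the matching '>' after this '<'; stop on unbalanced input.
        if let Some(end) = rest[start..].find('>') {
            out.push(rest[start + 1..start + end].to_string());
            rest = &rest[start + end + 1..];
        } else {
            break;
        }
    }
    out
}
```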
After getting AI results:
- Press Esc once → exit command selection, type a new question
- Press Esc twice → clear AI conversation, start fresh
- Just start typing → clears old response, asks new question
Direct shell command input — type ! followed by any command:
```
!docker compose up -d
!kubectl get pods
```
Press Enter to execute immediately.
Waz uses a unified JSON schema system for all CLI tool commands. Six curated schemas ship built-in, and you can generate schemas for any tool using AI, or import schemas shared by others.
On first TUI launch (or manually with `--init`), curated schemas are auto-installed:

```shell
waz generate cargo --init   # Install all 6 curated schemas
```

Generate schemas for any CLI tool using AI:
```shell
waz generate brew                            # Generate with default model
waz generate kubectl --model gemini-2.5-pro  # Use specific AI model
waz generate docker --force                  # Regenerate (versions old first)
```

How it works:
- Runs `<tool> --help` and recursively `<tool> <subcommand> --help` (up to 20 subcommands)
- Sends the help output to your configured LLM (Gemini by default)
- AI extracts commands, flags, and argument types into a structured `SchemaFile`
- Schema is saved to `~/.config/waz/schemas/<tool>.json` with metadata
- Next TUI launch, the commands appear alongside curated ones
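The harvesting step can be sketched with `std::process::Command`; the 20-subcommand cap follows the description above, everything else is illustrative:

```rust
use std::process::Command;

/// Run `<tool> [subcommand] --help` and capture stdout, if the tool exists.
fn capture_help(tool: &str, subcommand: Option<&str>) -> Option<String> {
    let mut cmd = Command::new(tool);
    if let Some(sub) = subcommand {
        cmd.arg(sub);
    }
    let out = cmd.arg("--help").output().ok()?;
    String::from_utf8(out.stdout).ok()
}

/// Collect the top-level help page plus up to 20 subcommand pages.
fn collect_help(tool: &str, subcommands: &[&str]) -> Vec<String> {
    let mut pages = Vec::new();
    if let Some(top) = capture_help(tool, None) {
        pages.push(top);
    }
    for sub in subcommands.iter().take(20) {
        if let Some(page) = capture_help(tool, Some(sub)) {
            pages.push(page);
        }
    }
    pages
}
```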
Review and approve schemas before using them:
```shell
waz generate brew --verify
```

Opens a two-pane TUI for human-in-the-loop review:
| Key | Action |
|---|---|
| `j`/`k` or ↑/↓ | Navigate commands/tokens |
| `Space` | Toggle command verified ✅ |
| `Tab` | Switch between Commands / Tokens panes |
| `n`/`d`/`f` | Edit token name / description / flag |
| `r` | Toggle required / optional |
| `t` | Cycle token type (String → Boolean → Enum → File → Number) |
| `x` | Test data source live (runs resolver, shows results) |
| `a` / `Del` | Add / delete token |
| `Ctrl+V` | Verify all commands at once |
| `s` | Save changes to JSON |
| `q` | Quit |
List, share, and import schemas:
```shell
waz schema list                                    # Show all installed schemas
waz schema share cargo                             # Export portable .json to CWD
waz schema import ./brew-schema.json               # Install from file
waz schema import https://example.com/schema.json  # Install from URL
```

`waz schema list` output:
```
Tool    Ver  Status       Cmds   Source   Coverage
────────────────────────────────────────────────────────
bun     v1   ✅ verified   8/8    curated  full
cargo   v1   ✅ verified   12/12  curated  full
git     v1   ✅ verified   12/12  curated  full
npm     v1   ✅ verified   8/8    curated  full
```
Share strips runtime-resolved values (keeps data source definitions so importers resolve locally).
Import automatically backs up existing schemas before overwriting.
Export the battle-tested Rust-based cargo/git/npm schemas to JSON:
```shell
waz generate cargo --export   # From a Cargo project directory
waz generate git --export
waz generate npm --export
```

Every `--force` regeneration auto-versions the old schema. Full version history:
```shell
waz generate brew --history
# 📋 Version history for 'brew' (3 versions):
# ─────────────────────────────────────────
#   v1 │ 2h ago │ 15 commands
#   v2 │ 1h ago │ 12 commands
#   v3 │ 5m ago │ 14 commands ← latest
```

Rollback to any version:

```shell
waz generate brew --rollback     # Restore latest versioned backup
waz generate brew --rollback 1   # Restore specific version
```

On `--force`, a colorized diff is shown:
- 🟢 `+ brew search` — new command added
- 🔴 `- brew cleanup` — command removed
- 🟡 `~ brew install` — tokens changed
If generation fails, the previous version is auto-restored.
Schemas support two kinds of dynamic data sources:
Shell commands — run at TUI load time:
```json
{
  "name": "formula",
  "token_type": "Enum",
  "data_source": { "command": "brew list --formula", "parse": "lines" }
}
```

Built-in resolvers — use waz's optimized Rust parsers:
```json
{
  "name": "feature",
  "token_type": "Enum",
  "data_source": { "resolver": "cargo:features", "parse": "lines" }
}
```

Available resolvers: `cargo:bins`, `cargo:examples`, `cargo:packages`, `cargo:features`, `cargo:profiles`, `cargo:tests`, `cargo:benches`, `git:branches`, `git:remotes`, `npm:scripts`.
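Resolving a shell-command data source with `"parse": "lines"` comes down to running the command and splitting its stdout; a sketch, not waz's actual resolver code:

```rust
use std::process::Command;

/// Run a data-source shell command and return its non-empty output lines,
/// which become the enum values offered in the token form.
fn resolve_lines(shell_command: &str) -> Vec<String> {
    match Command::new("sh").arg("-c").arg(shell_command).output() {
        Ok(out) => String::from_utf8_lossy(&out.stdout)
            .lines()
            .map(|l| l.trim().to_string())
            .filter(|l| !l.is_empty())
            .collect(),
        Err(_) => Vec::new(), // resolver failure → empty value list
    }
}
```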
All schemas use a unified SchemaFile format with metadata:
```json
{
  "meta": {
    "tool": "cargo",
    "version": 1,
    "generated_by": "human",
    "verified": true,
    "coverage": "full",
    "requires_file": "Cargo.toml",
    "requires_binary": "cargo"
  },
  "commands": [ ... ]
}
```

```
~/.config/waz/schemas/
├── cargo.json        ← active curated schema
├── git.json
├── npm.json
├── bun.json
├── brew.json         ← AI-generated
├── versions/
│   ├── brew/
│   │   ├── v1.json
│   │   └── v2.json
│   └── cargo/
│       └── v1.json
```
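The metadata maps naturally onto Rust structs; in the real tool these would carry serde derives for (de)serialization. The simplified `commands` field and the `is_active` helper below are illustrative, but the gating matches the contextual behavior described above (cargo only with Cargo.toml present, git always):

```rust
#[derive(Debug, Clone)]
struct SchemaMeta {
    tool: String,
    version: u32,
    generated_by: String,          // "human" for curated schemas
    verified: bool,
    coverage: String,              // e.g. "full"
    requires_file: Option<String>, // schema activates only when present
    requires_binary: Option<String>,
}

/// Contextual gating: a schema is shown only when its required file
/// exists in the current directory; no requirement means always active.
fn is_active(meta: &SchemaMeta, files_in_cwd: &[&str]) -> bool {
    match &meta.requires_file {
        Some(f) => files_in_cwd.contains(&f.as_str()),
        None => true,
    }
}
```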
Ask natural language questions directly from the command line:
```shell
# Just type a question as a command — waz intercepts it automatically
how to find large files

# Or use the ask command explicitly
waz ask "how to uninstall a package with homebrew"
waz ask --json "how to search in files"   # Structured JSON output
```

```shell
waz clear        # Clear history for current directory
waz clear --all  # Clear ALL history across all directories
```

Tier 3 uses an LLM when local history can't produce a prediction. Waz supports 6 providers:
| Provider | Default Model | Base URL | Free Tier |
|---|---|---|---|
| Gemini | `gemini-3.1-flash-lite-preview` | `generativelanguage.googleapis.com` | 50 req/day |
| GLM (z.ai) | `glm-4.7` | `api.z.ai` | Free for dev use |
| Qwen (Alibaba) | `qwen3.5-plus` | `dashscope-intl.aliyuncs.com` | 1M tokens free |
| MiniMax | `MiniMax-M2.5` | `api.minimax.io` | Free credits |
| OpenAI | `gpt-4o-mini` | `api.openai.com` | Paid only |
| Ollama | `llama3.2` | `localhost:11434` | Local, always free |
Just export your API key — waz auto-detects it:
```shell
# Any ONE of these is enough to enable LLM predictions:
export GEMINI_API_KEY="your-key"      # Google Gemini
export GLM_API_KEY="your-key"         # z.ai GLM
export DASHSCOPE_API_KEY="your-key"   # Alibaba Qwen
export MINIMAX_API_KEY="your-key"     # MiniMax
export OPENAI_API_KEY="your-key"      # OpenAI
```

Multiple env vars? Waz creates a provider for each and uses fallback strategy by default.
For advanced control, create ~/.config/waz/config.toml:
Use only Gemini, nothing else:
```toml
[llm]
strategy = "single"
default = "gemini"

[[llm.providers]]
name = "gemini"
keys = ["your-gemini-key"]
```

Try providers in order — if one fails (rate limit, timeout), try the next:
```toml
[llm]
strategy = "fallback"
order = ["gemini", "glm", "qwen", "minimax"]
timeout_secs = 3

[[llm.providers]]
name = "gemini"
keys = ["gemini-key-1"]

[[llm.providers]]
name = "glm"
keys = ["glm-key-1"]

[[llm.providers]]
name = "qwen"
keys = ["dashscope-key-1"]
model = "qwen3.5-plus"
```

If Gemini hits its 50 req/day limit → automatically falls back to GLM → then Qwen.
Spread requests evenly across providers:
```toml
[llm]
strategy = "round-robin"
order = ["gemini", "glm", "qwen"]

[[llm.providers]]
name = "gemini"
keys = ["key-1"]

[[llm.providers]]
name = "glm"
keys = ["key-1"]

[[llm.providers]]
name = "qwen"
keys = ["key-1"]
```

Request 1 → Gemini, Request 2 → GLM, Request 3 → Qwen, Request 4 → Gemini...
Rotate keys within a single provider (useful for multiple free-tier accounts):
```toml
[llm]
strategy = "single"
default = "gemini"

[[llm.providers]]
name = "gemini"
keys = ["account-1-key", "account-2-key", "account-3-key"]
model = "gemini-3.1-flash-lite-preview"
```

Each request uses the next key: key1 → key2 → key3 → key1...
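The rotation is just a counter modulo the pool size; a minimal in-memory sketch (waz persists its rotation state to `rotation.json`, which this sketch omits):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// A pool of API keys for one provider, cycled round-robin.
struct KeyPool {
    keys: Vec<String>,
    cursor: AtomicUsize,
}

impl KeyPool {
    /// Each call advances the cursor and wraps around the pool.
    fn next_key(&self) -> &str {
        let i = self.cursor.fetch_add(1, Ordering::Relaxed);
        &self.keys[i % self.keys.len()]
    }
}
```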
Maximum free-tier usage — rotate keys AND providers:
```toml
[llm]
strategy = "fallback"
order = ["gemini", "glm", "qwen", "minimax"]

[[llm.providers]]
name = "gemini"
keys = ["gemini-acct-1", "gemini-acct-2"]
model = "gemini-3.1-flash-lite-preview"

[[llm.providers]]
name = "glm"
base_url = "https://api.z.ai/api/paas/v4"
keys = ["glm-key-1", "glm-key-2"]
model = "glm-4.7"

[[llm.providers]]
name = "qwen"
base_url = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
keys = ["qwen-key-1"]
model = "qwen3.5-plus"

[[llm.providers]]
name = "minimax"
base_url = "https://api.minimax.io/v1"
keys = ["mm-key-1"]
model = "MiniMax-M2.5"
```

Env vars are additive — they add to the key pool, never override:
| Setup | Key Pool |
|---|---|
| Only `GEMINI_API_KEY="A"` | `["A"]` — 1 key |
| Only config `keys = ["A", "B"]` | `["A", "B"]` — 2 keys |
| Env `"C"` + config `["A", "B"]` | `["A", "B", "C"]` — 3 keys |
| Env `"A"` + config `["A", "B"]` | `["A", "B"]` — deduped |
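The additive merge can be sketched as an order-preserving dedup with config keys first and env keys appended, matching the table above:

```rust
/// Merge config and env keys into one pool, keeping first occurrences
/// and dropping exact duplicates.
fn merge_keys(config_keys: &[&str], env_keys: &[&str]) -> Vec<String> {
    let mut pool: Vec<String> = Vec::new();
    for k in config_keys.iter().chain(env_keys.iter()) {
        if !pool.iter().any(|existing| existing.as_str() == *k) {
            pool.push((*k).to_string());
        }
    }
    pool
}
```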
Any service with an OpenAI-compatible /v1/chat/completions endpoint works:
```toml
[[llm.providers]]
name = "custom"
base_url = "https://your-api-endpoint.com/v1"
keys = ["your-key"]
model = "your-model-name"
```

```
┌─────────────────────────────────────────────────────┐
│               Shell Integration Layer               │
│   ┌─────────┐   ┌──────────┐   ┌────────────┐       │
│   │   Zsh   │   │   Bash   │   │    Fish    │       │
│   │  (ZLE)  │   │(readline)│   │  (events)  │       │
│   └────┬────┘   └────┬─────┘   └─────┬──────┘       │
│        │             │               │              │
│   Cmd+I / Ctrl+T launches unified TUI               │
│   Ghost-text autosuggestions via predictions        │
│   Output capture → hint suggestions (Tier 0)        │
└────────┼─────────────┼───────────────┼──────────────┘
         └─────────────┼───────────────┘
                       │
           ┌───────────▼───────────┐
           │   waz binary (Rust)   │
           │                       │
           │ ┌─ Prediction ───────┤
           │ │  Tier 0: Output    │
           │ │  Tier 1: Sequence  │
           │ │  Tier 2: CWD       │
           │ │  Tier 3: LLM       │
           │ ├─ Unified TUI ──────┤
           │ │  / → TMP mode      │
           │ │  ! → Shell mode    │
           │ │  text → AI mode    │
           │ │  Placeholder edit  │
           │ │  Score filtering   │
           │ ├─ Schema System ────┤
           │ │  6 curated JSON    │
           │ │  AI-powered gen    │
           │ │  Verify TUI        │
           │ │  Share/import      │
           │ │  Version control   │
           │ │  Built-in resolvers│
           │ ├─ AI Assistant ─────┤
           │ │  Structured JSON   │
           │ │  Command resolver  │
           │ └────────────────────┤
           │                       │
           │    SQLite History     │
           └───────────────────────┘
```
- History DB: `~/Library/Application Support/waz/history.db` (macOS) / `~/.local/share/waz/history.db` (Linux)
- Rotation state: `~/Library/Application Support/waz/rotation.json`
- Config: `~/.config/waz/config.toml`
- Curated Schemas: `schemas/curated/*.json` (bundled in repo, auto-copied on first run)
- Active Schemas: `~/Library/Application Support/waz/schemas/*.json` (installed schemas)
- Schema Versions: `~/Library/Application Support/waz/schemas/versions/<tool>/v1.json ...`
MIT