A bash-centric command-line interface for interacting with large language models across 100+ providers. Chain LLM calls using familiar bash piping and scripting techniques.
CLLM bridges the gap between ChatGPT GUIs and complex automation by providing transparency and bash integration for AI workflows. It's designed for developers who want programmatic control over LLM interactions without leaving their terminal - whether for quick one-liners, sophisticated pipelines, or CI/CD integration.
- Features
- Key Concepts
- Installation
- Quick Start
- Advanced Usage
- Security Best Practices
- Examples
- CLI Reference
- Providers & Models
- Development
- Architecture Decision Records
- Simple CLI Interface: Generate text from prompts with straightforward commands
- Project Initialization: Bootstrap `.cllm` directories with `cllm init` using pre-built templates
- LLM Chaining: Chain multiple LLM calls using bash pipes and scripts
- Structured Output: Get guaranteed JSON output conforming to JSON Schema specifications
- Conversation Threading: Multi-turn conversations with automatic context management
- System Prompt Override: Customize AI behavior per-command with `--system` or `--system-file` flags
- Real-time Streaming: Stream responses as they're generated for long-form content
- Debugging & Logging: Built-in debug mode with structured JSON logging and progressive verbosity levels
- Dynamic Context Injection: Execute commands automatically to inject system context (git status, logs, etc.)
- Variable Expansion: Parameterized commands with Jinja2 templates for reusable workflows
- LLM-Driven Command Execution: Let the LLM intelligently choose and run commands as needed
- Multi-Provider Support (100+ providers via LiteLLM):
- OpenAI (GPT-3.5, GPT-4, GPT-4 Turbo, GPT-4o)
- Anthropic (Claude 3 Haiku, Sonnet, Opus, 3.5 Sonnet)
- Google (Gemini Pro, 1.5 Pro, 1.5 Flash)
- Groq (Mixtral, Llama 3)
- AWS Bedrock
- Azure OpenAI
- Local Ollama models
- And 90+ more providers
- Flexible Configuration: Shareable Cllmfile.yml configs with environment variable interpolation
- Bash Scripts Library: Curated examples for common workflows (git-diff review, prompt loops, etc.)
- Developer-Friendly: Modern Python packaging with `uv`, comprehensive test suite
CLLM uses LiteLLM to provide a unified interface across 100+ LLM providers. This means:
- Same code works everywhere: Switch from OpenAI to Claude by just changing the model name
- No provider-specific SDKs: One interface, one API, all providers
- Automatic format translation: All responses follow a consistent format
# Same command, different providers
cllm --model gpt-4 "Hello" # OpenAI
cllm --model claude-3-5-sonnet-20240620 "Hello" # Anthropic
cllm --model gemini-pro "Hello" # Google
cllm --model groq/llama-3.1-70b-versatile "Hello"       # Groq

CLLM is optimized for command-line and scripting workflows:
- Stdin/stdout piping: `cat file.txt | cllm "Summarize" > summary.txt`
- Exit codes: Proper error codes for scripts and CI/CD
- No GUI required: All features accessible via CLI flags
- Composable: Chain multiple LLM calls using Unix pipes
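Because cllm exits non-zero on failure, scripts and CI jobs can branch on its exit code directly. A minimal sketch (the filenames and prompt are illustrative):

```bash
#!/usr/bin/env bash
# Branch on cllm's exit status; only replace the summary if the call succeeded.
set -euo pipefail

if cat report.txt | cllm "Summarize this in one paragraph:" > summary.tmp; then
    mv summary.tmp summary.txt
    echo "Summary written to summary.txt"
else
    echo "cllm call failed; keeping the existing summary.txt" >&2
    rm -f summary.tmp
    exit 1
fi
```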
Settings are merged from multiple sources (lowest to highest priority):
1. `~/.cllm/Cllmfile.yml` (global defaults)
2. `./.cllm/Cllmfile.yml` (project-specific)
3. `./Cllmfile.yml` (current directory)
4. Environment variables (`CLLM_*`)
5. CLI arguments (always win)
This allows you to set global defaults, override per-project, and customize per-command.
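For example, assuming a project config pins a default model, a one-off CLI flag still takes priority (the config values shown are illustrative):

```bash
# Suppose ./.cllm/Cllmfile.yml contains:  model: gpt-4
cllm "Summarize this repo"                                      # uses gpt-4 from the project config
cllm --model claude-3-5-sonnet-20240620 "Summarize this repo"   # CLI flag overrides the config

# Inspect the merged configuration that will actually be used
cllm --show-config
```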
- Python 3.8 or higher
- API keys for your chosen LLM provider(s)
- `uv` (recommended for development - installation)
uv tool install https://github.com/o3-cloud/cllm.git

git clone https://github.com/o3-cloud/cllm.git
cd cllm
uv sync
# Run locally
uv run cllm "Hello world"
# Or install globally
uv pip install -e .

Bootstrap your CLLM setup with the init command to create `.cllm` directories with configuration templates (ADR-0015):
# Initialize local .cllm directory in current project
cllm init
# Initialize global ~/.cllm directory
cllm init --global
# Initialize both local and global
cllm init --global --local
# List available templates
cllm init --list-templates
# Available templates:
# code-review - GPT-4 configuration for code review with structured output
# summarize - Optimized for summarization tasks
# creative - Higher temperature for creative writing
# debug - Configuration for debugging assistance
# extraction - Data extraction with structured output
# task-parser - Parse tasks from natural language
# context-demo - Demonstrates dynamic context injection
# Initialize with a specific template
cllm init --template code-review
cllm init --template summarize
cllm init --template creative
# Combine template with location
cllm init --global --template debug
# Force reinitialize (overwrite existing files)
cllm init --force

What gets created:
# Without template:
.cllm/
├── conversations/              # Conversation storage (local-first)
├── Cllmfile.yml                # Default configuration
└── .gitignore                  # Excludes conversations/ and logs (local only)

# With --template code-review:
.cllm/
├── conversations/              # Conversation storage (local-first)
├── code-review.Cllmfile.yml    # Named config (use with --config code-review)
└── .gitignore                  # Excludes conversations/ and logs (local only)
Key Features:
- Template library: 7 pre-built templates for common use cases
- Smart defaults: Sensible starter configuration with helpful comments
- Gitignore management: Automatically excludes conversation history from version control
- Local-first: Defaults to `./.cllm/` (project-specific) unless `--global` is specified
- Template-aware guidance: Next-step suggestions adapt to your chosen template
- Idempotent: Safe to run multiple times with the `--force` flag
Example Workflow:
# Start a new project
mkdir my-project && cd my-project
# Initialize with code-review template
cllm init --template code-review
# Review the configuration (creates code-review.Cllmfile.yml)
vim .cllm/code-review.Cllmfile.yml
# Use the named config with --config flag
git diff | cllm --config code-review "Review these changes"

# Simple prompt (uses gpt-3.5-turbo by default)
cllm "What is the capital of France?"
# Discover available models
cllm --list-models
cllm --list-models | grep gpt-4 # Filter to specific models
# Use a specific model
cllm --model gpt-4 "Explain quantum computing"
# Use a different provider (same interface!)
cllm --model claude-3-5-sonnet-20240620 "Write a haiku"
cllm --model gemini-pro "Tell me a joke"
# Stream the response as it's generated
cllm --model gpt-4 --stream "Tell me a story"
# Read from stdin (pipe-friendly!)
echo "What is 2+2?" | cllm --model gpt-4
cat document.txt | cllm "Summarize this:"
# Control creativity with temperature
cllm --model gpt-4 --temperature 1.5 "Write a creative story"
# Limit response length
cllm --model gpt-4 --max-tokens 100 "Explain quantum computing"
# Customize system prompt on-the-fly
cllm --system "You are a pirate. Speak like one." "Tell me about Python"
# Load system prompt from file (great for complex, reusable prompts)
cllm --system-file prompts/code-reviewer.txt < code.py
# Override config file system prompt for specific use
cllm --conversation debug --system "You are a debugging expert" "Why does this crash?"

CLLM supports multi-turn conversations with automatic context management (ADR-0007):
# Start a new conversation with a custom ID
cllm --conversation code-review "Review this authentication code: $(cat auth.py)"
# Continue the conversation - context is automatically loaded
cllm --conversation code-review "What about SQL injection risks?"
# Continue again - full conversation history is maintained
cllm --conversation code-review "Show me how to fix these issues"
# Or let CLLM auto-generate a conversation ID
cllm --conversation conv-a3f9b2c1 "Start a discussion about Python best practices"
# List all your conversations
cllm --list-conversations
# View a conversation's full history
cllm --show-conversation code-review
# Delete a conversation when done
cllm --delete-conversation code-review

Key Features:
- Stateless by default: Without `--conversation`, CLLM works as before (no history saved)
- Named conversations: Use meaningful IDs like `bug-investigation` or `refactor-planning`
- Auto-generated IDs: Omit the ID to get a UUID-based identifier like `conv-a3f9b2c1`
- Context preservation: Full message history is maintained across calls
- Model consistency: The model is remembered for each conversation
- Token tracking: Automatic token counting to help manage context windows
- Configurable storage: Conversations can be stored anywhere (see Configurable Conversations Path)
Example Workflow:
# Start investigating a bug
cllm --conversation bug-123 "I'm seeing intermittent timeouts in production"
# Add more context as you debug
cllm --conversation bug-123 "Here are the logs: $(cat error.log)"
# Ask follow-up questions
cllm --conversation bug-123 "Could this be related to connection pooling?"
# Get the solution
cllm --conversation bug-123 "How should I fix this?"
# Review the conversation later
cllm --show-conversation bug-123

Use the `--read-only` flag to leverage existing conversation context without modifying it (ADR-0018). This is perfect for:
- Conversation templates: Reuse a conversation as a template for similar tasks
- A/B testing prompts: Experiment with different approaches against the same context
- Shared reference conversations: Team members can use shared conversations without modification
- Report generation: Generate multiple reports or analyses from the same conversation
# Create a conversation template with standard context
cllm --conversation code-review-template "You are a code reviewer. Focus on security, performance, and maintainability."
# Use the template repeatedly without modifying it
cat file1.py | cllm --conversation code-review-template --read-only "Review this code"
cat file2.py | cllm --conversation code-review-template --read-only "Review this code"
cat file3.py | cllm --conversation code-review-template --read-only "Review this code"
# The template remains unchanged - always has just the initial message
cllm --show-conversation code-review-template # Still only 1 message!
# Test different approaches without polluting context
cllm --conversation base-context "Here's the background: $(cat context.txt)"
cllm --conversation base-context --read-only "Approach A: Try this solution"
cllm --conversation base-context --read-only "Approach B: Try that solution"
cllm --conversation base-context --read-only "Approach C: Try another solution"
# The base context conversation still only has the initial message
cllm --show-conversation base-context # Just the background, no A/B/C prompts
# Team collaboration: shared conversations on network storage
export CLLM_CONVERSATIONS_PATH=/mnt/team-shared
cllm --conversation team-guidelines --read-only "How should I handle errors?"
# Other team members can also read, but no one accidentally modifies

Key Points:
- `--read-only` requires `--conversation` (error if used without it)
- The conversation history is used as context for the LLM
- New messages are NOT saved to the conversation file
- Responses are still generated and displayed normally
- Perfect for preserving reference conversations or templates
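For example (a short sketch reusing the template conversation from above):

```bash
# Works: read-only against an existing conversation
cat file.py | cllm --conversation code-review-template --read-only "Review this code"

# Errors out: --read-only requires --conversation
cat file.py | cllm --read-only "Review this code"
```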
CLLM allows you to customize where conversations are stored independently of configuration files (ADR-0017). This enables powerful workflows like:
- Shared conversations across projects: Multiple projects sharing the same conversation history
- Cloud-backed storage: Store conversations on network drives, S3 mounts, or database filesystems
- Team collaboration: Multiple team members accessing shared conversation storage
- Different storage tiers: Fast local config with durable remote conversations
Storage precedence:
1. `--conversations-path` CLI flag (highest)
2. `CLLM_CONVERSATIONS_PATH` environment variable
3. `conversations_path` in Cllmfile.yml
4. Custom .cllm path via `--cllm-path` or `CLLM_PATH`: `<path>/conversations/`
5. Local project: `./.cllm/conversations/` (if a `.cllm` directory exists)
6. Global home: `~/.cllm/conversations/` (fallback)
Example - Cllmfile.yml configuration:
# .cllm/Cllmfile.yml - Project-specific conversation storage
# Relative path (resolved from current working directory)
conversations_path: ./conversations
conversations_path: ./data/conversations
# Absolute path
conversations_path: /mnt/shared-conversations
# Supports environment variable interpolation
conversations_path: ${HOME}/project-conversations
# Combined with other config
model: gpt-4
temperature: 0.7
conversations_path: ./data/conversations

Example - Shared conversations across projects (env var):
# Set up shared conversation storage
export CLLM_CONVERSATIONS_PATH=~/shared-conversations
# All projects share the same conversation history
cd ~/project1
cllm --conversation code-review "Review these changes"
cd ~/project2
cllm --conversation code-review "Continue reviewing" # Same conversation!Example - Cloud-backed storage:
# Mount S3 bucket or NFS share
export CLLM_CONVERSATIONS_PATH=/mnt/s3-conversations
# Conversations automatically persisted to cloud
cllm --conversation important-decisions "Document our architecture choice"

Example - Team collaboration:
# All team members point to shared network drive
export CLLM_CONVERSATIONS_PATH=/network/team/cllm-conversations
# Team can collaborate on conversations
cllm --conversation team-brainstorm "Let's explore this feature"

Example - Per-invocation override:
# Normal usage
cllm --conversation prod "Production conversation"
# Test with temporary location
cllm --conversations-path /tmp/test-conv --conversation test "Test conversation"

Set up API keys as environment variables (LiteLLM conventions):
# OpenAI
export OPENAI_API_KEY="sk-..."
# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# Google/Gemini
export GOOGLE_API_KEY="..."
# Other providers follow similar patterns
# See: https://docs.litellm.ai/docs/providers

Note: CLLM uses LiteLLM for provider abstraction, supporting 100+ LLM providers with a unified interface. See ADR-0002 for details.
CLLM provides powerful debugging capabilities for troubleshooting API issues, understanding token usage, and investigating unexpected behavior:
# Enable debug mode (shows API calls, headers, response metadata)
cllm --debug "Explain quantum computing"
# ⚠️ Debug mode enabled. API keys may appear in output.
# Enable structured JSON logging for observability tools
cllm --json-logs "Process this data" < input.txt
# Save debug output to a file (preserves stdout for piping)
cllm --debug --log-file debug.log "Query" < data.txt
# Combine multiple debug options
cllm --debug --json-logs --log-file cllm-debug.json "Test prompt"
# Use environment variables for persistent debugging
export CLLM_DEBUG=true
export CLLM_LOG_FILE=cllm.log
cllm "What is 2+2?" # Debug output automatically enabledDebug Output Includes:
- Full request/response details
- API endpoint and headers
- Token usage and costs
- Latency measurements
- Error messages and stack traces
Security Warning: Debug mode logs API keys. Never use --debug in production or with confidential data.
See ADR-0009 for complete documentation.
For more granular control over output verbosity without full debug mode, use the -v flag:
# Level 1: Basic info (model, token counts, provider)
cllm -v "Analyze this document"
# Level 2: Add API details (endpoint, parameters, response status)
cllm -vv "Analyze this document"
# Level 3: Full debug (equivalent to --debug)
cllm -vvv "Analyze this document"
# Long form alternatives (if you prefer)
cllm --verbose "Analyze this document"
cllm --verbose --verbose "Analyze this document"
# Use environment variable for persistent verbosity
export CLLM_VERBOSITY=2
cllm "Your prompt" # Will show API details
# Combine with other debug flags
cllm -vv --json-logs "Save detailed logs" > output.json
cllm -vvv --log-file debug.log "Deep inspection"

Verbosity Levels:
- Level 0 (default): No verbose output
- Level 1 (`-v`): Shows model name, token count, provider, latency
- Level 2 (`-vv`): Adds API endpoint, parameters, status code, config sources
- Level 3 (`-vvv`): Full debug output (same as `--debug`)
See ADR-0025 for complete documentation.
CLLM can also be used as a Python library:
from cllm import LLMClient
# Initialize client
client = LLMClient()
# Simple completion
response = client.complete(
model="gpt-4",
messages="What is the capital of France?"
)
print(response) # "Paris"
# Switch provider (same code!)
response = client.complete(
model="claude-3-opus-20240229",
messages="What is the capital of France?"
)
# Multi-turn conversation with history
conversation = [
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hi! How can I help?"},
{"role": "user", "content": "What's 2+2?"}
]
response = client.complete(model="gpt-4", messages=conversation)
# Or use ConversationManager for persistent conversations
from cllm.conversation import ConversationManager
manager = ConversationManager()
conv = manager.create(conversation_id="my-chat", model="gpt-4")
# Add messages
conv.add_message("user", "Hello!")
manager.save(conv)
# Load and continue
conv = manager.load("my-chat")
conv.add_message("user", "What's 2+2?")
response = client.complete(model=conv.model, messages=conv.get_messages())
conv.add_message("assistant", response)
manager.save(conv)
# Streaming (real-time output + complete response)
response = client.complete(model="gpt-4", messages="Count to 5", stream=True)
# Output is printed in real-time, response contains complete text
print(f"\nFinal response: {response}")
# Async support
import asyncio
async def main():
response = await client.acomplete(
model="gpt-4",
messages="Hello!"
)
print(response)
asyncio.run(main())

See the examples/ directory for more usage patterns.
Chain multiple LLM calls together to build complex workflows:
# Generate a story outline, then expand it
cllm "Write a 3-point outline for a sci-fi story" | \
cllm "Expand this outline into a full story:"
# Analyze code and generate tests
cat my_code.py | \
cllm "Analyze this code and suggest edge cases" | \
cllm "Generate pytest unit tests for these cases"
# Multi-stage content refinement
echo "Topic: Climate change" | \
cllm "Create an outline" | \
cllm "Expand with examples" | \
cllm "Add citations"CLLM includes a library of curated bash scripts for common workflows (see examples/bash/):
# Interactive prompt loop with conversation context
./examples/bash/prompt-loop.sh my-conversation
# Code review workflow for git diffs
git diff main | ./examples/bash/git-diff-review.sh
# Automated daily summaries (for cron)
./examples/bash/cron-digest.sh

Key features of example scripts:
- POSIX-compatible bash (`set -euo pipefail`)
- Robust error handling
- Environment variable configuration
- Smoke-tested in CI
See ADR-0008 for implementation details.
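A minimal sketch of a script written in the same style as the library (this particular script and its prompt are illustrative, not one of the shipped examples):

```bash
#!/usr/bin/env bash
# summarize-log.sh - pipe a log file through cllm, following the conventions above
set -euo pipefail

LOG_FILE="${1:?usage: summarize-log.sh <log-file>}"
MODEL="${MODEL:-gpt-4}"   # override via the MODEL environment variable

if [[ ! -r "$LOG_FILE" ]]; then
    echo "error: cannot read $LOG_FILE" >&2
    exit 1
fi

cat "$LOG_FILE" | cllm --model "$MODEL" "Summarize the errors in this log:"
```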
Get guaranteed structured JSON output that conforms to your schema (ADR-0005):
# Using inline JSON schema
echo "John Doe, age 30, software engineer" | \
cllm --model gpt-4o --json-schema '{
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "number"},
"occupation": {"type": "string"}
},
"required": ["name", "age"]
}'
# Using external schema file
cat document.txt | cllm --model gpt-4o --json-schema-file schemas/person.json
# Using Cllmfile configuration
echo "Extract entities..." | cllm --config extraction
# Parse output with jq
cllm --json-schema-file schemas/person.json "Extract info..." | jq '.name'
# Validate schema before using (no API call)
cllm --validate-schema --json-schema-file examples/schemas/person.json

Example schemas available in examples/schemas/:
- `person.json` - Extract person information
- `entity-extraction.json` - Named entity recognition
- `sentiment.json` - Sentiment analysis with emotions
Tip: Use --validate-schema to test your schemas without making API calls.
See examples/schemas/README.md for detailed usage and examples.
Create reusable configuration profiles to reduce repetitive CLI flags (ADR-0003).
Quick setup with templates:
# Bootstrap with a pre-built template
cllm init --template code-review # GPT-4 config for code reviews
cllm init --template summarize # Optimized for summarization
cllm init --template creative # Higher temperature for creative tasks
# See all available templates
cllm init --list-templates

Manual configuration:
# Cllmfile.yml - Project-wide defaults
model: "gpt-4"
temperature: 0.7
max_tokens: 1000
timeout: 60
num_retries: 2
# Fallback models (automatic failover)
fallbacks:
- "gpt-3.5-turbo-16k"
- "claude-3-sonnet-20240229"
# Environment variable interpolation
api_key: "${OPENAI_API_KEY}"
# Default system message
default_system_message: "You are a helpful coding assistant."

Named Configurations:
# Create profile-specific configs
# examples/configs/summarize.Cllmfile.yml
# examples/configs/creative.Cllmfile.yml
# examples/configs/code-review.Cllmfile.yml
# Use named configurations
cat article.md | cllm --config summarize
echo "Write a story" | cllm --config creative
git diff | cllm --config code-review
# Override config with CLI args (CLI always wins)
cllm --config summarize --temperature 0.5 < doc.txt
# Debug effective configuration
cllm --show-config --config my-profile

File Precedence (lowest to highest):
1. `~/.cllm/Cllmfile.yml` (global defaults)
2. `./.cllm/Cllmfile.yml` (project-specific)
3. `./Cllmfile.yml` (current directory)
4. CLI arguments (highest priority)
See examples/configs/ for example configurations.
Automatically execute commands to inject system context into your prompts (ADR-0011):
# Execute a command and inject its output as context
cllm "What should I commit?" --exec "git status"
# Multiple commands (executed in order)
cllm "Debug this error" --exec "git diff" --exec "cat error.log"
# Use with any prompt
echo "Analyze the changes" | cllm --exec "git log -5 --oneline"Configuration-based context injection in Cllmfile.yml:
# code-review.Cllmfile.yml
model: gpt-4

context_commands:
  - name: "Git Status"
    command: "git status --short"
    on_failure: "warn"   # "warn" | "ignore" | "fail"
    timeout: 5           # seconds
  - name: "Recent Changes"
    command: "git diff HEAD~1"
    on_failure: "ignore"
  - name: "Test Results"
    command: "npm test --silent"
    on_failure: "warn"

# Use the configuration
cllm --config code-review "Review my changes"
# Disable context commands from config
cllm --config code-review --no-context-exec "Quick question"
# Combine config + ad-hoc commands
cllm --config code-review --exec "cat additional.log" "Analyze"

Key Features:
- Context injected as labeled blocks in the prompt
- Commands run in parallel for efficiency
- Configurable error handling (fail/warn/ignore)
- Timeout protection for long-running commands
- Works with all LLM providers
Create reusable, parameterized workflows with Jinja2 templates (ADR-0012):
# Pass variables via CLI flags
cllm "Review this file" \
--var FILE_PATH=src/main.py \
--var VERBOSE=true \
--exec "cat {{ FILE_PATH }}"
# Variables work in context commands
cllm --var BRANCH=feature/auth \
--exec "git diff main..{{ BRANCH }}" \
"What changed?"Declare variables in Cllmfile.yml:
# review-file.Cllmfile.yml
variables:
  FILE_PATH: "README.md"   # Default value
  MAX_LINES: 50            # Numeric default
  VERBOSE: false           # Boolean
  TEST_NAME: null          # Required (no default)

context_commands:
  - name: "File Contents"
    command: "cat {{ FILE_PATH }} | head -n {{ MAX_LINES }}"
  - name: "Git Diff"
    command: "git diff {% if VERBOSE %}--stat{% endif %} {{ FILE_PATH }}"
  - name: "Test Output"
    command: "pytest -k {{ TEST_NAME }} {% if VERBOSE %}-vv{% else %}-v{% endif %}"

# Override defaults with CLI flags
cllm --config review-file --var FILE_PATH=src/app.py --var TEST_NAME=test_login
# Use environment variables
export BRANCH=feature/new-feature
cllm --var FILE_PATH=src/auth.py # Uses CLI file, env BRANCH
# Variables with filters and logic
cllm --var NAME=john \
--exec "echo 'Hello {{ NAME | upper }}'" \
"Process this greeting"Variable Precedence (highest to lowest):
1. CLI flags (`--var KEY=VALUE`)
2. Environment variables (`$KEY`)
3. Cllmfile.yml defaults (`variables:` section)
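A quick illustration of the precedence, reusing the review-file config above (values are illustrative):

```bash
# Cllmfile default:  FILE_PATH: "README.md"
export FILE_PATH=src/config.py                  # environment variable overrides the Cllmfile default
cllm --config review-file "Review this file"    # FILE_PATH resolves to src/config.py

cllm --config review-file --var FILE_PATH=src/app.py "Review this file"
# CLI flag wins: FILE_PATH resolves to src/app.py
```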
Jinja2 Features:
- Filters: `{{ VAR | upper }}`, `{{ VAR | default('fallback') }}`
- Conditionals: `{% if VERBOSE %}--verbose{% endif %}`
- Loops and transformations
- Sandboxed execution for security
Let the LLM intelligently choose and execute commands as needed (ADR-0013):
# Enable dynamic command execution (requires tool-calling capable model)
cllm "Why is my build failing?" --allow-commands
# The LLM can now:
# 1. Analyze your question
# 2. Decide which commands to run (e.g., "npm run build")
# 3. Execute commands and read output
# 4. Iteratively gather more information if needed
# 5. Provide a complete answer with context

Safety Controls:
# Allowlist specific commands (wildcards supported)
cllm "Debug this" --allow-commands --command-allow "git*,npm*,cat*,ls*"
# Denylist dangerous commands
cllm "Analyze system" --allow-commands --command-deny "rm*,mv*,dd*,sudo*"
# Configure in Cllmfile.yml

Example Cllmfile.yml:
# debug.Cllmfile.yml
model: gpt-4
allow_commands: true
command_allow:
- "git*"
- "npm*"
- "pytest*"
- "cat*"
- "ls*"
- "grep*"
command_deny:
- "rm*"
- "mv*"
- "sudo*"Use Cases:
- Debugging: "Why is this test failing?" β LLM runs test, reads output, analyzes
- Code Review: "What changed in this PR?" β LLM checks git diff, analyzes files
- System Diagnostics: "Why is my app slow?" β LLM checks logs, metrics, processes
- Build Issues: "Fix my build" β LLM runs build, identifies errors, suggests fixes
Combining with Structured Output (ADR-0014):
# Get structured JSON output after dynamic command execution
cllm "Analyze the test failures" \
--allow-commands \
--json-schema '{
"type": "object",
"properties": {
"failures": {"type": "array"},
"root_cause": {"type": "string"},
"fix_steps": {"type": "array"}
}
}'

Requirements:
- Tool-calling capable model (GPT-4, Claude 3+, Gemini Pro)
- Explicit opt-in via the `--allow-commands` flag
- Commands visible in debug output (`--debug`)
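For example, pairing the opt-in flag with debug logging keeps a record of every command the LLM chose to run (a sketch; the log filename is illustrative, and the grep pattern follows the audit-logging example in Security Best Practices):

```bash
# Opt in explicitly and capture what the LLM executes
cllm --model gpt-4 --allow-commands --debug --log-file agent-run.log \
  "Why is the test suite failing?"

# Review the executed commands afterwards
grep "Executing command" agent-run.log
```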
Define commands with explicit parameter types and hints to help LLMs choose the right arguments (ADR-0024):
Bracket Syntax for Parameter Types:
# debug.Cllmfile.yml - With explicit parameter hints
model: gpt-4
allow_commands: true

dynamic_commands:
  available_commands:
    # Legacy wildcard syntax (still works!)
    - command: "cat *"
      description: "Display file contents"

    # New bracket syntax with type hints
    - command: "cat <path:file to read>"
      description: "Display file contents of a file"

    # Multiple parameters with semantic hints
    - command: "grep <regex:search pattern> <path:file path>"
      description: "Search for text patterns in a file"

    # Network operations
    - command: "curl -X <string:http method> <url:endpoint url>"
      description: "Make an HTTP request to an endpoint"
    - command: "curl -X POST <url:api endpoint> --data <json:json payload>"
      description: "Submit JSON data to an API endpoint"

    # Development tools
    - command: "git log -n <number:commit count> --oneline"
      description: "Show recent commits (specify how many)"
    - command: "rg <regex:search pattern> --type <string:file type>"
      description: "Search code with ripgrep (file types: py, js, go, rust, etc.)"

Supported Parameter Types:
| Type | Format | Example | LLM Guidance |
|---|---|---|---|
| String | `<string>` or `<string:hint>` | `<string:username>` | Generic text or specific hint |
| Number | `<number>` | `<number>` | Numeric value (int or float) |
| Path | `<path>` or `<path:hint>` | `<path:input file>` | File or directory path |
| URL | `<url>` or `<url:hint>` | `<url:endpoint>` | HTTP/HTTPS URL |
| JSON | `<json>` or `<json:hint>` | `<json:request body>` | JSON data structure |
| Regex | `<regex>` or `<regex:hint>` | `<regex:pattern>` | Regular expression |
Benefits of Bracket Syntax:
- Precise Parameter Validation: Automatically rejects commands with wrong parameter counts
- LLM Guidance: Type hints appear in tool descriptions, helping LLMs select appropriate values
- Better Error Messages: "Expected 2 parameters, got 1" instead of generic rejection
- Backward Compatible: Wildcard syntax still works alongside bracket syntax
- Gradual Migration: Adopt new syntax incrementally, no breaking changes
Example - LLM sees detailed guidance:
When you use bracket syntax, the LLM receives structured parameter information:
Available commands:
- `curl -X <string:http method> <url:endpoint url>`: Make HTTP request
Parameters:
- string: http method
- url: endpoint url
The LLM can now understand it needs an HTTP method (GET, POST, etc.) and a URL, making better decisions about what values to provide.
Migration Path:
# Phase 1: Start with wildcard syntax
- command: "cat *"
# Phase 2: Gradually adopt bracket syntax
- command: "cat <path:file to read>" # More precise!
# Phase 3: New commands always use bracket syntax
# (no need to migrate existing wildcards)

See examples/configs/adr-0024-migration.Cllmfile.yml for a complete example showing both syntaxes side-by-side.
CLLM's command execution features (--exec, context_commands, --allow-commands) are powerful but require careful security consideration. Follow these best practices to use them safely.
| Feature | Safety Level | Use Case | Security Notes |
|---|---|---|---|
| `--exec` | Medium | Ad-hoc, known commands | You control what runs |
| `context_commands` | Medium | Reusable workflows | Review shared configs |
| `--allow-commands` | Requires Care | Exploratory debugging | LLM decides what runs |
Always use allowlists with --allow-commands:
# ✅ GOOD: Restrict to safe, read-only commands
cllm "Debug this" --allow-commands \
--command-allow "git*,cat*,ls*,grep*,find*,head*,tail*,npm test*,pytest*"
# ❌ BAD: No restrictions (dangerous!)
cllm "Debug this" --allow-commands

Denylist common destructive commands:
# secure-debug.Cllmfile.yml
model: gpt-4
allow_commands: true
command_allow:
- "git*"
- "cat*"
- "ls*"
- "npm test*"
- "pytest*"
command_deny:
# File operations
- "rm*"
- "mv*"
- "cp*" # Could overwrite files
- "dd*" # Disk operations
- "chmod*"
- "chown*"
# System operations
- "sudo*"
- "su*"
- "kill*"
- "reboot*"
- "shutdown*"
# Network operations (context-dependent)
- "curl*" # Could exfiltrate data
- "wget*"
- "ssh*"
- "scp*"
# Package managers (could install malware)
- "npm install*"
- "pip install*"
- "apt*"
- "yum*"Review configs before using them:
# ✅ GOOD: Review before running
cat team-config.Cllmfile.yml # Check what commands are defined
cllm --config team-config --show-config # See effective configuration
# Then decide if it's safe
cllm --config team-config "Your prompt"
# ❌ BAD: Blindly trust shared configs
cllm --config untrusted-config "Run something"

Project-specific configs with version control:
# Store configs in version control for review
mkdir -p .cllm
cat > .cllm/Cllmfile.yml <<EOF
# This config is reviewed by the team
context_commands:
  - name: "Git Status"
    command: "git status --short"
  - name: "Test Suite"
    command: "npm test"
EOF
# Add to git for team review
git add .cllm/Cllmfile.yml
git commit -m "Add CLLM config for code review workflow"Production environments:
# ❌ NEVER use --allow-commands in production
# ❌ NEVER use --debug in production (logs API keys)

# ✅ DO use explicit, locked-down configs
cllm --config production-safe \
--no-context-exec \
"Generate report from data"CI/CD environments:
# .github/workflows/cllm-review.yml
- name: Run CLLM code review
  run: |
    # ✅ Use read-only commands only
    cllm --config ci-review \
      --command-deny "*install*,*rm*,*mv*" \
      "Review PR changes"
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

Development environments:
# ✅ Safe for local dev with allowlists
export CLLM_COMMAND_ALLOW="git*,cat*,ls*,npm test*,pytest*"
cllm --allow-commands "Why is my test failing?"Sanitize user input when using variables:
# ❌ DANGEROUS: Unsanitized user input
USER_FILE="$(cat user-input.txt)" # Could contain: "; rm -rf /"
cllm --var FILE="$USER_FILE" --exec "cat {{ FILE }}" "Analyze"
# ✅ SAFER: Validate input first
if [[ "$USER_FILE" =~ ^[a-zA-Z0-9_./\-]+$ ]]; then
cllm --var FILE="$USER_FILE" --exec "cat {{ FILE }}" "Analyze"
else
echo "Invalid filename"
exit 1
fi

Use Jinja2's sandboxed environment (already enabled):
# CLLM automatically sandboxes Jinja2 templates
# No code execution possible via templates
variables:
  SAFE_VAR: "{{ malicious }}"   # Cannot execute code

Debug mode logs API keys:
# ⚠️ NEVER use --debug with sensitive data or shared logs
cllm --debug "Test" > debug.log # API keys may be in debug.log!
# ✅ Use --log-file to control output
cllm --debug --log-file /tmp/debug.log "Test"
chmod 600 /tmp/debug.log  # Restrict permissions

Store API keys securely:
# ✅ GOOD: Use environment variables
export OPENAI_API_KEY="sk-..." # Set in shell, not in code
# ✅ GOOD: Use secret management
export OPENAI_API_KEY="$(aws secretsmanager get-secret-value --secret-id openai-key)"
# ❌ BAD: Hardcode in configs
# Cllmfile.yml
# api_key: "sk-proj-hardcoded-key" # DON'T DO THISLog command execution:
# Enable debug logging to audit what commands run
cllm --allow-commands --debug --log-file audit.log "Debug this"
# Review what the LLM executed
grep "Executing command" audit.logReview LLM decisions:
# Check what commands the LLM chose
cllm --allow-commands --json-logs "Why is build failing?" 2>commands.log
jq '.command_executed' commands.log

Before using command execution features:
- Use `--command-allow` with specific patterns (not wildcards)
- Deny destructive commands with `--command-deny`
- Review shared configurations before running
- Never use `--allow-commands` in production without strict allowlists
- Never use `--debug` with confidential data
- Validate user input before passing to `--var`
- Use least-privilege principle: only allow what's needed
- Test configurations in safe environments first
Remember: With great power comes great responsibility. Command execution features should be used thoughtfully and with appropriate safeguards.
#!/bin/bash
# code_review.sh
git diff main | \
cllm "Review this code diff and identify issues:" | \
cllm "Suggest fixes for these issues:" | \
cllm "Rate the severity of each issue (1-10):"#!/bin/bash
# content_workflow.sh
TOPIC="$1"
# Generate outline
OUTLINE=$(cllm "Create a blog post outline about: $TOPIC")
# Expand each section
echo "$OUTLINE" | \
cllm "Expand each point into full paragraphs:" | \
cllm "Add relevant examples and statistics:" | \
cllm "Polish the writing and improve clarity:"#!/bin/bash
# analyze_data.sh
cat sales_data.csv | \
cllm "Analyze this sales data and identify trends:" | \
cllm "Suggest actionable recommendations:" | \
cllm --json-schema-file examples/schemas/report.json > report.json

cllm [OPTIONS] [PROMPT]

Bootstrap `.cllm` directory structure with configuration templates:

cllm init [OPTIONS]

| Option | Description |
|---|---|
| `--global` | Initialize `~/.cllm` (global configuration) |
| `--local` | Initialize `./.cllm` (project-specific, default) |
| `--template NAME` | Use specific template (code-review, summarize, etc.) |
| `--list-templates` | Show all available templates |
| `--force`, `-f` | Overwrite existing files |
Examples:
# Initialize local project
cllm init
# Initialize with template
cllm init --template code-review
# Initialize both global and local
cllm init --global --local
# List available templates
cllm init --list-templates

| Option | Description |
|---|---|
| `--model MODEL` | Specify LLM model (default: gpt-3.5-turbo) |
| `--list-models` | List all available models across providers |
| `--stream` | Stream response in real-time |
| `--temperature FLOAT` | Control randomness (0.0-2.0) |
| `--max-tokens INT` | Maximum response length |
| `--system TEXT` | Override system prompt (inline text) |
| `--system-file PATH` | Override system prompt (load from file) |
| `--conversation ID` | Continue/create multi-turn conversation |
| `--list-conversations` | List all saved conversations |
| `--show-conversation ID` | Display conversation history |
| `--delete-conversation ID` | Delete a conversation |
| `--config NAME` | Load named Cllmfile configuration |
| `--show-config` | Display effective configuration |
| `--json-schema FILE/URL` | Enforce JSON schema for structured output |
| `--validate-schema` | Validate schema without making API call |
| `--exec COMMAND` | Execute command and inject output as context |
| `--no-context-exec` | Disable context commands from config |
| `--var KEY=VALUE` | Set template variable (repeatable) |
| `--allow-commands` | Enable LLM-driven dynamic command execution |
| `--command-allow PATTERN` | Allowlist commands (wildcards supported) |
| `--command-deny PATTERN` | Denylist commands (wildcards supported) |
| `--debug` | Enable debug mode (may log API keys) |
| `--json-logs` | Enable structured JSON logging |
| `--log-file PATH` | Write debug output to file |
| `-v`, `-vv`, `-vvv` | Set verbosity level (1, 2, or 3) |
| `--verbose` | Increase verbosity level (long form, repeatable) |
| `--help` | Show help message |
CLLM supports 100+ providers through LiteLLM. Use cllm --list-models to see all available models.
OpenAI:
- `gpt-3.5-turbo` (default)
- `gpt-4`, `gpt-4-turbo`, `gpt-4o`
- `gpt-4o-mini`

Anthropic:
- `claude-3-haiku-20240307`
- `claude-3-sonnet-20240229`
- `claude-3-opus-20240229`
- `claude-3-5-sonnet-20240620`

Google:
- `gemini-pro`, `gemini-1.5-pro`, `gemini-1.5-flash`

Groq:
- `groq/mixtral-8x7b-32768`
- `groq/llama-3.1-70b-versatile`
- `groq/llama-3.3-70b-versatile`

Ollama (local):
- `ollama/llama3`, `ollama/codellama`, `ollama/mistral`
- Any custom Ollama model

Other providers:
- AWS Bedrock, Azure OpenAI, Cohere, Replicate, Together AI, Hugging Face, and 90+ more
# List all 1343+ available models
cllm --list-models
# Filter by provider
cllm --list-models | grep -i anthropic
cllm --list-models | grep -i gpt-4

See LiteLLM Providers for complete list and setup instructions.
# Provider API keys (LiteLLM conventions)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
export GROQ_API_KEY="gsk_..."
export AZURE_API_KEY="..."
export COHERE_API_KEY="..."
# See https://docs.litellm.ai/docs/providers for all providers

# Default model (optional)
export CLLM_DEFAULT_MODEL="gpt-4"
# Debug settings (ADR-0009)
export CLLM_DEBUG=true
export CLLM_JSON_LOGS=true
export CLLM_LOG_FILE=/path/to/debug.log

git clone https://github.com/o3-cloud/cllm.git
cd cllm
uv sync

uv run pytest

uv build

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with ❤️ by the O3 Cloud team
- Inspired by the need for transparent, scriptable LLM workflows
- Thanks to all contributors and the open-source community
- Documentation
- Issue Tracker
- Discussions
CLLM's architecture and features are documented in ADRs (Architecture Decision Records):
- ADR-0001: Use uv as Package Manager (10-100x faster than pip)
- ADR-0002: Use LiteLLM for Provider Abstraction (100+ providers)
- ADR-0003: Cllmfile Configuration System (YAML configs)
- ADR-0004: Add `--list-models` CLI Flag (model discovery)
- ADR-0005: Structured Output with JSON Schema
- ADR-0006: Support Remote JSON Schema URLs
- ADR-0007: Conversation Threading & Context Management
- ADR-0008: Bash Script Examples Library
- ADR-0009: Debugging & Logging Support
- ADR-0010: LiteLLM Streaming Support
- ADR-0011: Dynamic Context Injection via Command Execution
- ADR-0012: Variable Expansion in Context Commands with Jinja2 Templates
- ADR-0013: LLM-Driven Dynamic Command Execution
- ADR-0014: JSON Structured Output with --allow-commands
- ADR-0015: Init Command for Directory Setup (project bootstrapping with templates)
- ADR-0016: Configurable .cllm Directory Path (custom config locations)
- ADR-0017: Configurable Conversations Path (independent control over conversation storage)
- ADR-0022: CLI Flag for System Prompt Override (per-command system prompt customization)
- ADR-0024: Explicit Command Parameter Syntax (bracket syntax with type hints for dynamic commands)
Completed:
- ✅ Multi-provider support (100+ providers)
- ✅ Real-time streaming responses
- ✅ Conversation threading
- ✅ Structured JSON output with schema validation
- ✅ Configuration file system with templates
- ✅ Project initialization command (`cllm init`)
- ✅ Debugging and logging
- ✅ Model discovery
- ✅ Bash script examples
- ✅ Dynamic context injection via command execution
- ✅ Variable expansion with Jinja2 templates
- ✅ LLM-driven dynamic command execution (agentic workflows)
- ✅ Combined JSON schema with dynamic commands
Planned:
- Enhanced error recovery and retry strategies
- Token usage tracking and cost estimation
- Built-in prompt templates library
- Prompt caching support
- Multimodal support (images, audio)
- Plugin system for extensibility
- Integration with popular dev tools (VSCode, Emacs)
- Interactive command approval/confirmation mode
- Command execution history and replay
Star ⭐ this repository if you find it useful!