🚀 li is a lightweight terminal assistant that converts natural language to shell commands. Just type plain English like "make a new git repo" and li will generate a safe, minimal command plan for you to review and execute.
- 🧠 Natural Language to Commands: Type plain English, get shell commands
- 🛡️ Safe Execution: Every plan is previewed before execution
- 💬 Direct AI Chat: Use the `--chat` flag for conversational AI assistance
- 🧠 AI Intelligence Mode: Use the `-i` flag to explain command outputs in human-friendly terms
- 🌐 Provider Choice: Switch between OpenRouter and Cerebras with `li --provider`
- 🔧 Interactive Setup: Easy first-time configuration with `li --setup`
- 🎨 Visual Separators: Clear distinction between li output and command output
- 📋 Model Selection: Browse OpenRouter's free models when using that provider
git clone https://github.com/bitrifttech/li.git
cd li
./install.sh

Or install via Homebrew:

brew tap bitrifttech/homebrew-li
brew install li

Or install with cargo:

cargo install --git https://github.com/bitrifttech/li.git

- Run interactive setup:
li --setup
This will guide you through:
- Choosing your AI provider (OpenRouter or Cerebras)
- Supplying the provider API key
- Selecting a planner model (OpenRouter only)
- Configuring timeout and token limits
- Add your provider API key:
- OpenRouter:
- Visit https://openrouter.ai/
- Sign up for a free account
- Copy your API key (starts with `sk-or-v1-`)
- Cerebras:
- Use your Cerebras Inference API key (set via the Cerebras account dashboard)
- Export it as `CEREBRAS_API_KEY` or provide it during setup
- Try it out:
li 'list all files in current directory'
li 'create a new git repository'
li 'show system disk usage'
# Plan and execute commands
li 'list files in current directory'
li 'make a new git repo and connect to GitHub'
li 'find the 10 largest files in this folder'
# Direct AI conversation
li --chat 'what is the capital of France?'
li --chat 'explain quantum computing simply'
# AI Intelligence Mode - explain command outputs or answer questions
li -i 'df -h' # Explain disk usage output
li --intelligence 'ps aux' # Understand running processes
li -i 'mount' # Learn about mounted filesystems
li -i --question 'Which disk has most space?' "df -h" # Ask a specific question
li -i 'ls -la' # Understand file permissions
df | li -i # Analyze piped command output
df | li -q 'Which disk has the most space?' # Ask questions about piped output
# Interactive model selection
li --model
li --model list
# Provider selection
li --provider
li --provider list
# Manual configuration
li config --api-key YOUR_OPENROUTER_API_KEY
li config --planner-model minimax/minimax-m2:free

li --help # Show all options
li --setup # Interactive first-time setup
li --chat "message" # Direct AI conversation
li -i "command" # Explain command output with AI
li --intelligence "command" # Long form of -i flag
li --model # Interactive model selection
li --model list # Show available models
li config # View current configuration

li 'list all files including hidden ones'
li 'create a backup of this directory'
li 'find all Python files in current folder'
li 'remove all .log files older than 30 days'

li 'initialize a new git repository'
li 'add all files and make initial commit'
li 'create a new branch called feature-x'
li 'merge develop branch into main'

li 'show system disk usage'
li 'list all mounted drives'
li 'check system memory usage'
li 'show running processes'

li 'install npm dependencies'
li 'run the development server'
li 'build the project for production'
li 'run all tests'

The intelligence mode (`-i` or `--intelligence`) helps you understand command outputs by running a command and then using AI to explain what the output means in human-friendly terms. You can also pipe existing command output into li for analysis without re-running the original command.
- Execute or Receive Output: li runs your specified shell command, or consumes piped stdin if provided
- Capture Output: Both stdout and stderr are collected
- AI Explanation: The output is sent to the AI model for analysis
- Human-Friendly Breakdown: Get explanations, insights, and warnings
Tip: Passing `--question` automatically enables intelligence mode, even if you omit `-i`.
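For example, the following is equivalent to passing `-i --question`:

```bash
# --question alone enables intelligence mode; -i is not required
li --question "Which disk has most free space?" "df -h"
```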
# Understand disk usage
li -i "df -h"
li -i --question "Which disk has most free space?" "df -h"
df | li -i
df | li -q "Which disk has the most space?"
log show --predicate 'process == "kernel"' | li -q "Are there any kernel panics?"

# Understand file permissions
li -i "ls -la /etc"
# Analyze directory structure
li --intelligence "tree -L 2"
# Check file sizes
li -i "du -sh * | sort -hr | head -10"Each intelligence explanation provides:
- Simple Meaning: What the output means in plain English
- Key Insights: Important information and patterns
- Warnings: Things to pay attention to or avoid
- Practical Understanding: What you should do with this information
- Learning: Understand unfamiliar commands
- Troubleshooting: Get insights into system issues
- Security: Analyze what's running on your system
- Optimization: Identify resource usage patterns
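These use cases combine well with piped mode. A quick sketch for the security case (the question text is just an example):

```bash
# Security: review the current process list with piped intelligence mode
ps aux | li -q 'Do any of these processes look unusual?'
```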
li stores configuration in ~/.li/config (JSON format):
{
"openrouter_api_key": "sk-or-v1-your-api-key",
"timeout_secs": 30,
"max_tokens": 2048,
"planner_model": "minimax/minimax-m2:free"
}

You can override configuration with environment variables:
export OPENROUTER_API_KEY="sk-or-v1-your-api-key"
export CEREBRAS_API_KEY="cb-your-api-key"
export LI_PROVIDER="openrouter" # or 'cerebras'
export LI_LLM_BASE_URL="https://openrouter.ai/api/v1"
export LI_TIMEOUT_SECS="60"
export LI_MAX_TOKENS="4096"
export LI_PLANNER_MODEL="minimax/minimax-m2:free"
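Because environment variables override the config file, they also work as one-off, per-invocation settings. A minimal sketch with illustrative values:

```bash
# Override provider and timeout for a single run, without touching ~/.li/config
LI_PROVIDER=cerebras LI_TIMEOUT_SECS=120 li 'show system disk usage'
```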
# Set API key
li --config --api-key sk-or-v1-your-key
# Set custom models
li --config --planner-model minimax/minimax-m2:free
# Adjust settings
li --config --timeout 60
li --config --max-tokens 4096
# Switch providers on the fly
li --provider cerebras

li ships with OpenRouter defaults and supports additional providers such as Cerebras.
- Planner: `minimax/minimax-m2:free` - Intelligent shell command planning
li --model list # Show all available free models
li --model # Interactive model selection

- Provide model IDs from your Cerebras workspace during setup or via `li --config`
- Use `CEREBRAS_API_KEY` and optional `LI_LLM_BASE_URL` to target custom deployments
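Put together, a Cerebras setup might look like the sketch below; the base URL is illustrative and only needed when targeting a custom deployment:

```bash
export CEREBRAS_API_KEY="cb-your-api-key"
export LI_LLM_BASE_URL="https://api.cerebras.ai/v1"  # optional, illustrative endpoint
li --provider cerebras
```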
Example output using the OpenRouter provider
$ li 'create a new git repository'
Provider: OpenRouter
Model: minimax/minimax-m2:free
Plan confidence: 1.00
Dry-run Commands:
1. git status
Execute Commands:
1. git init
2. git add .
3. git commit -m "Initial commit"
Notes: Created minimal git repo with initial commit.
Execute this plan? [y/N]: y
=== Executing Plan ===
[Dry-run Phase]
> Running check 1/1: git status
┌─ COMMAND OUTPUT: git status
│
│ fatal: not a git repository (or any of the parent directories)
│
└─ Command completed successfully
✓ All dry-run checks passed.
[Execute Phase]
> Executing 1/3: git init
┌─ COMMAND OUTPUT: git init
│
│ Initialized empty Git repository in /path/to/repo/.git/
│
└─ Command completed successfully
> Executing 2/3: git add .
> Executing 3/3: git commit -m "Initial commit"
✓ Plan execution completed.

Example output using the OpenRouter provider
$ li --chat "what is the capital of France?"
Provider: OpenRouter
Model: minimax/minimax-m2:free
Choice 1:
The capital of France is **Paris**. It's also famous for landmarks like the Eiffel Tower and the Louvre Museum.
Finish reason: stop

# Add cargo to PATH (if using cargo install)
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

# Verify your API key is valid
li config
# Get a new key from https://openrouter.ai/
li config --api-key sk-or-v1-your-new-key

# Test connectivity
curl -I https://openrouter.ai/
# Check if behind a proxy
export HTTPS_PROXY=your-proxy-url

# Update Rust toolchain
rustup update
# Clean and rebuild
cargo clean
cargo build --release

Set `LI_LOG_DIR` to enable debug logging:
export LI_LOG_DIR="/tmp/li-logs"
li 'test command'
# Logs will be written to /tmp/li-logs/

git clone https://github.com/bitrifttech/li.git
cd li
# Install dependencies
cargo build
# Run tests
cargo test
# Install locally
cargo install --path .

src/
├── main.rs # Entry point
├── cli.rs # CLI arguments and commands
├── config.rs # Configuration management
├── client.rs # LLM provider client (OpenRouter, Cerebras)
├── classifier/ # Command classification logic
├── planner/ # Command planning
├── exec/ # Command execution
│ └── mod.rs # Execution implementation
# Unit tests
cargo test
# Integration tests (requires API key)
OPENROUTER_API_KEY=your-key cargo test --test integration_test

MIT License - see LICENSE file for details.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Better portability shims (BSD vs GNU utilities)
- Command history and favorites
- Custom command templates
- Code generation and multi-file scaffolding
- Windows support
- Local model support
- Plugin system
Made with ❤️ by the bitrifttech team
Transform your terminal experience with AI-powered natural language command generation! 🚀
