# llmshell-cli

A powerful Python CLI tool that converts natural language into Linux/Unix shell commands using LLMs.
## Features

- 🤖 Multiple LLM Backends: GPT4All (local, default), OpenAI, Ollama, or custom APIs
- 🔒 Privacy-First: Uses GPT4All locally by default; no data leaves your machine
- 🎯 Smart Command Generation: Converts natural language to accurate shell commands
- ✅ Safe Execution: Confirmation prompts before running commands
- 🎨 Beautiful Output: Colored terminal output using Rich
- ⚙️ Flexible Configuration: YAML-based config at `~/.llmshell/config.yaml`
- 🔧 Easy Setup: Auto-downloads models, handles fallbacks gracefully
## Installation

From PyPI:

```bash
pip install llmshell-cli
```

From source:

```bash
git clone https://github.com/imgnr/llmshell-cli.git
cd llmshell-cli
pip install -e ".[dev]"
```

## Quick Start

```bash
llmshell run "list all docker containers"
# Output: docker ps -a

llmshell run "find large files" --explain
llmshell run "remove all logs" --dry-run
llmshell run "show disk usage" --execute
# Note: Dangerous commands will still require confirmation
```

## Commands

### `llmshell run`

Generate and optionally execute shell commands:
```bash
llmshell run "your natural language request"
llmshell run "list python files" --dry-run
llmshell run "check memory usage" --explain
llmshell run "restart nginx" --execute
```

Options:

- `--dry-run`/`-d`: Show the command without executing it
- `--explain`/`-x`: Include an explanation with the command
- `--execute`/`-e`: Skip the confirmation prompt (except for dangerous commands)
- `--backend`/`-b`: Override the default backend (`gpt4all`, `openai`, `ollama`, `custom`)

Safety Note: Dangerous commands (like `rm -rf /`, `mkfs`, etc.) always require confirmation, even with `--execute`.
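To illustrate the idea, a dangerous-command check can be as simple as matching the generated command against a small pattern list before execution. The patterns and function below are a minimal sketch, not llmshell's actual implementation:

```python
import re

# Illustrative pattern list -- an assumption, not the tool's real rules.
DANGEROUS_PATTERNS = [
    r"\brm\s+-\w*r\w*f\w*\s+/",   # recursive force-delete from an absolute path
    r"\brm\s+-\w*f\w*r\w*\s+/",   # same, with the flags in the other order
    r"\bmkfs(\.\w+)?\b",          # formatting a filesystem
    r"\bdd\b.*\bof=/dev/",        # raw writes to a block device
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any known-destructive pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

The real list is presumably broader; the point is that this check runs on the generated command before execution, regardless of `--execute`.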
### `llmshell config`

Manage configuration:

```bash
# Show current configuration
llmshell config show

# Set a configuration value
llmshell config set llm_backend openai
llmshell config set backends.openai.api_key sk-xxxxx

# List available backends
llmshell config backends
```

### `llmshell model`

Manage GPT4All models:

```bash
# Show available models to download
llmshell model show-available

# Install/download the default model
llmshell model install

# Install a specific model
llmshell model install --name Meta-Llama-3-8B-Instruct.Q4_0.gguf

# List installed models
llmshell model list
```

### `llmshell doctor`

Diagnose setup and check backend availability:
```bash
llmshell doctor
```

Output shows:
- Configuration file status
- Available backends
- Model installation status
- API connectivity
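Checks like these are easy to approximate in plain Python. The sketch below is illustrative only: the `~/.llmshell` layout matches the configuration section of this README, but the function and report keys are assumptions, not the actual `doctor` implementation:

```python
from pathlib import Path

def doctor(home: Path) -> dict:
    """Report on config-file presence and locally installed GGUF models."""
    return {
        "config_file": (home / "config.yaml").is_file(),
        "installed_models": sorted(p.name for p in home.glob("*.gguf")),
    }
```

A real doctor command would add backend reachability checks (e.g. probing the Ollama API URL) on top of these filesystem checks.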
## Configuration

Configuration is stored at `~/.llmshell/config.yaml`:

```yaml
llm_backend: gpt4all

backends:
  gpt4all:
    model: mistral-7b-instruct-v0.2.Q4_0.gguf
    model_path: null  # Auto-detected
  openai:
    api_key: sk-your-api-key-here
    model: gpt-4-turbo
    base_url: null  # Optional custom endpoint
  ollama:
    model: llama3
    api_url: http://localhost:11434
  custom:
    api_url: https://your-llm-endpoint/v1/chat/completions
    headers:
      Authorization: Bearer YOUR_TOKEN

execution:
  auto_execute: false
  confirmation_required: true

output:
  colored: true
  verbose: false
```

## Backend Setup

### GPT4All (Default)

No setup required! On first run:
```bash
# Show available models
llmshell model show-available

# Install a model (default: Meta Llama 3)
llmshell model install

# Or install a specific model
llmshell model install --name Phi-3-mini-4k-instruct.Q4_0.gguf
```

This downloads the model locally (~2-5 GB depending on the model).
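A download of that size is straightforward to sketch with the standard library. The chunked loop and size formatter below are illustrative; the actual downloader and model registry belong to the GPT4All runtime and are not shown here:

```python
import urllib.request

def human_size(num_bytes: int) -> str:
    """Format a byte count for progress messages, e.g. '4.2 GB'."""
    size = float(num_bytes)
    for unit in ("B", "KB", "MB"):
        if size < 1024:
            return f"{size:.1f} {unit}"
        size /= 1024
    return f"{size:.1f} GB"

def download(url: str, dest: str, chunk_size: int = 1 << 20) -> None:
    """Stream a file to disk in 1 MiB chunks, printing the total size first."""
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        total = int(resp.headers.get("Content-Length") or 0)
        print(f"Downloading {human_size(total)} to {dest}")
        while chunk := resp.read(chunk_size):
            out.write(chunk)
```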
### OpenAI

- Get an API key from OpenAI
- Configure:

```bash
llmshell config set backends.openai.api_key sk-xxxxx
llmshell config set llm_backend openai
```

### Ollama

- Install Ollama
- Pull a model:

```bash
ollama pull llama3
```

- Configure:

```bash
llmshell config set llm_backend ollama
```

### Custom API

For any OpenAI-compatible API:

```bash
llmshell config set llm_backend custom
llmshell config set backends.custom.api_url https://your-endpoint
llmshell config set backends.custom.headers.Authorization "Bearer TOKEN"
```

## Examples

```bash
# Docker commands
llmshell run "stop all running containers"
llmshell run "remove unused images"

# File operations
llmshell run "find files modified in last 24 hours"
llmshell run "compress all logs to archive"

# System monitoring
llmshell run "show top 10 memory-consuming processes"
llmshell run "check disk space on all mounts"

# Git operations
llmshell run "show commits from last week"
llmshell run "list branches sorted by recent activity"

# Network operations
llmshell run "check if port 8080 is open"
llmshell run "show active network connections"
```

## Python API

You can also use llmshell programmatically:
```python
from gpt_shell.config import Config
from gpt_shell.llm_manager import LLMManager

# Initialize
config = Config()
manager = LLMManager(config)

# Generate command
command = manager.generate_command("list all docker containers")
print(f"Generated: {command}")

# With explanation
result = manager.generate_command("find large files", explain=True)
print(result)
```

## Docker

Run llmshell in a Docker container for isolated environments.
```bash
# Build the image
docker build -t llmshell:latest .

# Run a command
docker run --rm llmshell:latest run "list files"

# With persistent config
docker run -it --rm \
  -v llmshell-data:/root/.llmshell \
  llmshell:latest model install

# Interactive mode
docker-compose run --rm llmshell

# Inside container
llmshell run "show disk usage"
```

For detailed Docker instructions, see DOCKER.md.
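For reference, the compose service used above might look roughly like this. This `docker-compose.yml` is an illustrative sketch inferred from the commands shown (service name, build context, and volume are assumptions; see DOCKER.md for the project's actual file):

```yaml
services:
  llmshell:
    build: .
    volumes:
      - llmshell-data:/root/.llmshell   # persist config and downloaded models
    stdin_open: true
    tty: true

volumes:
  llmshell-data:
```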
## Testing

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=gpt_shell --cov-report=html

# Run specific test file
pytest tests/test_config.py
```

## Development

```bash
# Clone and install
git clone https://github.com/imgnr/llmshell-cli.git
cd llmshell-cli
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black src tests

# Type checking
mypy src

# Linting
flake8 src tests
```

## Contributing

Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new features
- Ensure all tests pass
- Submit a pull request
## Requirements

- Python 3.8+
- ~4 GB of disk space for the GPT4All model (optional)
- Internet connection (for OpenAI/Ollama/custom backends)
## Privacy

- GPT4All: All processing happens locally; no data is sent anywhere
- OpenAI/Custom APIs: Commands are sent to external services
- Ollama: Runs locally; no data is sent to external servers
## Troubleshooting

Model not found:

```bash
llmshell model install
```

OpenAI API key not set:

```bash
llmshell config set backends.openai.api_key sk-xxxxx
```

Backend unavailable:

```bash
llmshell doctor
```

Ollama not responding:

```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama
ollama serve
```

Corrupted configuration:

```bash
# Reset to defaults
rm ~/.llmshell/config.yaml
llmshell config show
```

## License

MIT License - see LICENSE file for details.
## Acknowledgments

- GPT4All - Local LLM runtime
- Typer - CLI framework
- Rich - Terminal formatting
- OpenAI - API integration
- Ollama - Local LLM platform
## More Examples

System administration:

```bash
llmshell run "create a backup of /etc directory"
llmshell run "find processes using more than 1GB RAM"
llmshell run "schedule a cron job for midnight"
```

Development:

```bash
llmshell run "count lines of code in this project"
llmshell run "find all TODO comments in python files"
llmshell run "generate requirements.txt from imports"
```

Text and file processing:

```bash
llmshell run "extract column 2 from CSV file"
llmshell run "convert all PNG images to JPG"
llmshell run "merge all text files into one"
```

Made with ❤️ for developers who prefer typing naturally