
🐚 llmshell-cli

A powerful Python CLI tool that converts natural language into Linux/Unix shell commands using LLMs.

✨ Features

  • πŸ€– Multiple LLM Backends: GPT4All (local, default), OpenAI, Ollama, or custom APIs
  • πŸ”’ Privacy-First: Uses GPT4All locally by default - no data leaves your machine
  • 🎯 Smart Command Generation: Converts natural language to accurate shell commands
  • βœ… Safe Execution: Confirmation prompts before running commands
  • 🎨 Beautiful Output: Colored terminal output using Rich
  • βš™οΈ Flexible Configuration: YAML-based config at ~/.llmshell/config.yaml
  • πŸ”§ Easy Setup: Auto-downloads models, handles fallbacks gracefully

πŸ“¦ Installation

pip install llmshell-cli

Development Installation

git clone https://github.com/imgnr/llmshell-cli.git
cd llmshell-cli
pip install -e ".[dev]"

πŸš€ Quick Start

Generate a Command

llmshell run "list all docker containers"
# Output: docker ps -a

Get Command with Explanation

llmshell run "find large files" --explain

Dry Run (Don't Execute)

llmshell run "remove all logs" --dry-run

Auto-Execute (Skip Confirmation)

llmshell run "show disk usage" --execute
# Note: Dangerous commands will still require confirmation

πŸ“– CLI Commands

llmshell run

Generate and optionally execute shell commands:

llmshell run "your natural language request"
llmshell run "list python files" --dry-run
llmshell run "check memory usage" --explain
llmshell run "restart nginx" --execute

Options:

  • --dry-run / -d: Show command without executing
  • --explain / -x: Include explanation with the command
  • --execute / -e: Skip confirmation prompt (except for dangerous commands)
  • --backend / -b: Override default backend (gpt4all, openai, ollama, custom)

Safety Note: Dangerous commands (like rm -rf /, mkfs, etc.) will always require confirmation, even with --execute.
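The README does not document how llmshell recognizes dangerous commands, but a deny-list of regex patterns is a common approach. The patterns below are illustrative assumptions, not llmshell's actual list:

```python
import re

# Hypothetical deny-list sketch; llmshell's real patterns may differ.
DANGEROUS_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b.*\s/(\s|$)",  # rm -rf /
    r"\bmkfs(\.\w+)?\b",        # filesystem creation wipes a device
    r"\bdd\b.*\bof=/dev/",      # raw writes to block devices
    r":\(\)\s*\{.*\};\s*:",     # classic fork bomb
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any known-destructive pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

A check like this runs before execution: a match forces the confirmation prompt regardless of `--execute`.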

llmshell config

Manage configuration:

# Show current configuration
llmshell config show

# Set a configuration value
llmshell config set llm_backend openai
llmshell config set backends.openai.api_key sk-xxxxx

# List available backends
llmshell config backends
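Dotted keys like `backends.openai.api_key` map onto the nested YAML structure shown in the Configuration section. A minimal sketch of how `config set` could walk that nesting (the real implementation may differ):

```python
# Illustrative only: walk/create nested dicts from a dotted key,
# then set the leaf value, mirroring `llmshell config set`.
def set_nested(config: dict, dotted_key: str, value) -> None:
    *parents, leaf = dotted_key.split(".")
    node = config
    for part in parents:
        node = node.setdefault(part, {})  # create missing levels
    node[leaf] = value

config = {"llm_backend": "gpt4all"}
set_nested(config, "backends.openai.api_key", "sk-xxxxx")
set_nested(config, "llm_backend", "openai")
# config now holds the nested structure that gets written back to
# ~/.llmshell/config.yaml
```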

llmshell model

Manage GPT4All models:

# Show available models to download
llmshell model show-available

# Install/download the default model
llmshell model install

# Install a specific model
llmshell model install --name Meta-Llama-3-8B-Instruct.Q4_0.gguf

# List installed models
llmshell model list

llmshell doctor

Diagnose setup and check backend availability:

llmshell doctor

Output shows:

  • Configuration file status
  • Available backends
  • Model installation status
  • API connectivity
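The connectivity checks above amount to probing each backend's endpoint. A minimal sketch of such a probe, assuming Ollama's default port (the exact checks `doctor` performs are not documented here):

```python
import socket

# Hypothetical doctor-style probe: can we open a TCP connection to the
# backend at all? Host, port, and timeout are illustrative assumptions.
def backend_reachable(host: str = "localhost", port: int = 11434,
                      timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```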

βš™οΈ Configuration

Configuration is stored at ~/.llmshell/config.yaml:

llm_backend: gpt4all

backends:
  gpt4all:
    model: mistral-7b-instruct-v0.2.Q4_0.gguf
    model_path: null  # Auto-detected
  
  openai:
    api_key: sk-your-api-key-here
    model: gpt-4-turbo
    base_url: null  # Optional custom endpoint
  
  ollama:
    model: llama3
    api_url: http://localhost:11434
  
  custom:
    api_url: https://your-llm-endpoint/v1/chat/completions
    headers:
      Authorization: Bearer YOUR_TOKEN

execution:
  auto_execute: false
  confirmation_required: true

output:
  colored: true
  verbose: false

πŸ”§ Backend Setup

GPT4All (Default - Local)

No account or API key required. Before first use, download a model:

# Show available models
llmshell model show-available

# Install a model (default: Meta Llama 3)
llmshell model install

# Or install a specific model
llmshell model install --name Phi-3-mini-4k-instruct.Q4_0.gguf

This downloads the model locally (~2-5GB depending on the model).

OpenAI

  1. Get API key from OpenAI
  2. Configure:
llmshell config set backends.openai.api_key sk-xxxxx
llmshell config set llm_backend openai

Ollama

  1. Install Ollama
  2. Pull a model:
ollama pull llama3
  3. Configure:
llmshell config set llm_backend ollama

Custom API

For any OpenAI-compatible API:

llmshell config set llm_backend custom
llmshell config set backends.custom.api_url https://your-endpoint
llmshell config set backends.custom.headers.Authorization "Bearer TOKEN"
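An OpenAI-compatible endpoint expects a standard `/v1/chat/completions` JSON body. A sketch of the request llmshell would send; the system prompt wording and model name here are assumptions for illustration:

```python
import json

# Build a chat-completions payload asking for a bare shell command.
# The system prompt text and default model are illustrative, not
# llmshell's actual values.
def build_chat_request(prompt: str, model: str = "gpt-4-turbo") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Reply with a single shell command, no prose."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("list all docker containers")
body = json.dumps(payload).encode("utf-8")
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer YOUR_TOKEN"}
# POST body + headers to backends.custom.api_url with any HTTP client.
```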

πŸ’‘ Usage Examples

# Docker commands
llmshell run "stop all running containers"
llmshell run "remove unused images"

# File operations
llmshell run "find files modified in last 24 hours"
llmshell run "compress all logs to archive"

# System monitoring
llmshell run "show top 10 memory-consuming processes"
llmshell run "check disk space on all mounts"

# Git operations
llmshell run "show commits from last week"
llmshell run "list branches sorted by recent activity"

# Network operations
llmshell run "check if port 8080 is open"
llmshell run "show active network connections"

🐍 Python API

You can also use llmshell programmatically:

from gpt_shell.config import Config
from gpt_shell.llm_manager import LLMManager

# Initialize
config = Config()
manager = LLMManager(config)

# Generate command
command = manager.generate_command("list all docker containers")
print(f"Generated: {command}")

# With explanation
result = manager.generate_command("find large files", explain=True)
print(result)

🐳 Docker Support

Run llmshell in a Docker container for isolated environments.

Quick Start

# Build the image
docker build -t llmshell:latest .

# Run a command
docker run --rm llmshell:latest run "list files"

# With persistent config
docker run -it --rm \
  -v llmshell-data:/root/.llmshell \
  llmshell:latest model install

Using Docker Compose

# Interactive mode
docker-compose run --rm llmshell

# Inside container
llmshell run "show disk usage"

For detailed Docker instructions, see DOCKER.md

πŸ§ͺ Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=gpt_shell --cov-report=html

# Run specific test file
pytest tests/test_config.py

πŸ› οΈ Development

Setup

# Clone and install
git clone https://github.com/imgnr/llmshell-cli.git
cd llmshell-cli
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black src tests

# Type checking
mypy src

# Linting
flake8 src tests

🀝 Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new features
  4. Ensure all tests pass
  5. Submit a pull request

πŸ“‹ Requirements

  • Python 3.8+
  • ~2-5GB disk space for a local GPT4All model (optional, depends on the model)
  • Internet connection (for OpenAI/Ollama/custom backends)

πŸ”’ Privacy

  • GPT4All: All processing happens locally, no data sent anywhere
  • OpenAI/Custom APIs: Commands are sent to external services
  • Ollama: Runs locally, no data sent to external servers

πŸ› Troubleshooting

GPT4All model not found

llmshell model install

OpenAI API errors

llmshell config set backends.openai.api_key sk-xxxxx
llmshell doctor

Ollama not connecting

# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama
ollama serve

Configuration issues

# Reset to defaults
rm ~/.llmshell/config.yaml
llmshell config show

πŸ“ License

MIT License - see LICENSE file for details.

πŸ™ Acknowledgments

πŸ“š More Examples

System Administration

llmshell run "create a backup of /etc directory"
llmshell run "find processes using more than 1GB RAM"
llmshell run "schedule a cron job for midnight"

Development

llmshell run "count lines of code in this project"
llmshell run "find all TODO comments in python files"
llmshell run "generate requirements.txt from imports"

Data Processing

llmshell run "extract column 2 from CSV file"
llmshell run "convert all PNG images to JPG"
llmshell run "merge all text files into one"

Made with ❀️ for developers who prefer typing naturally
