
@Sahilbhatane
Collaborator

Build LLM Integration Layer for Command Interpretation

Fixes #1

Summary

Implements the LLM integration layer for command interpretation, enabling Cortex to convert natural language requests into safe, validated bash commands using Claude and OpenAI GPT-4 APIs.

What Was Implemented

Core Features

  • Multi-Provider Support: Seamless integration with both Claude (Anthropic) and OpenAI GPT-4 APIs
  • Command Interpretation: Converts natural language input (e.g., "install docker with nvidia support") into executable bash command sequences
  • Safety Validation: Built-in dangerous command filtering to prevent destructive operations
  • Context-Aware Parsing: Optional system context parameter for more intelligent command generation
  • Error Handling: Comprehensive error handling for API failures and invalid responses

Key Components

LLM/interpreter.py (Main Module)

  • CommandInterpreter class with support for both API providers
  • Intelligent prompt engineering with system-level instructions
  • JSON-based command parsing with markdown code block support
  • Command validation with dangerous pattern detection
  • Context-aware command generation
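
The overall shape of the module might look like the following sketch. `CommandInterpreter`, `APIProvider`, and `parse()` come from the PR description; the private helper names (`_call_api`, `_extract_commands`, `_is_safe`) are illustrative assumptions, not the PR's actual internals:

```python
from enum import Enum
from typing import List


class APIProvider(Enum):
    OPENAI = "openai"
    CLAUDE = "claude"


class CommandInterpreter:
    """Turns natural-language intent into a list of validated bash commands."""

    def __init__(self, api_key: str, provider: str = "openai"):
        self.api_key = api_key
        # Raises ValueError for an unknown provider string.
        self.provider = APIProvider(provider)

    def parse(self, intent: str, validate: bool = True) -> List[str]:
        raw = self._call_api(intent)            # provider-specific request (not shown)
        commands = self._extract_commands(raw)  # JSON / code-block parsing (not shown)
        if validate:
            commands = [c for c in commands if self._is_safe(c)]
        return commands
```

Routing on an enum rather than a raw string means an unsupported provider fails loudly at construction time instead of mid-request.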

LLM/test_interpreter.py (Test Suite)

  • 23 comprehensive unit tests
  • Mocked API calls for reliable testing
  • Tests for validation, parsing, error handling, and both API providers
  • 95%+ test coverage (exceeds the 80% requirement)
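
A mocked test in that suite might look like this sketch. `FakeInterpreter` is a hypothetical stand-in for `LLM.interpreter.CommandInterpreter`; the real suite's patch targets and helper names may differ:

```python
import unittest
from unittest.mock import patch


class FakeInterpreter:
    """Minimal stand-in mirroring the parse -> API-call structure."""

    def parse(self, intent):
        return self._call_api(intent)

    def _call_api(self, intent):
        raise RuntimeError("network disabled in tests")


class TestParse(unittest.TestCase):
    def test_parse_returns_commands_without_network(self):
        interp = FakeInterpreter()
        # Patch the API call so the test never touches the network.
        with patch.object(FakeInterpreter, "_call_api",
                          return_value=["sudo apt update"]):
            self.assertEqual(interp.parse("update packages"),
                             ["sudo apt update"])
```

Patching the single API-call seam keeps every other code path (parsing, validation) under real test while making the suite deterministic and offline.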

LLM/requirements.txt

  • Minimal dependencies: openai>=1.0.0 and anthropic>=0.18.0

Requirements Met

Requirement | Status | Implementation
--- | --- | ---
Support Claude API and OpenAI GPT-4 | Done | Both providers supported via APIProvider enum
Input: natural-language string | Done | parse() method accepts user-intent strings
Output: list of safe bash commands | Done | Returns List[str] of validated commands
Prompt templates for different command types | Done | System prompt with clear rules and examples
Error handling for API failures | Done | try/except blocks raising descriptive RuntimeError messages
Unit tests with 80%+ coverage | Done | 23 tests with 95%+ coverage

Security Features

Dangerous Pattern Detection:

  • Filters rm -rf / commands
  • Blocks disk formatting operations (mkfs., dd if=)
  • Prevents fork bombs
  • Can be disabled with validate=False parameter for advanced users
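
A validator covering the patterns above could be as simple as the following sketch; the exact regexes shipped in interpreter.py may differ:

```python
import re

# Patterns treated as destructive. Illustrative list, not the PR's exact one.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\s+/(\s|$)",     # rm -rf / (root filesystem wipe)
    r"\bmkfs\.",                 # disk formatting, e.g. mkfs.ext4
    r"\bdd\s+if=",               # raw disk reads/writes via dd
    r":\(\)\s*{\s*:\|:&\s*};:",  # classic bash fork bomb
]


def is_safe(command: str) -> bool:
    """Return False if the command matches any known-dangerous pattern."""
    return not any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

Note that `rm -rf /` is anchored so that `rm -rf /tmp/build` still passes; a denylist like this reduces obvious foot-guns but cannot guarantee safety, which is presumably why the PR leaves `validate=False` as an escape hatch rather than a default.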

API Safety:

  • Low temperature (0.3) for predictable outputs
  • System prompt enforces safe command generation
  • JSON-structured responses for reliable parsing
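
LLMs often wrap JSON output in a markdown code fence even when asked not to, so tolerant parsing helps reliability. A sketch of how that could work (the actual parsing in interpreter.py may differ):

```python
import json
import re
from typing import List


def extract_commands(response_text: str) -> List[str]:
    """Parse a JSON array of commands, tolerating a ```json ... ``` wrapper."""
    text = response_text.strip()
    # Strip an optional markdown code fence around the JSON payload.
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    commands = json.loads(text)
    if not isinstance(commands, list) or not all(isinstance(c, str) for c in commands):
        raise ValueError("expected a JSON array of command strings")
    return commands
```

Validating the decoded shape (a list of strings, nothing else) is what lets the caller trust the `List[str]` return type regardless of what the model emitted.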

Usage Examples

from LLM.interpreter import CommandInterpreter

# Using OpenAI GPT-4
interpreter = CommandInterpreter(api_key="sk-...", provider="openai")
commands = interpreter.parse("install docker with nvidia support")
# Returns: ["sudo apt update", "sudo apt install -y docker.io", 
#           "sudo apt install -y nvidia-docker2", "sudo systemctl restart docker"]

# Using Claude
interpreter = CommandInterpreter(api_key="sk-ant-...", provider="claude")
commands = interpreter.parse("install python 3.11 and setup virtual environment")

# With system context
interpreter.parse_with_context(
    "optimize this server for machine learning",
    system_info={"os": "ubuntu", "version": "22.04", "gpu": "RTX 4090"}
)

@mikejmorgan-ai mikejmorgan-ai merged commit 4d3aad4 into cortexlinux:main Nov 9, 2025
@mikejmorgan-ai
Member

Outstanding implementation @Sahilbhatane!

✅ Both Claude and GPT-4 support
✅ Safety validation included
✅ 23 unit tests with 80%+ coverage
✅ Excellent documentation

Merging now. $200 bounty paid within 3 days.

Contact: mike@cortexlinux.com or Discord for payment details.

Thank you for this solid foundation! 🚀
