Answer: Yes, this is a fully functional AI assistant CLI that works right now.
A simple, powerful AI assistant for your command line that supports both local and cloud AI models with privacy-focused features.
Direct Answer: This CLI gives you an AI assistant in your terminal that can:
- Chat with you naturally and answer questions
- Execute commands safely with permission checks
- Read and write files for you
- Manage AWS, Azure, GCP, and Oracle cloud resources
- Detect and protect sensitive information (PII)
- Run 117 comprehensive tests to ensure everything works
- Privacy First: Uses local AI models (no data sent to cloud) or your choice of cloud providers
- Smart & Safe: Asks before running risky commands, detects sensitive data
- Multi-Cloud: Works with AWS, Azure, Google Cloud, and Oracle
- Easy Exit: Press Ctrl+C three times to quit safely (prevents accidental exits)
- Comprehensive Testing: 117 test scenarios ensure reliability
Answer: Installation takes 1 command, setup takes 2 minutes.
```bash
# Windows (Chocolatey)
choco install hello-ai-cli

# macOS/Linux (Homebrew)
brew install hello-ai-cli
```

Installer scripts:

```powershell
# Windows
iwr -useb https://raw.githubusercontent.com/hans-zand/hai-ai-cli/main/install-windows.ps1 | iex
```

```bash
# Linux/macOS
curl -fsSL https://raw.githubusercontent.com/hans-zand/hai-ai-cli/main/install-unix.sh | bash
```

First run:

```bash
# Start the AI assistant
hello-ai-cli

# Or get help
hello-ai-cli --help
```

Answer: It works out of the box, but you can customize it.
The CLI uses local AI by default for privacy. No cloud setup required.
```bash
# Install Ollama (AI model runner)
curl -fsSL https://ollama.ai/install.sh | sh

# Download AI model (one-time, ~4GB)
ollama pull deepseek-coder:6.7b

# Start Ollama service
ollama serve
```

If you prefer cloud AI models, configure your preferred provider:
```bash
# AWS (for Claude models)
aws configure

# Azure (for GPT models)
az login

# Google Cloud (for Gemini models)
gcloud auth login
```

Answer: Your data stays private by default.
- Local Processing: Uses local AI models (Ollama) - no data sent anywhere
- PII Protection: Automatically detects and protects sensitive information
- Safe Commands: Asks permission before running risky operations
- 3-Press Exit: Prevents accidental shutdowns (Ctrl+C three times)
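For example, here is a minimal sketch of a local-only setup, using the `SIMPLE_PII_SCAN` variable and the `config.toml` keys documented later in this README. It assumes a fresh `config.toml`; merge these keys into an existing file by hand instead of overwriting it.

```bash
# Minimal local-only sketch (keys are documented later in this README).
# Assumes a fresh config.toml; merge by hand if you already have one.
export SIMPLE_PII_SCAN=true   # simpler local PII scanning (see troubleshooting)

cat > config.toml <<'EOF'
[security]
pii_scanner = true
pii_use_local_llm = true   # PII detection via the local LLM, not the cloud

[model]
use_local_llm = true
local_llm_endpoint = "http://localhost:11434/api/generate"
local_llm_model = "deepseek-coder:6.7b"
EOF

hello-ai-cli
```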
Answer: Yes, it's thoroughly tested with 117 scenarios.
```bash
# Run all tests
hello-ai-cli
/test all

# Run specific test
/test 50

# Test categories available
/test core-scenarios   # Basic functionality
/test agent-scenarios  # AI intelligence
/test layout-format    # Output formatting
```

Test Coverage:
- ✅ 116/117 tests passing (99.1% success rate)
- ❌ 1 test failing (layout formatting, non-critical)
Latest Updates (Feature Branch: 20250113-2012):
- ✅ Local-only PII scanning (privacy-focused)
- ✅ 3-press Ctrl+C exit (safety improvement)
- ✅ 117 comprehensive test scenarios (reliability)
- ✅ Fixed mock tests (now real functional tests)
- ✅ Enhanced installation methods (Chocolatey + Homebrew)
```
┌───────────────────────────────────────────────────────────────┐
│                         Hello AI CLI                          │
├───────────────────────────────────────────────────────────────┤
│ 🔧 Configuration (config.toml)                                │
│   ├── Local AI Settings (Ollama)                              │
│   ├── Cloud AI Settings (AWS/Azure/GCP/Oracle)                │
│   ├── Privacy Controls (PII Detection)                        │
│   └── Safety Settings (Permission Checks)                     │
├───────────────────────────────────────────────────────────────┤
│ 🤖 AI Engine                                                  │
│   ├── Input Processing & Safety Checks                        │
│   ├── AI Model Router (Local ↔ Cloud)                         │
│   ├── Tool Detection & Execution                              │
│   └── Response Processing & Formatting                        │
├───────────────────────────────────────────────────────────────┤
│ 🛠️ Integrated Tools (13 Available)                            │
│   ├── File Operations (read, write, search)                   │
│   ├── Cloud Management (AWS, Azure, GCP, Oracle)              │
│   ├── Command Execution (bash, with safety)                   │
│   ├── Knowledge Management (save context)                     │
│   └── Task Management (todo lists)                            │
├───────────────────────────────────────────────────────────────┤
│ 🛡️ Security & Privacy Layer                                   │
│   ├── PII Detection & Sanitization                            │
│   ├── Permission System (🟢🟡🔴 risk levels)                  │
│   ├── Command Safety Validation                               │
│   └── Data Residency Controls                                 │
└───────────────────────────────────────────────────────────────┘
```
## 📦 Installation

### Quick Install (Recommended)

#### Windows (Chocolatey)

```powershell
# One-liner installation
choco install hello-ai-cli

# Or use our installer script
iwr -useb https://raw.githubusercontent.com/hans-zand/hai-ai-cli/main/install-windows.ps1 | iex
```

#### macOS/Linux (Homebrew)

```bash
# One-liner installation
brew install hello-ai-cli

# Or use our installer script
curl -fsSL https://raw.githubusercontent.com/hans-zand/hai-ai-cli/main/install-unix.sh | bash
```

#### Manual Installation

```bash
# Direct download (Linux/macOS)
curl -fsSL https://raw.githubusercontent.com/hans-zand/hai-ai-cli/main/install-unix.sh | bash --no-homebrew

# Windows without Chocolatey
# Download from: https://github.com/hans-zand/hai-ai-cli/releases/latest
```

Answer: 13 integrated tools are ready to use:
| Tool | What It Does | Example |
|---|---|---|
| 🗂️ File Operations | Read, write, search files | "Show me package.json" |
| ⚡ Command Execution | Run bash commands safely | "List directory contents" |
| ☁️ AWS Management | Manage AWS resources | "List my S3 buckets" |
| 🔷 Azure Management | Manage Azure resources | "Show my resource groups" |
| 🌐 Google Cloud | Manage GCP resources | "List my GCP projects" |
| 🔶 Oracle Cloud | Manage OCI resources | "Show my compartments" |
| 🧠 Knowledge Base | Save conversation context | "Remember this for later" |
| ✅ Task Management | Create and manage todos | "Add task: review code" |
| 🤔 Deep Thinking | Complex problem solving | "Think through this step by step" |
| 🔍 Introspection | Show CLI capabilities | "What can you do?" |
| 🎯 Multi-Agent | Coordinate multiple AI agents | "Analyze and improve this" |
| 📝 Code Analysis | Review and explain code | "Explain this function" |
| 🛡️ Security Scanning | Check for vulnerabilities | "Scan this code for issues" |
Answer: Just type naturally - the AI understands plain English.
```bash
# Start the CLI
hello-ai-cli

# Then type naturally:
"Show me what's in this directory"
"Create a backup of my config file"
"List my AWS S3 buckets"
"Help me debug this Python error"
"What files have changed recently?"
"Explain this code to me"
```

Slash commands:

```bash
/test all     # Run all tests
/quit         # Exit the CLI
/help         # Show help
/orchestrate  # Multi-agent mode
```

Answer: Multiple safety layers protect you:
- 🟢 Green: Safe commands run automatically
- 🟡 Yellow: Medium risk - asks for confirmation
- 🔴 Red: High risk - always confirms before running
- ⏱️ Timeout: All commands time out after 2 minutes
- 🔒 PII Protection: Automatically detects and protects sensitive data
- 🛡️ Permission Checks: Validates commands before execution
Answer: Works on any modern system.
- Operating System: Windows 10+, macOS 10.15+, Linux (any recent distro)
- Memory: 4GB RAM minimum (8GB recommended for local AI)
- Storage: 5GB free space (for AI models)
- Network: Internet connection for cloud AI (optional for local AI)
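On Linux or macOS you can sanity-check these requirements with standard tools before installing; nothing below is specific to the CLI itself.

```bash
# Memory: 4GB minimum, 8GB recommended for local AI
free -h 2>/dev/null || vm_stat        # free on Linux, vm_stat on macOS

# Storage: 5GB free for AI models
df -h ~

# Network: only needed for cloud AI providers
curl -fsS https://ollama.ai >/dev/null && echo "network OK"
```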
Answer: Common issues and quick fixes:
```powershell
# Windows: If Chocolatey fails
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
```

```bash
# Linux/macOS: If Homebrew fails
sudo chown -R $(whoami) /usr/local/share/zsh

# Manual installation
# Download from: https://github.com/hans-zand/hai-ai-cli/releases

# If AI model fails to load
ollama pull deepseek-coder:6.7b
ollama serve

# If commands timeout
# Check your internet connection
# Try: hello-ai-cli --timeout 300

# If PII detection is too strict
# Set environment: SIMPLE_PII_SCAN=true

# Built-in help
hello-ai-cli --help

# Test the system
hello-ai-cli
/test basic

# Check system status
hello-ai-cli doctor
```

Answer: Multiple ways to get help or contribute:
- 📖 Documentation: GitHub Wiki
- 🐛 Bug Reports: GitHub Issues
- 💡 Feature Requests: GitHub Discussions
- 📧 Direct Support: Create an issue with the /issue command
MIT License - See LICENSE file for details.
🎯 Bottom Line: This is a production-ready AI CLI assistant that prioritizes your privacy, works locally or in the cloud, and makes command-line tasks easier through natural language interaction.
Building from source requires:
- Rust 1.70+
- AWS CLI configured (for Bedrock models)
- Ollama installed (for local models)
```bash
git clone <repository-url>
cd hai-llm-cli
cargo build --release

# Run the binary
./target/release/hai-llm-cli
```

Hello AI CLI stores all configuration files and data in the ~/.hai directory:
```
~/.hai/
├── mcp-config.json    # Model Context Protocol servers
├── mcp-servers.json   # MCP server configurations
├── settings.json      # CLI settings and preferences
├── session            # Authentication session data
├── config.json        # Agent and deployment configs
├── model.txt          # Current model selection
├── experiments.json   # Feature experiments
├── conversations/     # Saved conversation history
├── context/           # Context management files
├── prompts/           # Custom prompt templates
└── agents/            # Agent configurations
```
Note: Previous versions used ~/.amazonq - this has been changed to ~/.hai to remove Amazon Q branding.
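You can inspect this directory directly with standard shell commands; the paths are the ones listed above:

```bash
ls -la ~/.hai                 # everything the CLI has stored
cat ~/.hai/model.txt          # currently selected model
ls ~/.hai/conversations       # saved conversation history
cat ~/.hai/mcp-config.json    # configured MCP servers
```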
Hello AI CLI supports both MCP (Model Context Protocol) and native Extensions:
Extensions - Native Rust plugins for direct integration:
- Built-in: NewRelic, Wiz Security, PostgreSQL
- Custom extensions for tools without MCP support
- High performance, tight CLI integration
- See EXTENSIONS_GUIDE.md for development guide
MCP Integration - Standardized protocol support:
- External process communication
- Language-agnostic server development
- Cross-platform compatibility
- Configuration in `~/.hai/mcp-config.json`
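As an illustration only, a `~/.hai/mcp-config.json` registering one server might look like the sketch below. This assumes the common MCP client convention of an `mcpServers` map with a `command` and `args` per server; the `filesystem` server shown is hypothetical, so check your installed version for the exact schema.

```bash
# Hypothetical sketch: register one MCP server in ~/.hai/mcp-config.json.
# Assumes the common "mcpServers" layout; verify against your version.
cat > ~/.hai/mcp-config.json <<'EOF'
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/me/projects"]
    }
  }
}
EOF

hello-ai-cli mcp list   # confirm the server is picked up
```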
Usage:
```bash
# Extensions
/extensions               # List available extensions
/ext:newrelic:alerts      # Get NewRelic alerts
/ext:wiz:scan:myproject   # Run Wiz security scan
/ext:postgres:backup:db1  # Backup PostgreSQL database

# MCP
/mcp                      # MCP management
hello-ai-cli mcp list     # List MCP servers
```

Hello AI CLI provides a unified interface to work with any major cloud provider. Simply set your preferred cloud provider and the system will automatically use its LLM services and PII detection capabilities.
#### AWS
- LLM Service: Amazon Bedrock (Claude, Titan models)
- PII Detection: Amazon Comprehend
- Configuration: AWS profile and region-based
- Models: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus

#### Azure
- LLM Service: Azure OpenAI Service
- PII Detection: Azure Text Analytics/Purview
- Configuration: Subscription and resource group-based
- Models: GPT-4, GPT-3.5-turbo, and other OpenAI models

#### Google Cloud
- LLM Service: Vertex AI (Gemini models)
- PII Detection: Google Cloud DLP API
- Configuration: Project ID and service account-based
- Models: Gemini Pro, Gemini Pro Vision

#### Oracle
- LLM Service: Oracle Generative AI Service
- PII Detection: Oracle Data Safe
- Configuration: Compartment and OCI config-based
- Models: Cohere Command R+, Meta Llama models
- Choose Your Cloud Provider:

```toml
[cloud]
provider = "aws"  # "aws", "azure", "gcp", "oracle"
```

- Configure Provider-Specific Settings:

AWS Setup:
```toml
aws_profile = "bedrock"
aws_region = "ap-southeast-2"
aws_llm_model = "anthropic.claude-3-5-sonnet-20241022-v2:0"
```

Azure Setup:
```toml
azure_openai_endpoint = "https://your-resource.openai.azure.com/"
azure_openai_key = ""  # Set AZURE_OPENAI_KEY env var
azure_openai_model = "gpt-4"
azure_text_analytics_endpoint = "https://your-resource.cognitiveservices.azure.com/"
azure_text_analytics_key = ""  # Set AZURE_TEXT_ANALYTICS_KEY env var
```

GCP Setup:
```toml
gcp_project_id = "your-project-id"  # Set GCP_PROJECT_ID env var
gcp_region = "australia-southeast1"
gcp_credentials_path = "/path/to/service-account.json"  # Or set GOOGLE_APPLICATION_CREDENTIALS
gcp_vertex_model = "gemini-pro"
```

Oracle Setup:
```toml
oracle_compartment_id = "ocid1.compartment.oc1..your-compartment-id"
oracle_region = "ap-sydney-1"
oracle_config_file = "~/.oci/config"
oracle_profile = "DEFAULT"
oracle_llm_model = "cohere.command-r-plus"
```

Each cloud provider supports environment variables for secure credential management:
- AWS: Uses standard AWS credentials and profiles
- Azure: `AZURE_OPENAI_KEY`, `AZURE_TEXT_ANALYTICS_KEY`
- GCP: `GCP_PROJECT_ID`, `GOOGLE_APPLICATION_CREDENTIALS`
- Oracle: Uses OCI config file and profiles
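For example, exporting the Azure and GCP variables listed above (all values are placeholders):

```bash
# Azure OpenAI + Text Analytics credentials
export AZURE_OPENAI_KEY="your-azure-openai-key"
export AZURE_TEXT_ANALYTICS_KEY="your-text-analytics-key"

# GCP project and service-account credentials
export GCP_PROJECT_ID="your-project-id"
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"

# AWS and Oracle read their standard profile/config files automatically
```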
The repository includes complete example configurations:
- `config-aws-example.toml` - AWS Bedrock setup
- `config-azure-example.toml` - Azure OpenAI setup
- `config-gcp-example.toml` - Google Cloud Vertex AI setup
- `config-oracle-example.toml` - Oracle Generative AI setup
If the primary cloud provider fails, the system automatically falls back to:
- Local LLM (if configured)
- Standalone provider configurations (OpenAI, Gemini APIs)
- Error handling with clear guidance
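Here is a minimal sketch of a fallback-ready setup, using `[model]` keys from the configuration reference below. Whether `use_local_llm` selects the primary path or only the fallback may vary by version, so treat this as illustrative.

```bash
# Keep a local model available as a fallback target
ollama pull deepseek-coder:6.7b
ollama serve &

# Illustrative config: cloud provider as primary, local LLM available.
# Append only if config.toml has no [model] table yet; merge by hand otherwise.
cat >> config.toml <<'EOF'
[model]
use_local_llm = true
local_llm_endpoint = "http://localhost:11434/api/generate"
local_llm_model = "deepseek-coder:6.7b"
EOF
```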
Create config.toml in the project root:
```toml
[security]
pii_scanner = true
pii_use_local_llm = true  # Use local LLM for PII detection
nightfall_scanner = false
security_scanner = true
permission_checker = true

[analysis]
command_analysis = true
ai_analysis = true

[features]
streaming = true
visual_indicators = true
execution_mode = "interactive"  # "interactive", "chatbot", "auto"

[model]
use_local_llm = true
local_llm_endpoint = "http://localhost:11434/api/generate"
local_llm_model = "deepseek-coder:6.7b"
default_model = "anthropic.claude-3-5-sonnet-20241022-v2:0"

[region]
aws_region = "ap-southeast-2"
region_display = "ap-southeast-2 (Sydney)"
data_residency = "100% Australia"
```

The `execution_mode` key selects one of three modes (a mode-switching sketch follows the lists below):

Interactive mode ("interactive", the default):
- AI suggests commands with user confirmation
- Full tool execution capabilities
- Safety confirmations for risky operations

Chatbot mode ("chatbot"):
- Pure conversational AI
- No command execution
- Safe for educational environments

Auto mode ("auto"):
- Automatic command execution
- No user confirmations
- Advanced users only
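Switching modes is a one-line edit to the `execution_mode` key shown in the `[features]` section above, for example:

```bash
# Flip execution_mode in config.toml ("interactive", "chatbot", or "auto")
sed -i.bak 's/^execution_mode = .*/execution_mode = "chatbot"/' config.toml

grep execution_mode config.toml   # verify the change
```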
Built-in tools:
- fs_read: Read files, directories, search content
- fs_write: Create, modify, append files
- use_aws: Execute AWS CLI commands
- ses_alerts: Send security alerts via SES
- security_scanner: Scan code for vulnerabilities
- code_completion: AI-powered code completion
- refactoring_assistant: Code refactoring suggestions
- knowledge: Store and retrieve context
- todo_list: Task management
- thinking: Complex reasoning processes
- execute_bash: Shell command execution with safety checks
- introspect: CLI capabilities information
- Code Analysis: Analyzes code quality, security, and best practices
- Response Improvement: Provides suggestions to enhance AI responses
- File-Based Analysis: Reviews actual code files for comprehensive feedback
```bash
# Configure improvement agent
/agents config improvement openai gpt-4
/agents config improvement openai gpt-3.5-turbo

# View all agents
/agents config

# Analyze AI response
/agents improve "Check disk space with df -h"

# Analyze code file + response
/agents improve src/main.rs "This code handles user input"

# Enable auto-improvements on all responses
export ENABLE_AUTO_IMPROVEMENTS=true
```

Available agents:
- improvement: OpenAI-powered code and response analysis
- deployment: Kubernetes deployment automation
- security: Security analysis and compliance
- troubleshoot: Error analysis and troubleshooting
- validation: Configuration validation and testing
```bash
# Required for improvement agent
export OPENAI_API_KEY="your-openai-api-key"

# Optional: Enable automatic improvements
export ENABLE_AUTO_IMPROVEMENTS=true
```

Security features:
- Amazon Comprehend: Cloud-based PII detection
- Local LLM: Privacy-focused local PII scanning
- Smart Filtering: Excludes common usernames and file paths
- Nightfall Integration: Advanced DLP scanning
- SES Alerts: Automated security notifications
- Permission System: Risk-based command authorization
- 🟢 Low Risk: Auto-execute (ls, pwd, etc.)
- 🟡 Medium Risk: Confirmation required (file operations)
- 🔴 High Risk: Always confirm (system modifications)
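To tighten or relax these checks, the relevant toggles are the `[security]` keys from the configuration reference above; a sketch (append only if your `config.toml` has no `[security]` table yet):

```bash
cat >> config.toml <<'EOF'
[security]
pii_scanner = true
security_scanner = true
permission_checker = true   # enables the green/yellow/red prompts
EOF
```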
Configurable AWS regions with data residency compliance:
- ap-southeast-2 (Sydney) - Australia data residency
- us-east-1 (N. Virginia) - US data residency
- eu-west-1 (Ireland) - EU data residency
- Any AWS region with Bedrock support
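For example, pinning everything to Sydney for Australian data residency (keys from the `[region]` section above; append only if `config.toml` has no `[region]` table yet):

```bash
cat >> config.toml <<'EOF'
[region]
aws_region = "ap-southeast-2"
region_display = "ap-southeast-2 (Sydney)"
data_residency = "100% Australia"
EOF

# Confirm Bedrock is reachable in that region
aws bedrock list-foundation-models --region ap-southeast-2 >/dev/null && echo "Bedrock OK"
```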
Example session:

```bash
./hai-llm-cli

> How do I list all files in the current directory?
> Analyze the security of main.rs file
> Show me all S3 buckets in my account
> Read the contents of config.toml and explain the settings

# Get improvement suggestions for AI responses
> /agents improve "Check disk space with df -h"

# Analyze code file with AI response
> /agents improve src/main.rs "This function handles user authentication"

# Configure improvement agent
> /agents config improvement openai gpt-4
```

Local model setup (Ollama):
- Install Ollama: `curl -fsSL https://ollama.ai/install.sh | sh`
- Pull a model: `ollama pull deepseek-coder:6.7b`
- Start the server: `ollama serve`
- Configure the endpoint in `config.toml`
AWS Bedrock setup:
- Configure the AWS CLI: `aws configure`
- Enable Bedrock models in the AWS Console
- Set appropriate IAM permissions
- Configure the region in `config.toml`
Optional environment variables:

```bash
export NIGHTFALL_API_KEY="your-nightfall-key"  # Optional
export AWS_PROFILE="your-profile"              # Optional
```

Local LLM benefits:
- Privacy: All data stays local
- Cost: No API charges
- Speed: No network latency
- Offline: Works without internet
Cloud LLM benefits:
- Capability: More advanced models
- Reliability: Enterprise-grade infrastructure
- Updates: Latest model versions
- Scale: Handle large workloads
Local LLM not responding:

```bash
# Check Ollama status
ollama list
ollama serve

# Test endpoint
curl http://localhost:11434/api/generate -d '{"model":"deepseek-coder:6.7b","prompt":"test"}'
```

AWS Bedrock access denied:

```bash
# Check credentials
aws sts get-caller-identity

# Verify region
aws bedrock list-foundation-models --region ap-southeast-2
```

PII scanner issues:
- Ensure AWS credentials are configured for Comprehend
- Check local LLM is running for local PII scanning
- Verify network connectivity
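These three checks can be rolled into one small script, built only from commands already shown in this README:

```bash
#!/usr/bin/env bash
# PII-scanner health check: credentials, local LLM, network.

aws sts get-caller-identity >/dev/null 2>&1 \
  && echo "AWS credentials (Comprehend): OK" || echo "AWS credentials: missing"

curl -fsS http://localhost:11434/api/generate \
  -d '{"model":"deepseek-coder:6.7b","prompt":"test"}' >/dev/null 2>&1 \
  && echo "Local LLM: OK" || echo "Local LLM: not responding"

curl -fsS https://ollama.ai >/dev/null 2>&1 \
  && echo "Network: OK" || echo "Network: unreachable"
```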
Contributing:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License - see LICENSE file for details
For issues, feature requests, or questions:
- Create an issue in the repository
- Check the troubleshooting section
- Review configuration examples
Hello AI CLI - Empowering developers with intelligent, secure, and configurable multi-cloud AI assistance.