AI-powered terminal command generator — describe what you want in plain English, get the shell command instantly.
```shell
ait "find all PDF files modified in the last 7 days"
# Output: find . -name "*.pdf" -mtime -7
```

Module 2 Project for the LLM Engineering & Deployment certification program by Ready Tensor.
See the full publication on Ready Tensor for deployment details and cost analysis.
- Simple CLI — Just run `ait "your description"` and get a command
- Auto-Detect OS — Automatically generates commands for your current platform
- Target Any OS — Use `-t win`, `-t linux`, or `-t mac` to generate for other platforms
- Headless Mode — `ait generate "..."` for scripting and piping
- OpenAI-Compatible — Works with OpenAI, LiteLLM, Ollama, and any compatible endpoint
- Cross-Platform — Linux, macOS, and Windows support
- Secure — API tokens masked in output, config files with restricted permissions
- Custom Model Support — Uses a fine-tuned Qwen3 model for terminal commands
```
┌─────────────┐     ┌─────────────────────────────────────────────┐
│   AIT CLI   │────▶│  LiteLLM Proxy (HuggingFace Spaces)         │
│ (Go binary) │     │  ┌─────────────┐    ┌────────────────────┐  │
└─────────────┘     │  │   LiteLLM   │───▶│ HF Endpoint Proxy  │  │
                    │  │ (port 7860) │    │    (port 8000)     │  │
                    │  └─────────────┘    └─────────┬──────────┘  │
                    └───────────────────────────────┼─────────────┘
                                                    │
                    ┌───────────────────────────────▼─────────────┐
                    │  HuggingFace Dedicated Inference Endpoint   │
                    │       (Qwen3-0.6B Terminal Instruct)        │
                    └─────────────────────────────────────────────┘
```
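Because the proxy speaks the OpenAI API, the request the CLI sends through it can be sketched directly. The `/v1/chat/completions` path is the standard route a LiteLLM proxy exposes; the model name and token below are placeholders, not the deployed values:

```shell
# Build the OpenAI-style request the CLI would send through the proxy.
# BASE_URL is the Space listed under Links; MODEL and TOKEN are placeholders.
BASE_URL="https://eng-elias-litellm.hf.space"
MODEL="qwen3-terminal"        # placeholder — use the value from `ait config get model`
TOKEN="sk-your-virtual-key"   # placeholder virtual key

body='{"model": "'"$MODEL"'", "messages": [{"role": "user", "content": "list all files larger than 100MB"}]}'

echo "POST $BASE_URL/v1/chat/completions"
echo "$body"

# To actually send it (requires a valid virtual key):
# curl -s "$BASE_URL/v1/chat/completions" \
#   -H "Authorization: Bearer $TOKEN" \
#   -H "Content-Type: application/json" -d "$body"
```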
If you have Go 1.21+ installed:
```shell
go install github.com/Eng-Elias/ait@latest
```

This installs `ait` to your `$GOPATH/bin` (usually `~/go/bin`).
Download the latest release for your platform from the Releases page:
| Platform | Binary |
|---|---|
| Linux (AMD64) | ait-linux-amd64 |
| Linux (ARM64) | ait-linux-arm64 |
| macOS (Intel) | ait-darwin-amd64 |
| macOS (Apple) | ait-darwin-arm64 |
| Windows | ait-windows-amd64.exe |
```shell
git clone https://github.com/Eng-Elias/ait.git
cd ait
make build
make install
```

This copies the binary to `/usr/local/bin` (Unix) or `%PROGRAMFILES%` (Windows).
```shell
ait setup
```

You will be prompted for:
- API Endpoint (default: OpenAI)
- API Token (your OpenAI API key)
- Model (default: `gpt-4o-mini`)
The wizard will test your connection and save the configuration.
```shell
ait "list all files larger than 100MB"
```

AIT will:
- Send your description to the AI
- Display the generated command
- Ask for confirmation (`Y/n`)
- Execute the command if you confirm
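The confirm-then-execute step itself is implemented in Go inside the binary; as a rough shell sketch of the same flow (the `confirm_and_run` helper is illustrative, not part of ait):

```shell
# Sketch of the confirmation step: show the generated command,
# default to Yes on a bare Enter, and run only if confirmed.
confirm_and_run() {
  cmd=$1
  printf 'Execute: %s ? [Y/n] ' "$cmd"
  read -r answer
  case "${answer:-Y}" in
    [Nn]*) echo "skipped" ;;
    *)     eval "$cmd" ;;
  esac
}

# Piping "n" declines; an empty line accepts and runs the command.
echo "" | confirm_and_run "echo hello"
```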
```shell
# Auto-detects your OS and generates the right command
ait "show disk usage sorted by size"

# Target a specific OS
ait "find all log files" -t linux
ait "list running processes" -t mac
ait "check open ports" -t win
```

| Flag Value | Target | Shell |
|---|---|---|
| `win`, `windows` | Windows | PowerShell |
| `linux` | Linux | bash |
| `mac`, `macos` | macOS | zsh |
| (omitted) | Auto-detected | Auto |
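When `-t` is omitted, ait detects the platform itself (in Go, at runtime); the idea reduces to something like this shell sketch, where the `detect_target` helper is mine, not part of ait:

```shell
# Map the running platform onto the same targets the -t flag accepts.
detect_target() {
  case "$(uname -s)" in
    Linux*)               echo linux ;;
    Darwin*)              echo mac ;;
    CYGWIN*|MINGW*|MSYS*) echo win ;;
    *)                    echo linux ;;  # fallback assumption
  esac
}

detect_target
```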
```shell
ait generate "list all docker containers"
# Output: docker ps -a
```

Generates a command and prints it to stdout without confirmation. Useful for scripting:

```shell
$(ait generate "count lines in all Python files")
```

```shell
# Show current config
ait config

# Get a specific value
ait config get model

# Set a value
ait config set model default

# Run setup wizard again
ait setup
```

```shell
ait version
```

Run `ait setup` to configure your API credentials.
Your API token is invalid or expired. Update it:

```shell
ait config set api_token sk-your-new-token
```

- Check your internet connection
- Verify the API endpoint is correct: `ait config get api_endpoint`
- For local endpoints (LiteLLM, Ollama), ensure the server is running

Wait a moment and try again. Consider upgrading your API plan for higher limits.

Enable debug logging for troubleshooting:

```shell
ait "your prompt" --debug
```

Logs are written to `~/.ait/debug.log`.
AIT uses a LiteLLM proxy deployed on HuggingFace Spaces to route requests to a fine-tuned Qwen3 model.
| Component | Platform | Description |
|---|---|---|
| LiteLLM Proxy | HuggingFace Spaces | API gateway with auth, rate limiting |
| HF Endpoint Proxy | Same Space (supervisord) | Translates OpenAI format to TGI native |
| Model Endpoint | HF Dedicated Inference | Qwen3-0.6B Terminal Instruct |
| Database | Supabase | Virtual keys, usage tracking |
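The table notes that both proxy processes share one Space under supervisord. A minimal sketch of what such a config could look like — the program names, paths, and commands here are assumptions for illustration, not the deployed file:

```ini
[supervisord]
nodaemon=true

; LiteLLM gateway on the Space's public port
[program:litellm]
command=litellm --config /app/config.yaml --port 7860

; Internal translator from OpenAI format to TGI native
[program:hf-endpoint-proxy]
command=python /app/proxy.py --port 8000
```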
See `deploy/litellm/hf-spaces/README.md` for deployment instructions.
```shell
make build      # Build for current platform
make build-all  # Cross-compile for all platforms
make test       # Run tests
```

Requires Go 1.21+.
- GitHub: github.com/Eng-Elias/ait
- Fine-Tuned Model: huggingface.co/Eng-Elias/Qwen3-0.6B-terminal-instruct
- LiteLLM Proxy: eng-elias-litellm.hf.space
- Module 1 Project: Fine-Tuning Qwen3 for Terminal Commands
MIT License — see LICENSE for details.
