A minimal bash utility that converts natural language queries into shell commands using a local Ollama instance.
- Ollama installed and running on your machine
- Install from: https://ollama.ai
- Pull a model: `ollama pull devstral-2-small` (or any other model you prefer)
- Clone this repository:

  ```shell
  git clone <repository-url>
  cd qq
  ```

- Run the installation script:

  ```shell
  ./install.sh
  ```

- Reload your shell:

  ```shell
  source ~/.bashrc   # or ~/.zshrc for zsh users
  ```

Simply type `qq` followed by your natural language request:
```shell
qq for each file in this directory, print the first line to stdout
```

This will automatically generate and populate your shell with:

```shell
find . -maxdepth 1 -type f -exec head -n 1 {} \;
```

- Shell Detection: `qq` automatically detects your shell (bash, zsh, or fish)
- Query Processing: Sends your natural language request to Ollama with shell-specific context
- Command Generation: Ollama generates the appropriate shell command
- Auto-population:
  - zsh: Command appears directly in your prompt buffer (ready to execute)
  - bash: Command is added to history (press Up arrow to access)
  - fish: Command appears in your command line buffer
  - Other shells: Command is printed to stdout
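The flow above can be sketched as a small shell function. This is an illustrative sketch, not the actual `qq.sh` implementation — the function names and the exact prompt wording are assumptions, though `ollama run`, zsh's `print -z`, and bash's `history -s` are real commands:

```shell
# Illustrative sketch of the qq pipeline; NOT the actual qq.sh source.
# Function names and the prompt wording are assumptions.

# Step 1: shell detection. Because qq.sh is sourced into the interactive
# shell, that shell's own version variable is visible here.
qq_detect_shell() {
  if   [ -n "$ZSH_VERSION" ];  then echo zsh
  elif [ -n "$BASH_VERSION" ]; then echo bash
  elif [ -n "$FISH_VERSION" ]; then echo fish
  else echo other
  fi
}

# Steps 2-4: send the query to Ollama, then hand the result to the shell.
qq_sketch() {
  local shell cmd
  shell=$(qq_detect_shell)
  cmd=$(ollama run "${QQ_MODEL:-devstral-2-small}" \
    "Reply with a single $shell command only, no explanation: $*")
  case $shell in
    zsh)  print -z -- "$cmd" ;;    # load into the prompt buffer
    bash) history -s "$cmd" ;;     # append to history (press Up)
    *)    printf '%s\n' "$cmd" ;;  # fall back to plain stdout
  esac
}
```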
By default, `qq` uses the `devstral-2-small` model. To use a different model, set the `QQ_MODEL` environment variable:

```shell
export QQ_MODEL=codellama   # to make this persistent, add it to your ~/.bashrc or ~/.zshrc
```

For best results with shell commands:
- `devstral-2-small` (default, fast and efficient)
- `codellama` (optimized for code)
- `llama3.2` (fast and accurate)
- `llama3.1` (larger, more capable)
Pull a model with:

```shell
ollama pull devstral-2-small
```

```shell
# File operations
qq list all python files modified in the last 7 days

# Text processing
qq count the number of lines in all txt files

# System information
qq show disk usage sorted by size

# Git operations
qq show all commits from the last week

# Process management
qq find and kill all node processes
```

- Be specific in your requests for better results
- The generated command typically appears within 1-2 seconds
- Review generated commands before executing them
- For complex operations, you can refine the natural language query
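Since generated commands should always be reviewed, one simple pattern is a confirm-before-run helper. This is illustrative only — it is not part of `qq` itself:

```shell
# Illustrative confirm-before-run helper; not part of qq itself.
run_confirmed() {
  printf 'Run: %s ? [y/N] ' "$1"
  read -r answer
  case $answer in
    y|Y) eval "$1" ;;     # execute only on explicit confirmation
    *)   echo "skipped" ;;
  esac
}
```

Anything other than `y` leaves the command unexecuted.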
```shell
# Check if Ollama is installed
which ollama

# Install Ollama if needed
# Visit https://ollama.ai
```

```shell
# Use a smaller, faster model
export QQ_MODEL=devstral-2-small
```

```shell
# Ensure Ollama is running
ollama list
```

- Make sure you've reloaded your shell after installation
- For bash, use the Up arrow to access the generated command from history
- Check that the installation added the source line to your RC file
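Whether the Ollama server is actually up can also be checked over its local HTTP API, which listens on `127.0.0.1:11434` by default; `/api/tags` lists installed models. The helper name below is illustrative:

```shell
# Ollama serves a local HTTP API on 127.0.0.1:11434 by default;
# /api/tags lists installed models, so a successful response means
# the server is up. The function name is illustrative.
qq_ollama_status() {
  if curl -sf http://127.0.0.1:11434/api/tags > /dev/null 2>&1; then
    echo up
  else
    echo down   # start the server with: ollama serve
  fi
}
qq_ollama_status
```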
Remove the source line from your shell RC file:

```shell
# Edit ~/.bashrc or ~/.zshrc and remove:
source "/path/to/qq/qq.sh"
```

Licensed under the MIT license.
Contributions welcome! Please open an issue or pull request.