A fast, minimal CLI for getting AI answers directly in your terminal.
When running in an interactive terminal, q shows a small ASCII loading indicator on stderr while it waits for the first response text.
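Because the loading indicator goes to stderr while the answer goes to stdout, redirection keeps them separate (a sketch; the example query is a placeholder and assumes q is installed and configured):

```shell
# Redirect stdout: the indicator still animates on the terminal,
# but only the answer lands in the file.
q "one-liner to count files in the current directory" > answer.txt

# Silence stderr to hide the indicator entirely (useful in scripts).
q "one-liner to count files in the current directory" 2>/dev/null
```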
```sh
npm install -g @hongymagic/q
```

Or download a standalone binary from the releases page.
Or build from source with Bun:
```sh
git clone https://github.com/hongymagic/q.git && cd q
bun install && bun run build
```

q works without a config file.
- Free local setup with Ollama:

  ```sh
  ollama pull gemma3
  q --provider ollama --model gemma3 explain this stack trace
  ```

- Free cloud setup with Gemini:

  ```sh
  export GEMINI_API_KEY="your-key-here"
  q explain rust lifetimes
  ```

- Optional: create a pinned config:

  ```sh
  q config init
  ```

q auto-detects common provider keys (`GEMINI_API_KEY`, `GROQ_API_KEY`, `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`) and falls back to local Ollama when available.
```sh
q how do I restart docker
GEMINI_API_KEY="your-key" q explain closures in javascript
q --copy what is a kubernetes pod
q --provider ollama --model gemma3 explain this error
```

Pipe content as context for your question:

```sh
cat error.log | q "what's wrong here?"
git diff | q "summarise these changes"
```

Or pipe the query itself:

```sh
echo "how do I restart docker" | q
```

| Option | Description |
|---|---|
| `-p, --provider <name>` | Override the default provider |
| `-m, --model <id>` | Override the default model |
| `--mode <mode>` | Output mode: `command` (default) or `explain` |
| `--copy` | Copy answer to clipboard |
| `--no-copy` | Disable copy (overrides config) |
| `--debug` | Enable debug logging to stderr |
| `-h, --help` | Show help message |
| `-v, --version` | Show version |
By default, q returns terse, copy/paste-ready commands. Use --mode explain for detailed explanations:
```sh
q how do I restart docker                 # Returns the command(s)
q --mode explain how do I restart docker  # Returns a detailed explanation
```

```sh
q config path     # Print config file path
q config init     # Create optional config file
q config doctor   # Diagnose config and setup issues
q providers       # List available providers + model and credential status
```

Config is optional. q starts with built-in provider presets and per-provider defaults, then applies overrides (later overrides earlier):
- Built-in defaults (`google`, `groq`, `anthropic`, `openai`, `ollama`, `azure`, `bedrock`)
- `$XDG_CONFIG_HOME/q/config.toml` (or `~/.config/q/config.toml`)
- `./config.toml` (project-specific)
- Environment: `Q_PROVIDER`, `Q_MODEL`, `Q_COPY`
- CLI flags: `--provider`, `--model`
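Concretely, later layers win: an environment variable overrides the config file, and a CLI flag overrides both (a sketch; the provider and model names are only illustrative):

```shell
# Suppose config.toml pins provider = "google".
# The env var overrides the config file for this one invocation:
Q_PROVIDER=groq q "show disk usage by directory"

# The CLI flag overrides both the env var and the config file:
Q_PROVIDER=groq q --provider ollama --model gemma3 "show disk usage by directory"
```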
Each provider can specify its own default model, which takes precedence over `default.model` but is overridden by `Q_MODEL` or `--model`:
```toml
[default]
provider = "google"
# model = "gemini-2.5-flash" # Optional global default

[providers.google]
type = "google"
api_key_env = "GEMINI_API_KEY"
model = "gemini-2.5-flash" # Optional per-provider default
```

See `config.example.toml` for all options.
| Option | Cost | What you need | Notes |
|---|---|---|---|
| `ollama` | Free local | Ollama installed and a local model | Best zero-key/offline option |
| `google` | Free tier | `GEMINI_API_KEY` (or `GOOGLE_API_KEY` / `GOOGLE_GENERATIVE_AI_API_KEY`) | Best free cloud default |
| `groq` | Free tier | `GROQ_API_KEY` | Fast free cloud fallback |
| Type | Built in | Description |
|---|---|---|
| `google` | Yes | Google Gemini API |
| `groq` | Yes | Groq (ultra-fast open models) |
| `ollama` | Yes | Local Ollama instance |
| `anthropic` | Yes | Anthropic Claude API |
| `openai` | Yes | OpenAI API |
| `azure` | Yes | Azure OpenAI deployments |
| `bedrock` | Yes | AWS Bedrock (Claude, Titan, Llama) |
| `openai_compatible` | No | Any OpenAI-compatible API (needs `base_url`) |
| `portkey` | No | Portkey AI Gateway (needs `base_url` and `provider_slug`) |
Portkey is an AI gateway that provides unified access to multiple LLM providers, with features like load balancing, caching, and observability. Use the `portkey` provider type for self-hosted or cloud deployments.
Self-hosted gateway:
```toml
[default]
provider = "portkey_internal"
model = "us.anthropic.claude-sonnet-4-20250514-v1:0"

[providers.portkey_internal]
type = "portkey"
base_url = "https://your-portkey-gateway.internal/v1"
provider_slug = "@your-org/bedrock-provider"
api_key_env = "PORTKEY_API_KEY"
provider_api_key_env = "PROVIDER_API_KEY"
```

Configuration options:
| Field | Description |
|---|---|
| `base_url` | Your Portkey gateway URL |
| `provider_slug` | Provider identifier (maps to the `x-portkey-provider` header) |
| `api_key_env` | Environment variable for the Portkey API key (maps to the `x-portkey-api-key` header) |
| `provider_api_key_env` | Environment variable for the underlying provider's API key (maps to the `Authorization` header) |
| `headers` | Additional custom headers (supports env var interpolation for allowlisted vars) |
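A sketch of a custom `headers` entry; the TOML sub-table shape and the `${VAR}` interpolation syntax shown here are assumptions, so check `config.example.toml` for the exact form:

```toml
# Hypothetical extra headers for a gateway deployment.
# X_CUSTOM_TRACE_ID is assumed to be on the interpolation allowlist.
[providers.portkey_internal.headers]
"x-portkey-trace-id" = "${X_CUSTOM_TRACE_ID}"
"x-request-source" = "q-cli"
```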
Environment variables:
```sh
export PORTKEY_API_KEY="your-portkey-key"
export PROVIDER_API_KEY="your-provider-key"
```

q includes an autonomous improvement system powered by GitHub Agentic Workflows. AI agents continuously scan, assess, and improve the codebase across multiple dimensions — security, features, maintenance, performance, test coverage, and usability.
| Agent | Schedule | Purpose |
|---|---|---|
| Security Daily | Daily | Scan and fix security vulnerabilities |
| Feature Daily | Daily | Detect and implement feature gaps |
| Maintenance Daily | Daily | Fix dead code, test gaps, docs drift |
| Self Improve Weekly | Monday | Retrospective on PR quality patterns |
| Performance Weekly | Tuesday | Binary size, speed, memory optimisation |
| Coverage Weekly | Wednesday | Expand test coverage |
| Usability Weekly | Thursday | CLI UX, error messages, help text |
| Self Evolve Fortnightly | 1st & 15th | Meta-agent: improve the agents themselves |
All agent PRs are created as drafts and require human approval to merge. The system is governed by `.github/CONSTITUTION.md`, which defines immutable safety rules. All autonomous changes are tracked in `.github/EVOLUTION.md`.
"Setup required" error:
Use one of these quick starts:
```sh
export GEMINI_API_KEY="your-key-here"
q explain kubernetes pods
```

```sh
q --provider ollama --model gemma3 explain kubernetes pods
```

Or create a config file with `q config init`.
"Missing API key" error:
Ensure your API key environment variable is set:
```sh
export GEMINI_API_KEY="your-key-here"
```

Diagnose config issues:
Run `q config doctor` to check config files, environment overrides, built-in defaults, and provider health at a glance.
Failure logs:
Most non-usage failures print a short error plus `Full log: <path>` on stderr. Logs are written to your platform log directory, for example `~/.local/state/q/errors` on Linux.
In an interactive terminal, query failures also offer quick recovery options: press `r` to retry, `Enter` to print the full log, or `q`/`Esc` to exit.
Debug mode:
Use `--debug` to keep detailed diagnostics on stderr while still writing the full failure log:
```sh
q --debug "how do I list docker containers"
```

Piped content not working:
Ensure you're piping content correctly. The query should be in arguments:
```sh
cat file.txt | q "explain this"   # Correct
cat file.txt | q                  # Uses stdin as query (no context)
```

MIT