v is a command-line tool that uses AI to analyze build errors and provide actionable debugging suggestions. It automatically captures command output, sanitizes sensitive data, and sends it to an AI provider for intelligent analysis.
- Features
- Installation
- Configuration
- Usage
- Examples
- Command-Line Flags
- Sanitization
- Smart Truncation
- Troubleshooting
- Contributing
- License
- Multi-Provider Support: Works with Groq, OpenAI, and Anthropic AI providers
- Dual Input Modes: Execute commands directly or pipe output for analysis
- Automatic Sanitization: Redacts 20+ patterns including API keys, passwords, IPs, emails
- Smart Truncation: Intelligently preserves error-prone lines when input exceeds token limits
- Dry Run Mode: Preview sanitized input without making API calls
- Debug Mode: Inspect raw and sanitized payloads for troubleshooting
- Configurable Timeout: Prevent hung processes with customizable timeouts
# Clone the repository
git clone https://github.com/your-repo/v.git
cd v
# Build the executable
# Windows
go build -o v.exe
# Linux/macOS
go build -o v

Download the correct binary for your platform from the `release/` directory or GitHub Releases:
| OS | Architecture | Binary |
|---|---|---|
| Windows | x64 | v-windows-amd64.exe |
| Linux | x64 | v-linux-amd64 |
| macOS | ARM64 (Apple Silicon) | v-darwin-arm64 |
| macOS | x64 | v-darwin-amd64 |
chmod +x release/v-linux-amd64
sudo mv release/v-linux-amd64 /usr/local/bin/v

If you prefer a user-local install:
mkdir -p "$HOME/bin"
mv release/v-linux-amd64 "$HOME/bin/v"
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

If the binary is not named `v`, you can also rename it before moving it:

mv release/v-linux-amd64 /usr/local/bin/v
- Download `v-windows-amd64.exe`.
- Move it to a folder on your `PATH`, such as `C:\tools` or `C:\Program Files\v`.
- If needed, add the folder to `PATH`: `setx PATH "$env:PATH;C:\tools"`
- Open a new terminal and run `v.exe`.

If you want the command to be exactly `v`, rename the file to `v.exe` before moving it.
Before using the tool, set the API key for your chosen provider:
# For Groq
export GROQ_API_KEY="your_groq_api_key"
# For OpenAI
export OPENAI_API_KEY="your_openai_api_key"
# For Anthropic
export ANTHROPIC_API_KEY="your_anthropic_api_key"

Optionally set a default model:

export V_MODEL="gpt-4o-mini"

On Windows PowerShell:
setx GROQ_API_KEY "your_groq_api_key"
setx V_MODEL "gpt-4o-mini"

Run a command directly through v:
v go build ./...
v npm run build

Pipe build or test output into v:
go test ./... 2>&1 | v
cat build.log | v

Common flags:
v --version
v --dry-run go test ./...
v --provider openai --model gpt-4o-mini npm run build

On macOS/Linux, install into a directory already on your PATH like /usr/local/bin, or add your custom bin directory to PATH.
On Windows, place v.exe in a directory already on PATH, or add the install directory to PATH and reopen your terminal.
This makes v available as a global CLI command from any folder.
| Variable | Required | Description | Default Model |
|---|---|---|---|
| `GROQ_API_KEY` | For Groq provider | API key from Groq | llama-3.3-70b-versatile |
| `OPENAI_API_KEY` | For OpenAI provider | API key from OpenAI | gpt-4o-mini |
| `ANTHROPIC_API_KEY` | For Anthropic provider | API key from Anthropic | claude-haiku-4-5-20251001 |
| `V_MODEL` | Optional | Override default model (lowest priority) | Provider-specific |
Groq (Default)
- Fast inference with free tier available
- Get API key: https://console.groq.com/
OpenAI
- GPT-4o, GPT-4o-mini, and other models
- Get API key: https://platform.openai.com/api-keys
Anthropic
- Claude models (Haiku, Sonnet, Opus)
- Get API key: https://console.anthropic.com/
v supports multiple ways to configure environment variables with a priority system. Create .env files in any of these locations:
- Explicit file via the `--env-file` flag
- Current working directory (project-specific)
- Executable directory (global install)
- User config directory (global access)
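The lookup order above could be sketched roughly like this in Go. This is an illustrative sketch only; `envFileCandidates` and the exact path construction are assumptions, not v's actual code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// envFileCandidates lists .env locations in the priority order the
// README describes: the explicit --env-file path first, then the
// working directory, the executable's directory, and finally the
// user config directory.
func envFileCandidates(explicit string) []string {
	var paths []string
	if explicit != "" {
		paths = append(paths, explicit)
	}
	if wd, err := os.Getwd(); err == nil {
		paths = append(paths, filepath.Join(wd, ".env"))
	}
	if exe, err := os.Executable(); err == nil {
		paths = append(paths, filepath.Join(filepath.Dir(exe), ".env"))
	}
	if cfg, err := os.UserConfigDir(); err == nil {
		paths = append(paths, filepath.Join(cfg, "vpipe", ".env"))
	}
	return paths
}

func main() {
	// The first existing file in this list would win.
	for _, p := range envFileCandidates("") {
		fmt.Println(p)
	}
}
```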
Create a .env file in your project root:
# Choose one or more providers
GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ANTHROPIC_API_KEY=sk-ant-api03-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
V_MODEL=gpt-4o-mini  # Optional model override

For system-wide access from any directory, create a global config:
Windows (PowerShell):
# Create the config directory
mkdir -p "$env:USERPROFILE\.config\vpipe"
# Create and edit the .env file
notepad "$env:USERPROFILE\.config\vpipe\.env"

Linux/macOS:
# Create the config directory
mkdir -p "$HOME/.config/vpipe"
# Create and edit the .env file
nano "$HOME/.config/vpipe/.env"
# or
code "$HOME/.config/vpipe/.env"

Add your API keys to the global .env file:
GROQ_API_KEY=your_groq_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
V_MODEL=optional_model_override

Benefits of global config:
- ✅ Works from any directory
- ✅ Single setup for all projects
- ✅ Lower priority than local configs (can be overridden per-project)
- ✅ Private to your user account (with default home-directory permissions)
Use the --env-file flag to specify any .env file location:
v --env-file /path/to/custom/.env go build ./...
v --env-file ./config/prod.env npm run build

Set environment variables directly in your shell (highest precedence except an explicit `--env-file`):
# Linux/macOS
export GROQ_API_KEY="your_key"
export V_MODEL="gpt-4o"
# Windows PowerShell
$env:GROQ_API_KEY = "your_key"
$env:V_MODEL = "gpt-4o"
# Windows Command Prompt
set GROQ_API_KEY=your_key
set V_MODEL=gpt-4o

Note: System environment variables take precedence over all .env files except when using `--env-file`.
Run a command directly through v. It executes the command, captures stdout/stderr, and analyzes the output:
v <command> [args...]

Example:
v go build ./...
v npm run build
v cargo build 2>&1

Pipe output from any command into v for analysis:
<command> | v
<command> 2>&1 | v
cat error.log | v

Example:
npm run build 2>&1 | v
go test ./... 2>&1 | v
docker build . 2>&1 | v

v flags must come before the command. Use `--` to separate v flags from command arguments:
# v flag before command
v --timeout 60 npm run build
# Separate v flags from command args using --
v --dry-run -- npm run build --verbose

Analyze a failed Go build:

v go build ./...

Analyze npm build errors:

v npm run build

Analyze cargo errors:

v cargo build

Pipe existing error log:

cat build_errors.log | v

Analyze make output:

make 2>&1 | v

Use OpenAI instead of Groq:

v --provider openai go build ./...

Use Anthropic with custom model:

v --provider anthropic --model claude-sonnet-4-20250514 npm run build

Set default model via environment:

export V_MODEL=gpt-4o
v npm run build

Dry run — preview sanitized input:

v --dry-run go build ./...

Debug mode — see raw + sanitized payloads:

v --debug npm run build

Custom timeout for long-running commands:

v --timeout 120 go test ./... -v

Combine multiple flags:

v --provider openai --model gpt-4o --max-tokens 800 --debug go build ./...

Preview what would be sent (dry-run with piped input):

go build ./... 2>&1 | v --dry-run

Use specific model override:

v --model llama-3.1-70b-versatile npm run build

| Flag | Short | Default | Description |
|---|---|---|---|
| `--provider` | - | groq | AI provider: groq, openai, or anthropic |
| `--model` | - | (provider default) | Override the AI model |
| `--max-tokens` | - | 600 | Maximum tokens in AI response |
| `--timeout` | - | 30 | Command timeout in seconds |
| `--dry-run` | - | false | Show sanitized input without calling AI |
| `--debug` | - | false | Show raw and sanitized payloads |
| `--version` | - | false | Print version and exit |
| `--help` | `-h` | false | Show help message |
v automatically redacts sensitive patterns before sending data to AI:
| Pattern | Example |
|---|---|
| AWS Access Keys | AKIAIOSFODNN7EXAMPLE |
| AWS Secret Keys | aws_secret_access_key=... |
| API Keys/Tokens | api_key=abc123... |
| Email Addresses | user@example.com |
| IPv4 Addresses | 192.168.1.100 |
| Windows Paths | C:\Users\John\file.txt |
| Unix Paths | /home/user/project |
| SSH Private Keys | -----BEGIN PRIVATE KEY-----... |
| Passwords in URLs | postgres://admin:pass@localhost |
| JWT Tokens | eyJhbGci... |
| Environment Username | USERNAME value |
| Environment Hostname | COMPUTERNAME value |
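A redaction pass over patterns like these amounts to applying a list of regular expressions in order. The rules below are illustrative examples in the spirit of the table, not the tool's exact 20+ patterns:

```go
package main

import (
	"fmt"
	"regexp"
)

// A few example redaction rules; the real tool covers many more.
var redactions = []*regexp.Regexp{
	regexp.MustCompile(`AKIA[0-9A-Z]{16}`),            // AWS access key IDs
	regexp.MustCompile(`[\w.+-]+@[\w-]+\.[\w.]+`),     // email addresses
	regexp.MustCompile(`\b(?:\d{1,3}\.){3}\d{1,3}\b`), // IPv4 addresses
	regexp.MustCompile(`eyJ[\w-]+\.[\w-]+\.[\w-]+`),   // JWT tokens
}

// sanitize replaces every match of every rule with [REDACTED].
func sanitize(s string) string {
	for _, re := range redactions {
		s = re.ReplaceAllString(s, "[REDACTED]")
	}
	return s
}

func main() {
	fmt.Println(sanitize("deploy by user@example.com to 192.168.1.100"))
	// → deploy by [REDACTED] to [REDACTED]
}
```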
Example dry-run output showing sanitization:
$ echo "My API key is sk-1234567890abcdef" | v --dry-run
🔎 Dry Run — Sanitized Input:
My API key is [REDACTED]

When input exceeds 6,000 characters, v uses intelligent truncation:
- Scores each line based on error signal keywords (error, fail, panic, exception, etc.)
- Preserves short lines (stack traces, error summaries)
- Selects highest-scoring lines up to the character limit
- Maintains original order with `[lines omitted]` markers
This ensures the AI receives the most relevant error information even from large logs.
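The line-scoring step can be sketched like this. The keyword list and weights here are illustrative assumptions; the real tool's scoring may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// errorSignals mirrors the kinds of keywords described above.
var errorSignals = []string{"error", "fail", "panic", "exception", "fatal", "undefined"}

// scoreLine gives a line one point per error-signal keyword it
// contains, plus a bonus point for short lines (stack frames and
// error summaries tend to be short).
func scoreLine(line string) int {
	score := 0
	lower := strings.ToLower(line)
	for _, kw := range errorSignals {
		if strings.Contains(lower, kw) {
			score++
		}
	}
	if len(line) > 0 && len(line) < 80 {
		score++
	}
	return score
}

func main() {
	fmt.Println(scoreLine("main.go:10: undefined: Foo")) // short + "undefined" → 2
	fmt.Println(scoreLine("compiling package util"))     // short only → 1
}
```

Lines are then kept highest-score-first until the character budget is spent, and re-emitted in their original order.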
"missing GROQ_API_KEY environment variable"
- Set the appropriate API key for your chosen provider
- Check that the `.env` file is in the correct directory
"unsupported provider"
- Use one of: `groq`, `openai`, or `anthropic`
Command times out
- Increase the timeout: `v --timeout 120 <command>`
Empty AI response
- Check API key is valid
- Try `--debug` to see request/response details
Sanitization not working as expected
- Use `--dry-run` to preview output
- Use `--debug` to see raw vs. sanitized
Run the included test suite:
go test -v ./...

Tests cover:
- Sanitization of AWS keys, emails, IPs, JWTs
- Smart truncation preserving error lines
- Error signal detection
- Configuration loading
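Go test suites like this one are typically table-driven. The sketch below shows that style against a stand-in `hasErrorSignal` function (a hypothetical name for illustration, not the tool's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// hasErrorSignal is a stand-in for the error-signal detection the
// test suite covers; the real keyword list may differ.
func hasErrorSignal(line string) bool {
	lower := strings.ToLower(line)
	for _, kw := range []string{"error", "fail", "panic", "exception"} {
		if strings.Contains(lower, kw) {
			return true
		}
	}
	return false
}

func main() {
	// Table-driven cases: input line and expected detection result.
	cases := []struct {
		in   string
		want bool
	}{
		{"ERROR: build failed", true},
		{"compiled 12 packages", false},
	}
	for _, c := range cases {
		if got := hasErrorSignal(c.in); got != c.want {
			fmt.Printf("FAIL: hasErrorSignal(%q) = %v, want %v\n", c.in, got, c.want)
		} else {
			fmt.Printf("ok: %q\n", c.in)
		}
	}
}
```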
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
MIT License — see LICENSE file for details.
Current version: 2.1.0
