AIR is a toolkit for optimizing tool outputs for AI consumption. The name is inspired by "accessibility retrofitting" - just as we retrofit infrastructure for accessibility, we need to retrofit developer tools for AI ergonomics.
AI context windows are the scarcest resource. A 200K token window sounds large, but:
- A single `npm install` output can consume 2000+ tokens
- A test report can eat 5000+ tokens
- 50-96% of this is noise: progress bars, blank lines, redundant info
The real cost isn't tokens - it's attention dilution. Even with unlimited tokens, noise in context degrades AI reasoning quality.
AIR intercepts tool outputs at the source, filtering noise before it enters the context window. Prevention, not cleanup.
| Package | Description |
|---|---|
| `@10iii/air-core` | Core compression library |
| `@10iii/air` | CLI tool (`air` command) |
| `@10iii/air-mcp-server` | MCP server for Claude/etc |
| `@10iii/air-oc-plugin` | OpenCode plugin |
```bash
# Install globally
npm install -g @10iii/air

# Read a file with compression
air read src/index.ts --skeleton

# Run a command with output compression
air bash "npm install"

# Search with de-duplication
air grep "TODO" --include "*.ts"

# Test output compression
air test "npm test"
```

```typescript
import { ReadCompressor, BashCompressor, GrepCompressor } from '@10iii/air-core';

// Compress file content
const read = new ReadCompressor();
const readResult = read.compress(fileContent, { mode: 'skeleton' });

// Compress command output
const bash = new BashCompressor();
const bashResult = bash.compress(output, { command: 'npm install' });
```

Add to your Claude Desktop config:
```json
{
  "mcpServers": {
    "air": {
      "command": "npx",
      "args": ["@10iii/air-mcp-server"]
    }
  }
}
```

Add to your opencode.json:
```json
{
  "plugins": ["@10iii/air-oc-plugin"]
}
```

File content compression with intelligent truncation and structure awareness.
- `skeleton`: Extract function/class signatures only
- `focused`: Line-range extraction with context
- `truncate`: Smart truncation with size limits
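To make the idea of skeleton mode concrete, here is a minimal, hypothetical sketch of signature-only extraction. It is not AIR's actual implementation (the real `ReadCompressor` is structure-aware); the `extractSkeleton` helper and its regex are illustrative assumptions.

```typescript
// Hypothetical sketch of skeleton mode: keep only top-level
// function/class/interface signatures, replacing bodies with "{ ... }".
// NOT AIR's real implementation, just an illustration of the concept.
function extractSkeleton(source: string): string {
  const signature = /^\s*(?:export\s+)?(?:async\s+)?(?:function|class|interface)\s+\w+[^{]*/;
  return source
    .split("\n")
    .filter((line) => signature.test(line))
    .map((line) => line.replace(/\{\s*$/, "").trimEnd() + " { ... }")
    .join("\n");
}

const file = [
  "import fs from 'node:fs';",
  "export function readAll(path: string): string {",
  "  return fs.readFileSync(path, 'utf8');",
  "}",
  "class Cache {",
  "  private store = new Map<string, string>();",
  "}",
].join("\n");

console.log(extractSkeleton(file));
```

Dropping bodies while keeping signatures is what lets an AI see a file's API surface for a fraction of the tokens, requesting full bodies only when needed.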
Command output compression with pattern recognition.
- npm/pnpm/yarn install → progress removal, error extraction
- git operations → diff summarization
- Generic → intelligent truncation
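The pattern-based approach above can be sketched as a line filter: drop noise lines that match known patterns, keep everything else. The specific regexes here are assumptions for illustration, not AIR's real rule set.

```typescript
// Hedged sketch of rule-based command-output compression:
// drop progress/noise lines, keep warnings, errors, and the summary.
// The noise patterns are illustrative, not AIR's actual rules.
function compressInstallOutput(output: string): string {
  const noise = [
    /^[⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏]/, // spinner frames
    /^\s*$/,              // blank lines
    /^npm (timing|http) /, // verbose progress logs
  ];
  return output
    .split("\n")
    .filter((line) => !noise.some((re) => re.test(line)))
    .join("\n");
}

const raw = [
  "npm http fetch GET 200 https://registry.npmjs.org/left-pad",
  "",
  "npm warn deprecated left-pad@1.3.0",
  "added 1 package in 2s",
].join("\n");

console.log(compressInstallOutput(raw));
// keeps the warning and the summary, drops progress and blank lines
```

Because the rules are deterministic string patterns, this runs instantly and needs no LLM call, which is the "rule-based > LLM-based" principle in practice.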
Search result compression with path de-duplication and context optimization.
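Path de-duplication can be illustrated like this: instead of repeating the file path on every match line, group matches under each path once. The `path:line:text` input format and the `groupMatches` helper are assumptions for the sketch, not AIR's actual interface.

```typescript
// Illustrative sketch of path de-duplication for grep-style results.
// Assumes "path:line:text" input lines; not AIR's actual format.
function groupMatches(lines: string[]): string {
  const byFile = new Map<string, string[]>();
  for (const line of lines) {
    const [path, lineNo, ...rest] = line.split(":");
    const hits = byFile.get(path) ?? [];
    hits.push(`${lineNo}: ${rest.join(":").trim()}`);
    byFile.set(path, hits);
  }
  // Emit each path once, with its matches indented beneath it.
  return [...byFile.entries()]
    .map(([path, hits]) => [path, ...hits.map((h) => "  " + h)].join("\n"))
    .join("\n");
}

const matches = [
  "src/a.ts:3: // TODO: refactor",
  "src/a.ts:17: // TODO: remove",
  "src/b.ts:5: // TODO: test",
];

console.log(groupMatches(matches));
```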
Test output parsing for pytest, jest, go test, and more.
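As a rough idea of what test-output parsing involves, here is a minimal sketch that pulls structured counts out of a jest-style summary line. Real runners vary widely and AIR's actual parsers are necessarily more thorough; the regex and `parseJestSummary` name are illustrative assumptions.

```typescript
// Hedged sketch: parse a jest-style "Tests: N failed, N passed, N total"
// summary line into structured counts. Not AIR's actual parser.
function parseJestSummary(line: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const m of line.matchAll(/(\d+)\s+(failed|passed|skipped|total)/g)) {
    counts[m[2]] = Number(m[1]);
  }
  return counts;
}

console.log(parseJestSummary("Tests:       2 failed, 48 passed, 50 total"));
// → { failed: 2, passed: 48, total: 50 }
```

Reducing a multi-thousand-token test log to a handful of counts plus the failing cases is how the 90%+ TSR figure for test output becomes plausible.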
Web page content extraction with readability and markdown conversion.
Directory listing with tree structure and smart filtering.
Diff output compression with hunk summarization.
Direct file editing without pre-reading (search/replace).
Session/conversation history compression.
API response compression (JSON field filtering).
Media file metadata extraction (images, audio, video).
Web search with multiple engine support (DuckDuckGo, Bing, Baidu, Sogou).
| Compressor | Typical TSR |
|---|---|
| air-test | 90%+ |
| air-bash | 60-90% |
| air-read | 50-80% |
| air-grep | 40-60% |
| air-web | 70-90% |
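Reading TSR as a token-savings ratio (my interpretation; the README does not expand the acronym), the figures in the table correspond to the fraction of tokens removed by compression:

```typescript
// TSR here presumably means token-savings ratio (assumption; not
// defined in this README): the fraction of tokens removed.
function tokenSavingsRatio(originalTokens: number, compressedTokens: number): number {
  return 1 - compressedTokens / originalTokens;
}

// e.g. a 2000-token npm install log compressed to 200 tokens:
console.log(tokenSavingsRatio(2000, 200)); // → 0.9, i.e. 90% saved
```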
- Prevention > Cleanup - Filter at the source, not after
- Rule-based > LLM-based - Deterministic compression, no API calls
- Progressive Disclosure - Skeleton first, details on demand
- Cross-platform - Works with any AI tool
- Zero API Key - No external API dependencies
- Node.js >= 18
MIT
AIR collects anonymous usage statistics to improve the product. This includes compressed content hashes (not the content itself), compression ratios, and basic metadata. No personal data is collected. You can change this behavior via `air config`.
Contributions welcome! Please read CONTRIBUTING.md and the Architecture docs first.