See where your tokens go. Retroactive analysis, hotspot detection, and cost optimization for AI agents. Zero dependencies.
You're burning $50/day on AI tokens and have no idea where it goes. Sound familiar?
Token usage analytics tools exist (token-dashboard, codeburn) — but they're Python-heavy dashboard apps. You just want to open a file and understand your costs.
Skill Token Analyzer is a pure markdown skill — no Python, no database, no server. Just structured analysis patterns your agent already knows how to follow.
```
╔════════════════════════════════════════════════════════════════╗
║                    TOKEN ANALYZER DASHBOARD                    ║
║                   Updated: Wed 22 Apr 18:06                    ║
╠════════════════════════════════════════════════════════════════╣
║                                                                ║
║  TODAY'S USAGE                                                 ║
║  ────────────────────────────────────────────────────────────  ║
║  Sessions:   4                                                 ║
║  Tokens:     487,234                                           ║
║  Cost:       $12.34                                            ║
║  Status:     ████████████░░░░░░░░  61.7% of daily budget       ║
║                                                                ║
║  COST BREAKDOWN (Last 7 Days)                                  ║
║  ────────────────────────────────────────────────────────────  ║
║                                                                ║
║  $40 ┤                                                         ║
║  $35 ┤              ██                                         ║
║  $30 ┤              ██                                         ║
║  $25 ┤        ██    ██          ██                             ║
║  $20 ┤        ██    ██    ██    ██          ██                 ║
║  $15 ┤  ██    ██    ██    ██    ██          ██                 ║
║  $10 ┤  ██    ██    ██    ██    ██          ██                 ║
║   $5 ┤  ██    ██    ██    ██    ██    ██    ██                 ║
║      └──┬─────┬─────┬─────┬─────┬─────┬─────┬──                ║
║        Mon   Tue   Wed   Thu   Fri   Sat   Sun                 ║
║                                                                ║
║  TOP TOOLS BY COST                                             ║
║  ────────────────────────────────────────────────────────────  ║
║  1. Read    ████████████░░░░  $47.23                           ║
║  2. Bash    ██████████░░░░░░  $39.87                           ║
║  3. Write   ████████░░░░░░░░  $31.45                           ║
║  4. Search  ██████░░░░░░░░░░  $24.12                           ║
║  5. Think   █████░░░░░░░░░░░  $19.87                           ║
║                                                                ║
║  HOTSPOTS DETECTED                                             ║
║  ────────────────────────────────────────────────────────────  ║
║  🔴 Redundant reads: 3 instances (est. $0.93 wasted)           ║
║  🟡 Context bloat: 2 sessions >80%                             ║
║  🟢 Model mismatch: 4 simple tasks on opus                     ║
║                                                                ║
║  QUICK STATS                                                   ║
║  ────────────────────────────────────────────────────────────  ║
║  Cache hit rate:    34%                                        ║
║  Avg session cost:  $5.14                                      ║
║  Model efficiency:  7.2/10                                     ║
║  Budget health:     ✅ On track                                ║
║                                                                ║
╚════════════════════════════════════════════════════════════════╝
```
| | Before (without analysis) | After (with Skill Token Analyzer) |
|---|---|---|
| Daily cost | $50.00 | $15.00 (70% savings) |
| Model mix | 90% opus | 30% opus, 50% sonnet, 20% haiku |
| Context bloat | 5 sessions/week | 0 sessions/week |
| Redundant reads | ~$5/day wasted | ~$0.50/day wasted |
| Cache hit rate | 15% | 45% |
| Monthly cost | $1,050 | $315 ($735/month saved) |
1. Install
```bash
# Via ClawHub
clawhub install aptratcn/skill-token-analyzer

# Or clone directly
cd ~/.openclaw/workspace/skills
git clone https://github.com/aptratcn/skill-token-analyzer.git
```

2. Analyze
Ask your agent:
- "Analyze my token usage from today"
- "Show me a cost dashboard for this week"
- "Find token waste patterns in my recent sessions"
- "Am I over budget?"
That's it. The skill lives in your workspace — your agent reads it, follows the patterns, and gives you structured analysis. No servers, no APIs, no setup beyond the file.
| Feature | Description |
|---|---|
| 📊 Transcript Analysis | Parse Claude Code & OpenAI JSONL transcripts for token usage |
| 💰 Cost Breakdown | Per-session, per-tool, per-model cost attribution |
| 🔍 Hotspot Detection | Auto-flag redundant reads, repeated searches, context bloat |
| 📈 Session Comparison | Daily/weekly trends, week-over-week deltas |
| 💡 Optimization Recommendations | Specific actionable tips ranked by impact |
| 📋 Token Budgeting | Set daily/weekly/monthly limits, get alerts |
| 🔄 Model Cost Comparison | Know when to use opus vs sonnet vs haiku |
| 🗜️ Compression Analysis | Measure context compression savings |
| 🖥️ ASCII Dashboard | Terminal-friendly cost visualization |
| 📝 Markdown Export | Generate shareable cost reports |
| 📚 Patterns Library | Top 10 token waste patterns with fixes |
| ⚙️ YAML Configuration | .token-analyzer.yaml for budgets and thresholds |
This is a skill file — not a tool or app. Your AI agent reads SKILL.md and follows the analysis patterns described inside:
- Read transcripts — Parse `.jsonl` files from Claude Code or OpenAI
- Extract metrics — Token counts, tool usage, model selection, timestamps
- Apply pricing — Calculate costs using current model pricing tables
- Detect patterns — Flag hotspots from the patterns library
- Generate output — ASCII dashboards, markdown reports, recommendations
No runtime. No dependencies. The intelligence is in the patterns.
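To make the pricing step concrete, here is a minimal Python sketch of the kind of calculation the agent performs when it follows the skill. This is purely illustrative — the skill itself ships no code, and the field names (`model`, `usage`, `input_tokens`, `output_tokens`) and per-million-token rates below are assumptions, not a fixed schema:

```python
import json

# Illustrative per-million-token rates -- check your provider's current pricing
PRICING = {
    "claude-opus-4": {"input": 15.00, "output": 75.00},
    "claude-sonnet-4": {"input": 3.00, "output": 15.00},
}

def session_cost(transcript_lines, pricing=PRICING):
    """Sum token costs across the lines of a JSONL transcript."""
    total = 0.0
    for line in transcript_lines:
        event = json.loads(line)
        usage = event.get("usage")
        model = event.get("model")
        if not usage or model not in pricing:
            continue  # skip events without usage data or with unknown models
        rates = pricing[model]
        total += usage.get("input_tokens", 0) / 1_000_000 * rates["input"]
        total += usage.get("output_tokens", 0) / 1_000_000 * rates["output"]
    return round(total, 4)

lines = [
    '{"model": "claude-sonnet-4", "usage": {"input_tokens": 200000, "output_tokens": 10000}}',
]
print(session_cost(lines))  # 0.2 * $3.00 + 0.01 * $15.00 = 0.75
```

The agent does this arithmetic from the pricing tables in SKILL.md rather than running code, but the attribution logic is the same.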
| | Skill Token Analyzer | token-dashboard |
|---|---|---|
| Dependencies | None (pure markdown) | Python, pip, plotly |
| Installation | Copy file | pip install + setup |
| Output | Terminal/text | Web dashboard |
| Agent integration | Native (it IS a skill) | External tool |
| Model comparison | Built-in patterns | Limited |
| Pattern detection | 10 built-in patterns | Basic alerts |
| Budget alerts | Built-in | Not included |
| Use case | During coding sessions | Post-hoc review |
Use both! token-dashboard for pretty charts. Skill Token Analyzer for in-session, real-time analysis by your agent.
The skill includes a comprehensive library of common waste patterns:
- The Repeater — Reading same files repeatedly
- The Scattergun — Overlapping search queries
- The Whale — Unchecked context bloat
- The Over-Engineer — Opus for simple tasks
- The Bulk Loader — Reading entire files when parts suffice
- The Retry Storm — Failed attempts without state checks
- The Verbose Prompter — Bloated system prompts
- The Non-Cacher — Ignoring prompt caching
- The Solo Artist — Sequential instead of parallel
- The Memory Leaker — Cross-contaminated sessions
Each pattern includes ❌ Bad / ✅ Good examples and estimated savings.
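For instance, detecting "The Repeater" boils down to counting repeated Read targets. A minimal sketch, assuming tool calls have already been extracted from a transcript as `(tool, path)` pairs — the shape and threshold are illustrative, not the skill's actual schema:

```python
from collections import Counter

def find_redundant_reads(tool_calls, threshold=2):
    """Flag files read more than `threshold` times in one session."""
    reads = Counter(path for tool, path in tool_calls if tool == "Read")
    return {path: n for path, n in reads.items() if n > threshold}

calls = [
    ("Read", "src/main.py"),
    ("Read", "src/main.py"),
    ("Read", "src/main.py"),
    ("Read", "README.md"),
]
print(find_redundant_reads(calls))  # {'src/main.py': 3}
```

Each repeated read re-sends the file's tokens as input, so the wasted cost is roughly (extra reads) × (file tokens) × (input rate).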
```yaml
# .token-analyzer.yaml
budgets:
  daily: 20
  weekly: 100
  monthly: 400
hotspots:
  redundant_reads: 5
  context_pressure: 0.8
  search_overlap: 0.6
reports:
  output_dir: ./token-reports
  format: markdown
```

- Claude Code / OpenClaw users who want to understand token costs
- Teams tracking AI spend across sessions
- Budget-conscious developers optimizing their AI workflow
- Anyone who looked at their API bill and thought "where did that go?"
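The budget alerts described above amount to comparing spend against the limits in `.token-analyzer.yaml`. A hypothetical sketch — the function name and the 80% warning threshold are assumptions, not part of the skill:

```python
def budget_status(spend, limits):
    """Compare spend against configured limits, one status per period."""
    status = {}
    for period, limit in limits.items():
        used = spend.get(period, 0.0)
        frac = used / limit if limit else 0.0
        if frac >= 1.0:
            status[period] = "over budget"
        elif frac >= 0.8:  # assumed warning threshold
            status[period] = "warning"
        else:
            status[period] = "on track"
    return status

limits = {"daily": 20, "weekly": 100, "monthly": 400}  # from budgets: in the YAML
spend = {"daily": 12.34, "weekly": 85.0, "monthly": 210.0}
print(budget_status(spend, limits))
# {'daily': 'on track', 'weekly': 'warning', 'monthly': 'on track'}
```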
Ideas welcome! The skill is a single markdown file — contributions are easy:
- Fork the repo
- Edit `SKILL.md`
- Open a PR
MIT — Use freely, optimize responsibly.