
charl-kruger/ape


# ape

ape together strong. token count small.


Before/After • Install • Levels • Skills • Benchmarks • Evals


A Claude Code skill/plugin and Codex plugin that makes the agent answer in a compressed ape voice, cutting output tokens hard while keeping full technical accuracy. Now with terse commits, one-line code reviews, and a compression tool that cuts ~45% of input tokens every session.

Based on the viral observation that ape-speak dramatically reduces LLM token usage without losing technical substance. So we made it a one-line install.

## Before / After

πŸ—£οΈ Normal Claude (69 tokens)

"The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."

🦍 Ape Claude (19 tokens)

"New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo."

πŸ—£οΈ Normal Claude

"Sure! I'd be happy to help you with that. The issue you're experiencing is most likely caused by your authentication middleware not properly validating the token expiry. Let me take a look and suggest a fix."

🦍 Ape Claude

"Bug in auth middleware. Token expiry check use < not <=. Fix:"

Same fix. Less word. Brain still big.
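The savings above can be sanity-checked with a rough sketch. Word count is only a proxy (real token counts need a tokenizer such as tiktoken), but it tracks the same trend for plain prose; the strings below are the React example quoted above.

```python
# Rough sanity check of the compression ratio above.
# Word count is a proxy; real token counts need a tokenizer (e.g. tiktoken).

normal = (
    "The reason your React component is re-rendering is likely because "
    "you're creating a new object reference on each render cycle. When you "
    "pass an inline object as a prop, React's shallow comparison sees it as "
    "a different object every time, which triggers a re-render. I'd "
    "recommend using useMemo to memoize the object."
)
ape = "New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo."

def words(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

saved = 1 - words(ape) / words(normal)
print(f"normal: {words(normal)} words, ape: {words(ape)} words, saved ~{saved:.0%}")
```

On these strings the proxy lands near ~69% saved, in the same ballpark as the quoted 69 vs 19 token figures.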

Pick your troop level:

🪶 Lite

"Your component re-renders because you create a new object reference each render. Inline object props fail shallow comparison every time. Wrap it in useMemo."

🪨 Full

"New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo."

🔥 Ultra

"Inline obj prop → new ref → re-render. useMemo."

⚡ Micro

"New ref each render. useMemo."

Same answer. You pick how many word.

```
┌─────────────────────────────────────┐
│  TOKENS SAVED          ████████ 75% │
│  TECHNICAL ACCURACY    ████████ 100%│
│  SPEED INCREASE        ████████ ~3x │
│  VIBES                 ████████ OOG │
└─────────────────────────────────────┘
```
- Faster response: less token to generate = speed go brrr
- Easier to read: no wall of text, just the answer
- Same accuracy: all technical info kept, only fluff removed (science say so)
- Save money: fewer output tokens = less cost
- Fun: every code review become comedy

## Install

### Claude Code (recommended)

Install as a plugin, which includes skills + auto-loading hooks (ape activates every session, mode badge tracks `/ape ultra` etc.):

```sh
claude plugin marketplace add JuliusBrussee/ape
claude plugin install ape@ape
```

### Any agent (Claude Code, Cursor, Copilot, Windsurf, Cline, Codex)

```sh
npx skills add JuliusBrussee/ape
```

For a specific agent: `npx skills add JuliusBrussee/ape -a cursor`

> [!NOTE]
> `npx skills` installs skills only (no hooks). For Claude Code auto-loading hooks, use the plugin install above or run `bash hooks/install.sh`.

### Codex

1. Clone repo → Open Codex in repo → `/plugins` → Search Ape → Install

> [!NOTE]
> Windows Codex users: Clone repo → VS Code → Codex Settings → Plugins → find Ape under the local marketplace → Install → Reload Window. Also run `git config core.symlinks true` before cloning (requires developer mode or admin).

Install once. Use in all sessions after that. One troop. That it.

### Optional: Statusline Badge

Add a `[APE:ULTRA]` badge to your statusline showing which mode is active. See `hooks/README.md` for the snippet.

## Usage

Trigger with:

- `/ape` or Codex `$ape`
- "talk like ape"
- "ape mode"
- "less tokens please"

Stop with: "stop ape" or "normal mode"

## Intensity Levels

| Level | Trigger | What it do |
|-------|---------|------------|
| Lite  | `/ape lite`  | Drop filler, keep grammar. Professional but no fluff |
| Full  | `/ape full`  | Default ape. Drop articles, fragments, sparse ape tone |
| Ultra | `/ape ultra` | Maximum compression. Telegraphic. Abbreviate hard |
| Micro | `/ape micro` | Answer only. Minimal words. No framing |

Level stick until you change it or session end.

## Ape Skills

| Skill | What it do | Trigger |
|-------|------------|---------|
| ape-commit | Terse commit messages. Conventional Commits. ≤50 char subject. Why over what. | `/ape-commit` |
| ape-review | One-line PR comments: `L42: 🔴 bug: user null. Add guard.` No throat-clearing. | `/ape-review` |

### ape-compress

Ape make Claude speak with fewer tokens. Compress make Claude read fewer tokens.

Your CLAUDE.md loads on every session start. Ape Compress rewrites memory files into ape-speak so Claude reads less, without you losing the human-readable original.

```
/ape:compress CLAUDE.md

CLAUDE.md          ← compressed (Claude reads this every session: fewer tokens)
CLAUDE.original.md ← human-readable backup (you read and edit this)
```
| File | Original (tokens) | Compressed (tokens) | Saved |
|------|------------------:|--------------------:|------:|
| claude-md-preferences.md | 706 | 285 | 59.6% |
| project-notes.md | 1145 | 535 | 53.3% |
| claude-md-project.md | 1122 | 687 | 38.8% |
| todo-list.md | 627 | 388 | 38.1% |
| mixed-with-code.md | 888 | 574 | 35.4% |
| **Average** | 898 | 494 | 45% |
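For clarity, the Saved column is just `(original - compressed) / original`, and the 45% average is the mean of the per-file percentages. A small sketch, with the file names and counts copied from the table above:

```python
# Derivation of the "Saved" column: saved % = (original - compressed) / original.
files = {
    "claude-md-preferences.md": (706, 285),
    "project-notes.md": (1145, 535),
    "claude-md-project.md": (1122, 687),
    "todo-list.md": (627, 388),
    "mixed-with-code.md": (888, 574),
}

def saved_pct(original: int, compressed: int) -> float:
    """Percentage of input tokens removed by compression."""
    return (original - compressed) / original * 100

for name, (orig, comp) in files.items():
    print(f"{name}: {saved_pct(orig, comp):.1f}%")

# The quoted 45% is the mean of the per-file savings.
average = sum(saved_pct(o, c) for o, c in files.values()) / len(files)
print(f"average: {average:.0f}%")
```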

Code blocks, URLs, file paths, commands, headings, dates, version numbers: anything technical passes through untouched. Only prose gets compressed. See the full ape-compress README for details. Security note: Snyk flags this as High Risk due to subprocess/file patterns; it's a false positive.

## Benchmarks

Historical token counts from an earlier benchmark run (reproduce it yourself):

| Task | Normal (tokens) | Ape (tokens) | Saved |
|------|----------------:|-------------:|------:|
| Explain React re-render bug | 1180 | 159 | 87% |
| Fix auth middleware token expiry | 704 | 121 | 83% |
| Set up PostgreSQL connection pool | 2347 | 380 | 84% |
| Explain git rebase vs merge | 702 | 292 | 58% |
| Refactor callback to async/await | 387 | 301 | 22% |
| Architecture: microservices vs monolith | 446 | 310 | 30% |
| Review PR for security issues | 678 | 398 | 41% |
| Docker multi-stage build | 1042 | 290 | 72% |
| Debug PostgreSQL race condition | 1200 | 232 | 81% |
| Implement React error boundary | 3454 | 456 | 87% |
| **Average** | 1214 | 294 | 65% |

Range: 22%–87% savings across prompts.
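The quoted average and range come straight from the Saved column; a quick check, with the numbers copied from the table above:

```python
# "Saved" column from the benchmark table, in row order.
savings = [87, 83, 84, 58, 22, 30, 41, 72, 81, 87]

average = sum(savings) / len(savings)   # 64.5, quoted as 65%
low, high = min(savings), max(savings)  # the 22%-87% range
print(f"average {average:.1f}%, range {low}%-{high}%")
```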

> [!IMPORTANT]
> Ape only affects output tokens; thinking/reasoning tokens are untouched. Ape no make brain smaller. Ape make mouth smaller. Biggest win is readability and speed; cost savings are a bonus.

A March 2026 paper "Brevity Constraints Reverse Performance Hierarchies in Language Models" found that constraining large models to brief responses improved accuracy by 26 percentage points on certain benchmarks and completely reversed performance hierarchies. Verbose not always better. Sometimes less word = more correct.

## Evals

Ape not just claim compression. Ape measure it.

The evals/ directory has a three-arm eval harness that measures real token compression against a proper control: not just "verbose vs skill" but "terse vs skill". Because comparing ape to verbose Claude conflate the skill with generic terseness. That cheating. Ape not cheat.

```sh
# Run the eval (needs claude CLI)
uv run python evals/llm_run.py

# Read results (no API key, runs offline)
uv run --with tiktoken python evals/measure.py
```
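To make the three-arm idea concrete, here is an illustrative sketch with made-up token counts (not real eval output): savings measured against a verbose baseline overstate what the skill itself contributes, so the harness also compares against a terse control.

```python
# Hypothetical token counts for the three arms (illustration only).
arms = {"verbose": 1200, "terse_control": 500, "ape_skill": 300}

def saving_vs(baseline: str, arm: str) -> float:
    """Fractional token saving of `arm` relative to `baseline`."""
    return 1 - arms[arm] / arms[baseline]

naive = saving_vs("verbose", "ape_skill")         # conflates skill with terseness
honest = saving_vs("terse_control", "ape_skill")  # isolates the skill's effect
print(f"vs verbose: {naive:.0%}, vs terse control: {honest:.0%}")
```

With these made-up numbers the naive comparison claims 75% while the honest one credits the skill with 40%; the gap is exactly what the control arm exists to expose.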

Snapshots are local generated artifacts and are not committed. Run the eval when you want fresh numbers. Add a skill, add a prompt: harness picks it up automatically.

## Star This Repo

If ape save you mass token, mass money: leave mass star. ⭐


## Also by Julius Brussee

- Cavekit: specification-driven development for Claude Code. Ape language → specs → parallel builds → working software.
- Revu: local-first macOS study app with FSRS spaced repetition, decks, exams, and study guides. revu.cards

## License

MIT: free like mass mammoth on open plain.
