feat(context): per-tool output caps + stale tool-result pruning #38
Merged
Two independent knobs for in-session token hygiene:
1. Tool-level output caps
- read_file gains offset/limit params (line-based); defaults to 500 lines
capped at ~20K chars, and includes a pagination hint in the truncated
response so the model knows how to page.
- run_shell and claude-subagent switch from head-only / tail-only to
head+tail truncation (70/30) via a shared helper
src/tools/truncate.ts#headAndTail — so neither the initial plan
nor the final error/answer is silently lost.
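The shared helper might look roughly like the sketch below. Only the 70/30 split and the file path `src/tools/truncate.ts#headAndTail` come from this PR; the signature, default ratio parameter, and marker format are assumptions for illustration.

```typescript
// Hypothetical sketch of headAndTail (names and marker format assumed).
// Keeps ~70% of the character budget from the start and ~30% from the end,
// joined by a marker, so both the opening plan and the closing error survive.
export function headAndTail(
  text: string,
  maxChars: number,
  headRatio = 0.7,
): string {
  if (text.length <= maxChars) return text;
  const marker = `\n... [truncated ${text.length - maxChars} chars] ...\n`;
  const headLen = Math.floor(maxChars * headRatio);
  const tailLen = maxChars - headLen;
  return text.slice(0, headLen) + marker + text.slice(text.length - tailLen);
}
```

Compared with head-only or tail-only truncation, this preserves the two spans that matter most in long shell/subagent output: how the run started and how it ended.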
2. pruneStaleToolResults in transformContext
- Runs every turn before compaction. Preserves the last FRESH_TURNS
user turns (default 4, env MAX_FRESH_TURNS) intact; older
toolResult bodies are replaced with a short stub that keeps the
tool name, length, and a head prefix for continuity.
- Structure-preserving: toolCallId/role/isError untouched so
tool_use ↔ tool_result pairing stays valid for Anthropic's API.
- Idempotent (second pass returns same reference).
Before: a single heavy gh-diff / browser-scrape early in a session
re-billed itself at full size on every subsequent iteration. After:
stale bodies collapse to a ~200-char breadcrumb after a few turns.
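The pruning pass described above could be sketched roughly as follows. The message shape, the skip-small threshold, and the stub wording are assumptions, not the actual implementation; the fresh-window default of 4, the ~200-char head prefix, the preserved toolCallId/role/isError fields, and the same-reference idempotence come from this PR.

```typescript
// Hypothetical message shape (assumed for illustration).
type Msg = {
  role: "user" | "assistant" | "toolResult";
  content: string;
  toolCallId?: string;
  toolName?: string;
  isError?: boolean;
};

const MIN_PRUNE_CHARS = 400; // skip-small threshold (assumed value)
const STUB_HEAD = 200; // head prefix kept in the stub

// freshTurns defaults to 4; the real code reads MAX_FRESH_TURNS from the env.
export function pruneStaleToolResults(messages: Msg[], freshTurns = 4): Msg[] {
  // Find the index of the user turn that opens the "fresh" window.
  const userIdxs = messages
    .map((m, i) => (m.role === "user" ? i : -1))
    .filter((i) => i >= 0);
  const cutoff =
    userIdxs.length > freshTurns ? userIdxs[userIdxs.length - freshTurns] : 0;

  let changed = false;
  const out = messages.map((m, i) => {
    if (i >= cutoff || m.role !== "toolResult") return m;
    if (m.content.length <= MIN_PRUNE_CHARS) return m; // skip-small
    changed = true;
    return {
      ...m, // toolCallId / role / isError untouched
      content:
        `[stale ${m.toolName ?? "tool"} result, ${m.content.length} chars] ` +
        m.content.slice(0, STUB_HEAD),
    };
  });
  // Idempotent: a pass that changes nothing returns the same reference.
  return changed ? out : messages;
}
```

Because the stub is itself below the skip-small threshold, a second pass finds nothing to do and returns its input unchanged, which is what makes the transform safe to run on every turn.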
Tests: 7 new in tests/context.test.ts covering no-op, prune-stale,
keep-fresh, skip-small, idempotence, and structural preservation.
Updated existing claude-subagent truncation test to the head+tail
behaviour.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
Two independent levers for in-session token hygiene:

1. Tool-level output caps
   - `read_file` gains `offset`/`limit` params (line-based); defaults to 500 lines capped at ~20K chars, with a pagination hint in the truncated response so the model knows how to page.
   - `run_shell` and `claude-subagent` switch to head+tail truncation (70/30) via a shared helper, `src/tools/truncate.ts#headAndTail`, so neither the initial plan nor the final error/answer is silently lost.
2. `pruneStaleToolResults` in `transformContext`
   - Preserves the last `FRESH_TURNS` user turns (default 4, override via `MAX_FRESH_TURNS`); older `toolResult` bodies are replaced with a short stub keeping the tool name, length, and a head prefix.
   - `toolCallId`/`role`/`isError` are untouched, so `tool_use` ↔ `tool_result` pairing remains valid for Anthropic's API.

Why

Before: a single heavy `gh pr diff` / browser scrape early in a session re-bills itself at full size on every subsequent agent iteration. After: stale bodies collapse to a ~200-char breadcrumb after 4 turns.

This is the biggest in-session lever other than caching: it operates on the active `messages` array on every turn, not just at compaction time.

Test plan

- New tests in `tests/context.test.ts` (no-op, prune-stale, keep-fresh, skip-small, idempotence, structural preservation)
- `claude-subagent` truncation test updated to verify head+tail
- `npm test`: 78/78 pass
- `npm run build` clean
- `context_info` shows token drop after turn 4

🤖 Generated with Claude Code