builtin: add offset and line_count pagination to read_file and read_multiple_files #1828
Merged
trungutt merged 6 commits into docker:main on Feb 24, 2026
Conversation
Large file reads could produce tool results exceeding 150K characters, causing the total session context to exceed 100K tokens and trigger 504 Gateway Timeout errors from the production proxy infrastructure. Apply the existing limitOutput() (30,000 char cap) to read_file and read_multiple_files, consistent with shell, sandbox, and API tools. For read_multiple_files the limit is applied per-file so the model knows which specific file was truncated.
Review Summary
The implementation correctly applies limitOutput() to prevent oversized tool results. However, there's one issue with metadata accuracy: when files are truncated, the LineCount metadata reflects only the truncated content rather than the actual file line count, which could be misleading to users.
LineCount was being calculated on the already-truncated string, so a 10,000-line file truncated to ~1,000 lines would report LineCount as ~1,000. Calculate it on the original content before applying limitOutput.
rumpl (Member) reviewed on Feb 23, 2026:
I think we should not limit file reads; instead we should add offset and size parameters and instruct the LLM to use them. If we limit as bluntly as this, there is no way an agent could read a whole file, which would limit the agent severely.
…ultiple_files
Replace the blunt limitOutput() cap with proper pagination: both tools now accept offset (1-based line number) and line_count parameters so the model can read large files incrementally rather than receiving a truncated blob. When a subset is returned, a header is prepended with the line range and total line count so the model knows how much content remains. The TotalLines metadata field (renamed from LineCount) always reflects the full file regardless of the window requested. Tool descriptions and instructions are updated to guide the model to paginate large files.
…ange
The new offset and line_count parameters and updated descriptions are now reflected in all VCR cassettes for the affected providers (OpenAI, Anthropic, Gemini, Mistral).
Move usage guidance to Instructions() only, keeping tool descriptions concise as suggested in review feedback.
krissetto approved these changes on Feb 24, 2026.
- read_file and read_multiple_files had no output size limit, so a single call reading a large file could return 150K+ characters in one tool result, pushing the session context over 100K tokens (which is pretty big).
- Replace the blunt limitOutput() cap with proper pagination: both tools now accept offset (1-based line number) and line_count parameters so the model can read large files incrementally.
- When a subset is returned, a header is prepended (e.g. [Showing lines 1-150 of 250 from AGENTS.md]) so the model knows how much content remains and can request the next chunk.
- ReadFileMeta.LineCount renamed to TotalLines; it always reflects the full file regardless of the window requested.
- Tool descriptions and Instructions() updated to guide the model to paginate large files.