fix(agent): auto-handoff when context length exceeds model limit (#147)
Merged
frostming merged 1 commit into bubbuild:main on Apr 7, 2026
Conversation
When tape history exceeds the model's context window, the LLM API returns a 400 error. Since the model never gets called, it cannot invoke tape.handoff to compress context — a deadlock. Add automatic handoff recovery in _agent_loop: detect context-length errors from the ToolAutoResult, perform tape.handoff to create an anchor that truncates visible history, then retry with the original prompt. Limited to 1 auto-retry to prevent infinite loops.
frostming approved these changes on Apr 7, 2026

Collaborator: Thank you for the contribution. Let's use this as a fallback approach.

Contributor (Author): Is this an automatic review and reply from your bot?

Collaborator: Human, our bot uses a dedicated account @dagebot
Changes
- Add an automatic context-overflow recovery mechanism to _agent_loop: tape.handoff creates an anchor that truncates visible history, then the original prompt is retried.
- Add an _is_context_length_error helper that matches common context-overflow error patterns via regex.

Motivation
When tape history exceeds the model's context window (e.g. 202K tokens against a 200K limit), the LLM API returns a 400 error outright. Although the system prompt instructs the LLM to use tape.handoff to compress context, the LLM is never called at all, so a deadlock forms: the session is permanently broken and every subsequent message fails.

Testing