Conversation
Summary of Changes

Hello @bestony, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

The core of this pull request is a detailed requirements document addressing the context-length limits commonly hit in LLM (large language model) calls. By proposing unified token metering, budgeting, and automatic compression strategies, the plan aims to improve the stability of LLM workflows, lower failure rates, and optimize cost. It outlines the current challenges and lays the groundwork for future development so that LLM calls can handle large inputs more effectively.
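As a rough illustration of the "unified token metering and budgeting" idea described above (not code from this repository; the limits, names, and heuristic below are all assumptions), a fail-fast budget check before an LLM call might look like this:

```python
# Illustrative sketch only: the limits, names, and heuristic below are assumptions.

MODEL_CONTEXT_LIMIT = 128_000  # assumed context window of the target model, in tokens
RESPONSE_RESERVE = 4_000       # tokens reserved for the model's reply


class ContextOverBudget(Exception):
    """Raised when the assembled prompt exceeds the configured budget."""


def estimate_tokens(text: str) -> int:
    # Rough heuristic (~4 characters per token); a real implementation would
    # use the model's own tokenizer for exact counts.
    return max(1, len(text) // 4)


def check_budget(prompt_parts: list[str]) -> int:
    """Fail fast, instead of letting the API error out, when the prompt is too large."""
    total = sum(estimate_tokens(p) for p in prompt_parts)
    budget = MODEL_CONTEXT_LIMIT - RESPONSE_RESERVE
    if total > budget:
        raise ContextOverBudget(f"prompt is ~{total} tokens, budget is {budget}")
    return total
```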
> ## 4. User Problems
>
> - Multiple workflows in self splice issue/PR/plan/log content directly into the LLM prompt, but the current implementation has no unified token metering, context budget, or automatic compression strategy. Once the input exceeds the model's context length, it triggers API errors or implicit truncation, causing failed schedule runs, wasted cost, and non-reproducible output differences.
[Backlog Discovery]
Reviewer: Product Manager
- Scope is unclear: the document refers to "all LLM calls" as a prerequisite, but it does not list the concrete workflows/entry points, which makes the coverage hard to accept-test. Recommend spelling out the in-scope workflow list in the requirements (e.g. backlog-discovery, reviewer, fixer) together with an out-of-scope note.
- The compression strategy lacks a defined default behavior: the acceptance criteria only say "configurable" without specifying the default strategy, retention priority, or quality guardrails, which hurts testability and consistency. Recommend adding a default strategy and retention order (titles / summaries / the most recent N entries, etc.) and defining a minimum acceptable output quality; a sketch of one possible default follows after this list.
- Delivery risk: the new fail-fast behavior may raise the failure rate, yet there is no rollback or kill-switch plan. Recommend adding a per-workflow or per-repository feature flag / staged rollout plan and calling it out in the acceptance criteria.
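For illustration only, here is one possible shape for the default compression policy and retention order asked for above; the item fields, stage thresholds, and the keep_recent default are assumptions, not anything defined in this PR:

```python
# Illustrative default policy; stage order, item shape, and keep_recent are assumptions.

def compress(items: list[dict], budget_tokens: int, estimate) -> list[dict]:
    """Apply compression stages in a fixed order until the items fit the budget.

    Retention priority (highest first): titles, summaries, the most recent N full bodies.
    Each item is assumed to carry "title", "summary", and "text" fields.
    """
    keep_recent = 5  # assumed default for "most recent N entries"

    def total(xs: list[dict]) -> int:
        return sum(estimate(x["text"]) for x in xs)

    # Stage 1: older items fall back to their summaries; recent items keep full text.
    if total(items) > budget_tokens:
        cut = max(0, len(items) - keep_recent)
        items = [
            {**x, "text": x.get("summary") or x["text"][:200]} if i < cut else x
            for i, x in enumerate(items)
        ]

    # Stage 2: older items fall back to titles only.
    if total(items) > budget_tokens:
        cut = max(0, len(items) - keep_recent)
        items = [
            {**x, "text": x["title"]} if i < cut else x for i, x in enumerate(items)
        ]

    # Stage 3: drop the oldest items entirely, but never go below the most recent
    # one (a crude stand-in for the "minimum output quality" guard).
    while total(items) > budget_tokens and len(items) > 1:
        items = items[1:]
    return items
```

The ordering matters: cheaper, lossless-ish steps (summaries, titles) run before anything is dropped outright, which keeps the output as reproducible as possible under a given budget.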
[Reviewer Workflow] Requirement value assessment
Value points
Risks and suggestions
[Backlog Discovery]
backlog/20260221091313-llm-context-window-budgeting-and-auto-summarization.md
Update Record 2026-02-21 17:15:50 +08:00
Update summary:
- Refined user problems into three concrete pain points: lack of token budget/compression, late-stage failures, and inconsistent truncation strategies.
- Added a placeholder external evidence item to capture 30-day context-length failure counts, rerun cost, and affected workflows.
- Introduced a Scope section clarifying in-scope workflows (backlog-discovery, reviewer/product reviewer, fixer) and out-of-scope items.
- Expanded acceptance criteria with ordered default compression rules, token change reporting in job summaries, feature flags/rollback, and a minimum output quality baseline (a sketch of the job-summary reporting follows below).
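As a sketch of the "token change reporting in job summaries" criterion (an assumption about how it could be wired up, not the actual implementation), a GitHub Actions step could append the before/after token counts to the run summary via the standard GITHUB_STEP_SUMMARY file:

```python
import os


def report_token_change(workflow: str, tokens_before: int, tokens_after: int) -> None:
    """Append a one-line compression report to the GitHub Actions job summary.

    GITHUB_STEP_SUMMARY is provided by the Actions runner; when it is absent
    (e.g. in a local run), fall back to stdout.
    """
    saved = tokens_before - tokens_after
    line = (
        f"- `{workflow}`: prompt compressed from ~{tokens_before} "
        f"to ~{tokens_after} tokens (saved ~{saved})\n"
    )
    summary_path = os.environ.get("GITHUB_STEP_SUMMARY")
    if summary_path:
        with open(summary_path, "a", encoding="utf-8") as fh:
            fh.write(line)
    else:
        print(line, end="")
```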
Status: committed
Commit: e02ce756a645ac1b46a038445dd1707a27e384a7
Trigger: workflow_run.completed
Comment: https://github.com/bestony/self/actions/runs/22254205260
Actor: @github-actions[bot]
Updated At (Asia/Shanghai): 2026-02-21 17:15:50 +08:00