
fix: attempt to fix active reply causing normal @ conversations to lose memory and polluting group-chat context #7634

Closed
lingyun14beta wants to merge 4 commits into AstrBotDevs:master from
lingyun14beta:fix/ltm-active-reply-context-isolation

Conversation


@lingyun14beta lingyun14beta commented Apr 17, 2026

In a group chat with both group-chat context awareness and active reply enabled, the bot also becomes amnesic in normal @ conversations: no matter how much has been discussed before, any history beyond max_cnt entries is completely ignored. Investigation traced the problem to on_req_llm and record_llm_resp_to_ltm not distinguishing what triggered a request, so the active-reply handling logic was mistakenly applied to all LLM requests.

Modifications

main.py

  • When an active reply is triggered, call event.set_extra("_ltm_active_reply", True) before yield event.request_llm(...), so the later filter stage can correctly identify how this request was triggered
  • record_llm_resp_to_ltm checks this flag; responses to active replies are no longer written into session_chats, so chatroom-style content cannot pollute group-chat context memory
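The marking pattern described above can be sketched as follows. This is a minimal, self-contained sketch, not the actual AstrBot code: the Event class stands in for AstrMessageEvent (only its set_extra / get_extra API, which the PR relies on, is modeled), and trigger_active_reply / record_llm_resp_to_ltm are simplified stand-ins for the real handlers in main.py.

```python
class Event:
    """Stand-in for AstrMessageEvent: an event carrying a free-form extras dict."""

    def __init__(self):
        self._extras = {}

    def set_extra(self, key, value):
        self._extras[key] = value

    def get_extra(self, key, default=None):
        return self._extras.get(key, default)


def trigger_active_reply(event):
    # Flag the event *before* issuing the LLM request so downstream
    # filters can tell this request was active-reply driven.
    event.set_extra("_ltm_active_reply", True)
    # ... in main.py, `yield event.request_llm(...)` would follow here


def record_llm_resp_to_ltm(event, session_chats, origin, response):
    # Responses to active replies are not written back into
    # session_chats, so chatroom-style content cannot pollute the
    # group's long-term context memory.
    if event.get_extra("_ltm_active_reply", False):
        return
    session_chats.setdefault(origin, []).append(response)
```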

long_term_memory.py

  • The condition in on_req_llm changes from cfg["enable_active_reply"] to cfg["enable_active_reply"] and is_active_reply
  • Only requests genuinely triggered by an active reply enter the chatroom rewrite branch
  • Normal @ requests keep the full req.contexts, so long-term session memory is unaffected
  • This is NOT a breaking change.
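The tightened gate in on_req_llm can be sketched like this. It is a hedged approximation, not the real long_term_memory.py: event extras are modeled as a plain dict, req as a dict, and the "[chatroom]" prompt prefix is a placeholder for whatever rewrite the actual branch performs.

```python
LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply"


def on_req_llm(event_extras, req, cfg):
    # Only a request explicitly flagged as active-reply triggered enters
    # the chatroom rewrite branch; everything else keeps its contexts.
    is_active_reply = event_extras.get(LTM_ACTIVE_REPLY_KEY, False)
    if cfg["enable_active_reply"] and is_active_reply:
        req["contexts"] = []  # chatroom rewrite drops prior history
        req["prompt"] = "[chatroom] " + req["prompt"]
    return req
```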

Screenshots or Test Results

(Test results with active reply enabled)
(screenshot)
Verification steps:

  1. Enable active reply
  2. @ the bot and chat for a few turns
  3. Send /history to check whether the messages are still there

Checklist

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Differentiate long-term memory handling between active replies and normal @ mentions in group chats to preserve expected context behavior.

Bug Fixes:

  • Prevent normal @ conversations from losing historical context when active reply is enabled by only applying chatroom-style prompt rewriting to requests actually triggered by active replies.
  • Avoid polluting group chat long-term memory with chatroom-style responses by excluding active-reply-generated messages from session history.

Enhancements:

  • Introduce an internal flag on events to track active-reply-triggered LLM requests and clear it after use for safe event reuse.

Chores:

  • Add a shared constant for the long-term memory active reply marker key.

@auto-assign auto-assign bot requested review from Fridemn and LIghtJUNction April 17, 2026 16:18
@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. area:core The bug / feature is about astrbot's core, backend labels Apr 17, 2026
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • The _ltm_active_reply extra key string is now shared across main.py and long_term_memory.py; consider extracting this into a shared constant or helper to avoid typos and make future refactors safer.
  • After using the _ltm_active_reply flag (e.g., once the active-reply request/response cycle is done), consider explicitly clearing it on the event to avoid accidental leakage if the same event object is reused or passed through additional filters.
## Individual Comments

### Comment 1
<location path="astrbot/builtin_stars/astrbot/long_term_memory.py" line_range="159-166" />
<code_context>

         cfg = self.cfg(event)
-        if cfg["enable_active_reply"]:
+        is_active_reply = event.get_extra("_ltm_active_reply", False)
+
+        if cfg["enable_active_reply"] and is_active_reply:
</code_context>
<issue_to_address>
**suggestion:** Avoid repeating the magic string key for `_ltm_active_reply` in multiple places.

Since this key is also used in `main.py`, consider defining it once (e.g., as a constant on the event class or in a small config/constants module) to avoid typos and keep future changes in sync.

Suggested implementation:

```python
        chats_str = "\n---\n".join(self.session_chats[event.unified_msg_origin])

        LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply"
        cfg = self.cfg(event)
        is_active_reply = event.get_extra(LTM_ACTIVE_REPLY_KEY, False)

```

To fully align with the review comment and avoid divergence between this file and `main.py`, you should:

1. Introduce a shared constant (e.g. `LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply"`) in a small config/constants module (for example `astrbot/config/constants.py`) or as a class-level constant on the event class.
2. In `astrbot/builtin_stars/astrbot/long_term_memory.py`, remove the local `LTM_ACTIVE_REPLY_KEY` definition above and import/use the shared constant instead.
3. In `main.py`, replace any direct usage of the string `"_ltm_active_reply"` with the imported shared constant to keep both locations in sync and avoid typos.
</issue_to_address>


Comment thread astrbot/builtin_stars/astrbot/long_term_memory.py Outdated

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a mechanism to distinguish active replies from regular LLM requests by tagging the event object with a specific flag. This ensures that chatroom-style prompt rewriting and context clearing only occur for active replies, and prevents these responses from polluting the long-term memory. A review comment points out a potential issue where the flag, being bound to the event object, might inadvertently affect LLM requests from other plugins triggered by the same event, leading to unintended side effects.

-        if cfg["enable_active_reply"]:
+        is_active_reply = event.get_extra("_ltm_active_reply", False)
+
+        if cfg["enable_active_reply"] and is_active_reply:
Contributor


Severity: medium

The flag retrieved via event.get_extra is bound to the event object. If the current message event triggers multiple LLM requests (for example, another plugin also listens to the message and calls the LLM), those requests will also carry the flag when they pass through the on_req_llm filter, so their contexts will be wrongly cleared and their prompts rewritten. Consider associating the flag with the specific req object in main.py (for example by storing id(req)), or adding a stricter check in on_req_llm, to avoid side effects on other plugins' requests.
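The reviewer's id(req) suggestion can be sketched as a pair of small helpers. This is a hypothetical illustration, not code from the PR: extras is modeled as a plain dict, and the helper names (mark_active_reply_request, is_active_reply_request) are invented here for clarity. Note that id() is only meaningful within one process and while the req object stays alive.

```python
LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply"


def mark_active_reply_request(extras, req):
    # Bind the marker to this specific request object, not just the
    # event, so requests issued by other plugins on the same event
    # are not misclassified.
    extras[LTM_ACTIVE_REPLY_KEY] = id(req)


def is_active_reply_request(extras, req):
    # True only for the exact request object that was marked.
    return extras.get(LTM_ACTIVE_REPLY_KEY) == id(req)
```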

@lingyun14beta
Author

@sourcery-ai review

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • The LTM_ACTIVE_REPLY_KEY logic currently assumes only one active-reply-triggered LLM request per event; if multiple LLM requests can be issued on the same AstrMessageEvent before after_message_sent runs, consider guarding against the latest event.set_extra overwriting the previous one and misclassifying other requests.
  • Storing id(req) in event.extra for correlation is somewhat implicit; consider adding a short comment or helper (e.g., mark_active_reply_request(event, req)) to encapsulate this pattern and clarify that it relies on object identity within a single process.

@lingyun14beta
Author

The marking approach has since been updated to store the id, avoiding misclassification when the same event triggers multiple LLM requests.

@lingyun14beta lingyun14beta changed the title from "fix(ltm): fix active reply causing normal @ conversations to lose memory and polluting group-chat context" to "fix: attempt to fix active reply causing normal @ conversations to lose memory and polluting group-chat context" Apr 17, 2026