fix: Attempt to fix active replies causing amnesia in normal @ conversations and polluting group chat context #7634
lingyun14beta wants to merge 4 commits into AstrBotDevs:master from
Conversation
Hey - I've found 1 issue, and left some high level feedback:
- The `_ltm_active_reply` extra key string is now shared across `main.py` and `long_term_memory.py`; consider extracting this into a shared constant or helper to avoid typos and make future refactors safer.
- After using the `_ltm_active_reply` flag (e.g., once the active-reply request/response cycle is done), consider explicitly clearing it on the event to avoid accidental leakage if the same event object is reused or passed through additional filters.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `_ltm_active_reply` extra key string is now shared across `main.py` and `long_term_memory.py`; consider extracting this into a shared constant or helper to avoid typos and make future refactors safer.
- After using the `_ltm_active_reply` flag (e.g., once the active-reply request/response cycle is done), consider explicitly clearing it on the event to avoid accidental leakage if the same event object is reused or passed through additional filters.
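The two suggestions above can be sketched together. This is a minimal, hedged sketch: the `Event` class stands in for `AstrMessageEvent`, `set_extra`/`get_extra` match the names used in the diff, and `clear_extra` is an assumed name for whatever removal API the event would expose.

```python
# Minimal sketch of the suggested pattern: a shared constant plus explicit
# clearing of the flag once the active-reply cycle finishes. The Event class
# is a stand-in for AstrMessageEvent; clear_extra is an assumed helper.

LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply"  # shared constant instead of a magic string


class Event:
    def __init__(self):
        self._extras = {}

    def set_extra(self, key, value):
        self._extras[key] = value

    def get_extra(self, key, default=None):
        return self._extras.get(key, default)

    def clear_extra(self, key):
        self._extras.pop(key, None)


def run_active_reply(event: Event) -> None:
    event.set_extra(LTM_ACTIVE_REPLY_KEY, True)
    try:
        # ... issue the LLM request and handle the response here ...
        pass
    finally:
        # Clear the flag so a reused event object cannot leak active-reply
        # state into later filters or other plugins.
        event.clear_extra(LTM_ACTIVE_REPLY_KEY)
```

The `try`/`finally` guarantees the flag is cleared even if the request raises, which is what "avoid accidental leakage" asks for.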
## Individual Comments
### Comment 1
<location path="astrbot/builtin_stars/astrbot/long_term_memory.py" line_range="159-166" />
<code_context>
cfg = self.cfg(event)
- if cfg["enable_active_reply"]:
+ is_active_reply = event.get_extra("_ltm_active_reply", False)
+
+ if cfg["enable_active_reply"] and is_active_reply:
</code_context>
<issue_to_address>
**suggestion:** Avoid repeating the magic string key for `_ltm_active_reply` in multiple places.
Since this key is also used in `main.py`, consider defining it once (e.g., as a constant on the event class or in a small config/constants module) to avoid typos and keep future changes in sync.
Suggested implementation:
```python
chats_str = "\n---\n".join(self.session_chats[event.unified_msg_origin])
LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply"
cfg = self.cfg(event)
is_active_reply = event.get_extra(LTM_ACTIVE_REPLY_KEY, False)
```
To fully align with the review comment and avoid divergence between this file and `main.py`, you should:
1. Introduce a shared constant (e.g. `LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply"`) in a small config/constants module (for example `astrbot/config/constants.py`) or as a class-level constant on the event class.
2. In `astrbot/builtin_stars/astrbot/long_term_memory.py`, remove the local `LTM_ACTIVE_REPLY_KEY` definition above and import/use the shared constant instead.
3. In `main.py`, replace any direct usage of the string `"_ltm_active_reply"` with the imported shared constant to keep both locations in sync and avoid typos.
</issue_to_address>
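To illustrate the single-source-of-truth refactor the comment proposes, here is a hedged sketch that collapses the two files into one script; `astrbot/config/constants.py` is only a proposed location, not an existing module, and the two functions are illustrative stand-ins for the real call sites.

```python
# Sketch of the shared-constant refactor. In the real codebase the constant
# would live in one shared module and both main.py and long_term_memory.py
# would import it instead of repeating the string literal.

LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply"  # defined exactly once


def mark_event(extras: dict) -> None:
    # main.py side: tag the event before yielding the LLM request.
    extras[LTM_ACTIVE_REPLY_KEY] = True


def is_active_reply(extras: dict) -> bool:
    # long_term_memory.py side: read via the same constant, never a raw literal.
    return extras.get(LTM_ACTIVE_REPLY_KEY, False)
```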
Code Review
This pull request introduces a mechanism to distinguish active replies from regular LLM requests by tagging the event object with a specific flag. This ensures that chatroom-style prompt rewriting and context clearing only occur for active replies, and prevents these responses from polluting the long-term memory. A review comment points out a potential issue where the flag, being bound to the event object, might inadvertently affect LLM requests from other plugins triggered by the same event, leading to unintended side effects.
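The guard this summary describes can be sketched as follows; the identifiers mirror the diff (`cfg["enable_active_reply"]`, `"_ltm_active_reply"`), while the wrapper function itself is purely illustrative.

```python
# Illustrative wrapper around the condition changed in this PR: chatroom-style
# prompt rewriting and context clearing should run only when the feature is
# enabled AND this particular request was triggered by active reply.

def should_apply_active_reply_logic(cfg: dict, extras: dict) -> bool:
    is_active_reply = extras.get("_ltm_active_reply", False)
    return bool(cfg["enable_active_reply"] and is_active_reply)
```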
```diff
- if cfg["enable_active_reply"]:
+ is_active_reply = event.get_extra("_ltm_active_reply", False)
+
+ if cfg["enable_active_reply"] and is_active_reply:
```
@sourcery-ai review
Hey - I've left some high level feedback:
- The `LTM_ACTIVE_REPLY_KEY` logic currently assumes only one active-reply-triggered LLM request per event; if multiple LLM requests can be issued on the same `AstrMessageEvent` before `after_message_sent` runs, consider guarding against the latest `event.set_extra` overwriting the previous one and misclassifying other requests.
- Storing `id(req)` in `event.extra` for correlation is somewhat implicit; consider adding a short comment or helper (e.g., `mark_active_reply_request(event, req)`) to encapsulate this pattern and clarify that it relies on object identity within a single process.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `LTM_ACTIVE_REPLY_KEY` logic currently assumes only one active-reply-triggered LLM request per event; if multiple LLM requests can be issued on the same `AstrMessageEvent` before `after_message_sent` runs, consider guarding against the latest `event.set_extra` overwriting the previous one and misclassifying other requests.
- Storing `id(req)` in `event.extra` for correlation is somewhat implicit; consider adding a short comment or helper (e.g., `mark_active_reply_request(event, req)`) to encapsulate this pattern and clarify that it relies on object identity within a single process.
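A hedged sketch of the `mark_active_reply_request` helper proposed above; the `Event` class is a stand-in for `AstrMessageEvent`'s extras API, and the helper names are the reviewer's suggestion, not existing code. Correlating by `id(req)` is valid only within a single process and only while the request object stays alive, which is exactly the contract the helpers make explicit.

```python
# Helpers encapsulating the id(req) correlation pattern, so call sites do not
# repeat the magic string or the object-identity reasoning.

LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply"


class Event:  # stand-in for AstrMessageEvent's extras API
    def __init__(self):
        self._extras = {}

    def set_extra(self, key, value):
        self._extras[key] = value

    def get_extra(self, key, default=None):
        return self._extras.get(key, default)


def mark_active_reply_request(event: Event, req: object) -> None:
    # Remember which request object the active reply issued; id() is only
    # meaningful in-process and while req is alive.
    event.set_extra(LTM_ACTIVE_REPLY_KEY, id(req))


def is_active_reply_request(event: Event, req: object) -> bool:
    return event.get_extra(LTM_ACTIVE_REPLY_KEY) == id(req)
```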
The marking approach has since been updated to store an id, avoiding misclassification when the same event triggers multiple LLM requests.
With both group chat context awareness and active reply enabled in a group chat, the bot also behaves as if amnesic in normal @ conversations: no matter how much was discussed before, any history beyond `max_cnt` entries is completely ignored. Investigation traced the problem to `on_req_llm` and `record_llm_resp_to_ltm` not distinguishing the trigger source of a request, so the active-reply handling logic was mistakenly applied to all LLM requests.

Modifications / 改动点

- `main.py`: mark the event via `event.set_extra("_ltm_active_reply", True)` before `yield event.request_llm(...)`, so that later filter stages can correctly identify how this request was triggered; check this flag in `record_llm_resp_to_ltm` so that active-reply responses are no longer written to `session_chats`, preventing chatroom-style content from polluting the group chat context memory.
- `long_term_memory.py`: change the condition in `on_req_llm` from `cfg["enable_active_reply"]` to `cfg["enable_active_reply"] and is_active_reply`; `req.contexts` is left untouched, so long-term session memory is unaffected.

Screenshots or Test Results / 运行截图或测试结果
(Test results with active reply enabled)

Verification steps:
3. Send `/history` to check whether the messages still exist
Checklist / 检查清单
👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in `requirements.txt` and `pyproject.toml`.
😮 My changes do not introduce malicious code.
Summary by Sourcery

Differentiate long-term memory handling between active replies and normal @ mentions in group chats to preserve expected context behavior and avoid pollution.

Bug Fixes:

Enhancements:

Chores: