Fix/ltm: isolate active reply context from long-term memory and session history #7671
Conversation
Hey - I've left some high level feedback:
- Using `id(req)` in `LTM_ACTIVE_REPLY_KEY` to correlate the event and request feels a bit fragile (e.g., if the provider wraps or clones `req`, or if multiple LLM requests are issued for a single event); consider passing an explicit `is_active_reply` flag with the request or storing a more stable token instead of relying on object identity.
- `event.set_extra(LTM_ACTIVE_REPLY_KEY, None)` clears the marker by setting it to `None`, but `get_extra(..., None)` cannot distinguish between "never set" and "explicitly cleared"; if you ever need that distinction, consider removing the key entirely instead of assigning `None`.
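Both suggestions can be sketched together. This is a minimal illustration, not the PR's actual code: only `LTM_ACTIVE_REPLY_KEY`, `set_extra`, and `get_extra` come from the PR; the `Event` stand-in, `del_extra`, and the token helpers are hypothetical names.

```python
import uuid
from types import SimpleNamespace

LTM_ACTIVE_REPLY_KEY = "ltm_active_reply"  # illustrative key name

class Event:
    """Minimal stand-in for the framework event's extras API."""
    def __init__(self):
        self._extras = {}
    def set_extra(self, key, value):
        self._extras[key] = value
    def get_extra(self, key, default=None):
        return self._extras.get(key, default)
    def del_extra(self, key):
        self._extras.pop(key, None)

def mark_active_reply(event, req):
    # A shared random token survives provider-side wrapping or cloning of
    # `req` (as long as its attributes are copied), unlike id(req).
    token = uuid.uuid4().hex
    event.set_extra(LTM_ACTIVE_REPLY_KEY, token)
    req.active_reply_token = token  # explicit flag travels with the request
    return token

def is_active_reply(event, req):
    token = event.get_extra(LTM_ACTIVE_REPLY_KEY, None)
    return token is not None and getattr(req, "active_reply_token", None) == token

def clear_active_reply(event):
    # Delete the key instead of assigning None, so a later get_extra()
    # cannot confuse "explicitly cleared" with a stale marker value.
    event.del_extra(LTM_ACTIVE_REPLY_KEY)
```

With this shape, a cloned request still correlates (`SimpleNamespace(**vars(req))` keeps the token), and clearing removes the key entirely rather than leaving a `None` sentinel behind.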
Code Review
This pull request introduces a mechanism to distinguish and handle 'active replies' within the Long Term Memory (LTM) system. By using a unique key and request ID stored in the event's metadata, the system now ensures that only specific LLM requests trigger chatroom-style prompt rewriting and prevents these responses from polluting the session history. A review comment suggests that the check for active replies in the recording phase might be too broad, potentially affecting other plugins and leading to inconsistent history if certain configuration flags like group_icl_enable are disabled.
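The correlation mechanism summarized above can be sketched minimally. The event's extras are modeled here as a plain dict; the key name and helper names are assumptions, while the `id(req)` marker idea comes from the PR description.

```python
LTM_ACTIVE_REPLY_KEY = "ltm_active_reply_req_id"  # illustrative key name

def mark_request(extras: dict, req) -> None:
    # Per the PR description, main.py stores id(req) before yielding
    # the active-reply request.
    extras[LTM_ACTIVE_REPLY_KEY] = id(req)

def is_marked(extras: dict, req) -> bool:
    # Downstream hooks match on the same id, so only the active-reply
    # request/response pair is rewritten or skipped.
    return extras.get(LTM_ACTIVE_REPLY_KEY) == id(req)

# Example: the marked request matches; an unrelated live request does not.
req_a, req_b = object(), object()
extras = {}
mark_request(extras, req_a)
assert is_marked(extras, req_a)
assert not is_marked(extras, req_b)
```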
```python
if event.get_extra(LTM_ACTIVE_REPLY_KEY, None) is not None:
    return
```
The check `if event.get_extra(LTM_ACTIVE_REPLY_KEY, None) is not None` is too broad; it will skip recording for all LLM responses associated with this event if an active reply was triggered, potentially affecting other plugins. Additionally, this block should also verify the `group_icl_enable` setting. If `group_icl_enable` is `False` but `active_reply` is `True`, `ltm_enabled` remains `True`, causing bot responses to be recorded in `session_chats` while user messages are skipped (as `handle_message` is guarded), leading to an inconsistent history.
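A hedged sketch of the narrower guard this comment asks for. `cfg` as a config dict, `resp_req_id` as the id of the response's originating request, and the key name are all assumptions for illustration, not the PR's actual code.

```python
LTM_ACTIVE_REPLY_KEY = "ltm_active_reply_req_id"  # illustrative key name

def should_record_response(extras: dict, cfg: dict, resp_req_id: int) -> bool:
    # Mirror handle_message's guard: if group_icl_enable is off, user
    # messages aren't recorded, so bot responses shouldn't be either.
    if not cfg.get("group_icl_enable", False):
        return False
    # Skip only the exact active-reply response, not every response
    # attached to this event, so other plugins are unaffected.
    return extras.get(LTM_ACTIVE_REPLY_KEY) != resp_req_id
```

The two conditions are deliberately separate: the config guard keeps `session_chats` symmetric with `handle_message`, while the marker comparison narrows the skip to the single marked request.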
How did active reply manage to wipe out the context too 😭😭😭 Please also take a look at the other fix PR #7624
Attempts to fix #7622. When both `group_icl_enable` and `active_reply.enable` are enabled in group chat, the bot loses long-term memory in normal @ conversations: any history beyond `max_cnt` messages is silently ignored. The root cause is that `on_req_llm` and `record_llm_resp_to_ltm` do not distinguish between active-reply-triggered requests and regular @ requests, causing active-reply logic to be applied to all LLM requests.

Modifications / 改动点

- `constants.py` (new file)
- `long_term_memory.py`: gate `on_req_llm` from `cfg["enable_active_reply"]` to `cfg["enable_active_reply"] and is_active_reply`, so only active replies rewrite `req.contexts` and long-term session memory
- `main.py`: store `id(req)` in event extra as a marker before `yield`, so downstream filters can identify whether the current request was triggered by an active reply
- In `decorate_llm_req`, set `_ltm_active_reply_in_progress` when the request id matches, so `on_llm_response` can precisely identify the active reply response without affecting other plugins
- In `record_llm_resp_to_ltm`, use `_ltm_active_reply_in_progress` to skip recording only for the exact active reply response; also add a `group_icl_enable` guard to keep `session_chats` consistent with `handle_message`
- Pass `conversation=None` for active replies to prevent chatroom context from being persisted into `conv.history`
- Clear the marker in `after_message_sent` to avoid leakage if the event object is reused

Screenshots or Test Results / 运行截图或测试结果
Checklist / 检查清单

- 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
  / 如果 PR 中有新加入的功能,已经通过 Issue / 邮件等方式和作者讨论过。
- 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
  / 我的更改经过了良好的测试,并已在上方提供了“验证步骤”和“运行截图”。
- 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in `requirements.txt` and `pyproject.toml`.
  / 我确保没有引入新依赖库,或者引入了新依赖库的同时将其添加到 `requirements.txt` 和 `pyproject.toml` 文件相应位置。
- 😮 My changes do not introduce malicious code.
  / 我的更改没有引入恶意代码。
Summary by Sourcery
Separate active-reply-triggered LLM requests from regular conversations so that long-term memory and session history are preserved in group chats.
Bug Fixes:
Enhancements: