
Fix/ltm: isolate active reply context from long-term memory and session history#7671

Open
lingyun14beta wants to merge 3 commits into AstrBotDevs:master from lingyun14beta:fix/ltm-active-reply-context-isolation

Conversation


@lingyun14beta lingyun14beta commented Apr 19, 2026

Attempts to fix #7622. When both group_icl_enable and active_reply.enable are enabled in a group chat, the bot loses long-term memory in normal @ conversations — any history beyond max_cnt messages is silently ignored. The root cause is that on_req_llm and record_llm_resp_to_ltm do not distinguish active-reply-triggered requests from regular @ requests, so active-reply logic is applied to all LLM requests.

Modifications

constants.py (new file)

  • Extract the active-reply marker key as a shared constant to avoid magic strings across modules
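The shared constant might look like the following minimal sketch. The constant name LTM_ACTIVE_REPLY_KEY is taken from the review comments further down; the key's string value is an assumption for illustration.

```python
# constants.py -- shared marker key used to tag LLM requests that were
# triggered by an active reply, so every module references the same
# name instead of repeating a magic string.
# NOTE: the string value below is a guess; only the constant name
# appears in this PR's review comments.
LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply_req_id"
```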

long_term_memory.py

  • Change the branch condition in on_req_llm from cfg["enable_active_reply"] to cfg["enable_active_reply"] and is_active_reply
  • Only requests actually triggered by active replies enter the chatroom rewrite branch
  • Regular @ requests retain their full req.contexts and long-term session memory
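The tightened branch condition can be sketched as follows. This is a hypothetical standalone version: the real on_req_llm operates on astrbot's event and request objects, and the return values here are illustrative labels only.

```python
def on_req_llm(cfg: dict, is_active_reply: bool) -> str:
    """Sketch of the fixed branch condition.

    Before the fix, the chatroom rewrite ran whenever
    cfg["enable_active_reply"] was true. After the fix it also
    requires that this specific request was triggered by an
    active reply.
    """
    if cfg.get("enable_active_reply") and is_active_reply:
        # Only genuine active-reply requests take the chatroom
        # rewrite branch.
        return "chatroom_rewrite"
    # Regular @ requests keep their full req.contexts and
    # long-term session memory.
    return "keep_full_context"
```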

main.py

  • Store id(req) in event extra as a marker before yield, so downstream filters can identify whether the current request was triggered by an active reply
  • In decorate_llm_req, set _ltm_active_reply_in_progress when the request id matches, so on_llm_response can precisely identify the active reply response without affecting other plugins
  • In record_llm_resp_to_ltm, use _ltm_active_reply_in_progress to skip recording only for the exact active reply response; also add group_icl_enable guard to keep session_chats consistent with handle_message
  • Set conversation=None for active replies to prevent chatroom context from being persisted into conv.history
  • Clear both markers in after_message_sent to avoid leakage if the event object is reused
  • This is NOT a breaking change.
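The marker lifecycle described above can be sketched as below. The Event class here is a simplified stand-in for astrbot's real event object (only set_extra/get_extra are modeled), and the key strings are assumptions; the function names match the PR description.

```python
LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply_req_id"  # assumed key string
IN_PROGRESS_KEY = "_ltm_active_reply_in_progress"  # name from the PR text


class Event:
    """Minimal stand-in for astrbot's event extras API."""

    def __init__(self):
        self._extras = {}

    def set_extra(self, key, value):
        self._extras[key] = value

    def get_extra(self, key, default=None):
        return self._extras.get(key, default)


def mark_active_reply(event: Event, req) -> None:
    # Before yield: store id(req) so downstream filters can tell
    # whether the current request was triggered by an active reply.
    event.set_extra(LTM_ACTIVE_REPLY_KEY, id(req))


def decorate_llm_req(event: Event, req) -> None:
    # Set the in-progress flag only when the request id matches,
    # so on_llm_response can identify the active-reply response
    # without affecting other plugins' requests.
    if event.get_extra(LTM_ACTIVE_REPLY_KEY) == id(req):
        event.set_extra(IN_PROGRESS_KEY, True)


def record_llm_resp_to_ltm(event: Event) -> str:
    # Skip recording only for the exact active-reply response.
    if event.get_extra(IN_PROGRESS_KEY):
        return "skipped"
    return "recorded"


def after_message_sent(event: Event) -> None:
    # Clear both markers to avoid leakage if the event is reused.
    event.set_extra(LTM_ACTIVE_REPLY_KEY, None)
    event.set_extra(IN_PROGRESS_KEY, None)
```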

Screenshots or Test Results

The screenshot above shows `/history` output with active reply enabled after the fix — conversation history is preserved correctly with proper User/Assistant turns.

Checklist

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Fix separation of active-reply-triggered LLM requests from regular conversations to preserve long-term memory and session history in group chats.

Bug Fixes:

  • Ensure only active-reply-triggered LLM requests use the chatroom-style rewrite so normal @ conversations retain full context and long-term memory.
  • Prevent active-reply responses from being written into persistent session history to avoid polluting subsequent context.

Enhancements:

  • Mark active-reply LLM requests via an event extra flag and clear it after sending to avoid leaking state across reused events.
  • Extract a shared constant for the active-reply marker key to eliminate magic strings across modules.

@auto-assign auto-assign bot requested review from Soulter and anka-afk April 19, 2026 06:39
@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. area:core The bug / feature is about astrbot's core, backend labels Apr 19, 2026
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • Using id(req) in LTM_ACTIVE_REPLY_KEY to correlate the event and request feels a bit fragile (e.g., if the provider wraps or clones req, or if multiple LLM requests are issued for a single event); consider passing an explicit is_active_reply flag with the request or storing a more stable token instead of relying on object identity.
  • event.set_extra(LTM_ACTIVE_REPLY_KEY, None) clears the marker by setting it to None, but get_extra(..., None) cannot distinguish between "never set" and "explicitly cleared"; if you ever need that distinction, consider removing the key entirely instead of assigning None.
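Both points could be addressed roughly as follows. This is a sketch under stated assumptions: the event extras are modeled as a plain dict, the token attribute name is invented for illustration, and none of this code is from the PR itself.

```python
import uuid

LTM_ACTIVE_REPLY_KEY = "_ltm_active_reply_token"  # assumed key string


def mark_active_reply(event_extras: dict, req) -> None:
    # Tag the request with an explicit token instead of relying on
    # id(req), which breaks if the provider wraps or clones req.
    token = uuid.uuid4().hex
    req._ltm_token = token  # hypothetical attribute; survives shallow copies
    event_extras[LTM_ACTIVE_REPLY_KEY] = token


def is_active_reply(event_extras: dict, req) -> bool:
    token = event_extras.get(LTM_ACTIVE_REPLY_KEY)
    return token is not None and getattr(req, "_ltm_token", None) == token


def clear_active_reply(event_extras: dict) -> None:
    # Remove the key entirely rather than assigning None, so callers
    # never need to distinguish "never set" from "explicitly cleared".
    event_extras.pop(LTM_ACTIVE_REPLY_KEY, None)
```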

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a mechanism to distinguish and handle 'active replies' within the Long Term Memory (LTM) system. By using a unique key and request ID stored in the event's metadata, the system now ensures that only specific LLM requests trigger chatroom-style prompt rewriting and prevents these responses from polluting the session history. A review comment suggests that the check for active replies in the recording phase might be too broad, potentially affecting other plugins and leading to inconsistent history if certain configuration flags like group_icl_enable are disabled.

Comment thread astrbot/builtin_stars/astrbot/main.py Outdated
Comment on lines +111 to +112
if event.get_extra(LTM_ACTIVE_REPLY_KEY, None) is not None:
return
Contributor


Severity: medium

The check if event.get_extra(LTM_ACTIVE_REPLY_KEY, None) is not None is too broad; it will skip recording for all LLM responses associated with this event if an active reply was triggered, potentially affecting other plugins. Additionally, this block should also verify the group_icl_enable setting. If group_icl_enable is False but active_reply is True, ltm_enabled remains True, causing bot responses to be recorded in session_chats while user messages are skipped (as handle_message is guarded), leading to an inconsistent history.
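Incorporating this suggestion, the recording guard might look like the sketch below. The dict-based signature is a simplification invented for illustration; the guard order reflects the reviewer's reasoning, not the merged code.

```python
def record_llm_resp_to_ltm(
    extras: dict, cfg: dict, resp: str, session_chats: list
) -> None:
    # Skip only the exact active-reply response, using the narrow
    # in-progress flag instead of the broad "marker exists" check.
    if extras.get("_ltm_active_reply_in_progress"):
        return
    # Mirror handle_message's guard: if group_icl_enable is off,
    # user messages are never recorded, so skipping bot responses
    # here keeps session_chats consistent.
    if not cfg.get("group_icl_enable"):
        return
    session_chats.append(("assistant", resp))
```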

@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:S This PR changes 10-29 lines, ignoring generated files. labels Apr 19, 2026
@Soulter Soulter force-pushed the master branch 2 times, most recently from faf411f to 0068960 Compare April 19, 2026 09:50
Author

lingyun14beta commented Apr 20, 2026

How could active reply wipe out the context too 😭😭😭 Please also take a look at the other fix PR: #7624


Labels

area:core The bug / feature is about astrbot's core, backend
size:M This PR changes 30-99 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug] When active reply and group chat context awareness are both enabled, the session's long-term memory is overwritten by the group chat context

1 participant