Description
Problem
Agent does not maintain conversation context when user messages result in terminal command execution with interpretations but no final LLM response.
Conversation ID affected: 53feaecb-3974-4bdf-badd-517358c74cf3
Symptoms
- Agent forgets previous user messages in subsequent interactions
- LLM context (`chat:conversation` Redis key) only contains 2 exchanges
- UI history (`chat:session` Redis key) correctly shows all 21 messages, including 5 user messages
Missing from LLM context:
- User message #2: "hello" (17:05:35)
- User message #3: "what other devices are on 192.168.168.0/24..." (17:07:09)
- User message #4: "now you scanned network, what ports are open..." (17:09:55)
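The divergence between the two keys can be confirmed directly. A minimal sketch, assuming the key layout described above (a `chat:session:{id}` key holding a JSON object with a `messages` list of `{sender, text}` entries — `count_messages` is an illustrative helper, not part of the codebase):

```python
import json

def count_messages(session_json: str) -> tuple:
    """Return (total messages, user messages) from a chat:session JSON blob."""
    data = json.loads(session_json)
    messages = data.get("messages", [])
    user_count = sum(1 for m in messages if m.get("sender") == "user")
    return len(messages), user_count

# Illustrative payload mirroring the shape described in the symptoms
sample = json.dumps({"messages": [
    {"sender": "user", "text": "hello"},
    {"sender": "agent_terminal", "text": "This is MV-Stealth"},
]})
print(count_messages(sample))  # (2, 1)
```

Running the same count against `chat:conversation:{id}` should match; in the affected session it did not.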
Root Cause
The `_persist_conversation()` method in `src/chat_workflow_manager.py:1090` is only called when there is a final `llm_response` from the LLM (line 1476).
When a user message results in:
- Command approval request
- Terminal command execution
- Terminal output
- Terminal interpretation
WITHOUT a final LLM response, `_persist_conversation()` is never called, so:
- The user/assistant exchange is NOT saved to `chat:conversation:{session_id}`
- The LLM loses context for that interaction
- Future messages lack crucial conversation history
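The faulty gate can be reduced to a few lines. A simplified sketch using in-memory dicts in place of Redis (`handle_message`, `session_store`, and `conversation_store` are illustrative names, not the project's actual API):

```python
session_store = {}       # stands in for chat:session:{id} (UI history)
conversation_store = {}  # stands in for chat:conversation:{id} (LLM context)

def handle_message(session_id, user_msg, llm_response=None):
    # The UI history always records the user message...
    session_store.setdefault(session_id, []).append(("user", user_msg))
    # ...but the conversation history is only persisted when a final
    # LLM response exists -- this is the buggy gate.
    if llm_response is not None:
        conversation_store.setdefault(session_id, []).append(
            (user_msg, llm_response)
        )

handle_message("s1", "hi", llm_response="Hello!")
handle_message("s1", "hello")  # ends in a terminal interpretation, no llm_response
print(len(session_store["s1"]), len(conversation_store["s1"]))  # 2 1
```

The second exchange reaches the UI history but never the LLM context, which is exactly the drift observed in the affected session.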
Example Flow (Broken)
User: "hello" (message #2)
→ Assistant: command_approval_request for hostname
→ Terminal: executes hostname
→ Agent_terminal: interpretation "This is MV-Stealth"
→ ❌ NO llm_response generated
→ ❌ _persist_conversation() never called
→ ❌ Exchange NOT saved to chat:conversation
Fix Applied
Added immediate persistence of terminal interpretations to conversation history in the `interpret_terminal_command()` method.
File: src/chat_workflow_manager.py
Lines: 910-958
Changes:
- After saving the terminal interpretation to `chat:session` (UI history)
- Retrieve the most recent user message from `chat:session`
- Call `_persist_conversation()` to save the user message + interpretation to `chat:conversation`
- Ensures the LLM maintains full conversation context
Code snippet:

```python
# CRITICAL FIX: Persist to conversation history (chat:conversation) for LLM context
# This fixes the bug where terminal interpretations weren't being tracked in LLM context
try:
    session = await self.get_or_create_session(session_id)
    # Get the last user message from chat:session
    if self.redis_client is not None:
        session_key = f"chat:session:{session_id}"
        session_data_json = await asyncio.wait_for(
            self.redis_client.get(session_key),
            timeout=2.0,
        )
        if session_data_json:
            session_data = json.loads(session_data_json)
            messages = session_data.get("messages", [])
            # Find the most recent user message
            last_user_message = None
            for msg in reversed(messages):
                if msg.get("sender") == "user":
                    last_user_message = msg.get("text", "")
                    break
            if last_user_message:
                # Persist the exchange to conversation history
                await self._persist_conversation(
                    session_id=session_id,
                    session=session,
                    message=last_user_message,
                    llm_response=interpretation,
                )
except Exception:
    # Best-effort: a persistence failure must not break the interpretation path
    pass
```
Testing
Backend restarted successfully with fix applied.
Next: Test with a new conversation to verify conversation context is maintained across multiple terminal interpretation exchanges.
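The post-fix behavior can be checked in isolation before a full end-to-end run. A minimal sketch of the persistence step using in-memory dicts in place of Redis (`persist_interpretation` and the store names are illustrative, not the project's API):

```python
# UI history already contains the user message and the terminal interpretation
session_store = {"s1": {"messages": [
    {"sender": "user", "text": "hello"},
    {"sender": "agent_terminal", "text": "This is MV-Stealth"},
]}}
conversation_store = {"s1": []}  # LLM context, initially missing the exchange

def persist_interpretation(session_id, interpretation):
    # Mirror the fix: find the most recent user message in the UI history
    # and persist the (user message, interpretation) pair for LLM context.
    messages = session_store[session_id]["messages"]
    last_user = next(
        (m["text"] for m in reversed(messages) if m["sender"] == "user"),
        None,
    )
    if last_user:
        conversation_store[session_id].append((last_user, interpretation))

persist_interpretation("s1", "This is MV-Stealth")
print(conversation_store["s1"])  # [('hello', 'This is MV-Stealth')]
```

With the fix in place, the exchange lands in the LLM context even though no final `llm_response` was generated.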
Labels
bug, high-priority, conversation-tracking, redis
Related
- Conversation ID: `53feaecb-3974-4bdf-badd-517358c74cf3`
- Redis keys: `chat:conversation:{session_id}`, `chat:session:{session_id}`