
Bug: Agent loses conversation context when commands require terminal interpretation #177

@mrveiss

Description


Problem

The agent fails to maintain conversation context when a user message results in terminal command execution with an interpretation but no final LLM response.

Conversation ID affected: 53feaecb-3974-4bdf-badd-517358c74cf3

Symptoms

  • Agent forgets previous user messages in subsequent interactions
  • LLM context (chat:conversation Redis key) only contains 2 exchanges
  • UI history (chat:session Redis key) correctly shows all 21 messages with 5 user messages
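A quick way to quantify the gap is to compare the two keys' contents. This is a minimal sketch, assuming chat:session stores `{"messages": [{"sender", "text"}]}` (the shape the fix code below reads) and that chat:conversation is a JSON list of exchanges (an assumption for this sketch, not a confirmed format):

```python
import json

def count_context_gap(session_json: str, conversation_json: str) -> dict:
    """Compare UI history (chat:session) against LLM context (chat:conversation)."""
    session = json.loads(session_json)
    user_msgs = [m for m in session.get("messages", []) if m.get("sender") == "user"]
    exchanges = json.loads(conversation_json)
    return {
        "ui_user_messages": len(user_msgs),
        "llm_exchanges": len(exchanges),
        "missing": len(user_msgs) - len(exchanges),
    }

# Simulated payloads: three user messages in the UI, one exchange in LLM context
session_blob = json.dumps({"messages": [
    {"sender": "user", "text": "hello"},
    {"sender": "assistant", "text": "hi"},
    {"sender": "user", "text": "run hostname"},
    {"sender": "agent_terminal", "text": "This is MV-Stealth"},
    {"sender": "user", "text": "what was that?"},
]})
conversation_blob = json.dumps([{"role": "user", "content": "hello"}])
print(count_context_gap(session_blob, conversation_blob))
# {'ui_user_messages': 3, 'llm_exchanges': 1, 'missing': 2}
```

Running this against the affected conversation's two keys should show the 5-vs-2 mismatch reported above.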

Missing from the LLM context: the user/assistant exchanges from turns that ended in a terminal interpretation (see Root Cause below).

Root Cause

The _persist_conversation() method in src/chat_workflow_manager.py:1090 is called only when the LLM produces a final llm_response (line 1476).

When a user message results in:

  1. Command approval request
  2. Terminal command execution
  3. Terminal output
  4. Terminal interpretation

WITHOUT a final LLM response, _persist_conversation() is never called, so:

  • The user/assistant exchange is NOT saved to chat:conversation:{session_id}
  • The LLM loses context for that interaction
  • Future messages lack crucial conversation history
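The guarded control flow above can be sketched as follows (names simplified for illustration; this is not the real chat_workflow_manager.py code):

```python
from typing import Optional

conversation_store = []  # stands in for the chat:conversation:{session_id} Redis key

def persist_conversation(message: str, llm_response: str) -> None:
    conversation_store.append({"user": message, "assistant": llm_response})

def handle_turn(message: str,
                llm_response: Optional[str],
                interpretation: Optional[str]) -> None:
    # ... command approval, execution, and interpretation happen here ...
    if llm_response:  # the guard around the _persist_conversation() call
        persist_conversation(message, llm_response)
    # BUG: a turn that produces only `interpretation` never persists anything

# A normal turn is persisted; an interpretation-only turn is silently dropped
handle_turn("hi", llm_response="Hello!", interpretation=None)
handle_turn("hello", llm_response=None, interpretation="This is MV-Stealth")
print(conversation_store)  # only the first exchange is present
```

The second call models the broken flow below: the interpretation is produced, but the persistence branch is never entered.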

Example Flow (Broken)

User: "hello" (message #2)
  → Assistant: command_approval_request for hostname
  → Terminal: executes hostname
  → Agent_terminal: interpretation "This is MV-Stealth"
  → ❌ NO llm_response generated
  → ❌ _persist_conversation() never called
  → ❌ Exchange NOT saved to chat:conversation

Fix Applied

Added immediate persistence of terminal interpretations to the conversation history in the interpret_terminal_command() method.

File: src/chat_workflow_manager.py
Lines: 910-958

Changes:

  1. After saving terminal interpretation to chat:session (UI history)
  2. Retrieve the most recent user message from chat:session
  3. Call _persist_conversation() to save user message + interpretation to chat:conversation
  4. Ensures LLM maintains full conversation context

Code snippet:

# CRITICAL FIX: Persist to conversation history (chat:conversation) for LLM context
# This fixes the bug where terminal interpretations weren't being tracked in LLM context
try:
    session = await self.get_or_create_session(session_id)
    
    # Get the last user message from chat:session
    if self.redis_client is not None:
        session_key = f"chat:session:{session_id}"
        session_data_json = await asyncio.wait_for(
            self.redis_client.get(session_key),
            timeout=2.0
        )
        if session_data_json:
            session_data = json.loads(session_data_json)
            messages = session_data.get("messages", [])
            
            # Find most recent user message
            last_user_message = None
            for msg in reversed(messages):
                if msg.get("sender") == "user":
                    last_user_message = msg.get("text", "")
                    break
            
            if last_user_message:
                # Persist the exchange to conversation history
                await self._persist_conversation(
                    session_id=session_id,
                    session=session,
                    message=last_user_message,
                    llm_response=interpretation
                )

Testing

The backend restarted successfully with the fix applied.

Next: test with a new conversation to verify that context is maintained across multiple terminal-interpretation exchanges.
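A hedged sketch of what that verification could check as a regression test (function and store names are hypothetical stand-ins, not the real module API):

```python
conversation_store = []  # stands in for chat:conversation:{session_id}

def persist_conversation(message: str, llm_response: str) -> None:
    conversation_store.append({"user": message, "assistant": llm_response})

def interpret_terminal_command_fixed(last_user_message: str, interpretation: str) -> None:
    # With the fix, the interpretation path persists the exchange itself,
    # instead of waiting for a final LLM response that never arrives
    persist_conversation(last_user_message, interpretation)

def test_interpretation_turn_is_persisted():
    conversation_store.clear()
    interpret_terminal_command_fixed("hello", "This is MV-Stealth")
    assert conversation_store == [
        {"user": "hello", "assistant": "This is MV-Stealth"}
    ]

test_interpretation_turn_is_persisted()
print("ok")
```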

Labels

  • bug
  • high-priority
  • conversation-tracking
  • redis

Related

  • Conversation ID: 53feaecb-3974-4bdf-badd-517358c74cf3
  • Redis keys: chat:conversation:{session_id}, chat:session:{session_id}
