Problem
Long conversations overflow the model's context window. The naive solution (context compaction / summarization) loses critical information — OpenClaw's biggest failure was losing safety instructions during compaction.
Approach
Use the memory system + RAG instead of compaction. Important context is offloaded to persistent storage and retrieved via RAG when needed. The memory system IS the solution to long conversations, not summarization/pruning.
Design:
- As conversation grows, agent proactively saves important facts/decisions to ~/.gaia/memory/
- When context approaches limit, oldest messages are dropped BUT their key content is already in memory
- RAG retrieves relevant memory when the agent needs past context
- No information is permanently lost — it moves from conversation to memory
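The flow above can be sketched as a minimal memory store: facts are persisted as files (mirroring a ~/.gaia/memory/ layout) before old messages are dropped, then retrieved on demand. This is an illustrative sketch, not the actual implementation; `MemoryStore`, its file layout, and the keyword-overlap scoring (a stand-in for embedding-based RAG retrieval) are all assumptions.

```python
import os
import re
import tempfile

class MemoryStore:
    """Sketch of the offload-then-retrieve design.
    A real system would use embeddings + a vector index for RAG;
    keyword overlap here keeps the example self-contained."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def save(self, key, fact):
        # One fact per file: offloaded before the message is dropped
        # from the conversation window.
        with open(os.path.join(self.root, f"{key}.md"), "w") as f:
            f.write(fact)

    def retrieve(self, query, k=1):
        # Stand-in for vector retrieval: rank memories by shared-word count.
        words = set(re.findall(r"\w+", query.lower()))
        scored = []
        for name in os.listdir(self.root):
            with open(os.path.join(self.root, name)) as f:
                text = f.read()
            overlap = len(words & set(re.findall(r"\w+", text.lower())))
            scored.append((overlap, text))
        scored.sort(reverse=True)
        return [text for _, text in scored[:k]]

# Usage: key content is saved first, so dropping the original
# message later loses nothing.
store = MemoryStore(tempfile.mkdtemp())
store.save("safety", "Never disable the sandbox; safety instructions persist.")
store.save("deploy", "Production deploys require a green CI run.")
print(store.retrieve("what were the safety instructions?")[0])
```

The key ordering property is that the save happens proactively, before context pressure forces a drop, so the retrieval path never depends on the original message still being in the window.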
Dependencies
Acceptance Criteria