Memory System
OpenSwarm implements a cognitive memory system using LanceDB vector storage with Xenova multilingual embeddings. This allows agents to retain and recall information across sessions.
- Vector DB: LanceDB (columnar, Apache Arrow-based)
- Embeddings: Xenova/multilingual-e5-base (768 dimensions)
Location: `~/.openswarm/memory/`
Memories are organized into the following types:

| Type | Description |
|---|---|
| belief | Agent's understanding of a concept or pattern |
| strategy | Approach that worked (or didn't) for a task type |
| user_model | Understanding of user preferences and patterns |
| system_pattern | Observed system behavior or architecture patterns |
| constraint | Known limitations or rules to follow |
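The memory types in the table above can be modeled as a closed set with a runtime guard. This is an illustrative sketch, not OpenSwarm's actual type definitions; the names `MEMORY_TYPES` and `isMemoryType` are assumptions.

```typescript
// The five memory types from the table above, as a TypeScript union
// with a runtime type guard (hypothetical names, for illustration).
const MEMORY_TYPES = [
  'belief',
  'strategy',
  'user_model',
  'system_pattern',
  'constraint',
] as const;

type MemoryType = (typeof MEMORY_TYPES)[number];

function isMemoryType(value: string): value is MemoryType {
  return (MEMORY_TYPES as readonly string[]).includes(value);
}
```

A union type like this lets the compiler reject invalid type strings at write time, while the guard validates records loaded from storage.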
Memories are ranked using a hybrid formula:
score = 0.55 × similarity + 0.20 × importance + 0.15 × recency + 0.10 × frequency
| Factor | Weight | Description |
|---|---|---|
| Similarity | 55% | Cosine similarity between query and memory embeddings |
| Importance | 20% | User-assigned or derived importance score |
| Recency | 15% | How recently the memory was accessed |
| Frequency | 10% | How often the memory has been recalled |
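The ranking formula above can be sketched as a small pure function. The weights come straight from the table; the interface and function names are illustrative, and all four factors are assumed to be normalized to [0, 1].

```typescript
// Hybrid ranking score, using the weights documented above.
// Inputs are assumed to be normalized to the range [0, 1].
interface RankingFactors {
  similarity: number; // cosine similarity between query and memory embeddings
  importance: number; // user-assigned or derived importance
  recency: number;    // how recently the memory was accessed
  frequency: number;  // how often the memory has been recalled
}

function hybridScore(f: RankingFactors): number {
  return (
    0.55 * f.similarity +
    0.20 * f.importance +
    0.15 * f.recency +
    0.10 * f.frequency
  );
}
```

Because the weights sum to 1.0, the final score stays in [0, 1] whenever the inputs do, which keeps scores comparable across queries.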
The memory system runs background maintenance tasks:

- **Decay**: Memories that haven't been accessed gradually lose relevance, preventing stale information from dominating recalls.
- **Consolidation**: Related memories are periodically merged to reduce redundancy and strengthen key patterns.
- **Conflict detection**: When a new memory contradicts an existing one, the system flags the conflict for resolution.
- **Summarization**: Verbose memories are summarized into more compact forms over time.
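Decay is typically implemented as a function of time since last access. The exponential half-life schedule below is a minimal sketch under an assumed 7-day half-life; the document does not specify OpenSwarm's actual decay curve.

```typescript
// Illustrative recency decay: exponential with an assumed 7-day half-life.
// OpenSwarm's actual decay schedule may differ.
const HALF_LIFE_DAYS = 7;

function recencyScore(daysSinceLastAccess: number): number {
  // 1.0 when just accessed, 0.5 after one half-life, approaching 0 over time.
  return Math.pow(0.5, daysSinceLastAccess / HALF_LIFE_DAYS);
}
```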
Search memories directly from Discord:

```
!memory search "how to handle auth errors"
```
During pipeline execution, agents automatically:
- Query relevant memories before starting a task
- Store successful strategies and patterns after task completion
- Update existing memories when new information is learned
This enables agents to learn from past work and avoid repeating mistakes.
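The "query relevant memories before starting a task" step can be sketched as a nearest-neighbor search over stored embeddings. This is a minimal in-memory stand-in for the LanceDB vector search; the `Memory` shape, `cosine`, and `recall` names are illustrative, not OpenSwarm's actual API.

```typescript
// Minimal in-memory sketch of recall-before-task: rank stored memories
// by cosine similarity to a query embedding and return the top k.
interface Memory {
  text: string;
  embedding: number[]; // 768 dimensions in production (multilingual-e5-base)
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function recall(query: number[], store: Memory[], k: number): Memory[] {
  return [...store]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

In production the similarity score would then be blended with importance, recency, and frequency per the hybrid formula, rather than used alone.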