A high-performance, production-ready memory coprocessor for LLM agents.
OpenMem (PNME) provides a durable long-term memory layer that survives across sessions, crashes, and restarts. By combining Hyperdimensional Computing (HDC) with symbolic triples, it offers high-speed retrieval, principled association, and structured fact management.
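The core HDC mechanism — 10,000-dimensional bipolar vectors with deterministic hash-based seeds and role-vector binding — can be pictured with a small self-contained sketch. This is an illustration of the general technique, not OpenMem's actual implementation; the SHA-256 seeding and function names here are assumptions:

```python
import hashlib
import random

DIM = 10_000  # dimensionality used by OpenMem; entries are bipolar, in {-1, +1}

def seed_vec(name: str) -> list[int]:
    """Deterministic bipolar vector from a hash-based seed (sha256 is an assumption)."""
    seed = int.from_bytes(hashlib.sha256(name.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bind(a: list[int], b: list[int]) -> list[int]:
    """Elementwise multiplication binds a role vector to a filler; it is self-inverse."""
    return [x * y for x, y in zip(a, b)]

def cosine(a: list[int], b: list[int]) -> float:
    return sum(x * y for x, y in zip(a, b)) / DIM

subject = seed_vec("DeepSeek-V3")
role = seed_vec("role:subject")
bound = bind(role, subject)

# Unbinding with the same role recovers the filler exactly (cosine = 1.0),
# while unbinding with an unrelated role yields near-zero similarity.
recovered = bind(role, bound)
```

Because binding is self-inverse for bipolar vectors, retrieval reduces to one elementwise multiply plus a similarity check — this is what makes HDC unbinding fast.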
```bash
# Clone the repository
git clone https://github.com/OpenMem-Project/OpenMem
cd OpenMem

# Install dependencies and package
pip install -e .
```

- Principled HDC Encoding: Uses 10,000-dimensional bipolar vectors with deterministic hash-based seeds and role-vector binding.
- Hybrid Retrieval: Combines exact symbolic filtering with sub-symbolic HDC unbinding and multi-factor ranking (recency, strength, provenance).
- Audit & Analytics: Integrated access logging and event tracking for memory lifecycle analysis.
- Portability: JSONL export/import support for cross-system memory migration.
- Safety: Automated scrubbing of sensitive information (secrets, keys) before storage.
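The multi-factor ranking mentioned above can be sketched as a weighted blend of similarity, recency, strength, and provenance. The specific weights and the one-day half-life below are illustrative assumptions, not OpenMem's actual values:

```python
import math

def rank_score(similarity: float, age_s: float, strength: float,
               provenance: float, half_life_s: float = 86_400.0) -> float:
    """Illustrative multi-factor ranking: blend HDC similarity with an
    exponential recency decay, reinforcement strength, and source trust.
    The 0.5/0.2/0.2/0.1 weights and one-day half-life are assumptions."""
    recency = 0.5 ** (age_s / half_life_s)  # halves every half_life_s seconds
    return 0.5 * similarity + 0.2 * recency + 0.2 * strength + 0.1 * provenance
```

Under this scheme, a fact accessed moments ago outranks an equally similar fact that has gone stale, while strength and provenance keep well-reinforced, trusted facts competitive.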
OpenMem integrates with Claude via a structured tool adapter. To use it, import the adapter and register the tools:
```python
from pnme.integrations.claude_tools import ClaudeMemoryAdapter

# Initialize adapter
adapter = ClaudeMemoryAdapter(db_path="my_memory.db")
tools = adapter.get_tool_definitions()

# In your Claude tool handler:
def on_tool_call(name, args):
    return adapter.handle_tool_call(name, args)
```

Available Tools:
- `memory_store`: Save a specific fact (S, R, O).
- `memory_absorb`: Bulk-extract facts from a text block.
- `memory_query`: Pattern-based retrieval.
- `memory_hydrate`: Inject context into the prompt.
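The pattern semantics behind `memory_query` can be pictured with a minimal triple matcher. This is a sketch of the symbolic-filtering half only — wildcard field names and `None`-as-wildcard are assumptions, and the real engine also applies HDC unbinding and ranking:

```python
def match(triples, subject=None, relation=None, obj=None):
    """Illustrative pattern-based retrieval over (S, R, O) triples.
    None acts as a wildcard for that position (an assumption)."""
    return [
        (s, r, o) for (s, r, o) in triples
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

facts = [
    ("DeepSeek-V3", "released_by", "DeepSeek"),
    ("Claude 3.5 Sonnet", "released_by", "Anthropic"),
    ("Claude 3.5 Sonnet", "supports", "computer use"),
]

# All "released_by" facts, regardless of subject:
released = match(facts, relation="released_by")
```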
Register OpenMem as a plugin in your OpenClaw setup:
```python
from pnme.integrations.openclaw_plugin import setup_plugin

# Setup the memory plugin
memory_plugin = setup_plugin({"db_path": "openclaw_mem.db"})
agent.register_plugin(memory_plugin)
```

You can also use the core Python API directly:

```python
from pnme.api import PNME

memory = PNME()

# Store a fact
memory.store("DeepSeek-V3", "released_by", "DeepSeek")

# Absorb facts from text
memory.absorb("Claude 3.5 Sonnet is a model by Anthropic. It supports computer use.")

# Retrieve hydrated context
prompt = "Tell me about Anthropic models."
hydrated_prompt = memory.hydrate(prompt)
```

MIT