- More Accurate: [48.77% Accuracy Improvement] More accurate than full-context in the LOCOMO benchmark (78.70% vs. 52.9%)
- Faster: [91.83% Faster Response] Significantly lower p95 retrieval latency than full-context (1.44s vs. 17.12s)
- More Economical: [96.53% Token Reduction] Significantly lower cost than full-context without sacrificing performance (0.9k vs. 26k tokens)
In AI application development, enabling large language models to persistently "remember" historical conversations, user preferences, and contextual information is a core challenge. PowerMem combines a hybrid storage architecture of vector retrieval, full-text search, and graph databases, and introduces the Ebbinghaus forgetting curve theory from cognitive science to build a powerful memory infrastructure for AI applications. The system also provides comprehensive multi-agent support capabilities, including agent memory isolation, cross-agent collaboration and sharing, fine-grained permission control, and privacy protection mechanisms, enabling multiple AI agents to achieve efficient collaboration while maintaining independent memory spaces.
- Lightweight Integration: Provides a simple Python SDK that automatically loads configuration from `.env` files, enabling developers to quickly integrate it into existing projects
- Intelligent Memory Extraction: Automatically extracts key facts from conversations through LLM, intelligently detects duplicates, updates conflicting information, and merges related memories to ensure accuracy and consistency of the memory database
- Ebbinghaus Forgetting Curve: Based on the memory forgetting patterns from cognitive science, automatically calculates memory retention rates and implements time-decay weighting, prioritizing recent and relevant memories, allowing AI systems to naturally "forget" outdated information like humans
- Agent Shared/Isolated Memory: Provides independent memory spaces for each agent, supports cross-agent memory sharing and collaboration, and enables flexible permission management through scope control
- Text, Image, and Audio Memory: Automatically converts images and audio to text descriptions for storage, supports retrieval of multimodal mixed content (text + image + audio), enabling AI systems to understand richer contextual information
- Sub Stores Support: Implements data partition management through sub stores, supports automatic query routing, significantly improving query performance and resource utilization for ultra-large-scale data
- Hybrid Retrieval: Combines multi-channel recall capabilities of vector retrieval, full-text search, and graph retrieval, builds knowledge graphs through LLM and supports multi-hop graph traversal for precise retrieval of complex memory relationships
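The time-decay weighting behind the forgetting-curve feature can be illustrated with a small sketch. This is not PowerMem's actual implementation: the retention formula `R = exp(-t / S)` and the stability parameter `stability_hours` are illustrative assumptions.

```python
import math
from datetime import datetime, timedelta

def retention(age_hours: float, stability_hours: float = 24.0) -> float:
    """Ebbinghaus-style retention: R = exp(-t / S).

    `stability_hours` (S) controls how quickly a memory fades; the
    value 24.0 is an illustrative assumption, not PowerMem's default.
    """
    return math.exp(-age_hours / stability_hours)

def decay_weighted(results: list[dict], now: datetime) -> list[dict]:
    """Re-rank search results by similarity * retention."""
    for r in results:
        age_hours = (now - r["created_at"]).total_seconds() / 3600.0
        r["score"] = r["similarity"] * retention(age_hours)
    return sorted(results, key=lambda r: r["score"], reverse=True)

now = datetime(2024, 1, 2, 12, 0)
results = [
    {"memory": "old fact", "similarity": 0.95,
     "created_at": now - timedelta(hours=72)},
    {"memory": "recent fact", "similarity": 0.80,
     "created_at": now - timedelta(hours=1)},
]
ranked = decay_weighted(results, now)
print([r["memory"] for r in ranked])  # the recent fact outranks the older, higher-similarity one
```

Even though the older memory has a higher raw similarity, its retention has decayed to roughly `e^-3 ≈ 0.05`, so the recent memory wins, which is exactly the "natural forgetting" behavior described above.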
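The sub-store partitioning idea can be sketched with a simple deterministic routing function. The hashing scheme and the `substore_N` naming below are hypothetical, chosen only to show how a query scoped to one user can be routed to a single partition instead of scanning all data.

```python
import hashlib

def substore_for(user_id: str, n_substores: int = 4) -> str:
    """Map a user to a fixed partition via stable hashing.

    Deterministic routing means both writes and scoped queries for the
    same user always touch the same sub store (illustrative scheme only).
    """
    bucket = int(hashlib.md5(user_id.encode("utf-8")).hexdigest(), 16) % n_substores
    return f"substore_{bucket}"

# A query filtered by user_id only needs to visit one partition:
print(substore_for("user123"))
```

Because the mapping is stable, a search filtered by `user_id` can be routed to exactly one partition, which is what makes automatic query routing cheap at ultra-large scale.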
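One common way to merge the multi-channel recall described above is reciprocal rank fusion (RRF); PowerMem's actual fusion strategy may differ, so treat this as a generic sketch of the idea rather than the library's algorithm.

```python
from collections import defaultdict

def rrf_fuse(channels: dict[str, list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum over channels of 1 / (k + rank).

    Documents ranked highly by several channels accumulate the largest
    scores; k=60 is the constant commonly used in the RRF literature.
    """
    scores: defaultdict[str, float] = defaultdict(float)
    for ranking in channels.values():
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-channel rankings of memory IDs:
channels = {
    "vector":    ["m1", "m3", "m2"],
    "full_text": ["m2", "m1", "m4"],
    "graph":     ["m1", "m4"],
}
print(rrf_fuse(channels))  # "m1" ranks first: it appears near the top of all three channels
```

The appeal of rank-based fusion is that vector similarity, BM25 scores, and graph-traversal hits live on incomparable scales; fusing by rank sidesteps score normalization entirely.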
```bash
pip install powermem
```

✨ Simplest Way: Create a memory instance from a `.env` file automatically! See the Configuration Reference.
```python
from powermem import Memory, auto_config

# Load configuration (auto-loads from .env)
config = auto_config()

# Create memory instance
memory = Memory(config=config)

# Add memory
memory.add("User likes coffee", user_id="user123")

# Search memories (loop variable renamed so it doesn't shadow the Memory instance)
memories = memory.search("user preferences", user_id="user123")
for item in memories:
    print(f"- {item.get('memory')}")
```

For more detailed examples and usage patterns, see the Getting Started Guide.
- LangChain Integration: Build a medical support chatbot using LangChain + PowerMem + OceanBase. View Example
- LangGraph Integration: Build a customer service chatbot using LangGraph + PowerMem + OceanBase. View Example
- Getting Started: Installation and quick start guide
- Configuration Guide: Complete configuration options
- Multi-Agent Guide: Multi-agent scenarios and examples
- Integrations Guide: Integrating PowerMem with third-party frameworks
- Sub Stores Guide: Sub stores usage and examples
- API Documentation: Complete API reference
- Architecture Guide: System architecture and design
- Examples: Interactive Jupyter notebooks and use cases
- Development Documentation: Documentation for contributors
- Issue Reporting: GitHub Issues
- Discussions: GitHub Discussions
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.