
# Aeonica Memory

Python 3.9+ · MIT License

Governed AI memory for LLM agents and RAG systems.

The only memory SDK with built-in audit trails, cryptographic proofs, GDPR-compliant deletion, and temporal decay.

```bash
pip install aeonica-memory
```

## What Makes It Different

Most vector stores just store and retrieve. Aeonica Memory gives you:

| Feature | What It Does | Why It Matters |
| --- | --- | --- |
| Audit Trail | Hash-chained log of every operation | Tamper-evident compliance |
| State Proofs | Merkle trees over memory state | Prove "what AI knew when" |
| Deletion Certificates | Cryptographic proof of deletion | GDPR Article 17 compliance |
| Temporal Decay | Memories fade over time | Human-like memory behavior |
| RAG Tracing | Full retrieval debugging | "Why did it retrieve X?" |
| Explainability | Natural language reasoning | Know why, not just what |

## What It Does

Aeonica Memory is a semantic memory SDK that wraps FAISS with:

1. **Governance** - audit logs, Merkle proofs, deletion certificates
2. **Explainability** - know why a memory was retrieved, not just that it was
3. **Temporal behavior** - decay, importance, auto-expiration
4. **Developer-friendly API** - three lines to get started

```python
from aeonica_memory import MemoryClient

client = MemoryClient()
client.add("py_1", "Python uses indentation for code blocks")
results = client.query("How does Python structure code?", explain=True)

for r in results:
    print(f"{r.content}")
    print(f"  Confidence: {r.confidence_depth}")
    print(f"  Why: {r.explanation}")
```

## Why Aeonica Memory?

| Problem | Aeonica Solution |
| --- | --- |
| FAISS returns scores, not explanations | Natural language reasoning for every result |
| No visibility into retrieval confidence | Cluster-based confidence depth |
| Pattern discovery is manual | Automatic schema detection |
| Vector DBs charge per query | $0 local inference, runs anywhere |
| Pinecone/Weaviate vendor lock-in | Open source, your data stays local |

## Quick Start

### Installation

```bash
pip install aeonica-memory
```

### Basic Usage

```python
from aeonica_memory import MemoryClient

# Initialize client
client = MemoryClient(backend="faiss", explainability=True)

# Add memories
client.add("auth_1", "OAuth 2.0 uses token-based authentication")
client.add("auth_2", "JWT tokens provide stateless authentication")
client.add("auth_3", "API keys are simple but less secure than OAuth")

# Query with explainability
results = client.query("How do I authenticate API requests?", top_k=3)

for r in results:
    print(f"[{r.score:.2f}] {r.content}")
    print(f"         {r.explanation}")
```

Output:

```text
[0.89] OAuth 2.0 uses token-based authentication
       Very strong semantic match to your query - part of a pattern with 3 similar cases

[0.84] JWT tokens provide stateless authentication
       Strong semantic similarity to your query - backed by 3 similar cases in memory

[0.71] API keys are simple but less secure than OAuth
       Moderate semantic relevance to your query
```

### Batch Operations

```python
# Efficient batch add
memories = [
    ("mem_1", "Python uses indentation", {"topic": "syntax"}),
    ("mem_2", "JavaScript uses braces", {"topic": "syntax"}),
    ("mem_3", "SQL queries databases", {"topic": "data"}),
]
client.add_batch(memories)
```

### Web Playground

```bash
pip install 'aeonica-memory[playground]'
aeonica-memory playground
# Opens at http://localhost:8000
```

## Features

### Explainability

Every retrieval result includes:

| Field | Description | Example |
| --- | --- | --- |
| `confidence_depth` | How many similar memories back this result | "High confidence (based on 23 similar cases)" |
| `schema_label` | Detected pattern this memory belongs to | "Pattern: API authentication flows" |
| `explanation` | Natural language reasoning | "Strong semantic match - part of a pattern with 5 similar cases" |

### Schema Detection

Aeonica automatically discovers patterns in your memories:

```python
stats = client.get_stats()
print(f"Detected {stats['total_schemas']} patterns")
print(f"Average cluster size: {stats['avg_cluster_size']}")
```

### Pure FAISS Performance

Under the hood: battle-tested FAISS with `IndexFlatIP` for cosine similarity.

- Embedding model: `all-MiniLM-L6-v2` (384 dimensions)
- Search: C++ SIMD-optimized similarity
- Cost: $0 (local inference)
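The inner-product trick behind `IndexFlatIP` is easy to sketch in plain Python: L2-normalize every embedding, and the inner product of two normalized vectors equals their cosine similarity. (A toy illustration with 4-dimensional stand-ins for the real 384-dimensional MiniLM embeddings, not the library's code.)

```python
import math

# IndexFlatIP ranks by inner product; after L2-normalization the
# inner product of two vectors equals their cosine similarity.
def normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

# Tiny 4-dim stand-ins for real embeddings.
corpus = [normalize(v) for v in ([1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 1, 0])]
query = normalize([1, 0, 1, 1])

scores = [inner(doc, query) for doc in corpus]              # cosine per memory
ranked = sorted(range(len(corpus)), key=lambda i: -scores[i])
print(ranked[0], round(scores[ranked[0]], 3))               # memory 0 is closest
```

FAISS does exactly this computation, but over SIMD-optimized C++ batched across the whole index.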

## API Reference

### MemoryClient

```python
client = MemoryClient(
    backend="faiss",           # Only "faiss" supported currently
    explainability=True,       # Enable confidence, schemas, reasoning
    embedding_model="all-MiniLM-L6-v2",  # Sentence transformer model
    storage_path=None,         # Path for persistence (Pro feature)
)
```

### Methods

| Method | Description |
| --- | --- |
| `add(id, content, metadata)` | Add a single memory |
| `add_batch(memories)` | Add multiple memories efficiently |
| `query(query, top_k, explain)` | Query with optional explainability |
| `get_stats()` | Get memory statistics |

### RetrievalResult

```python
@dataclass
class RetrievalResult:
    id: str                    # Memory identifier
    content: str               # Memory content
    score: float               # Similarity score (0-1)
    confidence_depth: str      # "High confidence (based on N similar cases)"
    schema_label: str          # "Pattern: ..." or None
    explanation: str           # Natural language reasoning
    metadata: dict             # User-provided metadata
```

## Pricing

| Tier | Price | Features |
| --- | --- | --- |
| Free | $0 | Full SDK, 10K memories, community support |
| Pro | $29/mo | Persistence, filtering, priority support, 100K memories |
| Team | $79/mo | 5 users, SSO, shared collections, 500K memories |
| Enterprise | Custom | On-prem, SLA, unlimited memories |

Core is open source. Paid tiers add persistence, support, and team features.


## Comparison

| Feature | Aeonica | FAISS | Pinecone | Weaviate |
| --- | --- | --- | --- | --- |
| Explainability | Yes | No | No | No |
| Confidence depth | Yes | No | No | No |
| Schema detection | Yes | No | No | No |
| Local/private | Yes | Yes | No | No |
| Cost | $0 | $0 | $70+/mo | $90+/mo |
| Setup time | 3 lines | 50+ lines | Account + API | Account + API |

## Architecture

```text
┌─────────────────────────────────────────────────────────────┐
│                     MemoryClient                            │
│  - add() / add_batch() / query()                            │
│  - Explainability layer                                     │
└─────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────┐
│                  HybridRetrieverFAISS                       │
│  - FAISS IndexFlatIP (cosine similarity)                    │
│  - Sentence transformer embeddings                          │
│  - Optional harmonic reasoning (experimental)               │
└─────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────┐
│                  Explainability Components                  │
│  - ConfidenceTracker (cluster-based)                        │
│  - SchemaDetector (pattern discovery)                       │
│  - ExplanationGenerator (natural language)                  │
└─────────────────────────────────────────────────────────────┘
```

## Use Cases

### LLM Agent Memory

```python
# Store agent interactions
client.add(f"turn_{i}", f"User asked about {topic}, agent responded with {response}")

# Retrieve relevant context for next turn
context = client.query(user_message, top_k=5)
```

### RAG Systems

```python
# Index documents
for doc in documents:
    client.add(doc.id, doc.text, metadata={"source": doc.source})

# Retrieve with explainability
results = client.query(question, explain=True)
# Now you can cite WHY each source was selected
```

### Knowledge Bases

```python
# Build team knowledge base
client.add_batch([(id, content, {"author": author}) for ...])

# Query with confidence
results = client.query("How do we handle X?")
# "High confidence based on 15 similar cases" vs "Unique case"
```

## Governance Module (Enterprise)

The `aeonica_memory.governance` module provides enterprise-grade compliance, audit, and observability features for regulated industries.

### Audit Logging

Every memory operation is logged with cryptographic integrity:

```python
from aeonica_memory import MemoryClient
from aeonica_memory.governance import AuditLog, OperationType

# Create audit log
audit = AuditLog("./audit.jsonl")

# Log operations
entry = audit.log_add(
    memory_id="doc_1",
    content_hash="sha256:abc123...",
    state_root="sha256:xyz789..."
)

# Verify chain integrity (tamper detection)
if audit.verify_chain():
    print("Audit log is intact")

# Get history for compliance
history = audit.get_history("doc_1")
for e in history:
    print(f"{e.timestamp}: {e.operation.value}")
```
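The tamper-evidence idea behind `verify_chain` is standard hash chaining: each log entry's hash commits to the previous entry's hash, so editing any historical entry invalidates every hash after it. A generic sketch of the mechanism (the serialization and field names here are illustrative, not the library's actual on-disk format):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Hash the entry together with its predecessor's hash,
    # linking every entry to all history before it.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(entries):
    prev, chain = GENESIS, []
    for e in entries:
        h = entry_hash(e, prev)
        chain.append({"entry": e, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    prev = GENESIS
    for link in chain:
        if link["prev_hash"] != prev or entry_hash(link["entry"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True

chain = build_chain([{"op": "ADD", "id": "doc_1"}, {"op": "DELETE", "id": "doc_1"}])
print(verify_chain(chain))         # True: chain intact
chain[0]["entry"]["id"] = "doc_9"  # tamper with history...
print(verify_chain(chain))         # False: every later hash now mismatches
```

Because each hash depends on everything before it, an auditor only needs the latest hash to pin the entire history.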

### Merkle Tree State Proofs

Prove what the AI knew at any point in time:

```python
from aeonica_memory.governance import StateProver

prover = StateProver(client)

# Compute current state root
root = prover.compute_state_root()

# Prove a specific memory was in the state
proof = prover.prove_inclusion("doc_1")

# Verify the proof (can be done by third party)
is_valid = prover.verify_proof(proof)

# Prove retrieval results came from specific state
query_proof = prover.prove_retrieval(["doc_1", "doc_2"], query_hash)
```
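For intuition, here is a minimal Merkle inclusion proof in plain Python (hypothetical helpers, not the `StateProver` internals): the proof is the list of sibling hashes on the path from a leaf to the root, so a verifier holding only the root can confirm membership without seeing any other memory.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    # Pair up nodes; duplicate the last node on odd-sized levels.
    if len(level) % 2:
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)], level

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level, _ = _next_level(level)
    return level[0]

def inclusion_proof(leaves, index):
    # Collect sibling hashes (and whether each sits to the left).
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        level, padded = _next_level(level)
        sib = index ^ 1
        proof.append((padded[sib], sib < index))
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

leaves = [b"doc_1", b"doc_2", b"doc_3"]
root = merkle_root(leaves)
proof = inclusion_proof(leaves, 1)
print(verify(b"doc_2", proof, root))  # True: doc_2 was in the state
print(verify(b"doc_4", proof, root))  # False: doc_4 was not
```

The proof is O(log n) hashes regardless of how many memories the state holds, which is what makes third-party verification cheap.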

### Compliant Deletion (GDPR)

GDPR "right to be forgotten" with cryptographic proof:

```python
from aeonica_memory.governance import DeletionCertifier, verify_deletion_certificate_standalone

# Set up the certifier
certifier = DeletionCertifier(client, audit_log=audit, state_prover=prover)

# Delete with certificate
cert = certifier.delete_with_certificate(
    "user_123_data",
    reason="GDPR Article 17 request",
    rebuild_index=True  # Eliminates semantic residue
)

# Certificate proves deletion
print(cert.to_json())
# {
#   "certificate_id": "del_abc123...",
#   "memory_id": "user_123_data",
#   "pre_state_root": "sha256:...",
#   "post_state_root": "sha256:...",
#   "content_hash": "sha256:...",
#   "deletion_timestamp": "2025-03-15T10:30:00+00:00",
#   "index_rebuilt": true
# }

# Third-party verification (no access to the system needed)
result = verify_deletion_certificate_standalone(cert.to_json())
print(result["verified"])  # True
```

### Temporal Memory

Time-aware memory with decay and importance scoring:

```python
from aeonica_memory.governance import TemporalMemory, ExponentialDecay

# Wrap client with temporal features
temporal = TemporalMemory(
    client,
    decay_function=ExponentialDecay(half_life_days=30)
)

# Add with importance (critical info persists longer)
temporal.add("policy_update", "New vacation policy...", importance=0.9)

# Add with expiration (auto-deletes after 2 days)
temporal.add("temp_notice", "Office closed tomorrow", expires_in_days=2)

# Query with temporal scoring
# Final score = semantic_similarity * temporal_decay * importance
results = temporal.query("vacation policy")

# Run maintenance (archive old, delete expired)
archived, deleted = temporal.run_maintenance(
    archive_threshold=0.1,
    delete_expired=True
)
```

Decay functions:

- `ExponentialDecay`: natural forgetting curve (default)
- `LinearDecay`: steady decrease to zero
- `StepDecay`: defined retention periods (regulatory)
- `NoDecay`: memories never fade
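To make the scoring concrete, exponential decay with a 30-day half-life is the standard half-life formula: a memory's temporal weight halves every 30 days. A sketch of the arithmetic (illustrative function names, not the classes' source):

```python
def exponential_decay(age_days: float, half_life_days: float = 30.0) -> float:
    # Weight halves every `half_life_days`.
    return 0.5 ** (age_days / half_life_days)

def temporal_score(similarity: float, age_days: float, importance: float = 1.0) -> float:
    # The combined ranking described above:
    # final = semantic_similarity * temporal_decay * importance
    return similarity * exponential_decay(age_days) * importance

print(exponential_decay(0))    # 1.0  (fresh memory)
print(exponential_decay(30))   # 0.5  (one half-life old)
print(exponential_decay(60))   # 0.25 (two half-lives old)
```

So a month-old memory with `importance=0.9` outranks a fresh one only if its semantic similarity is more than about 2.2x higher, which is the "critical info persists longer" behavior in practice.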

### Retrieval Tracing

Full observability for RAG debugging:

```python
from aeonica_memory.governance import RetrievalTracer, create_retrieval_report

tracer = RetrievalTracer(client)

# Query with full tracing
results = tracer.traced_query("What is the policy?", top_k=5)

# Get trace details
trace = tracer.get_latest_trace()
print(f"Retrieved {len(trace.final_results)} docs in {trace.total_latency_ms:.1f}ms")

# Human-readable report
print(create_retrieval_report(trace))

# Detailed analysis
analysis = tracer.analyze_trace(trace.trace_id)
print(f"Score range: {analysis['scores']['min']:.3f} - {analysis['scores']['max']:.3f}")
print(f"Issues: {analysis['potential_issues']}")

# Compare traces (A/B testing)
comparison = tracer.compare_traces(trace_id_1, trace_id_2)
print(f"Jaccard similarity: {comparison['comparison']['jaccard_similarity']:.2f}")
```
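The Jaccard similarity reported by `compare_traces` is, in the usual definition, plain set overlap between the two runs' retrieved IDs (assumed semantics; the tracer may add weighting on top):

```python
def jaccard(ids_a, ids_b) -> float:
    # |A ∩ B| / |A ∪ B| over the two sets of retrieved memory IDs.
    a, b = set(ids_a), set(ids_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

run_a = ["doc_1", "doc_2", "doc_3"]
run_b = ["doc_2", "doc_3", "doc_4"]
print(jaccard(run_a, run_b))  # 0.5: 2 shared IDs out of 4 distinct
```

A score near 1.0 means the two configurations retrieve essentially the same documents; a low score flags a retrieval change worth inspecting before shipping.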

## Roadmap

| Version | Features | Status |
| --- | --- | --- |
| 0.1.0 | Core SDK, explainability, playground | Current |
| 0.2.0 | Persistence (save/load) | Done |
| 0.3.0 | Metadata filtering, CRUD | Done |
| 0.4.0 | Governance module (audit, proofs, deletion) | Done |
| 0.5.0 | LangChain integration | Done |
| 1.0.0 | Production-ready, full documentation | Planned |

## Development

```bash
# Clone
git clone https://github.com/aeonica-labs/aeonica-memory
cd aeonica-memory

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Format and lint
black aeonica_memory/
ruff check aeonica_memory/
```

## License

MIT License. See LICENSE.


Built by Aeonica Labs.

*Fast, explainable memory for the AI era.*
