SharedMemoryServer is production-grade infrastructure designed to govern inference-time latency and system entropy in complex Agentic Workflows. It provides a persistent, high-integrity context layer that survives ephemeral session boundaries.
This project demonstrates:
- State Governance: Managing reasoning continuity across sessions.
- Architectural Determinism: Enforcing data integrity through atomic synchronization.
- Intelligence Provenance: Quantifying the maturity and reuse of knowledge assets.
- Team-Scale Knowledge Hub: Centralizing agentic memory across developers via a persistent SSE server.
SharedMemoryServer utilizes a Compute-then-Write pattern to eliminate database lock contention, ensuring high performance even with multiple simultaneous agents.
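As a rough illustration, the Compute-then-Write idea can be sketched as follows. This is a minimal sketch, not the project's actual code: `expensive_compute` and `save_memory` are illustrative stand-ins for the real embedding/conflict work and persistence layer.

```python
import sqlite3
import time


def expensive_compute(text: str) -> list[float]:
    # Stand-in for slow external work (e.g. an embeddings API call).
    time.sleep(0.01)
    return [float(len(text))]


def save_memory(db_path: str, text: str) -> None:
    # 1) Compute phase: all slow work happens with NO database
    #    connection open, so other writers are never blocked
    #    waiting on this agent.
    embedding = expensive_compute(text)

    # 2) Write phase: a short-lived atomic transaction commits the
    #    result, keeping the lock window as small as possible.
    con = sqlite3.connect(db_path)
    try:
        with con:  # BEGIN ... COMMIT; rolls back on exception
            con.execute(
                "CREATE TABLE IF NOT EXISTS memories (text TEXT, dim REAL)"
            )
            con.execute(
                "INSERT INTO memories VALUES (?, ?)", (text, embedding[0])
            )
    finally:
        con.close()
```

The key design point is that the database lock is only held for the brief insert, never for the duration of the slow compute.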
```mermaid
graph TD
    subgraph "Parallel AI Compute"
        A[Agent Request] --> B1[Gemini Embeddings]
        A --> B2[Conflict Detection]
    end
    B1 & B2 --> C{Orchestrator}
    subgraph "Atomic Sync"
        C --> D[SQLite Transaction]
        C --> E[Memory Bank MD]
    end
```
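The fan-out/fan-in shown in the diagram can be sketched with `asyncio`. The names here are assumptions for illustration only; the real orchestrator, Gemini embeddings client, and conflict detector are not reproduced.

```python
import asyncio


async def embed(text: str) -> list[float]:
    # Stand-in for a remote embeddings call (e.g. Gemini).
    await asyncio.sleep(0.01)
    return [float(len(text))]


async def detect_conflicts(text: str) -> bool:
    # Stand-in for conflict detection against existing memories.
    await asyncio.sleep(0.01)
    return False


async def orchestrate(text: str) -> dict:
    # Fan out the independent compute steps in parallel, then join
    # before handing the combined result to the atomic sync stage.
    embedding, conflict = await asyncio.gather(
        embed(text), detect_conflicts(text)
    )
    return {"embedding": embedding, "conflict": conflict}


result = asyncio.run(orchestrate("hello"))
# result == {'embedding': [5.0], 'conflict': False}
```

Because the two compute branches share no state, running them concurrently costs nothing in correctness and halves the wall-clock wait before the write phase.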
## Deep Dive into Architecture

Install the project in editable mode:

```bash
uv pip install -e .
```

- Mode A (SSE - Recommended): Centralized hub for team collaboration.

  ```bash
  uv run shared-memory --sse --port 8377
  ```

- Mode B (STDIO): Isolated local use.

  ```bash
  uv run shared-memory
  ```
Run the 16-test suite covering Chaos, System, and Unit scenarios:

```bash
uv run pytest tests -v
```

This project is dual-licensed to ensure both community openness and commercial sustainability:
- Open Source: Licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). Anyone who provides a network service using this software must make its source code available to the users of that service.
- Commercial: For SaaS use cases or proprietary integration without AGPL-3.0 obligations, a Commercial License is available.
Contributing: We welcome contributions! Please see our Contributing Guide and note that a Contributor License Agreement (CLA) is required for all pull requests.
Built to elevate AI Agents from "Simple Assistants" to "Systematic Thinking Assets".