ZeroIPC transforms shared memory from passive storage into an active computational substrate, enabling both imperative and functional programming paradigms across process boundaries. It provides zero-copy data sharing with sophisticated concurrency primitives, reactive streams, and codata structures - bringing modern programming abstractions to inter-process communication.
- 🚀 Zero-Copy Performance - Direct memory access without serialization
- 🌐 Language Independence - C, C++, and Python implementations, not bindings
- 🔒 Lock-Free Concurrency - Atomic operations and CAS-based algorithms
- 📦 Minimal Metadata - Only store name/offset/size for true flexibility
- 🦆 Duck Typing - Runtime type specification (Python) or compile-time templates (C++)
- 🎯 Simple Discovery - Named structures for easy cross-process lookup
- ⚡ Reactive Programming - Functional reactive streams with operators
- 🔮 Codata Support - Futures, lazy evaluation, and infinite streams
- 🚪 CSP Concurrency - Channels for synchronous message passing
- 🛠️ CLI Tools - Comprehensive inspection and debugging utilities
C++ Producer:

```cpp
#include <zeroipc/memory.h>
#include <zeroipc/array.h>

// Create shared memory
zeroipc::Memory mem("/sensor_data", 10*1024*1024); // 10MB

// Create typed array
zeroipc::Array<float> temps(mem, "temperature", 1000);
temps[0] = 23.5f;
```
Python Consumer:

```python
from zeroipc import Memory, Array
import numpy as np

# Open same shared memory
mem = Memory("/sensor_data")

# Read with duck typing - user specifies type
temps = Array(mem, "temperature", dtype=np.float32)
print(temps[0])  # 23.5
```
Process A - Sensor Data Producer:

```cpp
#include <zeroipc/memory.h>
#include <zeroipc/stream.h>
#include <thread>

using namespace std::chrono_literals; // needed for the 100ms literal

zeroipc::Memory mem("/sensors", 10*1024*1024);
zeroipc::Stream<double> temperature(mem, "temp_stream", 1000);

while (running) {
    double temp = read_sensor();
    temperature.emit(temp);
    std::this_thread::sleep_for(100ms);
}
```
Process B - Stream Processing:

```cpp
zeroipc::Memory mem("/sensors");
zeroipc::Stream<double> temperature(mem, "temp_stream");

// Create derived streams with functional transformations
auto fahrenheit = temperature.map(mem, "temp_f",
    [](double c) { return c * 9/5 + 32; });

auto warnings = fahrenheit.filter(mem, "warnings",
    [](double f) { return f > 100.0; });

// Subscribe to processed stream
warnings.subscribe([](double high_temp) {
    send_alert("High temperature: " + std::to_string(high_temp));
});
```
Process A - Computation:

```cpp
#include <zeroipc/future.h>

zeroipc::Memory mem("/compute", 10*1024*1024);
zeroipc::Future<double> result(mem, "expensive_calc");

// Perform expensive computation
double value = run_simulation();
result.set_value(value);
```
Process B - Waiting for Result:

```cpp
zeroipc::Memory mem("/compute");
zeroipc::Future<double> result(mem, "expensive_calc", true);

// Wait with timeout
if (auto value = result.get_for(std::chrono::seconds(5))) {
    process_result(*value);
} else {
    handle_timeout();
}
```
Process A - Producer:

```cpp
#include <zeroipc/channel.h>

zeroipc::Memory mem("/messages", 10*1024*1024);
zeroipc::Channel<Message> ch(mem, "commands", 100); // buffered

Message msg{.type = CMD_START, .data = 42};
ch.send(msg); // Blocks if buffer full
```
Process B - Consumer:

```cpp
zeroipc::Memory mem("/messages");
zeroipc::Channel<Message> ch(mem, "commands");

while (auto msg = ch.receive()) {
    process_command(*msg);
}
```
All implementations follow the same binary format defined in SPECIFICATION.md:
```
[Table Header][Table Entries][Data Structure 1][Data Structure 2]...
```
- Table Header: Magic number, version, entry count, next offset
- Table Entry: Name (32 bytes), offset (4 bytes), size (4 bytes); both layouts are sketched after this list
- Data Structures: Raw binary data, layout determined by structure type
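For orientation, the header and entry layouts above correspond roughly to the following declarations. The field names here are illustrative, not taken from the source; SPECIFICATION.md remains authoritative for exact types and padding.

```cpp
// Illustrative sketch of the table layout described above; field names and
// exact types are assumptions, and SPECIFICATION.md is the authoritative source.
#include <cstdint>

struct TableEntry {
    char     name[32];     // null-terminated structure name, used for discovery
    uint32_t offset;       // byte offset of the structure within the segment
    uint32_t size;         // number of bytes the structure occupies
};

struct TableHeader {
    uint32_t magic;        // identifies the segment as ZeroIPC
    uint32_t version;      // binary format version
    uint32_t entry_count;  // number of entries currently in use
    uint32_t next_offset;  // where the next structure will be allocated
};
```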
Unlike traditional IPC systems, ZeroIPC stores NO type information:
- Name: For discovery
- Offset: Where data starts
- Size: How much memory is used
This enables true language independence:
- Both languages can create: Python and C++ can both allocate new structures
- Both languages can read: Either can discover and access existing structures
- Type safety per language: C++ uses templates, Python uses NumPy dtypes
- ✅ Array - Fixed-size contiguous storage with atomic operations
- ✅ Queue - Lock-free MPMC circular buffer using CAS (see the sketch after this list)
- ✅ Stack - Lock-free LIFO with ABA-safe operations
- ✅ Map - Lock-free hash map with linear probing
- ✅ Set - Lock-free hash set for unique elements
- ✅ Pool - Object pool with free list management
- ✅ Ring - High-performance ring buffer for streaming
- ✅ Table - Metadata registry for dynamic discovery
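As a taste of the container API, here is a minimal queue sketch. It assumes Queue follows the same (memory, name, capacity) constructor pattern as the Array shown earlier and that pop() returns an optional; check the API reference for the exact signatures.

```cpp
// Minimal sketch, assuming Queue mirrors the (memory, name, capacity)
// pattern used by Array above; the push/pop signatures are assumptions.
#include <zeroipc/memory.h>
#include <zeroipc/queue.h>

zeroipc::Memory mem("/tasks", 1024 * 1024);
zeroipc::Queue<int> work(mem, "work_items", 256);

work.push(42);                 // CAS-based enqueue, safe from any process
if (auto item = work.pop()) {  // an empty queue yields an empty optional
    handle_item(*item);
}
```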
- ✅ Future - Asynchronous computation results across processes
- ✅ Lazy - Deferred computations with automatic memoization (sketched after this list)
- ✅ Stream - Reactive data flows with FRP operators (map, filter, fold)
- ✅ Channel - CSP-style synchronous/buffered message passing
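The Lazy structure referenced above can be pictured as follows; the (memory, name, thunk) constructor and force() call are assumptions about the API, sketched for illustration only.

```cpp
// Sketch of cross-process lazy evaluation; the constructor shape and
// force() are assumptions, not verbatim API.
#include <zeroipc/memory.h>
#include <zeroipc/lazy.h>

zeroipc::Memory mem("/compute", 1024 * 1024);
zeroipc::Lazy<double> estimate(mem, "pi_estimate",
    [] { return run_monte_carlo(1'000'000); });

// The first caller runs the computation; later callers, in any process,
// read the memoized result.
double v = estimate.force();
```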
Traditional data structures store values in space. Codata structures represent computations over time. This enables:
- Cross-process async/await - Future results shared between processes
- Lazy evaluation - Expensive computations cached and shared
- Reactive pipelines - Event-driven processing with backpressure
- CSP concurrency - Go-style channels for structured communication
See the Codata Guide for a detailed explanation.
C implementation:
- Pure C99 for maximum portability
- Zero dependencies beyond POSIX
- Static library (libzeroipc.a)
- Minimal overhead

C++ implementation:
- Template-based for zero overhead
- Header-only library
- Modern C++23 features
- RAII resource management

Python implementation:
- Pure Python, no compilation required
- NumPy integration for performance
- Duck typing for flexibility
- mmap for direct memory access
C:

```bash
cd c
make       # Build library
make test  # Run tests
```

C++:

```bash
cd cpp
cmake -B build .
cmake --build build
ctest --test-dir build
```

Python:

```bash
cd python
pip install -e .
python -m pytest tests/
```

Cross-language interop tests:

```bash
cd interop
./test_interop.sh          # C++ writes, Python reads
./test_reverse_interop.sh  # Python writes, C++ reads
```
- Language Equality - No language is "primary", all are first-class
- Minimal Overhead - Table stores only what's absolutely necessary
- User Responsibility - Users ensure type consistency across languages
- Zero Dependencies - Each implementation stands alone
- Binary Compatibility - All languages read/write the same format
- Array Access: Identical to native arrays (zero overhead)
- Queue Operations: Lock-free with atomic CAS
- Memory Allocation: O(1) bump allocation (illustrated after this list)
- Discovery: O(n) where n ≤ max_entries
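The O(1) allocation figure follows from bump allocation: the table keeps a single cursor and each allocation simply advances it. Conceptually, it works like the illustration below (this is a sketch, not ZeroIPC's actual internals):

```cpp
// Conceptual illustration of O(1) bump allocation, not ZeroIPC's internals.
#include <atomic>
#include <cstdint>

std::atomic<uint32_t> next_offset{64};  // cursor starts past the table header

uint32_t allocate(uint32_t size) {
    // fetch_add returns the previous cursor value, which becomes the new
    // structure's offset; one atomic operation, no locks.
    return next_offset.fetch_add(size);
}
```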
ZeroIPC excels at:
- ✅ High-frequency sensor data sharing
- ✅ Multi-process simulations
- ✅ Real-time analytics pipelines
- ✅ Cross-language scientific computing
- ✅ Zero-copy producer-consumer patterns
Not designed for:
- ❌ General-purpose memory allocation
- ❌ Network-distributed systems
- ❌ Persistent storage
- ❌ Garbage collection
Comprehensive tool for inspecting and debugging shared memory:
```bash
# Build the tool
cd cpp && cmake -B build . && cmake --build build
./build/tools/zeroipc-inspect

# List all ZeroIPC shared memory segments
./zeroipc-inspect list

# Show detailed information about a segment
./zeroipc-inspect show /sensor_data

# Monitor a stream in real-time
./zeroipc-inspect monitor /sensors temperature_stream

# Dump raw memory contents
./zeroipc-inspect dump /compute --offset 0 --size 1024
```
- Codata Guide - Understanding codata and computational structures
- API Reference - Complete API documentation
- Architecture - System design and memory layout
- Design Patterns - Cross-process communication patterns
- CLI Tools - Command-line utilities documentation
- Examples - Complete working examples
- Design Philosophy - Core principles and trade-offs
- Binary Specification - Wire format all implementations follow
- C++ Documentation - C++ specific details
- Python Documentation - Python specific details
Contributions welcome! When adding new language implementations:
- Follow the binary specification exactly
- Create a new directory for your language
- Implement Memory, Table, and Array as a minimum
- Add cross-language tests in `interop/`
ZeroIPC brings functional programming paradigms to IPC:
- Lazy Evaluation: Defer expensive computations until needed
- Memoization: Automatic caching of computation results
- Stream Combinators: map, filter, fold, take, skip, window
- Monadic Composition: Chain asynchronous operations with Futures (sketched below)
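A sketch of what that chaining could look like; the then() combinator and its (memory, name, function) signature are assumptions modeled on the stream map() call shown earlier, not verbatim API:

```cpp
// Sketch only: then() and its signature are assumptions, modeled on
// the stream.map() example earlier in this README.
#include <zeroipc/memory.h>
#include <zeroipc/future.h>

zeroipc::Memory mem("/compute", 10 * 1024 * 1024);
zeroipc::Future<double> raw(mem, "raw_result");

auto calibrated = raw.then(mem, "calibrated",
    [](double v) { return v * 0.5 - 10.0; });  // runs once raw resolves
```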
ZeroIPC also supports the classic cross-process communication patterns:
- Producer-Consumer: Lock-free queues with backpressure
- Pub-Sub: Multiple consumers on reactive streams
- Request-Response: Futures for RPC-like patterns
- Pipeline: Stream transformations across processes
- Fork-Join: Parallel computation with result aggregation
The boundary between data and code continues to blur:
- Persistent Data Structures: Immutable structures with structural sharing
- Software Transactional Memory: ACID transactions in shared memory
- Dataflow Programming: Computational graphs in shared memory
- Actors: Message-passing actors with mailboxes
- Continuations: Suspended computations for coroutines
Licensed under the MIT License.