The Redis of local-first software.
A background OS daemon that provides a centralized, conflict-free replicated data type (CRDT) engine via Unix domain sockets. Any process on your machine — Rust, Python, Node, shell scripts — can share live, conflict-free state with zero cloud, zero latency, and zero data loss.
HexSync is a persistent background daemon that holds CRDT documents in memory, exposes them via a Unix socket, and lets any local process read, write, or subscribe to live updates — conflict-free, by math.
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Neovim │ │ Python │ │ Node.js │
│ plugin │ │ script │ │ dashboard │
└──────┬──────┘ └──────┬──────┘ └──────┬──────┘
│ │ │
└───────────────────┼───────────────────┘
│ Unix Socket
┌──────▼──────┐
│ hexsync │ ← this project
│ daemon │
│ │
│ Loro CRDT │ ← conflict-free math
│ WAL on │ ← survives reboots
│ NVMe │
└─────────────┘
Think of it as Redis, but instead of a key-value store, every value is a CRDT document that multiple writers can update simultaneously — and it will always converge to a consistent state, guaranteed by mathematics, not by locks.
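To see why "converge by math" works, here is a toy last-writer-wins (LWW) map in Python. This is an illustration only, not Loro and not HexSync's code: each key carries a `(timestamp, writer_id)` version tag, and merging keeps the greater tag. Because that comparison is commutative and associative, replicas agree regardless of merge order.

```python
# Toy LWW map — illustrative only; HexSync uses the Loro CRDT engine.
def lww_set(state, key, value, timestamp, writer_id):
    """Record a write tagged with a (timestamp, writer_id) version."""
    new = dict(state)
    new[key] = (timestamp, writer_id, value)
    return new

def lww_merge(a, b):
    """Merge two replicas: per key, keep the entry with the greater tag."""
    merged = dict(a)
    for key, entry in b.items():
        if key not in merged or entry[:2] > merged[key][:2]:
            merged[key] = entry
    return merged

# Two replicas write the same key concurrently from the same base state.
base = {}
replica_a = lww_set(base, "mood", "shipping", timestamp=1, writer_id="a")
replica_b = lww_set(base, "mood", "debugging", timestamp=2, writer_id="b")

# Merge order does not matter — both sides converge to the same state.
assert lww_merge(replica_a, replica_b) == lww_merge(replica_b, replica_a)
```

A real CRDT engine like Loro applies the same principle to richer structures (maps, lists, text) with per-operation metadata instead of a single timestamp.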
Every existing CRDT project assumes your nodes are either servers in a datacenter or browser tabs on the internet. None of them treat the single developer's machine as the sync domain.
| Tool | Problem |
|---|---|
| Redis | Not conflict-free. Last write wins. |
| SQLite | Not real-time. No pub/sub. |
| Yjs / Automerge | Libraries only. You build transport. No IPC daemon. |
| Cloud sync (Figma, Notion) | Requires internet. Your data leaves your machine. |
| HexSync | Local. Conflict-free. Real-time. Persistent. Zero cloud. |
- Conflict-free by default — powered by Loro, a pure-Rust CRDT engine. Concurrent writes from multiple processes always converge to the same state.
- Real-time pub/sub — processes subscribe to documents and receive updates the moment they're merged. No polling.
- Persistent — every update is appended to a write-ahead log (WAL). State survives crashes and reboots.
- Language-agnostic — any process that can open a Unix socket can talk to HexSync. The protocol is length-prefixed bincode over a socket.
- Tiny footprint — 2.3MB RAM. ~6MB binary. Runs as a systemd user service.
- `hx` CLI — read, write, and watch documents from any terminal.
- Rust 1.75+ (`curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`)
- Linux (tested on Arch Linux / systemd)
- Linux: Fully supported (systemd user service + Unix socket)
- macOS: Not fully tested; running manually should work (no systemd)
- Windows: Not supported natively; the Linux build should run under WSL
# Clone
git clone https://github.com/Andrew-velox/hexsync
cd hexsync
# Build release binaries
cargo build --release --workspace
# Install to system
sudo cp target/release/hexsync /usr/local/bin/hexsync
sudo cp target/release/hx /usr/local/bin/hx

# Install the systemd user service
mkdir -p ~/.config/systemd/user
cp scripts/hexsync.service ~/.config/systemd/user/hexsync.service
systemctl --user daemon-reload
systemctl --user enable --now hexsync
# Verify
hx status

Stop or disable the service:
# Stop the service (until next login or manual start)
systemctl --user stop hexsync
# Disable it so it won't start on login
systemctl --user disable hexsync
# Or stop + disable in one step
systemctl --user disable --now hexsync
# Re-enable and start
systemctl --user enable --now hexsync

Output:
✓ hexsync daemon is running
socket: /tmp/hexsync.sock
RUST_LOG=hexsync=info hexsync

The `hx` binary is your terminal interface to any running HexSync daemon.
# Check daemon is running
hx status
# Write a value to a document
hx set <doc> <key> <value>
hx set work project "hexsync"
hx set work mood "shipping"
# Read a document
hx get <doc>
hx get work
# doc: work
# ─────────────────────
# mood = "shipping"
# project = "hexsync"
# ─────────────────────
# 175 bytes total
# Watch a document for live updates (blocks, Ctrl+C to stop)
hx watch <doc>
hx watch work
# watching 'work' — Ctrl+C to stop
# ─────────────────────────────────────
# #1 [work] project="hexsync" mood="shipping" (175 bytes)
# #2 [work] status="live" (248 bytes)
# Use a custom socket path
hx --socket /run/user/1000/hexsync.sock status

Open two terminals on the same machine.
Terminal A:

hx watch demo

Terminal B:

hx set demo status "live"
hx set demo owner "andrew"

Terminal A output:
watching 'demo' — Ctrl+C to stop
─────────────────────────────────────
#1 [demo] status="live" (95 bytes)
#2 [demo] owner="andrew" status="live" (169 bytes)
Add to your project:
[dependencies]
hexsync-client = { git = "https://github.com/Andrew-velox/hexsync" }

use hexsync_client::HexSyncClient;
use loro::{ExportMode, LoroDoc, LoroValue, ValueOrContainer};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let mut client = HexSyncClient::connect("/tmp/hexsync.sock").await?;
// Build a Loro document and push it (merge with existing state first)
let doc = LoroDoc::new();
if let Ok(existing) = client.get("my-doc").await {
doc.import(&existing)?;
}
doc.get_map("data").insert("name", "hexsync")?;
doc.commit();
let bytes = doc.export(ExportMode::all_updates())?;
client.update("my-doc", bytes).await?;
// Read it back
let state = client.get("my-doc").await?;
let doc = LoroDoc::new();
doc.import(&state)?;
let map = doc.get_map("data");
let name = match map.get("name") {
Some(ValueOrContainer::Value(LoroValue::String(s))) => s.as_str().to_string(),
Some(ValueOrContainer::Value(v)) => format!("{:?}", v),
Some(ValueOrContainer::Container(_)) => "<container>".to_string(),
None => "<missing>".to_string(),
};
println!("name = {}", name);
Ok(())
}

HexSync uses a dead-simple length-prefixed binary protocol over a Unix socket. Every message is:
[4 bytes: big-endian message length][N bytes: bincode-serialized payload]
enum Request {
Get { doc_id: String },
Update { doc_id: String, payload: Vec<u8> },
Subscribe { doc_id: String },
Unsubscribe { doc_id: String },
}

enum Response {
Ok,
State { doc_id: String, payload: Vec<u8> },
Update { doc_id: String, payload: Vec<u8> },
Error { message: String },
}

The payload is always raw Loro export bytes. Import them into a `LoroDoc` to read the CRDT state.
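A non-Rust client only needs to reproduce the framing and the request encoding. The sketch below is a Python guess at the wire bytes: the 4-byte big-endian length prefix is stated above, but the bincode layout (enum variant index as a little-endian u32, `String` as a little-endian u64 length plus UTF-8 bytes) assumes bincode 1.x defaults and should be verified against the daemon's actual serializer.

```python
# Sketch of the wire format from Python. The bincode layout here is an
# assumption (bincode 1.x defaults), not confirmed by the HexSync source.
import socket
import struct

def encode_get(doc_id: str) -> bytes:
    """Assumed bincode encoding of Request::Get { doc_id }."""
    name = doc_id.encode("utf-8")
    variant = struct.pack("<I", 0)              # Get is the first variant
    string = struct.pack("<Q", len(name)) + name
    return variant + string

def frame(payload: bytes) -> bytes:
    """Length-prefix a message: 4-byte big-endian length, then payload."""
    return struct.pack(">I", len(payload)) + payload

def read_frame(sock: socket.socket) -> bytes:
    """Read one length-prefixed message from a connected socket."""
    (length,) = struct.unpack(">I", sock.recv(4))
    body = b""
    while len(body) < length:
        body += sock.recv(length - len(body))
    return body

msg = frame(encode_get("work"))
# 4-byte length + 4-byte variant tag + 8-byte string length + 4 UTF-8 bytes
assert msg[:4] == struct.pack(">I", 16)
```

With a daemon listening, you would `connect()` an `AF_UNIX` socket to `/tmp/hexsync.sock`, `sendall(msg)`, then `read_frame()` the `State` response and feed its payload into a Loro document.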
1. Replay WAL → rebuild in-memory state from last run
2. Open WAL → ready to append new entries
3. Create Engine → Arc<RwLock<HashMap<String, DocEntry>>>
4. Load replayed entries into engine
5. Start Unix socket listener
6. Accept connections → spawn tokio task per client
client.update(doc_id, bytes)
→ handler receives request
→ engine.apply_update()
→ WAL.append(doc_id, bytes) ← disk first
→ LoroDoc.import(bytes) ← CRDT merge
→ broadcast to subscribers ← fan-out
→ Response::Ok
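The "disk first" ordering above can be sketched as a toy WAL in Python. The record layout here (`[u32 doc_id length][doc_id][u32 payload length][payload]`) is hypothetical, not HexSync's actual on-disk format; the point is the pattern: fsync the append before touching in-memory state, and replay the log in order at startup.

```python
# Toy WAL — hypothetical record layout, illustrating append-then-apply.
import os
import struct

def wal_append(path: str, doc_id: str, payload: bytes) -> None:
    """Append one update record and fsync before acknowledging."""
    name = doc_id.encode("utf-8")
    record = struct.pack("<I", len(name)) + name
    record += struct.pack("<I", len(payload)) + payload
    with open(path, "ab") as f:
        f.write(record)
        f.flush()
        os.fsync(f.fileno())

def wal_replay(path: str):
    """Yield (doc_id, payload) records in append order, e.g. at startup."""
    with open(path, "rb") as f:
        data = f.read()
    offset = 0
    while offset < len(data):
        (n,) = struct.unpack_from("<I", data, offset); offset += 4
        doc_id = data[offset:offset + n].decode("utf-8"); offset += n
        (m,) = struct.unpack_from("<I", data, offset); offset += 4
        payload = data[offset:offset + m]; offset += m
        yield doc_id, payload
```

Because each record is appended before the CRDT merge runs, a crash mid-merge loses nothing: the update replays on the next boot.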
| Environment Variable | Default | Description |
|---|---|---|
| `HEXSYNC_SOCKET` | `/tmp/hexsync.sock` | Unix socket path |
| `HEXSYNC_WAL` | `/tmp/hexsync.wal` | Write-ahead log path |
| `RUST_LOG` | `hexsync=info` | Log level (`info`, `debug`, `warn`) |
Example:
HEXSYNC_SOCKET=/run/user/1000/hexsync.sock \
HEXSYNC_WAL=~/.local/share/hexsync/hexsync.wal \
RUST_LOG=hexsync=debug \
hexsync

# Add to ~/.bashrc
proj() {
hx set context project "$1"
hx set context cwd "$(pwd)"
echo "→ switched to $1"
}
# Terminal 1
proj hexsync
# Terminal 2, 3, 4 — all update instantly
hx watch context

# Add to ~/.bashrc
note() { hx set notes "note-$(date +%s)" "$*"; }
notes() { hx get notes; }
note "fix WAL rotation"
note "buy oat milk"
notes

# scripts/monitor.sh
while true; do
hx set sysmon cpu "$(top -bn1 | grep 'Cpu(s)' | awk '{print $2}')"
hx set sysmon ram "$(free -m | awk 'NR==2{print $3"/"$2"MB"}')"
sleep 1
done
# Watch from anywhere
hx watch sysmon

Any process that subscribes to a document gets pushed the new state the moment it changes. Change your dev config once — all services update without restarting.
# Start the daemon in Terminal 1
RUST_LOG=hexsync=info cargo run
# Run all daemon-dependent tests in Terminal 2
cargo test --test basic_ipc -- --nocapture
cargo test --test stress_test -- --nocapture
cargo test --test subscribe_test -- --nocapture
# WAL test manages its own daemon — run standalone
cargo test --test wal_test -- --nocapture

| Test | What it proves | Time |
|---|---|---|
| `basic_ipc` | CRDT round-trip: write → merge → read | ~0s |
| `stress_test` | 1000 concurrent updates, no crash, no data loss | ~14s |
| `subscribe_test` | 10/10 real-time updates delivered to subscriber | ~0.3s |
| `wal_test` | State byte-identical after daemon kill + restart | ~0.4s |
- Phase 1 — Unix socket IPC skeleton
- Phase 2 — Loro CRDT engine integration
- Phase 3 — Concurrent writers + pub/sub broadcast
- Phase 4 — Write-ahead log persistence
- Phase 5 — systemd service, `hx` CLI, release binary
- Phase 6 — LAN peer sync via TCP + mDNS discovery
- Phase 7 — WAL rotation + snapshot compaction
- Phase 8 — Python / Node client libraries
- Phase 9 — TLS peer authentication
| Component | Crate | Why |
|---|---|---|
| Async runtime | `tokio` | Multi-connection handling without blocking |
| CRDT engine | `loro` | Pure Rust, fast, deep JSON-like state |
| Serialization | `bincode` | Compact binary, fast, zero-copy friendly |
| IPC transport | `std::os::unix` | Native Unix sockets, no overhead |
| Persistence | custom WAL | Append-only, crash-safe, replay on boot |
| Logging | `tracing` | Structured, async-aware |
| CLI | `clap` | Derive-based, zero boilerplate |
This project is in active development. Issues and PRs welcome.
# Dev loop — auto-restart on code changes
cargo install cargo-watch
cargo watch -x run
# Lint
cargo clippy --workspace -- -D warnings
# Format
cargo fmt --all

"Your machine. Your data. No cloud."