
Epic: pluggable storage adapters for burn (server environments) #139

@willwashburn

Description

Why

Today every piece of session data burn captures lands in ~/.relayburn/ on the local filesystem: ledger.jsonl for turn records, content/{sessionId}.jsonl sidecars, ledger.idx / ledger.content.idx dedup hashes, an archive.sqlite read cache, and per-host JSON state (hwm.json, cursors.json, config.json, plans.json). The write path uses fs.appendFile with file-existence locks; the read path streams JSONL with no index.
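The current write path can be sketched roughly as follows. This is a minimal illustration of the fs.appendFile-plus-lock-file scheme described above, not the actual `@relayburn/ledger` code; `appendTurn` and the lock filename are hypothetical.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Append one turn record to ledger.jsonl, guarded by a file-existence lock.
// The 'wx' open flag fails if the lock file already exists -- a crude form
// of mutual exclusion, mirroring the file-existence locks described above.
export function appendTurn(home: string, record: object): void {
  const lock = path.join(home, "ledger.lock");
  const fd = fs.openSync(lock, "wx"); // throws if another writer holds the lock
  try {
    fs.appendFileSync(
      path.join(home, "ledger.jsonl"),
      JSON.stringify(record) + "\n",
    );
  } finally {
    fs.closeSync(fd);
    fs.unlinkSync(lock); // release the lock
  }
}
```

The same append-only JSONL pattern applies to the `content/{sessionId}.jsonl` sidecars; reads then stream these files line by line with no index, which is exactly the seam the adapter abstraction below replaces.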

That works for a single user on a laptop. It breaks when Claude Code or Codex runs in a server environment — ephemeral pods, CI runners, dev sandboxes — where the local filesystem disappears at the end of the run, and where the natural next step is to aggregate usage across many hosts into one durable store.

Goal

Introduce a StorageAdapter seam in @relayburn/ledger so the ledger/content/lock layers become interchangeable, and ship four adapters:

  • file (default) — wraps the existing JSONL + sidecar implementation. Zero behavior change for current CLI users.
  • sqlite — single-file DB. Replaces the JSONL+archive duo for users with a durable volume.
  • postgres — server-shared. Multi-host aggregation is a first-class goal: N hosts ingesting the same Claude session converge to one canonical row each via content-addressed dedup keys (turnIdHash, etc.) declared as PRIMARY KEY.
  • http — points at a new @relayburn/server package that holds one of the durable adapters internally.
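The multi-host convergence property of the postgres adapter can be modeled in memory: because the content-addressed key (`turnIdHash`) is the primary key, a duplicate insert is a no-op (the SQL equivalent of `ON CONFLICT DO NOTHING`), so N hosts ingesting the same session end up with one canonical row per turn. The class below is an illustrative stand-in, not the adapter itself.

```typescript
// In-memory model of primary-key dedup: the first writer wins,
// later inserts of the same turnIdHash are silently dropped.
export class DedupStore {
  private rows = new Map<string, unknown>();

  // Returns true if the row was inserted, false on a key conflict.
  insert(turnIdHash: string, payload: unknown): boolean {
    if (this.rows.has(turnIdHash)) return false; // conflict: no-op
    this.rows.set(turnIdHash, payload);
    return true;
  }

  size(): number {
    return this.rows.size;
  }
}
```

This is why no cross-host replication protocol is needed (see Non-goals below): convergence falls out of idempotent writes against the shared store.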

Adapter selection is env-driven (RELAYBURN_STORAGE=file|sqlite|postgres|http).
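A rough shape for the seam and its env-driven selection might look like this. The method names on `StorageAdapter` and the `TurnRecord` fields are assumptions for illustration; only the adapter names and the `RELAYBURN_STORAGE` variable come from this issue.

```typescript
// Hypothetical record shape; turnIdHash is the content-addressed dedup key.
export interface TurnRecord {
  turnIdHash: string;
  sessionId: string;
  payload: unknown;
}

// Hypothetical seam: ledger/content/lock operations become interchangeable.
export interface StorageAdapter {
  appendTurn(record: TurnRecord): Promise<void>;
  appendContent(sessionId: string, line: string): Promise<void>;
  readTurns(sessionId: string): Promise<TurnRecord[]>;
}

export type AdapterName = "file" | "sqlite" | "postgres" | "http";

// Env-driven selection per RELAYBURN_STORAGE=file|sqlite|postgres|http.
export function selectAdapter(
  env: Record<string, string | undefined> = process.env,
): AdapterName {
  const v = env.RELAYBURN_STORAGE ?? "file"; // file stays the default
  if (v === "file" || v === "sqlite" || v === "postgres" || v === "http") {
    return v;
  }
  throw new Error(`unknown RELAYBURN_STORAGE value: ${v}`);
}
```

Defaulting to `file` when the variable is unset is what keeps the zero-behavior-change guarantee for current CLI users.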

Scope boundary: shared vs host-local state

Two classes of state, treated differently:

  • Shared session data (turns, content, compactions, relationships, tool result events, user turns, stamps) → goes through the adapter, aggregates across hosts.
  • Host-local state (cursors.json, hwm.json, config.json, plans.json, models.dev.json) → stays as filesystem JSON in RELAYBURN_HOME regardless of adapter. Cursors reference per-host inode/mtime offsets and are meaningless on another machine.
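The split above amounts to a small routing rule: host-local JSON files never go through the adapter, whatever `RELAYBURN_STORAGE` is set to. A sketch (the helper is hypothetical; the file names are the ones listed above):

```typescript
// Files that stay as filesystem JSON in RELAYBURN_HOME regardless of the
// active adapter -- cursors hold per-host inode/mtime offsets and are
// meaningless on another machine.
const HOST_LOCAL = new Set([
  "cursors.json",
  "hwm.json",
  "config.json",
  "plans.json",
  "models.dev.json",
]);

// Shared session data (turns, content, stamps, ...) routes through the
// adapter; everything in HOST_LOCAL bypasses it.
export function isHostLocal(file: string): boolean {
  return HOST_LOCAL.has(file);
}
```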

Non-goals

  • Real-time push/replication between hosts (convergence happens via the shared store on write).
  • Automated migration of existing ~/.relayburn data into a remote DB (a separate import/export tool can come later).
  • Replacing the JSONL default for local CLI users.

Phases (sub-issues)

Each phase is independently shippable. Phase 1 is the prerequisite for everything else; phases 2–4 can be tackled in any order after that.


Labels: enhancement (New feature or request)
