v9.0.0: Realtime Storage

@aboodman aboodman released this 04 Mar 03:12

Summary

Replicache v9 has a rewritten storage system we call Realtime Storage which is much faster than the previous implementation.

For example, the median time to write a Replicache mutation that updates five dependent subscriptions is now ~3ms with up to 64MB of local storage on common hardware.

This performance allows developers to build highly responsive user interfaces that react instantaneously to input.

See the Realtime Storage and Performance sections below for more details.

💡 Note
These release notes document changes in Replicache between v8.0.3 (the last stable release) and v9. They are largely duplicative of the v9.0.0-beta.* notes.

🎁 Other Features

  • Replicache is faster, often dramatically so, on every benchmark as compared to v8 (see Performance below).
  • Replicache now has an experimental new poke method. This enables programmatically adding data directly to Replicache without having to go through pull.
  • The mutation type sent to replicache-push now includes a timestamp property, which is the original (client-local) timestamp the mutation occurred at.
  • The size of replicache.min.mjs.br was reduced 28%, down to ~18kb.

🧰 Fixes

  • Replicache is no longer slow when dev tools is open (#634)

⚠️ Breaking Changes

  • The name parameter is now required. This is used to differentiate Replicache data within the same origin. For security, provide a value that includes a unique ID for the current user. This ensures each user sees and modifies only their own local data. This has always been recommended but is now required. See "Multiple Users" below for more information.
  • The semantics of the clientID have changed, as compared to v8 with useMemstore set to false. See Realtime Storage, below.
  • Removed the pushAuth, getPushAuth, pullAuth, and getPullAuth features. They were deprecated in Replicache 6.4.0 and have been replaced with auth and getAuth.
  • The schemaVersion property of the Replicache class is now read-only. This field was previously mutable, but setting it had no effect.

Realtime Storage

In Replicache v8, there were two storage modes: memory, and persistent, controlled by the useMemstore constructor flag.

In persistent mode (useMemstore=false), each browser profile was a Replicache client, with a single clientID and storage area shared amongst all tabs over the lifetime of the profile. Accessing data directly from IDB is super slow — way too slow to back interactive experiences like mouse movement and typing — which forced developers to cache this data in memory on top of Replicache. This in turn created complexities keeping the in-memory and persistent state in sync. Additionally, sharing a single storage area among many tabs created complexities versioning this storage — you can’t change the schema of storage that other tabs are using!

In contrast, in memory mode (useMemstore=true), each unique instance of the Replicache class was its own client, with its own unique clientID and in-memory storage that only lasted the lifetime of that instance (usually a single page load). Being in memory, this mode was much faster and could back mouse movement and keystrokes, but was only suitable for small amounts of data since you wouldn’t want to re-download tons of data on every startup!

Starting in Replicache v9, useMemstore goes away and there is only one unified storage model that mostly combines the best attributes of the old memory mode and persistent mode: it’s as fast as the old memory mode (actually faster in most cases; see Performance), but it also persists every few seconds to storage so that data can be reused across instances.

Just like the old memory model, every instance of the Replicache class (again, every individual page load) is its own unique client with its own unique clientID. And conceptually each such client has its own distinct storage area, separate from all other clients.

💡 Note
Internally, we heavily deduplicate storage amongst clients, so that in reality each client only stores what is unique to it.

When a new client is instantiated, Replicache forks the storage from some previous instance with the same name and schemaVersion (see schema versioning, below), so that the net effect is almost as if the storage was shared between the two tabs.

Importantly, though, changes in one tab do not show up immediately in other tabs because they don’t completely share storage. When online, it will appear as if storage is shared because changes in one tab will be synced rapidly to other tabs via the server. But when offline, that syncing will stop occurring and the tabs will proceed independently (see offline, below).
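The fork-on-instantiate model above can be sketched as a toy in-memory simulation. All names here (newClient, Snapshot, etc.) are hypothetical and greatly simplified — this is not Replicache’s actual implementation, just an illustration of the semantics:

```typescript
// Toy model: each new client forks a copy of the most recent storage
// with the same name and schemaVersion, and gets a fresh clientID.
type Snapshot = Map<string, unknown>;

interface Client {
  clientID: string;
  name: string;
  schemaVersion: string;
  storage: Snapshot;
}

const clients: Client[] = [];
let nextClientID = 0;

function newClient(name: string, schemaVersion: string): Client {
  // Fork from the most recent client with a matching name and
  // schemaVersion; otherwise start from empty storage.
  const base = [...clients]
    .reverse()
    .find((c) => c.name === name && c.schemaVersion === schemaVersion);
  const client: Client = {
    clientID: `client-${nextClientID++}`, // every instance is a new client
    name,
    schemaVersion,
    storage: new Map(base ? base.storage : []), // a copy, not shared state
  };
  clients.push(client);
  return client;
}
```

Because the fork is a copy, a write in one client is not visible in another until it round-trips through the server — which is exactly the behavior described above.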

Versioning

A previous headache in persistent mode was versioning the local schema. We could not use the common strategy of migrating the schema on startup since other tabs might be using the storage at that moment. Also, writing migration code is difficult to do correctly and not a task our users reported being excited about.

With each client having its own logical storage, things are far simpler:

  • When you construct Replicache, optionally provide a schemaVersion which is the version of the data understood by the calling application code.
  • When you change the format of the client view in a backward incompatible way, change the schema version.
  • When Replicache forks to create a new storage area, it only forks from previous clients with the same schemaVersion. This does mean that when you change your schema version, clients will have to download a new copy of the data. But this is much more robust than trying to migrate data, and we think it’s the right tradeoff for almost all apps.
  • Other clients that haven’t yet upgraded proceed happily using the old schema in their own storage until they decide to upgrade.
  • Replicache also includes the schemaVersion in replicache-push and replicache-pull so that the server can respond appropriately.
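Since the schemaVersion is included in replicache-pull, a server can respond with the shape each client understands. The handler below is a hypothetical sketch (the field names are illustrative, and only the fields relevant here are shown), not a real Replicache API:

```typescript
// Hypothetical server-side sketch: choose the client view shape based
// on the schemaVersion sent in the replicache-pull request body.
interface PullRequestBody {
  cookie: unknown;
  schemaVersion: string;
  // other replicache-pull fields omitted for brevity
}

function clientViewFor(body: PullRequestBody): {todos: object[]} {
  switch (body.schemaVersion) {
    case "1":
      // Old clients keep receiving the old shape until they upgrade.
      return {todos: [{id: 1, title: "buy milk"}]};
    case "2":
      // Backward-incompatible new shape (illustrative change).
      return {todos: [{id: 1, text: "buy milk", tags: []}]};
    default:
      throw new Error(`unsupported schemaVersion: ${body.schemaVersion}`);
  }
}
```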

Offline Support

In the old persistent model, Replicache’s offline features were simple to understand: all the data was stored locally first in one profile-wide storage area, then synced to the server. Thus, Replicache apps would transition perfectly well between online and offline, tabs would appear to sync with each other while offline, and apps could even start up offline (provided developers used, e.g., a ServiceWorker to enable that).

Part of the tradeoff for getting faster performance is that Replicache’s offline-support is no longer quite as simple or robust.

Specifically:

  • As with v8, a Replicache tab that is running online can go offline and continue working smoothly for some time (~hours to days depending on frequency of writes).
  • As with v8, Replicache saves changes locally every few seconds. Offline tabs can be switched away from or closed, and the computer can even shut down or crash without changes being lost. Any work done offline will be pushed to the server the next time the app is online using Replicache’s normal conflict resolution. For more information on how this works see "Mutation Recovery" in the v9.0.0-beta.1 Release Notes.
  • Unlike v8, when offline, tabs do not sync with each other. Each proceeds independently until the network is restored. Note that this also means that if a tab is closed offline, then a new tab opened offline, the new tab will not see the changes from the first tab until the network is restored.
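The offline behavior in the bullets above can be sketched as a toy simulation: each tab applies writes locally and queues them, and tabs only converge once both have synced through the server. This is a hypothetical illustration; Replicache’s real push/pull and conflict-resolution logic is far richer:

```typescript
// Toy simulation: offline tabs queue mutations locally and only
// converge via the server once the network is restored.
type Mutation = {key: string; value: number};

class Tab {
  pending: Mutation[] = [];
  local = new Map<string, number>();

  write(key: string, value: number): void {
    this.local.set(key, value);      // applied locally right away
    this.pending.push({key, value}); // queued for the next push
  }

  sync(server: Map<string, number>): void {
    for (const m of this.pending) server.set(m.key, m.value); // push
    this.pending = [];
    this.local = new Map(server); // pull the merged server state
  }
}
```

While both tabs are offline, each sees only its own writes; after both sync (and pull again), they agree.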

We call this concept Local Acceleration, as opposed to Offline-First. In practice most modern web applications are not intended to be used for long periods offline, and can’t start up offline anyway. Local Acceleration captures the key benefits of offline-first for most applications — instant responsiveness and resilience against short periods of network loss — while still optimizing for online performance.

Multiple Users

Because Replicache reuses data persistently across tab sessions, it’s always been important to properly namespace data by user. If a single browser profile is shared by multiple users, or if a single user uses multiple user accounts within the same application, we would not want to read or modify data from account A when account B logs into the app.

Replicache provides the name constructor option for this purpose: each named Replicache instance within an origin has its own separate namespace. Previously, name was optional; given its security importance, it is required as of v9.

⚠️ Warning
Always provide a value for the name parameter that includes a unique user ID. This way each user will view and modify only their own data.
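One way to follow this advice is a small helper that derives the name from the logged-in user’s ID. The helper and the app name below are hypothetical examples, not part of the Replicache API:

```typescript
// Hypothetical helper: derive the Replicache `name` option from the
// logged-in user's ID so each user gets an isolated local namespace.
function replicacheNameFor(userID: string, appName: string = "todo-app"): string {
  if (!userID) throw new Error("name must include a unique user ID");
  return `${appName}/${userID}`;
}

// Usage sketch (name is the real required v9 option; the rest of the
// constructor call is omitted here):
//   const rep = new Replicache({name: replicacheNameFor(user.id), /* ... */});
```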

Compatibility

v9 will upgrade cleanly from earlier Replicache versions, including v9 betas.

However, it does not migrate any unsent mutations across versions. For example, if the user goes offline in v8, makes a change, and then returns to the app online after it has upgraded to v9, the mutations made while offline in v8 will be lost. We will begin migrating such mutations across major versions in our first General Availability release, which we plan for v10.

Transitioning to v9

Despite the above lengthy notes, the transition to v9 should be fairly seamless. Basically:

  • Remove useMemstore from your Replicache constructor if present.
  • Ensure you provide a name parameter to Replicache; it is now required (generally the ID of the logged-in user).
  • Do not use the clientID as an input when generating the diff for replicache-pull; use only the cookie. When Replicache forks to create a new client, it assigns a new clientID, so a server that keys diffs on clientID will see an unfamiliar clientID in many cases and will probably end up sending reset patches to every new client.
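A cookie-keyed diff needs no per-clientID state at all. The sketch below uses a global version number as the cookie (one common strategy; the storage shape and function names are hypothetical, not Replicache’s server API):

```typescript
// Hypothetical server sketch: compute the replicache-pull patch from
// the cookie (a global version number), never from the clientID.
interface Entry {
  key: string;
  value: unknown;
  version: number; // global version at which this entry last changed
  deleted: boolean; // soft-delete marker so deletions can be diffed
}

type PatchOp =
  | {op: "put"; key: string; value: unknown}
  | {op: "del"; key: string};

function computePull(entries: Entry[], cookie: number | null) {
  // A brand-new client (cookie null) gets everything; this is how a
  // freshly forked clientID is handled with no special casing.
  const since = cookie ?? 0;
  const patch: PatchOp[] = entries
    .filter((e) => e.version > since)
    .map((e) =>
      e.deleted
        ? {op: "del" as const, key: e.key}
        : {op: "put" as const, key: e.key, value: e.value},
    );
  const newCookie = entries.reduce((max, e) => Math.max(max, e.version), since);
  return {cookie: newCookie, patch};
}
```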

Performance

Metric                                v9      v8 (persistent)   v8 (mem)
Write/Sub/Read (1mb total storage)    2.8ms   72ms (+45x)       2.8ms (+0.0x)
Write/Sub/Read (16mb total storage)   3.2ms   267ms (+83x)      5.4ms (+1.7x)
Bulk Populate 1mb                     45ms    183ms (+4x)       108ms (+2.4x)
Scan 1mb                              3.1ms   77ms (+25x)       3.7ms (+1.2x)
Create Index (5mb total storage)      240ms   1150ms (+4.8x)    300ms (+1.25x)