Multiple frontends per backend #384

Open
pvh opened this issue May 19, 2021 · 0 comments

Problem statement

It would be convenient to support multiple frontends on a single Automerge backend. The most obvious reason would be to reduce overhead in the common situation of having the same document loaded in several locations, such as in multiple browser tabs or in multiple locations within the same application.

State of the art & basic proposal

Currently, the communication between an Automerge Frontend and Backend looks like this:

  [frontend, change] = Frontend.change(frontend, (doc) => {/* change something */})
  [backend, patch] = Backend.applyLocalChange(backend, change)
  frontend = Frontend.applyPatch(frontend, patch)

In the backend, Automerge uses the contents of the change to create the patch which gets applied on the frontend, but there is no API to re-create the patch later, or to create other patches.

If we assume that all frontends are created at the same time, progress in lockstep, and that there are no race conditions, then there's no reason we couldn't apply a patch to several frontends at once:

  [newFeZero, change] = Frontend.change(frontends[0], (doc) => {/* change something */})
  frontends[0] = newFeZero
  [backend, patch] = Backend.applyLocalChange(backend, change)
  frontends = frontends.map((frontend) => Frontend.applyPatch(frontend, patch))

Of course, in the real world this is too onerous a constraint, and we would like frontends to be able to come and go as they please.

What I propose is to follow the model of the syncMessage API and decouple the application of a localChange from the generation of a patch, something like this:

  [newFeZero, change] = Frontend.change(frontends[0], (doc) => {/* change something */})
  frontends[0] = newFeZero
  [backend] = Backend.applyLocalChange(backend, change)
  frontends = frontends.map((frontend) => {
    currentFrontendState = Frontend.getCurrentState(frontend)
    patch = Backend.getPatch(backend, currentFrontendState)
    return Frontend.applyPatch(frontend, patch)
  })

(There are a few ways we could consider designing this API -- for example, following the example of syncState, getPatch() could update the syncState locally and save a round-trip. We could also consider how a frontend might communicate with the backend if and when it actually wanted patches. This seems likely to be an application-design concern.)

A proposed implementation approach with comments & open questions

Martin suspects that the combination of an actorId & sequence number should be enough information to produce an arbitrary patch, and we could pass those as new optional arguments to getPatch(). Inside getPatch() we would calculate the diffs required to bring the frontend up to date.
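
To make that concrete, here is a minimal sketch of what such a call might look like. The optional actorId/seq arguments to Backend.getPatch() are proposed, not part of the current API, and lastSeqSeenByFrontend is a value the application would have to track itself:

  // Hypothetical sketch of the proposed getPatch() arguments
  actorId = Frontend.getActorId(frontend)         // exists in the current API
  seq = lastSeqSeenByFrontend                     // assumed to be tracked by the application
  patch = Backend.getPatch(backend, actorId, seq) // proposed optional arguments
  frontend = Frontend.applyPatch(frontend, patch)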

Moving to a model like this would have a number of advantages. For example, we have had an ongoing conversation about when and how to emit patches to the frontend during synchronization; we certainly wouldn't want to force a browser to render every single intermediate state when replaying the history of a long document. This change would put the decision in the developer's hands: emit the patch when you decide you want one.
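
For instance, under the proposed model a long sync could apply every remote change to the backend without producing any patches, and the application would ask for a single catch-up patch only when it is ready to render. A rough sketch, assuming applyChanges is adjusted the same way as applyLocalChange:

  // Sketch: apply many remote changes without generating patches
  for (const change of remoteChanges) {
    [backend] = Backend.applyChanges(backend, [change])
  }
  // Ask for one catch-up patch only when the application wants to render
  patch = Backend.getPatch(backend, Frontend.getCurrentState(frontend)) // proposed API
  frontend = Frontend.applyPatch(frontend, patch)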

This would also allow the frontend to request patches only when it needed them. When we were working on the hypermerge-vscode plugin, VSCode followed the standard text-editor protocol: a dirty buffer would not try to merge local edits with new data from "disk". But we had no way of "pausing" frontend updates, so the change we generated on saving a file would undo the intervening edits made by other users. If the diff could have been generated against the file as it was, we would have had the desired behaviour.

I'm not sure yet how to generate the diff list in a new getPatch(), and I don't know if or how we need to enforce different actorIds for each frontend. (Perhaps frontends should not make this decision for themselves?) There may also be gnarly race conditions I am overlooking because I don't have enough experience with this part of Automerge.

It also seems as though we will want to ensure that the patches Frontends receive are applied in order and without gaps. It's possible that I missed it, but while there is a check at application time that a patch's sequence number is lower than the current seq value stored in the Frontend, I didn't see any check to ensure that a patch would not leave holes in the history. This seems like it would matter more in a world where there may be many (or no) frontends out there.
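
As a minimal sketch of the kind of check I mean (assuming each patch carries the seq it was generated for and the frontend tracks the last seq it applied, both of which are assumptions about the patch format):

  // Hypothetical gap check before applying a patch to a frontend
  function assertNoGap(frontendSeq, patch) {
    if (patch.seq !== frontendSeq + 1) {
      throw new RangeError(`expected seq ${frontendSeq + 1} but patch has seq ${patch.seq}`)
    }
  }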

Validation

We should be able to demonstrate that this works by updating automerge-demo to use a single browser SharedWorker shared by all open tabs. There may be other use cases: feel free to suggest them in the comments.
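
As a rough sketch of how that SharedWorker setup might be wired up (the file names and message shapes here are invented, and getCurrentState/getPatch are the proposed APIs from above):

  // tab.js -- each tab keeps its own Frontend and talks to the shared Backend
  const worker = new SharedWorker('backend-worker.js')
  let frontend = Frontend.init()
  worker.port.onmessage = (e) => {
    if (e.data.type === 'patch') frontend = Frontend.applyPatch(frontend, e.data.patch)
  }
  worker.port.postMessage({ type: 'requestPatch', state: Frontend.getCurrentState(frontend) })

  // backend-worker.js -- the single Backend shared by all tabs
  let backend = Backend.init()
  onconnect = (e) => {
    const port = e.ports[0]
    port.onmessage = (msg) => {
      if (msg.data.type === 'requestPatch') {
        port.postMessage({ type: 'patch', patch: Backend.getPatch(backend, msg.data.state) })
      }
    }
  }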

Future work

Multiple frontends per backend is an obvious first step along a path to being able to support divergence and branching of a single document with a single backend. I haven't given this any detailed thought yet, but I suspect it would be some form of configuration passed to getPatch() that would constrain which diffs were considered. I think we can defer deeper consideration of this for now, but it helps me feel like we're on an important path with this.
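
Purely as an illustration of the shape this could take (the options argument here is invented, not a design proposal):

  // Hypothetical: constrain which diffs getPatch() considers, e.g. to one branch
  patch = Backend.getPatch(backend, frontendState, { onlyHeads: branchHeads })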
