
Implement remote databinding / object synchronization #9781

Open
cboulanger opened this issue Aug 26, 2019 · 29 comments


@cboulanger (Contributor) commented Aug 26, 2019

Taking this up from #9780, which concerns a more narrow issue.

Qooxdoo's databinding mechanism is very well architected and lends itself to be extended beyond the running application. Two use cases that have been discussed are 1) application state synchronization with a qooxdoo app running in a native browser window and 2) object synchronization with the server (both see #9780).

An adequate implementation of remote databinding, it seems to me, must have (at least) three layers:

  1. The application layer, which determines the peer to which to connect, but which doesn't care about the underlying implementation
  2. The protocol layer, which is event-based and is concerned with initialization and property value synchronization - it exchanges messages with the peer without concerning itself with the transport details
  3. The transport layer, which is dependent on where the peer is: it could be a different browser window on the client, an application running on the server, or a remote application (think WebRTC)
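As a rough illustration of how these three layers might separate, here is a minimal sketch in plain JavaScript. All names and the in-memory loopback transport are hypothetical, not existing qooxdoo API:

```javascript
// Transport layer: knows only how to move opaque strings between two peers.
class LoopbackTransport {
  constructor() { this.peer = null; this.handler = null; }
  connect(peer) { this.peer = peer; peer.peer = this; }
  send(raw) { this.peer.onReceive(raw); }
  onReceive(raw) { if (this.handler) this.handler(raw); }
}

// Protocol layer: turns property changes into messages and back,
// without knowing how the messages travel.
class SyncProtocol {
  constructor(transport, target) {
    this.transport = transport;
    this.target = target;
    transport.handler = raw => {
      const msg = JSON.parse(raw);
      if (msg.type === "propertyChange") this.target[msg.name] = msg.value;
    };
  }
  notifyChange(name, value) {
    this.transport.send(JSON.stringify({ type: "propertyChange", name, value }));
  }
}

// Application layer: only decides which peer to connect to.
const tA = new LoopbackTransport();
const tB = new LoopbackTransport();
tA.connect(tB);
const objA = { title: "initial" };
const objB = { title: "initial" };
const protoA = new SyncProtocol(tA, objA);
new SyncProtocol(tB, objB);

objA.title = "changed";
protoA.notifyChange("title", objA.title);
// objB.title now mirrors objA.title
```

Swapping `LoopbackTransport` for a postMessage- or network-backed implementation would leave the other two layers untouched, which is the point of the separation.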

Anything else to think of?

@johnspackman (Member) commented Aug 26, 2019

For the protocol layer, it could be a good idea to implement it as a REST API, because this would make it easily cross-platform, and a special focus on creating a documentable and readable communication protocol will make implementing back ends straightforward. It would be very useful if the app (or the compiler) were able to output API documentation compatible with https://app.swaggerhub.com/search, for example.

The tricky parts of the protocol layer are establishing an object's identity (eg a UUID is perfect) and being able to resolve a UUID to an object, so that many-to-one and recursive references can be handled correctly. This introduces issues around garbage collection, because both sides have to keep one big lookup that maps UUIDs to objects.
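The UUID-to-object lookup described here could be sketched as follows (illustrative, not an existing API). The important property is that resolving the same UUID twice yields the identical instance, which is what makes many-to-one and recursive references work:

```javascript
// Hypothetical registry mapping UUIDs to object instances (and back).
class ObjectRegistry {
  constructor() {
    this.byUuid = new Map();     // uuid -> object (the "one big lookup")
    this.uuids = new WeakMap();  // object -> uuid, without preventing GC of the key
  }
  register(obj, uuid) {
    this.byUuid.set(uuid, obj);
    this.uuids.set(obj, uuid);
  }
  resolve(uuid) { return this.byUuid.get(uuid); }
  uuidOf(obj) { return this.uuids.get(obj); }
}

const registry = new ObjectRegistry();
const child = { name: "child" };
registry.register(child, "uuid-1");

// Two incoming references to "uuid-1" resolve to the very same instance.
const a = registry.resolve("uuid-1");
const b = registry.resolve("uuid-1");
```

The `byUuid` map is exactly the garbage-collection problem mentioned above: as long as an entry stays in it, the object cannot be collected.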

This could be alleviated on the server side if the objects are being persisted and/or can be recreated on demand. Another possible solution is reference counting - if this were accompanied by changes to the Qooxdoo property mechanism so that, when a property value supports some interface, the framework automatically increments/decrements the reference count, it would be almost seamless.
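The reference-counting variant might look roughly like this - a hedged sketch of the bookkeeping only, not a proposal for the actual property mechanism:

```javascript
// Illustrative UUID registry where each entry carries a reference count and
// the entry is dropped (becoming eligible for GC) when the count reaches zero.
class CountedRegistry {
  constructor() { this.entries = new Map(); } // uuid -> { obj, refs }
  retain(uuid, obj) {
    const e = this.entries.get(uuid) || { obj, refs: 0 };
    e.refs++;
    this.entries.set(uuid, e);
  }
  release(uuid) {
    const e = this.entries.get(uuid);
    if (e && --e.refs === 0) this.entries.delete(uuid);
  }
  has(uuid) { return this.entries.has(uuid); }
}

const reg = new CountedRegistry();
const obj = {};
reg.retain("u1", obj);                     // e.g. property A starts pointing at obj
reg.retain("u1", obj);                     // e.g. property B also points at obj
reg.release("u1");                         // property A lets go
const heldAfterOneRelease = reg.has("u1"); // still referenced once
reg.release("u1");                         // last reference gone, entry dropped
```

In the scheme described above, the `retain`/`release` calls would be made automatically by the property mechanism rather than by application code.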

When it comes to updates, there are two possible approaches: one is just a normal REST PUT or POST where the object's new properties are sent - or possibly just a subset, given that watching change events allows the sender to know what's changed. The obvious advantage is that it looks and acts just like any other REST method.

The alternative approach is to replay property changes in the order that they occurred, even if a value changes more than once. This is more correct from an object point of view and preserves side effects - this is how qooxdoo-server-objects works, but OTOH it adds complexity to the implementation and the protocol, and I'm not convinced that it's ever been a benefit. IMHO, if the ordering of property changes is significant, it would be more helpful to do something like add support for apply methods that can apply multiple property values at once (eg like the propertyGroups idea in #9432 (comment))
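The difference between the two update strategies can be sketched in plain JavaScript (illustrative only): a replay log keeps every change in order, while a REST-style update can coalesce to one value per property:

```javascript
// Record every property change as it happens (the replay strategy would
// send this list as-is, in order, preserving intermediate side effects).
const changes = [];
function recordChange(name, value) { changes.push({ name, value }); }

recordChange("title", "draft");
recordChange("status", "open");
recordChange("title", "final"); // title changed twice

// The snapshot strategy coalesces: last value per property wins, so a
// single PUT/POST payload is enough.
function coalesce(list) {
  const latest = new Map();
  for (const c of list) latest.set(c.name, c.value);
  return Object.fromEntries(latest);
}
const payload = coalesce(changes);
// payload carries only { title: "final", status: "open" }
```

The replay list grows with every change; the coalesced payload is bounded by the number of properties, which is one reason the snapshot approach is simpler on the wire.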

Another really useful aspect of the protocol layer is to automatically distribute changes around the clients - for example, if ClientA, ClientB, and ClientC all have some object, and ClientA updates a property on that object and passes it to the server, the server can then push those property changes to ClientB and ClientC. This means that the server has to remember which objects were sent to which clients, which is a cost but only a lookup. While server push is a feature of HTTP/2, the reality is that this will involve polling the server; clients have to make a round trip periodically anyway (eg when ClientA updated the property it scheduled a round trip) and updates can come back in the response, so polling isn't always an empty cost.
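A minimal sketch of this distribution idea, assuming a hypothetical `SyncServer` that remembers which clients hold which object UUIDs and pushes a change to everyone but the sender (all names are illustrative):

```javascript
// Server-side bookkeeping: uuid -> set of clients that were sent that object.
class SyncServer {
  constructor() { this.holders = new Map(); }
  track(uuid, client) {
    if (!this.holders.has(uuid)) this.holders.set(uuid, new Set());
    this.holders.get(uuid).add(client);
  }
  // A client reports a property change; fan it out to the other holders.
  update(uuid, change, sender) {
    for (const client of this.holders.get(uuid) || []) {
      if (client !== sender) client.receive(uuid, change);
    }
  }
}

const makeClient = name => ({
  name,
  received: [],
  receive(uuid, change) { this.received.push({ uuid, change }); }
});

const clientA = makeClient("ClientA");
const clientB = makeClient("ClientB");
const clientC = makeClient("ClientC");
const server = new SyncServer();
["o1"].forEach(u => {
  server.track(u, clientA);
  server.track(u, clientB);
  server.track(u, clientC);
});

// ClientA changes a property; B and C get pushed the change, A does not.
server.update("o1", { name: "title", value: "new" }, clientA);
```

As noted above, the cost on the server is only this lookup; in a polling setup the pushed changes would simply ride back on the next round-trip response.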

Distributing changes around the clients is really good for the user experience - people love the simplicity of changes just happening without having to refresh, and for SPAs like Qooxdoo Desktop apps it's almost essential.

This excludes firing events remotely - instead, if you need something to happen on the server, call a remote method.

Finally, everything should be asynchronous - synchronous calls aren't just difficult for responsive UIs, they make it very tricky to handle recursive data structures. EG because a property apply method can trigger a server round trip, the result from the second round trip might need to reference objects returned in the first round trip but not yet processed (because the first property apply is treated as synchronous).

@derrell (Member) commented Aug 26, 2019

Property synchronization needn't be done with the "long poll" as described. A very clean way of doing that is with websockets. We use that extensively in our product for client/server communication. If the server keeps track of each of its clients' websocket, it can synchronize all of them when a change is made at any one, given the facilities being discussed here. (This of course makes it a stateful server, which isn't always desirable, but when that design is feasible, should work well.)

@cboulanger (Contributor, Author) commented Aug 26, 2019

I think the cool thing about cleanly separating the layers (as described above) is that the synchronization method can be implemented in whatever technical way is feasible/desirable. WebSockets are another modern way. The protocol layer should not be modeled on any particular transport (such as REST) but only define the syntax of the messages that are used to do the synchronization. The way the message objects are passed to the peer should be determined only by the transport layer.

@derrell (Member) commented Aug 26, 2019

I agree completely. In fact, although we might provide some out-of-the-box transport layers that users could make use of, it's very possible that users will overlay the transport of these messages onto an existing channel they already have with their app architecture.

@cboulanger (Contributor, Author) commented Aug 26, 2019

Some thoughts for an implementation

  • qx.io.channel[Rest|JsonRpc|WebSocket|Window|...](uri) could be the transport layer
  • qx.data.remote.MBinding could contain the protocol implementation. It would have an initialization method which would then setup the property synchronization with the peer, using the transport channel.
  • qx.data.remote.Object could serve as a proxy object: it includes qx.data.remote.MBinding and is initialized with the transport channel and a globally unique ID, both of which would be passed to the init method of the mixin.

@cboulanger (Contributor, Author) commented Aug 29, 2019

When I try to generalize the code in https://github.com/cboulanger/eventrecorder/blob/master/source/class/cboulanger/eventrecorder/window/MRemoteBinding.js , I see the following minimal set of interfaces (omitting any namespace here):

  • IChannel, having the method sendMessage({Object} obj), which can serialize and send any JSON-compliant object to the peer reachable via the interface implementation. The implementation also fires the message event when a message arrives from the peer (or an object on the peer) identified by the URI provided when instantiating the implementation class.
  • IProxy, which proxies the remote object and implements the syncProperties({IChannel} channel) method, usually by including the MRemoteBinding mixin, which implements the JSON-based remote databinding protocol. Both the protocol and the I/O are completely transparent to the databinding.

This seems to be elegant and simple - anything else that is strictly needed for a universal implementation of remote databinding that comes to your mind? I think the postMessage API is a good model to build on for the internal protocol.
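To make the postMessage analogy concrete, a sketch of what such an internal protocol could look like: every message is a plain JSON envelope, and the channel dispatches on a `type` field. The envelope fields and the class below are hypothetical, not a fixed protocol:

```javascript
// Illustrative channel: serializes envelopes and dispatches them by type,
// mirroring how postMessage handlers switch on event.data.
class MessageChannelSketch {
  constructor() { this.handlers = new Map(); }
  on(type, fn) { this.handlers.set(type, fn); }
  // sendMessage(obj) from the IChannel idea: serialize, then deliver to the peer
  // (here delivery is local; a real transport would cross a window boundary).
  sendMessage(obj) { this.deliver(JSON.stringify(obj)); }
  deliver(raw) {
    const msg = JSON.parse(raw);
    const fn = this.handlers.get(msg.type);
    if (fn) fn(msg);
  }
}

const channel = new MessageChannelSketch();
const log = [];
channel.on("propertyChange", msg => log.push(`${msg.uuid}.${msg.name} = ${msg.value}`));

channel.sendMessage({ type: "propertyChange", uuid: "u-42", name: "title", value: "hello" });
```

Because the envelope is plain JSON, the same protocol works unchanged over postMessage, WebSockets, or HTTP.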

@cboulanger (Contributor, Author) commented Aug 31, 2019

Here's some first code, albeit in a purely conceptual stage (it hasn't been run & tested), for your consideration:

https://github.com/cboulanger/eventrecorder/tree/master/source/class/qx

I put it into the event recorder repo close to where it will be used because that's easier than working with a qooxdoo clone and thanks to @johnspackman's work, you can even add to the qx namespace in your own app.

Here's the conceptual idea, using three layers of abstraction: Proxy, Channel, Transport

  • A Proxy is an object that represents another qooxdoo object that lives in a different execution context (such as in another browser window, a worker, or an application running on the server / a different server). The proxy replicates selected or all properties of the remote object, including deeply nested objects. It will also synchronize property changes. It uses a dedicated Channel to synchronize state with the remote object. It abstracts away the implementation details of the protocol that does the property synchronization, using the mixin qx.data.MRemoteBinding.

  • A Channel object serves as the dedicated connection between two objects living in different execution contexts. It relies on a Transport object to pass messages between the I/O endpoints. It abstracts away the implementation details of message passing.

  • A Transport is an object that can send and receive arbitrary JSON values to and from a remote target, using an endpoint object that does the actual communication with the target. It abstracts away the implementation details of the I/O and can be shared between many channels (for example, in the case of a persistent HTTP connection). See, for example, the postMessage API transport, and qx.ui.window.RemoteApplication, which uses this transport to model an application running in a separate browser window (see this demo using an earlier implementation).
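The idea of one Transport shared between several Channels might be sketched like this (all names are illustrative; a real `send` would cross a window or network boundary instead of routing locally):

```javascript
// One transport, multiplexed: every message is tagged with a channel id,
// and the transport routes incoming traffic by that id.
class SharedTransport {
  constructor() { this.channels = new Map(); }
  attach(channel) { this.channels.set(channel.id, channel); }
  send(channelId, payload) {
    const target = this.channels.get(channelId);
    if (target) target.onMessage(payload);
  }
}

class Channel {
  constructor(id, transport) {
    this.id = id;
    this.inbox = [];
    transport.attach(this);
  }
  onMessage(payload) { this.inbox.push(payload); }
}

const transport = new SharedTransport();
const chanA = new Channel("a", transport);
const chanB = new Channel("b", transport);

transport.send("a", { hello: "A" });
transport.send("b", { hello: "B" });
// each channel only ever sees its own traffic
```

This is the same pattern a persistent WebSocket connection would use: one socket, many logical channels distinguished by an id field in each message.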

Ideas? Objections?

@cboulanger (Contributor, Author) commented Aug 31, 2019

Some more notes on the un-/serialization method used. Native JavaScript datatypes are transferred as-is, as far as they can be converted to JSON (dates are currently not handled, but I need to add that). Qooxdoo class instances are transformed into simple objects that have a $$class property containing the class name. qx.data.IListData objects are transformed into objects that have a $$data property containing an array with the list items.

When unserializing the object in the receiving context, it can be a) fully reconstructed using the class names, or b) turned into a data model which only contains the property values, not the original class instances. In the majority of cases, b) will be sufficient and reasonable, as reconstructing the original instances with only the property data is often not possible or has unintended side effects.
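A hedged sketch of the wire format described above, using plain JavaScript objects as stand-ins for qooxdoo class instances (the `classname` and `isListData` markers are only illustrative substitutes for real qx class metadata):

```javascript
// Recursively transform a value into the described wire shape:
// - class instances become { $$class: <name>, ...serialized properties }
// - list-data objects become { $$data: [ ...serialized items ] }
// - JSON-compatible natives pass through unchanged.
function serialize(value) {
  if (Array.isArray(value)) return value.map(serialize);
  if (value && typeof value === "object") {
    if (value.classname) {
      const out = { $$class: value.classname };
      for (const [k, v] of Object.entries(value)) {
        if (k !== "classname") out[k] = serialize(v);
      }
      return out;
    }
    if (value.isListData) return { $$data: value.items.map(serialize) };
  }
  return value;
}

const person = {
  classname: "mypkg.Person",
  name: "Alice",
  children: {
    isListData: true,
    items: [{ classname: "mypkg.Person", name: "Bob" }]
  }
};
const wire = serialize(person);
// wire is pure JSON: { $$class: "mypkg.Person", name: "Alice",
//                      children: { $$data: [ { $$class: ..., name: "Bob" } ] } }
```

On the receiving side, option b) from above amounts to consuming `wire` directly as a data model, while option a) would use `$$class` to look up and instantiate the original classes.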

@johnspackman (Member) commented Sep 1, 2019

I think you may have misunderstood the examples I uploaded in #9780 (comment) - the de/serialisation code in qxl.cms is complete, working, and unit tested.

At the moment, the code only de/serialises to a database, but this is exactly the same as de/serialising to the other end of a network connection (or a postMessage transport etc).

Also, it includes solutions for some tricky scenarios that I think your code may not handle, for example:

(1) Handle objects being sent by reference, multiple times:

/* on the Client */
let mother = new mypkg.Person();
mother.getChildren().push(new mypkg.Person());
mother.getChildren().push(new mypkg.Person());
let father = new mypkg.Person();
mother.getChildren().forEach(child => father.getChildren().push(child))
// send mother to server
// send father to server
/* on the Server */
let mother = getFromClient(); // pseudocode
let father = getFromClient(); // pseudocode
qx.core.Assert.assertTrue(mother.getChildren().getItem(0) === father.getChildren().getItem(0))

(2) Handle recursive objects:

let brother = new mypkg.Person();
let sister = new mypkg.Person();
brother.getSiblings().push(sister);
sister.getSiblings().push(brother);
// send brother to server

The CMS code uses annotations to (a) declare which properties are to be serialised and (b) provide guidance on how they are serialised; external code then inspects the class and decides how to perform the serialisation.

IMHO this declarative style with annotations is much better for separation of concerns, and makes it possible to radically change the serialisation without impacting the class. This could be because of evolution of code, or because there are multiple serialisation strategies - i.e. the developer can decide at runtime which serialisation code to use, rather than having to have it baked into the code of the class being de/serialised. Sometimes it's just easier to work outside of the class than from within it, especially when the class has chicken-and-egg problems like instantiating itself or when dealing with recursive references.

I don't think you need a one-to-one channel to tie two objects together - the transport just needs to be able to de/serialise an object and see its association with a UUID; at any point, any object can be re-serialised and sent to the other end, and if it already exists over there (because it's been sent before), the object just has its properties updated.

The qxl.cms.data.io.Object is a class at the moment, but could easily become a mixin; it is not a requirement though, it's just an implementation of qxl.cms.data.io.IObject (which is equivalent to your Proxy interface). IObject only requires that the class support a property called uuid in order to be de/serialisable.

The qxl.cms.data.io.Controller is very similar in purpose to your Channel class, although it is not tied to any particular object instance; it's a universal serialisation/deserialisation mechanism that will solve the qx.core.Assert.assertTrue(mother.getChildren().getItem(0) === father.getChildren().getItem(0)) example above.

The qxl.cms.data.io.IDataSource is the equivalent of an interface for your Transport class - objects which implement IDataSource are able to take the JSON data from a Controller and send it or retrieve it.

The work to do is to (a) change qxl.cms.data.io.Object into a mixin; (b) implement IDataSource for postMessage/websockets/Xhr/etc.

@johnspackman (Member) commented Sep 1, 2019

PS for some working examples, please take a look at DemoReferences.js which is an example object that has a property called other which can refer to another instance of DemoReferences and also TestDatabase.js#L109-L170 which shows how instances of DemoReferences are created, serialised, and deserialised.

The test starts with:

let db = new qxl.cms.db.FileDatabase("test/website-db");
let ctlr = new qxl.cms.data.io.Controller(db);

db is an implementation of IDataSource which happens to write to a file based database, but this could be a websocket or Xhr implementation; the ctlr is configured to use that db.

Create a couple of objects and create a reference:

let ref1 = new qxl.cms.test.content.DemoReferences().set({ title: "One" });
let ref2 = new qxl.cms.test.content.DemoReferences().set({ title: "Two" });
ref1.setOther(ref2);

Sending the object to the other side is just this:

await ctlr.put(ref1);

Loading data is just:

ref1 = await ctlr.getByUuid(id1); // id1 is the UUID of the object to retrieve

Because this is modelled around a database, retrieving it is done by UUID - in a websocket/Xhr implementation this would actually be exactly the same, except that the data is delivered with the request instead of it being pulled on demand from a database.

@cboulanger (Contributor, Author) commented Sep 2, 2019

@johnspackman Thanks! That's very interesting stuff but it solves a somewhat different problem compared to what I am trying to achieve: my interest is in synchronizing qooxdoo objects across running applications (which might or might not need to be persisted). I think my use case is less relevant for the majority of users. In contrast, you are persisting object data on the server (usually in a database). This is probably the more common use case.

I am more than happy to use your serialization method (I have yet to grok the annotations stuff) - it was fun thinking about serialization anyhow. Could you move it into the qx namespace? It solves a more general problem so it shouldn't be just part of a package. But we might have a couple of different serialization methods so maybe its own namespace is in order, so that alternatives can be added later.

@johnspackman (Member) commented Sep 2, 2019

I'm not sure that it is so different - while postMessage between windows has some connection, it's so restrictive that it's effectively the same as communicating across a network divide, ie client-server. postMessage does have lower latency and it is possible (and practical) to keep properties up to date immediately, which I could imagine could be an advantage for some applications, but IMHO the majority of cases would mean that it's just as easy to treat it as a client/server application.

While the current implementation of IDataSource is for persisting to a database on the server, it's definitely intended that IDataSource (perhaps not the best name!) is also used for synchronizing qooxdoo objects across a network divide.

That's what I meant about it boiling down to serialisation and the identity problem - once you have a serialisation mechanism, and you can identify with obj1 === obj2, then you have the ability to do "live" synchronisation across the network (or postMessage etc).

Solving the identity problem has other really cool side effects - like recursive references just falling into place.

Also, at this level, storing data in a database has exactly the same persistence needs as client/server network I/O - in both cases, you are synchronizing two "live" objects together, via a neutral data format like JSON; in database terms, the synchronization is between past and future, and in client/server network terms it is between two applications. Because the database can support things like object references, including recursive ones, the identity problem is exactly the same too.

I have three use cases where this is needed in CMS:

  • Outer window to child/inner window (ie postMessage)
  • 2 x Qooxdoo Client app to node Qooxdoo Server app
  • Qooxdoo Server app to database

I'll update the code today, switch to the qx namespace, and try and get a prototype working so that you can see what I mean.

@cboulanger (Contributor, Author) commented Sep 2, 2019

@johnspackman Looking forward to this! Does your implementation deal with incremental updates such as changeBubble events? Because that was my main interest - to keep objects synchronized; serialization was just a (albeit very important) side concern that needed to be solved in order to transmit the initial state ...

@johnspackman (Member) commented Sep 2, 2019

Can you give me an example of a bubble events use case? I've never needed to track those, instead I just track change events for individual properties; is it for changes to arrays etc?

In qooxdoo-server-objects, properties which are qx.data.Array instances are watched, and I have an explicit Map class which is also watched (although my original need was to synchronise a Java HashMap on the server with the same on the client, my Map class does for {} what qx.data.Array does for []).

@cboulanger (Contributor, Author) commented Sep 2, 2019

I want to synchronize application state in two windows, which should be almost instantaneous (i.e., without a server roundtrip) - for example, the child window should have the same menu entries as the parent window, or tree structures should be the same; as the user manipulates objects in one of the windows, they should be mirrored exactly in the other(s). This does not require segmenting a deeply nested object into its components, each having its own ID; it is enough to treat the object as one, updating individual parts of it - although of course segmenting is also a way to do it, and might be easier to integrate into a solution that also does persisting.

@cboulanger (Contributor, Author) commented Sep 2, 2019

https://cboulanger.github.io/eventrecorder/remote_binding_test/ is a very simple demonstration of the principle.

@cboulanger (Contributor, Author) commented Sep 2, 2019

@johnspackman I do see the advantage of treating each object individually, though, since it removes the need for event bubbling. You would need a special marshaler which would inject this object identity/change propagation mechanism into models, wouldn't you?

@johnspackman (Member) commented Sep 2, 2019

It does expect that everything is either a Qooxdoo object (of a class which supports IObject/Proxy) or is a native value (or is an Array or Map, which are the only special cases). In the design as it stands, this is essential in order that each object has its own UUID which can be universal - but thinking about it, I guess that could be avoided because the transport mechanism could record a mapping between UUID and local hashcode.

Another thought here is that I wonder how many instances there will be where you want to replicate an object onto the other side, where that object does not know about proxying?

@cboulanger (Contributor, Author) commented Sep 2, 2019

I think mapping local hashes to UUIDs is an excellent idea, which would allow wrapping existing data objects without having to inject anything. Although then you would need to listen to bubbled events, wouldn't you? I always think of a qx model that represents a deeply nested tree structure where, somewhere deep down in the tree, a property changes. From the point of view of a relational database, it totally makes sense to treat each node of the tree as its own object, since this is how it would be stored. From the point of view of a "mirrored" object in another window, or a document database that stores a JSON object, looking at the top-level object and event bubbling would be more efficient.

@cboulanger (Contributor, Author) commented Sep 2, 2019

Anyways, I'll wait for your implementation so that I can comment in a more informed manner.

@johnspackman (Member) commented Sep 2, 2019

Okies, I'm working on it now and I'll push an update as soon as I can.

As I'm working through this, I realise that you need a queuing mechanism even for postMessage, in order to resolve recursive structures; the receiver needs to be able to construct an object and populate all properties except those which are Qooxdoo objects, and then, after all objects are created, it must go back through and populate the properties which point at newly-created objects. E.g. for that brother/sister example:

let brother = new mypkg.Person();
let sister = new mypkg.Person();
brother.getSiblings().push(sister);
sister.getSiblings().push(brother);
// send brother to server

the receiver has to receive both brother and sister in one go in order to be able to create the objects; but it cannot finish either until both are created.
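The two-pass receive described here might be sketched as follows, assuming an illustrative `$ref` placeholder format for object references (not an actual wire format from the implementation):

```javascript
// Pass 1: create every object in the batch so that all UUIDs resolve.
// Pass 2: wire up reference placeholders, which now all point at real objects.
function receiveBatch(packets) {
  const byUuid = new Map();
  for (const p of packets) {
    byUuid.set(p.uuid, { uuid: p.uuid, siblings: [] });
  }
  for (const p of packets) {
    const obj = byUuid.get(p.uuid);
    for (const ref of p.siblings) {
      obj.siblings.push(byUuid.get(ref.$ref));
    }
  }
  return byUuid;
}

// The brother/sister example arrives as one batch of two packets,
// each referencing the other.
const packets = [
  { uuid: "brother", siblings: [{ $ref: "sister" }] },
  { uuid: "sister", siblings: [{ $ref: "brother" }] }
];
const objects = receiveBatch(packets);
const brother = objects.get("brother");
const sister = objects.get("sister");
// the cycle is reconstructed with real object identity on both ends
```

A single-pass receive would fail here: whichever object is created first would need a reference to one that does not exist yet, which is exactly why the queue must be flushed as a batch.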

@cboulanger (Contributor, Author) commented Sep 3, 2019

@johnspackman I am still thinking about whether we are really solving the same problem or are concerned with different things. At the same time, it would make sense to come up with a unified solution and not with two different, overlapping ones.

At the core, my implementation is concerned with message passing, and object synchronization is just the (initial) problem that is solved using the mechanism. So what I am really interested in is real-time message (==event) passing between different execution contexts. I think that is why I ended up with the idea of dedicated "channels" between objects that live at different I/O endpoints. This would allow "namespacing" events and reusing the message transport object in different channels.

The relevant point is that it would not matter whether the channel connects two windows on the client, a client with the server, or two applications running on different servers. The channel would also be transport-agnostic. It would have to have an ID which would be globally advertised in the execution context, for example using qx.event.message.Bus or some other dedicated mechanism. Then objects (for example, your CMS controller on the client and the server) can connect to the channel and start exchanging information. That is: there is no "client" and "server" as such, but peers.

This mechanism would be able to transport information of your object serialization, but also of any other solution that relies on almost-real-time communication between two contexts.

Does that make sense to you?

@johnspackman (Member) commented Sep 3, 2019

I still think we are on the same path :)

I'm also interested in object synchronisation, although I also want it to be the case that object identity works (ie if an object is transferred twice, only one instance of that object actually goes across, and on the other side obj1 === obj2 holds - this also means that recursive objects are possible).

Real-time updates between execution contexts are kind of possible (subject to network/postMessage latency), because it just means flushing the queue ASAP after a property change.

There has to be a queue BTW (even if it only grows to 1 item long) because sometimes there are knock-on effects, eg if you change a property to a different Qooxdoo object, that object has to be transferred before the property change event.

Namespacing also works, because events are only happening on specific objects.

I have completed the coding for the postMessage implementation, and it will allow alternative implementations for Xhr and websockets with minimal code. What I have left to do is the ID mechanism and working/unit tests.

In QSO there was a server and a client: the client could change properties on an object that was synchronized onto the server, and call methods on the server (synchronously or asynchronously); the server could change properties on an object and they would be synchronized to the client, but the server could not call methods on the client.

In my current design, there isn't a client and server, just endpoints. The intention is that either side can update the other side's properties, and call methods on the other side's objects.

I made good progress yesterday (despite being first day back!) but not been able to progress it much; I'm hoping to have a working proof of concept tomorrow

Events work because when a property value is copied, the normal Qooxdoo mechanism fires a change event. This is the same for bubbling events.

I still have to do the unit tests.

@johnspackman (Member) commented Sep 8, 2019

I've just pushed a new update to https://github.com/johnspackman/qxl.cms that adds working, bi-directional synchronisation of Qooxdoo objects across process boundaries. At the moment, only postMessage is implemented as a transport, but it should be very trivial to implement other transports like Xhr and websockets.

For working examples, take a look at qx.test.data.remote.Person - this is a Qooxdoo object that supports being synchronised remotely. It is an entirely normal Qooxdoo object except that it extends qx.data.io.Object and has some annotations to show which properties should be synchronized (it is not necessary to extend qx.data.io.Object, but doing so just makes it easier to get started).

We've discussed an improvement, which is to have an automatic UUID <> hashCode mapping; this would mean that objects are not required to support a uuid property in order to be translated. This is not done yet, because I propose that toHashCode() be modified to return a UUID. toHashCode is, after all, supposed to be entirely opaque - it's just a unique ID, so making it unique across process boundaries should have no backwards-compatibility issues that I can think of.
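A sketch of what a UUID-returning hash code could look like: a per-object UUID memoized in a WeakMap, so repeated calls stay stable and the map never prevents garbage collection. The v4-style generator here is a simplified illustration, not the proposed qooxdoo implementation:

```javascript
// Memoization table: object -> uuid, weak so objects can still be collected.
const uuidByObject = new WeakMap();

// Simplified UUID v4 generator (illustration only; a real implementation
// would prefer crypto-grade randomness).
function createUuid() {
  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, ch => {
    const r = (Math.random() * 16) | 0;
    return (ch === "x" ? r : (r & 0x3) | 0x8).toString(16);
  });
}

// Stand-in for a toHashCode() that returns a globally unique, opaque ID.
function toUuidHashCode(obj) {
  let id = uuidByObject.get(obj);
  if (!id) {
    id = createUuid();
    uuidByObject.set(obj, id);
  }
  return id;
}

const obj = {};
const first = toUuidHashCode(obj);
const second = toUuidHashCode(obj); // same object, same id
```

Since the ID is opaque to callers either way, switching from a process-local counter to a UUID changes only its uniqueness scope, which is the backwards-compatibility argument made above.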

The test/demo code is fairly basic, but consists of two applications: qx.test.data.remote.PeerOne and qx.test.data.remote.PeerTwo.

To view the demo, run qx serve and then browse to http://localhost:8080/compiled/source/peerone/index.html.

PeerOne will create an iframe that loads PeerTwo, and it creates some instances of Person including arrays of children & siblings, recursive references, etc and associates one of the objects with an ID. When PeerTwo starts up, it connects to its parent window (ie PeerOne) and then asks for one of those objects by ID.

What is left to do is to detect changes on those properties and send incremental updates to the other side - this should be pretty easy to do and I'll make a start in a moment but I have a hugely busy week ahead so may not be ready to push a commit in the next few days.

@johnspackman (Member) commented Sep 8, 2019

The key things to note in the examples are that all apps that communicate have a controller and a datasource; PeerOne and PeerTwo have this code:

        // Data source represents the transport (but is transport agnostic)
        let datasource = new qx.data.remote.NetworkDataSource();
        
        // Controller manages the objects and their serialisation across the DataSource
        let ctlr = this.__controller = new qx.data.remote.NetworkController(datasource);

Communication is wrapped up in endpoints, and for postMessage-based communication that means the qx.data.remote.WindowEndPoint class; if you know the window you want to connect to, you create an instance of this directly - eg PeerTwo will connect to its parent window, and so has this code:

        let endpoint = new qx.data.remote.WindowEndPoint(ctlr, window.parent);
        datasource.addEndPoint(endpoint);
        await endpoint.open();

That's fine for when "this" application is connecting, but the other side needs to listen for a connection, so PeerOne has this code instead:

      // Listener is specific to a given platform (postMessage, Xhr, etc)
      new qx.data.remote.WindowListener(ctlr);

Something I realised about client-server vs peer-to-peer is that synchronising properties bi-directionally fits in with being peer-to-peer, and there isn't necessarily a need to nominate one side as the "server" and the other as the "client".

However, when calling remote methods this changes because we only have one class definition and when we write a method we need to determine where the method exists - i.e. if you call myPersonObject.doSomething(), is doSomething() code that is executed on the far side or in this process?

My plan is that we'll use annotations to say that the Person.doSomething method is a "server" method, and therefore if you call myPersonObject.doSomething() on a client it will be executed remotely asynchronously, but if you call the same code on the server it just calls the method.

To make this work, we would nominate an application as either a "client" or a "server" application rather than just a peer, and then the code will patch Person.prototype to replace methods like doSomething with a remote method proxy stub.
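The prototype-patching idea might be sketched like this in plain JavaScript. `Person`, `patchAsRemote`, and the fake transport are all hypothetical stand-ins, not the qxl.cms API:

```javascript
// A class with a method that, on the "server", runs locally.
class Person {
  doSomething(arg) { return `local:${arg}`; }
}

// On the "client", replace the method on the prototype with an async stub
// that forwards the invocation instead of running the local body.
function patchAsRemote(clazz, methodName, sendToServer) {
  const original = clazz.prototype[methodName];
  clazz.prototype[methodName] = async function (...args) {
    return sendToServer(methodName, args);
  };
  return original; // kept so a "server" process could still call it directly
}

// Fake transport standing in for a real round trip to the server.
const calls = [];
const fakeSend = async (method, args) => {
  calls.push({ method, args });
  return `remote:${args[0]}`; // stand-in for the server's response
};

patchAsRemote(Person, "doSomething", fakeSend);

// After patching, every call on the client side goes through the stub
// and resolves asynchronously with the server's result.
const pending = new Person().doSomething("x");
```

Note that the patched method is necessarily async, which matches the earlier point that everything in the protocol should be asynchronous anyway.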

@cboulanger (Contributor, Author) commented Sep 8, 2019

Sounds absolutely fantastic! One thing: how do the objects know that they are the same? What is the well-known symbol/id (as opposed to the random UUID) that connects them? Is it "granddad"?
+1 for making toHashCode() use UUIDs, although that might create a small performance dent? Maybe it should be an optional feature switched on by an environment variable.

@johnspackman (Member) commented Sep 9, 2019

One thing: how do the objects know that they are the same?

Each side has a mapping of UUID to object instance, so when a packet is received that refers to a UUID it can always locate the correct reference and translate the UUID back into an object.

The well-known symbol (eg "grandad") is transferred to the other side with a mapping to a UUID; when the mapping is sent, it also sends the object (unless the object has been sent previously) as a separate packet, and the UUID<>Object mapping mechanism does the rest.

+1 for making toHashCode() use UUIDs, although that might create a small performance dent? Maybe it should be an optional feature switched on by an environment variable.

I'll do a performance test comparison for createUuid, but IMHO it would be ideal if it didn't have to be switched on when compiling

@cboulanger (Contributor, Author) commented Sep 9, 2019

Very much looking forward to using this in my apps! This has the potential to (almost) completely replace JSON-RPC - I will probably provide a Yii2/PHP backend (which will let me throw out the unmaintained JSON-RPC server implementation, the author of which refuses to even comment on my PRs).
