As mentioned in #121, to protect 0.9.0 (and older) clients which cannot tolerate duplicate peer messages, the rendezvous server should filter these out. Basically it should ignore anything but a single message per (side, phase) pair.
We'll need to deploy this to the server before we can ship a client that (via #42 reconnection/ack races) might post duplicate copies of any message to the rendezvous server.
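As a rough sketch of the filtering described above, the server could keep the first message seen for each (side, phase) pair and drop any later copies. The function and message shape below are illustrative, not the actual rendezvous server code:

```python
# Hypothetical sketch: suppress duplicate peer messages so that 0.9.0
# (and older) clients never see more than one message per (side, phase).
def filter_duplicates(messages):
    """Yield only the first message seen for each (side, phase) pair."""
    seen = set()
    for msg in messages:
        key = (msg["side"], msg["phase"])
        if key in seen:
            continue  # e.g. a re-post caused by a reconnection/ack race
        seen.add(key)
        yield msg

msgs = [
    {"side": "abc", "phase": "pake", "body": "m1"},
    {"side": "abc", "phase": "pake", "body": "m1"},  # duplicate re-post
    {"side": "abc", "phase": "version", "body": "m2"},
]
deduped = list(filter_duplicates(msgs))  # the duplicate "pake" is dropped
```

The same idea applies whether the filter runs at message-add time (rejecting the duplicate insert) or at delivery time (skipping duplicates when replaying the mailbox).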
In addition to deduplication, I think I also want the server (now renamed as the "mailbox server") to retain ordering of messages. This will make the new Dilation protocol easier to implement. We don't need any particular ordering across connections, but every message coming from the same "side" value should be delivered in the same order as the server received them.
On the server side, this will require a sequence number to be recorded with each message. I don't know the best way to manage this seqnum: should there be an extra DB table holding a single integer, which gets read and updated in each message-adding transaction? Or does SQLite have an implicit row number that is guaranteed to be increasing and can be read in normal queries?
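On the second option: SQLite tables do have an implicit `rowid`, but rowids can be reused after deletes unless the column is declared `INTEGER PRIMARY KEY AUTOINCREMENT`, which guarantees monotonically increasing values. A sketch (the schema here is illustrative, not the real server's) of how that could provide the per-side ordering:

```python
# Sketch: let SQLite assign a monotonically increasing seqnum via
# AUTOINCREMENT, then replay a side's messages in arrival order.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE messages"
    " (seqnum INTEGER PRIMARY KEY AUTOINCREMENT,"
    "  side TEXT, phase TEXT, body TEXT)"
)
for side, phase, body in [("abc", "pake", "m1"),
                          ("abc", "version", "m2"),
                          ("def", "pake", "m3")]:
    # No seqnum supplied: SQLite assigns the next value automatically.
    db.execute("INSERT INTO messages (side, phase, body) VALUES (?, ?, ?)",
               (side, phase, body))
db.commit()

# Messages from one side come back in the order the server received them:
rows = db.execute(
    "SELECT seqnum, body FROM messages WHERE side = ? ORDER BY seqnum",
    ("abc",)).fetchall()
```

This avoids the extra single-integer table, at the cost of AUTOINCREMENT's slight insert overhead (SQLite tracks the high-water mark in an internal `sqlite_sequence` table).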
warner changed the title from "rendezvous server should deduplicate messages" to "mailbox server should sort/deduplicate messages" on Feb 23, 2018