This repository has been archived by the owner on Mar 12, 2020. It is now read-only.

Room consensus #41

Closed
sanderpick opened this issue Apr 20, 2018 · 11 comments
Labels
discussion Open discussion

Comments

@sanderpick
Member

Trying to wrap my head around the best approach to maintaining our distributed photo albums. @andrewxhill @carsonfarmer one of you mentioned CRDTs; there's some good stuff out there. We've also discussed blockchain consensus techniques.

However, after drawing it out, I think our current solution might just work as is. A room can be thought of as a set of cumulative updates (adds / deletes). The problem is simply how to get every add and every delete to each peer, given very unstable connectivity.

We currently broadcast updates, which are simply the root hash of the added photo set (photo, thumb, metadata, last), where "last" is a file containing the root hash of that node's previous update. Our little blockchain of updates. #40 is important: each peer needs to continuously re-publish its HEAD update, otherwise newcomers won't be able to partake until a new update happens. When a peer gets an update, it handles it recursively, making sure it has locally (and has indexed) the contents of the update and each previous update, by inspecting the last file.

Side note: Right now, every update is implicitly considered an "add", but we could change that via some scheme on the update itself: op:hash, e.g., add:QmY2nposvuCaikdazPNDc4vY4CAxZgioZ9Wf1JuamBmRQQ, del:QmXARYDDjf27tDNGpWYjCW9c8X6i5BrvUkLKKwrRotsuys.
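For illustration, here's a minimal Go sketch of how such an op:hash scheme could be parsed. The names (Update, ParseUpdate) and layout are made up for this comment, not the current code:

package room

import (
	"fmt"
	"strings"
)

// Update is a hypothetical parsed form of a published "op:hash" string.
type Update struct {
	Op   string // "add" or "del"
	Hash string // root hash of the photo set being added (or deleted)
}

// ParseUpdate splits "add:Qm..." / "del:Qm..." into its parts.
// A bare hash is treated as an implicit add, matching the current behavior.
func ParseUpdate(raw string) (Update, error) {
	parts := strings.SplitN(raw, ":", 2)
	if len(parts) == 1 {
		return Update{Op: "add", Hash: parts[0]}, nil
	}
	switch parts[0] {
	case "add", "del":
		return Update{Op: parts[0], Hash: parts[1]}, nil
	default:
		return Update{}, fmt.Errorf("unknown op: %s", parts[0])
	}
}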

That said, just considering adds for now, this is what happens with 2 peers:

n1
<--0:a
<--a:b
<--b:c

n2
0:a<--
a:b<--
b:c<--

The syntax here is <--last:new for sending and last:new<-- for receiving an update. So far n1 has been making updates and n2 has been receiving them just fine, easy peasy! OK, but now n2 goes for an airplane ride and loses the p2p network. The thing is, both peers are still taking photos:

n1
<--0:a
<--a:b
<--b:c
<--c:d
<--d:e

n2
0:a<--
a:b<--
b:c<--
<--c:d'
<--d':e'

When n2 lands, this happens:

n1
<--0:a
<--a:b
<--b:c
<--c:d
<--d:e
d':e'<--
c:d'<--

n2
0:a<--
a:b<--
b:c<--
<--c:d'
<--d':e'
d:e<--
c:d<--

Each node gets the latest update from the other, then back-propagates through it to find its peer's older updates. Each will stop at the common ancestor, i.e., n2 will stop when it hits c:d because it already has c, and n1 will stop when it hits c:d', because it too already has c. Since each update is indexed locally by the time it was initially added to the system, we end up with the exact same set of data. The update order doesn't matter (especially in the add-only case; it's a little more complex with deletes, but I think still in our current wheelhouse... we just have to reprocess deletes until the target content is available). So, different update chains lead to the same data set, just as there are infinitely many ways to sum to any given number. I guess it's obvious in the end... :)
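A rough Go sketch of that back-propagation, with hypothetical Store / Fetcher interfaces standing in for the real datastore and IPFS calls (written as a loop rather than recursion, same idea):

package room

// Store indexes updates we have already processed, keyed by root hash.
type Store interface {
	Has(hash string) bool
	Index(hash string, data []byte)
}

// Fetcher resolves a root hash to its content and the hash in its "last" file.
type Fetcher interface {
	Get(hash string) (data []byte, last string, err error)
}

// HandleUpdate walks back through an update chain, stopping at the genesis
// marker ("0" in the diagrams above) or at the first hash we already have,
// i.e., the common ancestor.
func HandleUpdate(hash string, store Store, fetch Fetcher) error {
	for hash != "0" && !store.Has(hash) {
		data, last, err := fetch.Get(hash)
		if err != nil {
			return err // e.g., content not yet available on the network
		}
		store.Index(hash, data)
		hash = last
	}
	return nil
}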

In a normal blockchain, you might throw out the shorter chain when they reconcile, but we don't want to ever throw out photos. Also, I don't think we care about PoW / PoS since, in order to generate a valid update in the room, you need to encrypt / sign the payload with the room's private key... so we can just say: if you're able to decrypt / verify it, it should be considered good. So, in our case, all peers are trusted (is that bad?).
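To illustrate the "if it verifies, it's good" rule, here's a hedged sketch assuming an ed25519 key pair shared by room members; the actual key type and scheme in the app may well differ:

package room

import "crypto/ed25519"

// SignUpdate signs a raw update payload with the shared room signing key.
func SignUpdate(priv ed25519.PrivateKey, payload []byte) []byte {
	return ed25519.Sign(priv, payload)
}

// VerifyUpdate returns true if the payload was signed by a room member.
// Anything that verifies is treated as a valid update; there is no PoW/PoS.
func VerifyUpdate(pub ed25519.PublicKey, payload, sig []byte) bool {
	return ed25519.Verify(pub, payload, sig)
}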

Making sense? What am I missing? Seems too easy!? Maybe this is actually a simple CRDT protocol.

@sanderpick sanderpick added the discussion Open discussion label Apr 20, 2018
@carsonfarmer
Member

This all seems reasonable, Sander. I think the IPFS folks were discussing these types of issues, and they did think of a reason why the single ‘last’ thing didn’t work. But for us, with this simple application to start, I say keep it simple 👍

@sanderpick
Member Author

Okay, good deal. Do you remember what their reasoning was? Agreed, we don't need to solve the whole shebang right away... but we can discuss the right approach here for when we do.

@sanderpick
Member Author

#47 fixes #40 (republishing latest updates). While doing that, I realized we only want to republish locally created updates, just as the original publish is only done by the creator. Otherwise, listeners won't be able to back-propagate and find that peer's older updates (the chain will have been broken).

@andrewxhill
Member

I'll think about this more, but on first pass it seems fine. Later optimizations could be made with blocks of transactions, for example. I also realized that your delete issue is easily solvable by adding deletes as a new add transaction. Same with, say, a concept like a modification.

A delete would just add a delete action on an old hash; in a reverse traverse you get the delete first, so the world is actually easier.

A modification would be an add of a replace action, old for new. Again, in a reverse traverse you'd only ever get the newest modification, and the older one would be treated like a delete. So again, for the node catching up, the world is actually a bit easier, since you just ignore everything about it prior.
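A small Go sketch of that reverse traverse (entry layout and op names are invented for illustration): because we walk newest-to-oldest, a delete or replace is seen before the add it supersedes, so the superseded content is simply skipped.

package room

// Entry is a hypothetical flattened view of one update in the chain.
type Entry struct {
	Op      string // "add", "del", or "mod"
	Hash    string // content being added (or, for del/mod, the target)
	NewHash string // for "mod": the replacement content
}

// Apply walks entries newest-to-oldest and returns the hashes that should
// actually be fetched, skipping anything already deleted or replaced.
func Apply(newestFirst []Entry) []string {
	superseded := map[string]bool{}
	var wanted []string
	for _, e := range newestFirst {
		switch e.Op {
		case "del":
			superseded[e.Hash] = true
		case "mod":
			superseded[e.Hash] = true // the old content is gone
			if !superseded[e.NewHash] {
				wanted = append(wanted, e.NewHash)
			}
		case "add":
			if !superseded[e.Hash] {
				wanted = append(wanted, e.Hash)
			}
		}
	}
	return wanted
}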

@andrewxhill
Member

This is getting at that comment I made to you a couple weeks ago, where you could prepend the hash with our own hash code, where perhaps the first character just indicates an action.

@sanderpick
Member Author

Poked some holes in this while driving over the weekend:

  • Stemming from this comment: feat(core): re-publish latest update #47 (comment), I think this is actually a bit of a cheat. A real distributed ledger system is one where any peer can provide all transactions to any other peer (a p2p blockchain, for example). That is not currently the case, since each peer must only broadcast their own updates so as to maintain (not break) their own update chain. E.g., if peer A drops out and then a new peer joins, that new peer won't ever get peer A's updates. We'll have to start relaying other peers' updates somehow, but I'm happy to charge on with the current setup (this feels solvable in the future).
  • Unless we make the rooms private (I think that's on the roadmap?), or move to IPNS (assuming it's not still super slow to publish... even so, this would require a whole different structure), we're wide open to room spam. Even if the spam could be ignored, the content of the updates would still have to be unpacked / downloaded before you could tell it was invalid, thereby slowing down the whole system/network. Again, just planting a flag here for later improvement.

@sanderpick
Member Author

@andrewxhill re: hash structure, yeah, for sure. Actually, though, I don't think this will work, since the deletes will need to be part of a peer's update chain as well... so a delete will need to be content-addressed with a last hash, which means its own root hash won't be the same as the root hash it's pointing at to delete. So the advantage of putting the operation in the publish payload itself falls away, since it can be equally well implied, like an add operation.

re: blocks, totally, especially (or perhaps only) for high-frequency data like location updates.

@sanderpick
Member Author

@gpestana ^

@gpestana

gpestana commented May 3, 2018

Hey there! Really cool project, great job! I've talked with @sanderpick really briefly about what you are up to and textile.io. I've been really interested in CRDTs and decentralization in general, so I'm glad to see this happening. Anyway, I haven't really had much time to go through the code and get a good overview of the current implementation and ideas, but here are my 2 cents based on this thread. I'm sorry in advance if my comments are useless/out of context because of a lack of understanding of the project.

OK, first I assume that a "blockchain of updates" is a sort of set of all transactions made at the album level: a grow-only (no item removal) set. This blockchain makes it possible to traverse the whole album just by knowing the latest picture ID.

It seems to me, based on what @sanderpick explained when opening this thread, that you are using a grow-only set. Some concerns with GO-sets are tombstones and garbage collection: when an item is removed, it's hard to know when it can really be removed from the set without the risk of being added back by some node which didn't get the 'delete' update. On the other hand, keeping all the transactions might bring scalability issues in the future (not sure if I'm bringing up stuff that really doesn't matter much atm).
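For reference, a tiny Go sketch of a standard two-phase set (2P-set), which is the usual CRDT answer to that re-add problem: removals are tombstoned forever, so a late add from a node that missed the delete can't resurrect the item, at the cost of a tombstone set that only grows. This is just the textbook structure, not a proposal for the app's actual data model.

package room

// TwoPhaseSet is a minimal 2P-set: an element is present if it has been
// added and never removed. Tombstones are kept forever, which is exactly
// the garbage-collection concern mentioned above.
type TwoPhaseSet struct {
	added   map[string]bool
	removed map[string]bool // tombstones
}

func NewTwoPhaseSet() *TwoPhaseSet {
	return &TwoPhaseSet{added: map[string]bool{}, removed: map[string]bool{}}
}

func (s *TwoPhaseSet) Add(id string)    { s.added[id] = true }
func (s *TwoPhaseSet) Remove(id string) { s.removed[id] = true }

// Contains ignores adds that arrive after (or without) a matching remove.
func (s *TwoPhaseSet) Contains(id string) bool {
	return s.added[id] && !s.removed[id]
}

// Merge takes the element-wise union of two replicas; the result is the
// same regardless of the order in which updates were received.
func (s *TwoPhaseSet) Merge(other *TwoPhaseSet) {
	for id := range other.added {
		s.added[id] = true
	}
	for id := range other.removed {
		s.removed[id] = true
	}
}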

Another thing I'm thinking about is causal ordering, but it seems that in the album example it does not really matter? So a photo is either 'added' or 'removed' based on the state of the blockchain. Is there any possibility that a node receives a 'delete' update for a photo it never had in its local replica? I can't see this happening, so it should be fine. But then again, I might be missing something.

Oh, BTW, it'd be really cool if you guys had some simple documentation on the technical decisions and whatnot, so that newcomers could easily get up to speed. Or does it exist and I just didn't find it? Anyway, I will create an issue about it.

I will be adding more comments here as I think about this. Keep it up!! :)

@sanderpick
Member Author

The assumption about the "blockchain of updates" is spot on.

re: tombstones, good point, makes sense. At least for now, the only node that would be "adding it back" is the one that created it, which I suppose would be the only one that could delete it. However, we haven't seriously talked about deletion. Soon we'll have to figure out how to republish other peers' updates, at which point this will have to come up.

re: deletes that may not be local: I think this could happen in the current setup. However, in theory, the receiving node would be able to traverse back through those updates and find the add... more logic is needed here, since in that case it would result in an add. We'd have to save off a delete as "pending" if its target didn't yet exist locally.
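Something like the following Go sketch could cover that "pending" case; names and structure are just illustrative:

package room

// Pending tracks locally indexed content plus deletes waiting on their add.
type Pending struct {
	have    map[string]bool // hashes we've indexed locally
	deletes map[string]bool // deletes parked until the add shows up
}

func NewPending() *Pending {
	return &Pending{have: map[string]bool{}, deletes: map[string]bool{}}
}

// OnAdd indexes new content; if a delete was already parked for it, the net
// result is that the content is dropped right away.
func (p *Pending) OnAdd(hash string) (keep bool) {
	if p.deletes[hash] {
		delete(p.deletes, hash)
		return false
	}
	p.have[hash] = true
	return true
}

// OnDelete removes local content, or parks the delete if we don't have it yet.
func (p *Pending) OnDelete(hash string) {
	if p.have[hash] {
		delete(p.have, hash)
		return
	}
	p.deletes[hash] = true
}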

We'll have to come back to this soon, after our current beta release sprint (next week). The biggest weakness, and I'd say deal breaker, is the fact that both the update sender and receiver have to be online at the same time for a sync to happen.

The updates are sent via IPFS pubsub (floodsub). Our current experience is one of "fire and forget", where the "forget" applies to the whole network. In other words, the updates are not persistent. If a node could come online and get all the updates it missed, we'd be gold! @andrewxhill found a very relevant issue... seems a lot of folks want this: libp2p/go-libp2p-pubsub#42

Thanks for your helpful thoughts!

@sanderpick
Member Author

closing in favor of #123

This issue was closed.