Synch blocking attack #444
This is a three-party attack scenario. It involves A, B and C, all of whom sync the same DB with each other. So all three have rights to sync with each other.
In the attack, A doesn't like B and wants to break B's syncing with C. The way A does this is by overwriting B's `_local/` replication checkpoint document on C with a larger sequence number. This ensures that B never syncs those documents with C.
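To make the mechanics concrete, here is a minimal sketch of why a poisoned checkpoint blocks replication. The `checkpoints` map stands in for C's `_local` documents, and all names (`_local/B-repl-id`, `docsToReplicate`) are illustrative, not real PouchDB APIs:

```javascript
// Stand-in for C's _local documents.
const checkpoints = {};

function writeCheckpoint(id, lastSeq) {
  checkpoints[id] = { last_seq: lastSeq };
}

// Pull replication only fetches changes after the stored last_seq.
function docsToReplicate(id, changes) {
  const since = (checkpoints[id] || { last_seq: 0 }).last_seq;
  return changes.filter(c => c.seq > since);
}

const changes = [{ seq: 1, id: 'doc1' }, { seq: 2, id: 'doc2' }];

// Normal case: no checkpoint yet, so both docs replicate.
// docsToReplicate('_local/B-repl-id', changes) returns both entries.

// Attack: A writes B's checkpoint with an absurdly large sequence number...
writeCheckpoint('_local/B-repl-id', 999999);
// ...and now B's pull replication from C skips everything.
```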
Successfully launching this attack is tricky. The reason is that there is no way to enumerate all the `_local` documents; one has to 'just know' which doc ID one is looking for. But in many cases it's not that hard to guess. In this case it's highly likely that A, B and C are all running the same Thali app. And in most of our cases so far, replication is typically just straightforward pull replication with static options and no filters. So if one looks at PouchDB's replication ID generation algorithm, one can see that A can probably successfully guess what uniqueid value B is going to use.
It's worth pointing out that this attack already exists on all real world CouchDB and PouchDB systems. In other words, if the scenario were that A and B are syncing with some central CouchDB/PouchDB server S on the same database, then clearly A could figure out B's uniqueid and launch this attack. The only mitigation is that if access is authenticated then there would potentially be a record of A (who had to authenticate to get to S, or C for that matter) writing to a value that it didn't own (e.g. B's uniqueid). But who is really logging and investigating this sort of thing?
So Thali is no worse than all the shipping systems that now exist. But that doesn't feel like much comfort.
Probably the easiest way to fix this, just for Thali systems, is to use our own more secure version of the uniqueid generator. The CouchDB REST API leaves how the ID is generated completely up to the client. We could do something really simple like generate a cryptographically secure 128 bit random number for each DB we use and then make that value part of our own uniqueid generator. Assuming we take PouchDB's algorithm and replace MD5 with something reasonably secure like SHA-256, we should be good.
We need to talk with the PouchDB folks about giving us an easy way to plug in our own replication ID generator.
Another approach would be to change the structure of the replication ID to include the peer's identifier. Then we would only allow reads/writes to that unique ID if the requester had the right authentication. That is probably the most ideal solution, as it is robustly secure and doesn't add a new secret (in addition to whatever the authentication credentials are) to protect. But it would require changing the CouchDB protocol, or at least extending it enough to say "if a replication ID looks like X, then it must be...".
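A sketch of what that ID structure and ownership check could look like. The `_local/thali/...` layout, `peerKeyHash`, and `canWriteCheckpoint` are all hypothetical, not part of any CouchDB spec:

```javascript
// Hypothetical checkpoint ID that embeds the requesting peer's identity.
function replicationIdFor(peerKeyHash, baseId) {
  return `_local/thali/${peerKeyHash}/${baseId}`;
}

// Enforcement at write time: only the peer named in the ID may touch it.
// authenticatedPeerKeyHash would come from the transport-level auth layer.
function canWriteCheckpoint(authenticatedPeerKeyHash, docId) {
  const match = /^_local\/thali\/([^/]+)\//.exec(docId);
  return match !== null && match[1] === authenticatedPeerKeyHash;
}
```

With this in place, A's write to B's checkpoint would be rejected outright rather than relying on a secret staying secret.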
So my guess is that just changing the replication ID algorithm to incorporate secure randomness is probably the best approach.
It's pretty easy to implement your own replicator. In fact I did it with pouchdb-full-sync, for which my process was:
@nolanlawson It's step 1 that is the problem. That is effectively a fork and signs us up to keep repeating step 1 with every release of Pouch. More ideal would be a formal plug point where we could drop in our own replicationID generator. Even better would be to just change how PouchDB generates its replicationID so it's secure. This helps everyone, but it's potentially more complex because it requires some place to write the secret down.
@yaronyg I understand, but I've personally been dealing with exactly the scenario you describe (updating plugins when PouchDB updates). It's laborious, but it prevents the core from growing too unwieldy, and from bothering the other PouchDB maintainers with whatever I happen to be interested in at the time. If you can prove the benefit of an external replication module, then I think we would be very interested in integrating it into core. :)