Synch blocking attack #444

Open
yaronyg opened this Issue Jan 14, 2016 · 6 comments

yaronyg commented Jan 14, 2016

This is a three-party attack scenario involving A, B and C, who all sync the same DB between each other. This means all three have the right to sync with each other.

In the attack, A doesn't like B and wants to sabotage B's syncing with C. A does this by overwriting B's _local/ replication checkpoint document on C with a larger sequence number. As a result, B's replicator will skip everything below that sequence number and B will never sync those documents with C.
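To make the mechanics concrete, here is a minimal pure-JavaScript model of checkpoint-based resumption. This is a sketch of the concept, not PouchDB's actual checkpoint code (the document names and field shape are illustrative): the replicator resumes from the stored last_seq, so an inflated checkpoint silently skips every change below it.

```javascript
// C's view of the _local checkpoint documents, keyed by replication ID.
const localDocs = new Map();

// B writes its legitimate checkpoint after syncing up to seq 10.
localDocs.set('_local/b-replication-id', { last_seq: 10 });

// A, who can guess B's replication ID, overwrites it with a huge seq.
localDocs.set('_local/b-replication-id', { last_seq: 1e9 });

// B's next replication asks C only for changes after the checkpoint...
function changesSince(allChanges, checkpoint) {
  return allChanges.filter(c => c.seq > checkpoint.last_seq);
}

const changesOnC = [{ seq: 11, id: 'doc1' }, { seq: 12, id: 'doc2' }];
const pending = changesSince(changesOnC, localDocs.get('_local/b-replication-id'));
console.log(pending.length); // 0 -- B never sees doc1 or doc2
```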

Successfully launching this attack is tricky. The reason is that there is no way to enumerate all the _local documents; one has to 'just know' what doc ID one is looking for. But in many cases it's not that hard to guess. In this case it's highly likely that A, B and C are all running the same Thali app, and in most of our cases so far replication is typically just straightforward pull replication with static options and no filters. So if one looks at PouchDB's replication ID generation algorithm, one can see that A can probably successfully guess what uniqueid value B is going to use.

It's worth pointing out that this attack already exists on all real-world CouchDB and PouchDB systems. In other words, if the scenario were that A and B are syncing with some central CouchDB/PouchDB server S on the same database, then clearly A could figure out B's uniqueid and launch this attack. The only mitigation is that if access is authenticated, then there would potentially be a record of A (who had to authenticate to get to S, or C for that matter) writing to a value that it didn't own (e.g. B's uniqueid). But who is really logging and investigating this sort of thing?

So Thali is no worse than all the shipping systems that now exist. But that doesn't feel like much comfort.

Probably the easiest way to fix this, just for Thali, is for Thali systems to use their own more secure version of the uniqueid generator. The CouchDB REST API leaves how the ID is generated completely up to the client. We could do something really simple like generate a cryptographically secure 128-bit random number for each DB we use and then make that value part of our own uniqueid generator. Assuming we take PouchDB's algorithm and replace MD5 with something reasonably secure like SHA-256, we should be good.

We need to talk with the PouchDB folks about giving us an easy way to plug in our own replication ID generator.

Another approach would be to change the structure of the replication ID to include the peer's identifier. Then we would only allow reads/writes to that unique ID if the requester had the right authentication. That is probably the most ideal solution, as it is robustly secure and doesn't add a new secret (in addition to whatever the authentication credentials are) to protect. But it would require changing the CouchDB protocol, or at least extending it enough to say "if a replication ID looks like X then it must be...".

So my guess is that just changing the replication ID algorithm to incorporate secure randomness is probably the best approach.


nolanlawson commented Jan 25, 2016

> We need to talk with the PouchDB folks about giving us an easy way to plug in our own replication ID generator.

It's pretty easy to implement your own replicator. In fact I did it with pouchdb-full-sync, for which my process was:

  1. copy pouchdb/src/replicate/*
  2. make a module out of it
  3. modify it as desired

Since the replicate.js function takes two PouchDB objects as input, inside of that function you can do whatever you want to replicate between the two stores. Hope that helps! :)


yaronyg commented Jan 25, 2016

@nolanlawson It's step 1 that is the problem. That is effectively a fork, and it signs us up to repeat step 1 with every release of PouchDB. More ideal would be a formal plug point where we could drop in our own replication ID generator. Even better would be to change how PouchDB generates its replication ID so it's secure by default. That helps everyone, but it is potentially more complex because it requires somewhere to write the secret down.
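The "formal plug point" being asked for might look like the following purely hypothetical sketch; neither the option name nor the shape exists in PouchDB today:

```javascript
// Hypothetical: a replication option letting the caller supply their own
// replication-ID generator, so no fork of pouchdb/src/replicate is needed.
function replicate(source, target, opts = {}) {
  const genId = opts.generateReplicationId || defaultReplicationId;
  const checkpointId = genId(source, target, opts);
  // ...the rest of the replicator would use checkpointId as it does today...
  return checkpointId;
}

function defaultReplicationId(source, target) {
  return '_local/default-' + source + '-' + target;
}

// Thali could then drop in its secure generator without forking:
const id = replicate('local-db', 'remote-db', {
  generateReplicationId: (s, t) => '_local/secure-' + s + '-' + t
});
console.log(id); // _local/secure-local-db-remote-db
```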


nolanlawson commented Jan 30, 2016

@yaronyg I understand, but I've personally been dealing with exactly the scenario you describe (updating plugins when PouchDB updates). It's laborious, but it prevents the core from growing too unwieldy, and from bothering the other PouchDB maintainers with whatever I happen to be interested in at the time. If you can prove the benefit of an external replication module, then I think we would be very interested in integrating it into core. :)


yaronyg commented Feb 1, 2016

I was wondering if we can treat this as a bug in PouchDB: that is, we shouldn't be using IDs that can be predicted.

@yaronyg yaronyg added the Icebox label Feb 9, 2016


nolanlawson commented Feb 13, 2016

Predicting replication IDs is kind of relied upon right now by plugins like pouchdb-load.


yaronyg commented Feb 16, 2016

That sounds rather dangerous and fragile. The CouchDB specs certainly make no promises about replication IDs being predictable, and, as we can see, being able to predict IDs constitutes a security threat. Of course lots of things are easier if we can ignore security. :)

@yaronyg yaronyg added this to the V1 milestone Aug 3, 2016

@yaronyg yaronyg added 1 - Backlog and removed 0 - Icebox labels Aug 4, 2016

@yaronyg yaronyg added Node and removed P2 labels Sep 26, 2016

@yaronyg yaronyg added bug and removed 1 - Backlog labels Oct 6, 2016
