
CRDT: ACL #25

Open
pgte opened this issue May 13, 2018 · 4 comments

Comments

pgte (Contributor) commented May 13, 2018

An administrator of a shared document should be able to assign different capabilities to different actors and to revoke them. Besides read and write privileges, the admin should be able to delegate administration.

How should such an ACL be propagated and integrated into the CRDT in a way that is convergent and allows replicas to enforce these capabilities when receiving remote updates?
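To make the requirement concrete, here is a minimal sketch of such a capability model (the types and names are illustrative assumptions, not an existing API): ACL changes are operations themselves, they are only valid if their issuer holds the admin capability, and because admin can itself be granted, administration can be delegated.

```typescript
// Hypothetical capability model: "admin" guards changes to the ACL and can be
// granted to other actors, which covers delegation of administration.
type Permission = "read" | "write" | "admin";

interface AclOp {
  issuer: string;                 // actor issuing the change
  subject: string;                // actor whose capabilities change
  kind: "grant" | "revoke";
  permission: Permission;
}

// An ACL change is only valid if its issuer currently holds "admin".
function isAuthorized(op: AclOp, acl: Map<string, Set<Permission>>): boolean {
  return acl.get(op.issuer)?.has("admin") ?? false;
}

// Applying a valid change mutates the subject's capability set.
function applyAclOp(op: AclOp, acl: Map<string, Set<Permission>>): void {
  const caps = acl.get(op.subject) ?? new Set<Permission>();
  if (op.kind === "grant") caps.add(op.permission);
  else caps.delete(op.permission);
  acl.set(op.subject, caps);
}
```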

pgte (Contributor, Author) commented May 28, 2018

There are 2 related papers that may be interesting:

They talk about how an ACL needs to be causally consistent with the data.
Local operations must be marked as causally dependent on the local version of the ACL. This can be achieved by making the ACL itself causally consistent (admin operations should carry causality information) and by embedding that dependency in each data operation message.
When replicated, the data operations carry these causal dependencies and so are required to be causally consistent with the ACL data at the time of their creation.
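As a minimal sketch of that mechanism, assuming version vectors for causality (the names are illustrative, not taken from the papers): each data operation embeds the ACL version its author observed, and a receiving replica buffers the operation until its own ACL has caught up, then checks the author's permission against the ACL state the operation declares as its dependency.

```typescript
// Illustrative types; a real implementation would reconstruct the referenced
// ACL state from its log of admin operations rather than keep full snapshots.
type ActorId = string;
type Permission = "read" | "write" | "admin";
type VersionVector = Record<ActorId, number>;

interface AclSnapshot {
  version: VersionVector;
  permissions: Map<ActorId, Set<Permission>>;
}

interface DataOp {
  author: ActorId;
  payload: unknown;
  aclDependency: VersionVector;   // ACL version observed when the op was created
}

const dominates = (a: VersionVector, b: VersionVector): boolean =>
  Object.entries(b).every(([actor, n]) => (a[actor] ?? 0) >= n);

const sameVersion = (a: VersionVector, b: VersionVector): boolean =>
  dominates(a, b) && dominates(b, a);

function canApply(op: DataOp, current: AclSnapshot, history: AclSnapshot[]): boolean {
  // Buffer the op until the local ACL has caught up with its declared dependency.
  if (!dominates(current.version, op.aclDependency)) return false;
  // Check the author's permission in the ACL state the op causally depends on.
  const atCreation = history.find((s) => sameVersion(s.version, op.aclDependency));
  return atCreation?.permissions.get(op.author)?.has("write") ?? false;
}
```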

The ACL should also be a CRDT itself. When a conflict occurs (concurrent edits to the same ACL entry), the resulting permission set is the intersection of the two.
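A small sketch of that merge rule (illustrative only): keeping only the permissions present on both sides means a concurrent grant can never override a concurrent restriction.

```typescript
type Permission = "read" | "write" | "admin";

// Concurrent edits to the same ACL entry merge to the intersection of the
// two resulting permission sets.
function mergeConcurrentEntry(a: Set<Permission>, b: Set<Permission>): Set<Permission> {
  return new Set([...a].filter((p) => b.has(p)));
}

// Example: {read, write} merged with {read, admin} yields {read}.
```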

This goes along with what has been discussed here so far, but I sense one possible attack, which I'll try to describe:

A replica has access to a given resource, which allows it to send operations to other replicas that mutate that resource. Every operation carries a reference to the local ACL version, and thus the permission, at the time of its creation.
So far, so good.
But then that permission gets revoked by a remote node that is allowed to do so. That node issues an operation on the ACL and propagates it to the other nodes.
When receiving that operation, our replica decides to ignore it, never advancing its ACL. It keeps sending operations that causally point to the old ACL version, forcing other replicas to accept them.
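In terms of the sketch above, the attack is just a replica that freezes its ACL and keeps stamping new operations with the stale version it prefers (hypothetical values):

```typescript
// Hypothetical: "mallory" was revoked at ACL version { admin: 4 }, but keeps
// declaring the earlier version in which it still held "write", so the
// per-operation permission check keeps accepting these operations.
const staleOp = {
  author: "mallory",
  payload: { register: "x", value: 42 },
  aclDependency: { admin: 3 },   // deliberately frozen before the revocation
};
```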

This case is no different from a replica that has been offline, creating operations it believes it still has the right to issue.

Can this attack be avoided?

pgte (Contributor, Author) commented May 28, 2018

@mweberUKL @bieniusa could you offer some insight on the issue above? Thank you!!

mweberUKL commented:

First of all, the problem pointed out depends on two factors: first, a revoke operation that does not take effect, with no way to enforce the permission restriction; and second, a malicious replica sending out operations that would not be valid under normal operation.

In our model, the replicas are assumed to be trustworthy. This rules out malicious replicas sending malicious requests. What we do not rule out is that an attacker might try to isolate a replica by cutting its network connections.

An access control operation issued by a remote replica cannot be enforced on an isolated replica. This may lead to a scenario where an attacker isolates a replica and issues operations on it with the permissions he/she had before the isolation. Only after the replica is reconnected to the network can these permissions be revoked. The operations issued in isolation can then flood the network.

The problem is somewhat mitigated in our model because isolating a replica also means that no new data updates arrive there. The attack remains valid, of course, and cannot easily be avoided without introducing synchronization measures and additional checks.

One possible counter-measure would be to force freshness of the policy state. By freshness, I mean that the policy state of one replica should not fall too far behind the policy state of the remote replicas. This would require reading the policy state of all remote replicas once in a while and stopping operation if a majority of replicas is unreachable or the policy state is getting too old. The replica would then have to wait until the other replicas are online again and the policy state has stabilized.
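A hedged sketch of that freshness guard (the names, the majority quorum, and the staleness bound are assumptions for illustration): the replica tracks when it last read each peer's policy state and refuses to issue new operations unless a majority of peers was read recently enough.

```typescript
// Illustrative freshness guard; the threshold and the majority rule are assumptions.
interface PeerPolicyStatus {
  peerId: string;
  lastPolicyReadMs: number;   // when this replica last read the peer's ACL state
}

function mayIssueOperations(
  peers: PeerPolicyStatus[],
  nowMs: number,
  maxStalenessMs: number
): boolean {
  const fresh = peers.filter((p) => nowMs - p.lastPolicyReadMs <= maxStalenessMs);
  // Stop issuing operations when a majority of peers is unreachable or their
  // policy state is too old; resume once the policy state has stabilized.
  return fresh.length > peers.length / 2;
}
```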

pgte (Contributor, Author) commented May 28, 2018

@mweberUKL thank you for your insight!
