If you use a lock to start a long-running process, that child process will not be terminated even if the lease is lost. As a result, etcd locks are only usable for short tasks.
As part of our Jepsen testing, we've demonstrated that etcd's locks aren't
actually locks: like all "distributed locks", they cannot guarantee mutual
exclusion when processes run slow or fast, crash, experience delayed messages,
or have unstable clocks. For example, this workload uses etcd's locks to
protect read-modify-write updates to shared state. Here, the shared state is
stored in memory, so we don't have to worry about latency or failures, but
realistically we'd be sharing state in something like a filesystem, object
store, or third-party database.
https://github.com/jepsen-io/etcd/blob/c4787f4e71495584c276e998107ee811160dcea7/src/jepsen/etcd/lock.clj#L150-L163
This is directly adapted from the etcd 3.2 announcement, which demonstrates
using locks to increment a file on disk: https://coreos.com/blog/etcd-3.2-announcement
Instead of updating a file on disk, our workload adds unique integers to a set
by reading a mutable variable, waiting a random amount of time between 0 and 2
seconds, and setting the variable to the value that was read, plus the given
integer.
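A minimal Python sketch of this read-modify-write pattern (a toy in-memory simulation with hypothetical names, not the Jepsen Clojure workload itself) shows why it is only safe if mutual exclusion actually holds for the entire read-to-write window:

```python
import threading
import time

# Shared mutable state: a set of integers, as in the workload above.
state = set()

def add_element(x, delay):
    """Read the current set, wait, then write back the set plus x.
    Safe only if callers are mutually excluded for the whole window."""
    read_value = set(state)   # read
    time.sleep(delay)         # simulated work (0-2 s in the real workload)
    read_value.add(x)
    state.clear()
    state.update(read_value)  # write back: clobbers any concurrent write

# Two writers that both believe they hold the lock (e.g. after one
# writer's lease quietly expired) race on the same state:
t1 = threading.Thread(target=add_element, args=(1, 0.4))
t2 = threading.Thread(target=add_element, args=(2, 0.1))
t1.start(); t2.start()
t1.join(); t2.join()

# t1 read the empty set before t2's write landed, so t2's update is lost:
print(sorted(state))  # [1]
```

The point is that nothing in the write path checks whether the writer's view is still current; the lock is the only defense, and it can silently stop holding.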
We use the same lock acquisition and release strategy as described before: we
grant a lease with a 2-second TTL, keep it alive indefinitely using a watchdog
thread, then acquire a lock with that lease:
https://github.com/jepsen-io/etcd/blob/c4787f4e71495584c276e998107ee811160dcea7/src/jepsen/etcd/lock.clj#L33-L37
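The lease-plus-watchdog structure can be sketched as follows (a toy in-process simulation with hypothetical names, not etcd's client API; etcd's real leases live server-side):

```python
import threading
import time

class Lease:
    """Toy lease with a TTL, refreshed by a watchdog thread."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.expires_at = time.monotonic() + ttl
        self._lock = threading.Lock()

    def keep_alive(self):
        with self._lock:
            self.expires_at = time.monotonic() + self.ttl

    def alive(self):
        with self._lock:
            return time.monotonic() < self.expires_at

def watchdog(lease, interval, stop):
    # Refresh the lease periodically, like the keep-alive thread in the
    # workload. If this thread stalls (GC pause, partition, slow process),
    # the lease silently expires while the process still believes it
    # holds the lock.
    while not stop.is_set():
        lease.keep_alive()
        stop.wait(interval)

lease = Lease(ttl=2.0)
stop = threading.Event()
t = threading.Thread(target=watchdog, args=(lease, 0.5, stop), daemon=True)
t.start()

time.sleep(3.0)        # well past the 2-second TTL...
print(lease.alive())   # True: the watchdog kept it alive

stop.set()             # simulate a stalled process: no more refreshes
t.join()
time.sleep(2.5)
print(lease.alive())   # False: the lease expired, but nothing stops the
                       # process from continuing its critical section
```

Note the asymmetry: the server knows the lease is gone, but the lock holder learns nothing until it next talks to the server, which may be after it has already performed unsafe writes.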
When we partition away leader nodes every 10 seconds or so, this workload
exhibits both lost updates and stale reads (due to the in-memory state
"flickering" as competing lock holders overwrite each other). This 60-second
test lost 10/42 successfully completed writes:
This problem was exacerbated by #11456, but it fundamentally cannot be fixed: users cannot treat etcd as a naive locking system. They must carefully couple a fencing token (e.g. the etcd lock key's revision number) to any system they interact with in order to preserve exclusion boundaries.
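What "coupling a fencing token" means can be sketched in Python (a toy downstream store with a hypothetical API; the token plays the role of the etcd lock key's revision number, which increases monotonically across successive lock holders):

```python
class FencedStore:
    """Toy downstream store that honors fencing tokens: it remembers the
    highest token seen and rejects writes carrying an older one."""
    def __init__(self):
        self.value = None
        self.highest_token = -1

    def write(self, token, value):
        # A stale lock holder (whose lease expired) carries an old token,
        # so it can no longer clobber the current holder's writes.
        if token < self.highest_token:
            return False
        self.highest_token = token
        self.value = value
        return True

store = FencedStore()

# Holder A acquires the lock at revision 5, then stalls.
# Holder B acquires the lock at revision 8 and writes.
assert store.write(8, "from B")      # accepted
# A wakes up, unaware its lease expired, and retries with its stale token:
assert not store.write(5, "from A")  # rejected: exclusion preserved
print(store.value)  # from B
```

The crucial requirement is that the *downstream* system performs the token comparison; a check done by the lock holder itself is subject to the same race it is trying to prevent.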
etcd could remove locks altogether, but I don't think that's strictly necessary: it's still useful to have something that is mostly a lock. For example, users could use locks to ensure that, most of the time, one node rather than dozens performs a specific computation. Instead, I'd like to suggest changing the documentation to make these risks, and the correct use of locks, explicit. In particular, I think these pages could be revised:
https://coreos.com/blog/etcd-3.2-announcement
#10096
https://github.com/etcd-io/etcd/blob/master/Documentation/dev-guide/api_concurrency_reference_v3.md
https://github.com/etcd-io/etcd/tree/master/etcdctl#concurrency-commands