
Document that locks aren't really locks #11457

aphyr opened this issue Dec 16, 2019 · 2 comments · Fixed by #11490


@aphyr commented Dec 16, 2019

As a part of our Jepsen testing, we've demonstrated that etcd's locks aren't actually locks: like all "distributed locks", they cannot guarantee mutual exclusion when processes are allowed to run slow or fast, or crash, or messages are delayed, or their clocks are unstable, etc. For example, this workload uses etcd's locks to protect read-modify-write updates to shared state. Here, the shared state is stored in memory, so we don't have to worry about latency or failures, but realistically, we'd be sharing state in something like a filesystem, object store, third-party database, etc.

This is directly adapted from the etcd 3.2 announcement, which demonstrates
using locks to increment a file on disk:

Here’s a simple example that concurrently increments a file f in the shell;
the locking ensures everything adds up:

```sh
$ echo 0 > f
$ for i in `seq 1 100`; do
    ETCDCTL_API=3 etcdctl lock mylock -- bash -c 'expr 1 + $(cat f) > f' &
    pids="$pids $!"
  done
$ wait $pids
$ cat f
100
```

Instead of updating a file on disk, our workload adds unique integers to a set
by reading a mutable variable, waiting some random amount of time from 0 to 2
seconds, and setting the variable to the value that was read, plus the given
integer.

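The lost-update anomaly this workload detects doesn't need etcd to reproduce in miniature: any two read-modify-write writers whose critical sections overlap will clobber each other. The following is a simplified Python simulation (not the Jepsen workload; the thread structure and event names are invented for illustration) that forces the bad interleaving deterministically:

```python
import threading

# Shared "state": a set of integers built by read-modify-write updates.
# Each worker reads the current set, pauses (standing in for a writer
# that still believes it holds the lock), then writes back read + element.
state = {"s": frozenset()}

def rmw_add(element, read_done, write_go):
    snapshot = state["s"]              # read
    read_done.set()                    # signal: read finished
    write_go.wait()                    # pause: simulate a delayed writer
    state["s"] = snapshot | {element}  # write back from a stale snapshot

# Force the interleaving: read1, read2, write1, write2.
r1, g1 = threading.Event(), threading.Event()
r2, g2 = threading.Event(), threading.Event()
t1 = threading.Thread(target=rmw_add, args=(1, r1, g1))
t2 = threading.Thread(target=rmw_add, args=(2, r2, g2))
t1.start(); t2.start()
r1.wait(); r2.wait()   # both workers have read the empty set
g1.set(); t1.join()    # writer 1 commits {1}
g2.set(); t2.join()    # writer 2 overwrites with {2}: update 1 is lost

print(state["s"])  # → frozenset({2}); element 1 was lost
```

With a real mutual-exclusion guarantee the two critical sections could not overlap and the final set would be {1, 2}; the overlap is exactly what an expiring lease permits.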
We use the same lock acquisition and release strategy as described before: we
grant a lease with a 2-second TTL, keep it alive indefinitely using a watchdog
thread, then acquire a lock with that lease:

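The keep-alive strategy looks roughly like the sketch below. `FakeLease` is a hypothetical in-memory stand-in for an etcd lease, not the real client API; only the watchdog-thread structure mirrors the strategy described above:

```python
import threading, time

class FakeLease:
    """Stand-in for an etcd lease: it expires unless refreshed within its
    TTL. Hypothetical class for illustration, not the real etcd client."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.expires_at = time.monotonic() + ttl
    def refresh(self):
        self.expires_at = time.monotonic() + self.ttl
    def alive(self):
        return time.monotonic() < self.expires_at

def keep_alive(lease, stop):
    # Watchdog thread: refresh well inside the TTL until told to stop.
    while not stop.is_set():
        lease.refresh()
        stop.wait(lease.ttl / 2)

lease = FakeLease(ttl=0.2)
stop = threading.Event()
threading.Thread(target=keep_alive, args=(lease, stop), daemon=True).start()

time.sleep(0.5)                          # several TTLs pass...
survived_under_watchdog = lease.alive()  # ...yet the lease is still alive
stop.set()
time.sleep(0.5)                          # no more refreshes: it expires
print(survived_under_watchdog, lease.alive())  # → True False
```

Note that this is also where the trouble starts: if the watchdog's process pauses (GC, swap, partition), the lease expires while the process still believes it holds the lock.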
When we partition away leader nodes every 10 seconds or so, this workload
exhibits both lost updates and stale reads (due to the in-memory state
"flickering" as competing lock holders overwrite each other). This 60-second
test lost 10/42 successfully completed writes:

```clj
  {:valid? true,
   :count 103,
   :ok-count 83,
   :fail-count 20,
   :info-count 0,
    {:valid? true,
     :count 62,
     :ok-count 42,
     :fail-count 20,
     :info-count 0},
    {:valid? true,
     :count 41,
     :ok-count 41,
     :fail-count 0,
     :info-count 0}}},
  :exceptions {:valid? true},
    ({:element 50,
      :outcome :stable,
      :stable-latency 886,
      :lost-latency nil,
      {:type :ok,
       :f :add,
       :value 50,
       :process 0,
       :time 51688906389,
       :error [:not-found "etcdserver: requested lease not found"],
       :index 191},
      {:type :invoke,
       :f :read,
       :process 2,
       :time 52575661586,
       :index 200}}),
    :duplicated-count 0,
    :valid? false,
    :lost-count 10,
    :lost (21 22 30 32 33 41 42 51 52 53),
    :stable-count 31,
    :stale-count 1,
    :stale (50),
    :never-read-count 21,
    :stable-latencies {0 0, 0.5 0, 0.95 0, 0.99 886, 1 886},
    :attempt-count 62,
    :lost-latencies {0 0, 0.5 0, 0.95 119, 0.99 119, 1 119},
    (8 9 10 11 18 19 20 26 27 28 29 38 39 40 48 49 57 58 59 60 61),
    :duplicated {}},
   :timeline {:valid? true},
   :valid? false},
```

This problem was exacerbated by #11456, but fundamentally cannot be fixed. Users cannot use etcd as a naive locking system: they must carefully couple a fencing token (e.g. the etcd lock key revision number) to any systems they interact with in order to preserve exclusion boundaries.

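A minimal sketch of that fencing-token coupling, assuming a downstream store you control (`FencedStore` is hypothetical, not part of etcd); the token would be something monotonic such as the etcd lock key's revision:

```python
class FencedStore:
    """A downstream store that accepts a write only if it carries a
    fencing token at least as large as the highest token seen so far.
    Hypothetical class for illustration, not part of etcd."""
    def __init__(self):
        self.value = None
        self.highest_token = -1
    def write(self, token, value):
        if token < self.highest_token:
            raise RuntimeError("stale lock holder rejected (token %d < %d)"
                               % (token, self.highest_token))
        self.highest_token = token
        self.value = value

store = FencedStore()
store.write(token=5, value="from holder A")  # A holds the lock at revision 5
store.write(token=8, value="from holder B")  # B acquired it later, revision 8
try:
    store.write(token=5, value="late write from A")  # A's lease expired...
except RuntimeError as e:
    print(e)  # ...so its delayed write is refused
```

The key design point is that the *store*, not the lock service, enforces the exclusion boundary: a paused or partitioned ex-holder cannot damage state it can no longer write to.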
etcd could remove locks altogether, but I don't think that's strictly necessary: it's still useful to have something which is mostly a lock. For example, users could use locks to ensure that most of the time, one node, rather than dozens, is performing a specific computation. Instead, I'd like to suggest changing the documentation to make these risks, and the correct use of locks, explicit. In particular, I think the relevant documentation pages could be revised.



@maoling commented Dec 17, 2019

fpj also wrote a great post on this topic in the context of ZooKeeper: "Note on fencing and distributed locks".


@kien-truong commented Jan 14, 2020

If you use the lock to start a long-running process, that child process will not be terminated even if the lease is lost. As a result, etcd's lock is only usable for short tasks.
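One possible mitigation is to run the task as a child process and kill it as soon as the lease is observed to be lost. In this sketch, `lease_lost` is a hypothetical event standing in for whatever keep-alive failure signal your etcd client exposes; etcd itself does not do this for you, and even this leaves a race window, so fencing is still required:

```python
import subprocess, threading

# Hypothetical signal: set when the keep-alive loop notices the lease
# is gone (etcd does not provide this directly; your client code must).
lease_lost = threading.Event()

proc = subprocess.Popen(["sleep", "60"])  # the long-running task

def monitor():
    lease_lost.wait()
    proc.kill()  # stop doing work once mutual exclusion is gone

threading.Thread(target=monitor, daemon=True).start()

lease_lost.set()        # simulate losing the lease
proc.wait(timeout=5)
print(proc.returncode)  # nonzero: the child was terminated
```

Between the lease actually expiring and `lease_lost` being set, the child may still perform writes, which is exactly why the fencing-token check on the downstream system remains necessary.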

mcassaniti added a commit to mcassaniti/msdha that referenced this issue Apr 15, 2021
The initial design for MSDHA expected that etcd locks would remain held
for the lifetime of the process run under the lock. It also expected the
process to be stopped and the lease to be dropped all at the same time
if the etcd lock was somehow lost. See
[here](etcd-io/etcd#11457 (comment))
for more information.

The new design instead relies on detecting that there is no current
master rather than using a lock to hold the master role. MSDHA will (in
order):
  * Wait MSDHA_TTL*2 seconds
  * Acquire the etcd lock
  * Determine whether a master already exists by now, stopping if so
  * Promote the node
  * Set the state to master in etcd
  * Run a background job to wait MSDHA_MASTER_TTL seconds before
    terminating the container

The logic for restarting the etcd change detection was also re-written
to prevent losing events between restarts.
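The failover sequence in that commit might be sketched as follows; `FakeEtcd`, `try_become_master`, and `promote` are invented stand-ins for illustration, not MSDHA's actual code:

```python
import time

class FakeEtcd:
    """In-memory stand-in for the etcd lock and state used below."""
    def __init__(self):
        self.locked = False
        self.state = {}
    def lock(self):
        assert not self.locked
        self.locked = True
    def unlock(self):
        self.locked = False

MSDHA_TTL = 0.01  # shortened for the example

def try_become_master(etcd, node, promote):
    time.sleep(MSDHA_TTL * 2)            # wait MSDHA_TTL*2 seconds
    etcd.lock()                          # acquire the etcd lock
    try:
        if etcd.state.get("master"):     # a master already exists: stop
            return False
        promote(node)                    # promote the node
        etcd.state["master"] = node      # set the state to master in etcd
        return True
    finally:
        etcd.unlock()

etcd = FakeEtcd()
promoted = []
assert try_become_master(etcd, "node-a", promoted.append)
assert not try_become_master(etcd, "node-b", promoted.append)
print(promoted)  # → ['node-a']
```

The lock here only arbitrates the promotion decision itself; the master role is held by the recorded state, not by continuing to hold the lock, which sidesteps the lost-lease problem discussed above.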