UPSTREAM: wireguard: peerlookup: take lock before checking hash in replace operation

Eric's suggested fix for the race condition mentioned in the previous
commit was to simply take the table->lock in wg_index_hashtable_replace().
The table->lock of the hash table is supposed to protect the bucket
heads, not the entries, but actually, since all the mutator functions
are already taking it, it makes sense to take it for the hlist_unhashed
test too, as a defense-in-depth measure, so that the test no longer
races with deletions, regardless of what other locks are protecting
individual entries. This is sensible from a performance perspective
because, as Eric pointed out, the unhashed case is already the unlikely
one, so this won't add contention on the common path. And comparing
instructions, the change amounts to little more than pushing and
popping %r13, used by the new `bool ret`. More generally, I like the
idea of locking consistency across table mutator functions, and this
might let me rest slightly easier at night.

Suggested-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/wireguard/20200908145911.4090480-1-edumazet@google.com/
Fixes: e7096c1 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 6147f7b)
Bug: 152722841
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I1cbbeef2bbcfce20e4a0ba6f1137bbcf8c1516b6
zx2c4 authored and gregkh committed Oct 14, 2020
1 parent 206b4c2 commit 03c277a
 drivers/net/wireguard/peerlookup.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/net/wireguard/peerlookup.c b/drivers/net/wireguard/peerlookup.c
--- a/drivers/net/wireguard/peerlookup.c
+++ b/drivers/net/wireguard/peerlookup.c
@@ -167,9 +167,13 @@ bool wg_index_hashtable_replace(struct index_hashtable *table,
 				struct index_hashtable_entry *old,
 				struct index_hashtable_entry *new)
 {
-	if (unlikely(hlist_unhashed(&old->index_hash)))
-		return false;
+	bool ret;
+
 	spin_lock_bh(&table->lock);
+	ret = !hlist_unhashed(&old->index_hash);
+	if (unlikely(!ret))
+		goto out;
+
 	new->index = old->index;
 	hlist_replace_rcu(&old->index_hash, &new->index_hash);
 
@@ -180,8 +184,9 @@ bool wg_index_hashtable_replace(struct index_hashtable *table,
 	 * simply gets dropped, which isn't terrible.
 	 */
 	INIT_HLIST_NODE(&old->index_hash);
+out:
 	spin_unlock_bh(&table->lock);
-	return true;
+	return ret;
 }
 
 void wg_index_hashtable_remove(struct index_hashtable *table,
