bpf: Propagate error from htab_lock_bucket() to userspace
[ Upstream commit 66a7a92 ]

In __htab_map_lookup_and_delete_batch(), if htab_lock_bucket() returns
-EBUSY, it will go to the next bucket. Going to the next bucket may not
only silently skip the elements in the current bucket, but may also incur
an out-of-bounds memory access or expose kernel memory to userspace if
the current bucket_cnt is greater than bucket_size or greater than zero,
respectively.

Fix it by stopping the batch operation and returning -EBUSY when
htab_lock_bucket() fails; the application can then retry or skip the busy
batch as needed.

Fixes: 20b6cc3 ("bpf: Avoid hashtab deadlock with map_locked")
Reported-by: Hao Sun <sunhao.th@gmail.com>
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220831042629.130006-3-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Hou Tao authored and gregkh committed Oct 21, 2022
1 parent 9f48317 commit 4f1f39a
Showing 1 changed file with 5 additions and 2 deletions.
7 changes: 5 additions & 2 deletions kernel/bpf/hashtab.c
@@ -1704,8 +1704,11 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
 	/* do not grab the lock unless need it (bucket_cnt > 0). */
 	if (locked) {
 		ret = htab_lock_bucket(htab, b, batch, &flags);
-		if (ret)
-			goto next_batch;
+		if (ret) {
+			rcu_read_unlock();
+			bpf_enable_instrumentation();
+			goto after_loop;
+		}
 	}
 
 	bucket_cnt = 0;
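Since the batch syscall can now surface -EBUSY to the caller, a userspace consumer has to be prepared to retry or skip a contended batch position. Below is a minimal, hypothetical sketch of such handling using libbpf's bpf_map_lookup_and_delete_batch(); the map layout (__u32 keys, __u64 values), MAX_BATCH, and drain_map() are illustrative assumptions, not part of the patch.

/*
 * Hypothetical sketch (not part of the patch): drain a BPF hash map with
 * the batched lookup-and-delete API and handle the -EBUSY that a contended
 * bucket lock can now produce.  Assumes a map with __u32 keys and __u64
 * values; MAX_BATCH and drain_map() are illustrative names, and error
 * handling assumes libbpf sets errno on failure.
 */
#include <errno.h>
#include <stdbool.h>
#include <bpf/bpf.h>

#define MAX_BATCH 64

int drain_map(int map_fd)
{
	__u32 keys[MAX_BATCH];
	__u64 vals[MAX_BATCH];
	__u32 in_batch = 0, out_batch = 0, count;
	bool first = true;
	int err;

	for (;;) {
		count = MAX_BATCH;
		err = bpf_map_lookup_and_delete_batch(map_fd,
						      first ? NULL : &in_batch,
						      &out_batch, keys, vals,
						      &count, NULL);
		if (err && errno != EBUSY && errno != ENOENT)
			return -errno;		/* hard failure */

		/* consume the `count` entries returned by this call here */

		if (err && errno == ENOENT)
			return 0;		/* iteration reached the end */

		/* Resume from the position reported in out_batch.  After an
		 * -EBUSY return this retries the busy batch on the next call,
		 * as the commit message suggests the application may do.
		 */
		in_batch = out_batch;
		first = false;
	}
}

The sketch simply retries a batch position that came back with -EBUSY; a caller that prefers to skip the busy batch instead, as the commit message also allows, would handle the returned position differently.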
