linux

Commit c41742d

fdmanana authored and gregkh committed
btrfs: get used bytes while holding lock at btrfs_reclaim_bgs_work()
[ Upstream commit ba5d064 ]

At btrfs_reclaim_bgs_work(), we are grabbing twice the used bytes counter
of the block group while not holding the block group's spinlock. This can
result in races, reported by KCSAN and similar tools, since a concurrent
task can be updating that counter while at btrfs_update_block_group().

So avoid these races by grabbing the counter in a critical section
delimited by the block group's spinlock after setting the block group to
RO mode. This also avoids using two different values of the counter in
case it changes in between each read.

This silences KCSAN and is required for the next patch in the series too.

Fixes: 243192b ("btrfs: report reclaim stats in sysfs")
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Stable-dep-of: 19eff93 ("btrfs: fix periodic reclaim condition")
Signed-off-by: Sasha Levin <sashal@kernel.org>
1 parent be5ee30 commit c41742d

1 file changed (+16, -5 lines)

fs/btrfs/block-group.c

Lines changed: 16 additions & 5 deletions
@@ -1877,7 +1877,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
 	list_sort(NULL, &fs_info->reclaim_bgs, reclaim_bgs_cmp);
 	while (!list_empty(&fs_info->reclaim_bgs)) {
 		u64 zone_unusable;
-		u64 reclaimed;
+		u64 used;
 		int ret = 0;
 
 		bg = list_first_entry(&fs_info->reclaim_bgs,
@@ -1973,19 +1973,30 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
 		if (ret < 0)
 			goto next;
 
+		/*
+		 * Grab the used bytes counter while holding the block group's
+		 * spinlock to prevent races with tasks concurrently updating it
+		 * due to extent allocation and deallocation (running
+		 * btrfs_update_block_group()) - we have set the block group to
+		 * RO but that only prevents extent reservation, allocation
+		 * happens after reservation.
+		 */
+		spin_lock(&bg->lock);
+		used = bg->used;
+		spin_unlock(&bg->lock);
+
 		btrfs_info(fs_info,
 			"reclaiming chunk %llu with %llu%% used %llu%% unusable",
 				bg->start,
-				div64_u64(bg->used * 100, bg->length),
+				div64_u64(used * 100, bg->length),
 				div64_u64(zone_unusable * 100, bg->length));
 		trace_btrfs_reclaim_block_group(bg);
-		reclaimed = bg->used;
 		ret = btrfs_relocate_chunk(fs_info, bg->start);
 		if (ret) {
 			btrfs_dec_block_group_ro(bg);
 			btrfs_err(fs_info, "error relocating chunk %llu",
 				  bg->start);
-			reclaimed = 0;
+			used = 0;
 			spin_lock(&space_info->lock);
 			space_info->reclaim_errors++;
 			if (READ_ONCE(space_info->periodic_reclaim))
@@ -1994,7 +2005,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
 		}
 		spin_lock(&space_info->lock);
 		space_info->reclaim_count++;
-		space_info->reclaim_bytes += reclaimed;
+		space_info->reclaim_bytes += used;
 		spin_unlock(&space_info->lock);
 
 next:
