zsmalloc: replace per zpage lock with pool->migrate_lock
zsmalloc has used a bit spin_lock in the zpage handle to keep the
zpage object alive during several operations. However, it causes
problems for PREEMPT_RT as well as introducing too much complexity.

This patch replaces the bit spin_lock with the pool->migrate_lock
rwlock. It makes the code simpler and lets zsmalloc work under
PREEMPT_RT (where rwlock_t is substituted with a sleeping lock,
which a bit spin_lock cannot be).
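
A minimal sketch of the resulting locking pattern, assuming the
simplified struct below (only the migrate_lock field comes from this
patch; the helper names and the elided lookup details are illustrative):

	#include <linux/spinlock.h>

	struct zs_pool {
		/* ... other pool fields elided ... */
		rwlock_t migrate_lock;	/* keeps zpage objects alive */
	};

	/*
	 * IO paths (e.g. zs_map_object()) take the read side, so they
	 * can run concurrently with each other.
	 */
	static void *map_object_sketch(struct zs_pool *pool,
				       unsigned long handle)
	{
		void *addr = NULL;

		read_lock(&pool->migrate_lock);
		/* handle -> zpage mapping is stable while lock is held */
		read_unlock(&pool->migrate_lock);
		return addr;
	}

	/*
	 * Migration/compaction takes the write side, excluding every
	 * IO path while objects are copied to the destination page.
	 */
	static void migrate_zpage_sketch(struct zs_pool *pool)
	{
		write_lock(&pool->migrate_lock);
		/* copy objects and redirect handles to the new page */
		write_unlock(&pool->migrate_lock);
	}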

The drawback is that pool->migrate_lock has coarser granularity than
the per-zpage lock, so contention would be higher than before when
both IO-related operations (i.e., zs_malloc, zs_free, zs_map_object
and zs_unmap_object) and compaction (page/zpage migration) run in
parallel. (Note that migrate_lock is an rwlock and the IO-related
functions all take the read side, so there is no contention among
them.) However, the write side is fast enough (the dominant overhead
is just the page copy), so it shouldn't matter much. If the lock
granularity becomes a problem later, we could introduce table locks
keyed by a hash of the handle, as sketched below.
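
One possible shape for such table locks (purely illustrative, not part
of this patch; the names and table size are hypothetical) is a fixed
array of rwlocks indexed by a hash of the handle:

	#include <linux/spinlock.h>
	#include <linux/hash.h>

	#define ZS_LOCK_TABLE_BITS	8

	/* hypothetical table: spreads contention across 256 locks */
	static rwlock_t zs_lock_table[1 << ZS_LOCK_TABLE_BITS];

	static void zs_lock_table_init(void)
	{
		int i;

		for (i = 0; i < (1 << ZS_LOCK_TABLE_BITS); i++)
			rwlock_init(&zs_lock_table[i]);
	}

	/* map a zpage handle to its lock bucket */
	static rwlock_t *zs_handle_lock(unsigned long handle)
	{
		return &zs_lock_table[hash_long(handle,
						ZS_LOCK_TABLE_BITS)];
	}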

Signed-off-by: Minchan Kim <minchan@kernel.org>
minchank authored and intel-lab-lkp committed Nov 10, 2021
1 parent f608ddd commit f1a88e64864de6af4e2a560bcf57a8b5f9737404
1 file changed, 96 insertions(+), 109 deletions(-)
