zsmalloc: allow only one active pool compaction context
commit d2658f2 upstream.

zsmalloc pool can be compacted concurrently by many contexts,
e.g.

 cc1 handle_mm_fault()
      do_anonymous_page()
       __alloc_pages_slowpath()
        try_to_free_pages()
          do_try_to_free_pages()
          lru_gen_shrink_node()
           shrink_slab()
            do_shrink_slab()
             zs_shrinker_scan()
              zs_compact()

Pool compaction is currently (basically) single-threaded as
it is performed under pool->lock. Having multiple compaction
threads results in unnecessary contention, as each thread
competes for pool->lock. This, in turn, affects all zsmalloc
operations such as zs_malloc(), zs_map_object(), zs_free(), etc.

Introduce the pool->compaction_in_progress atomic variable,
which ensures that only one compaction context can run at a
time. This reduces overall pool->lock contention in (corner)
cases when many contexts attempt to shrink zspool simultaneously.

Link: https://lkml.kernel.org/r/20230418074639.1903197-1-senozhatsky@chromium.org
Fixes: c0547d0 ("zsmalloc: consolidate zs_pool's migrate_lock and size_class's locks")
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
sergey-senozhatsky authored and gregkh committed Aug 23, 2023
1 parent d4008ea commit 5274bf1
12 changes: 12 additions & 0 deletions mm/zsmalloc.c
@@ -246,6 +246,7 @@ struct zs_pool {
         struct work_struct free_work;
 #endif
         spinlock_t lock;
+        atomic_t compaction_in_progress;
 };
 
 struct zspage {
@@ -2100,13 +2101,23 @@ unsigned long zs_compact(struct zs_pool *pool)
         struct size_class *class;
         unsigned long pages_freed = 0;
 
+        /*
+         * Pool compaction is performed under pool->lock so it is basically
+         * single-threaded. Having more than one thread in __zs_compact()
+         * will increase pool->lock contention, which will impact other
+         * zsmalloc operations that need pool->lock.
+         */
+        if (atomic_xchg(&pool->compaction_in_progress, 1))
+                return 0;
+
         for (i = ZS_SIZE_CLASSES - 1; i >= 0; i--) {
                 class = pool->size_class[i];
                 if (class->index != i)
                         continue;
                 pages_freed += __zs_compact(pool, class);
         }
         atomic_long_add(pages_freed, &pool->stats.pages_compacted);
+        atomic_set(&pool->compaction_in_progress, 0);
 
         return pages_freed;
 }
@@ -2193,6 +2204,7 @@ struct zs_pool *zs_create_pool(const char *name)
 
         init_deferred_free(pool);
         spin_lock_init(&pool->lock);
+        atomic_set(&pool->compaction_in_progress, 0);
 
         pool->name = kstrdup(name, GFP_KERNEL);
         if (!pool->name)
