mm, compaction: make testing mapping_unmovable() safe
As Kirill pointed out, the mapping can be removed under us due to
truncation. Test it under the folio lock, as is already done for the
async compaction / dirty folio case. To avoid locking every folio with
a mapping just to do the test, do it only for unevictable folios, as we
can expect folios with an unmovable mapping to also be unevictable. To
enforce that expectation, make mapping_set_unmovable() also set
AS_UNEVICTABLE.

Also incorporate comment update suggested by Matthew.

Fixes: 3424873 ("mm: Add AS_UNMOVABLE to mark mapping as completely unmovable")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Link: https://lore.kernel.org/r/20230908074222.28723-2-vbabka@suse.cz
Signed-off-by: Sean Christopherson <seanjc@google.com>
tehcaster authored and sean-jc committed Sep 8, 2023
1 parent a1bf4cb commit 4876a35
Showing 3 changed files with 39 additions and 18 deletions.
6 changes: 6 additions & 0 deletions include/linux/pagemap.h
@@ -276,6 +276,12 @@ static inline int mapping_use_writeback_tags(struct address_space *mapping)
 
 static inline void mapping_set_unmovable(struct address_space *mapping)
 {
+	/*
+	 * It's expected unmovable mappings are also unevictable. Compaction
+	 * migrate scanner (isolate_migratepages_block()) relies on this to
+	 * reduce page locking.
+	 */
+	set_bit(AS_UNEVICTABLE, &mapping->flags);
 	set_bit(AS_UNMOVABLE, &mapping->flags);
 }
49 changes: 32 additions & 17 deletions mm/compaction.c
@@ -862,6 +862,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 	/* Time to isolate some pages for migration */
 	for (; low_pfn < end_pfn; low_pfn++) {
+		bool is_dirty, is_unevictable;
 
 		if (skip_on_failure && low_pfn >= next_skip_pfn) {
 			/*
@@ -1047,10 +1048,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!mapping && (folio_ref_count(folio) - 1) > folio_mapcount(folio))
 			goto isolate_fail_put;
 
-		/* The mapping truly isn't movable. */
-		if (mapping && mapping_unmovable(mapping))
-			goto isolate_fail_put;
-
 		/*
 		 * Only allow to migrate anonymous pages in GFP_NOFS context
 		 * because those do not depend on fs locks.
@@ -1062,8 +1059,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!folio_test_lru(folio))
 			goto isolate_fail_put;
 
+		is_unevictable = folio_test_unevictable(folio);
+
 		/* Compaction might skip unevictable pages but CMA takes them */
-		if (!(mode & ISOLATE_UNEVICTABLE) && folio_test_unevictable(folio))
+		if (!(mode & ISOLATE_UNEVICTABLE) && is_unevictable)
 			goto isolate_fail_put;
 
 		/*
@@ -1075,26 +1074,42 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_writeback(folio))
 			goto isolate_fail_put;
 
-		if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_dirty(folio)) {
-			bool migrate_dirty;
+		is_dirty = folio_test_dirty(folio);
+
+		if (((mode & ISOLATE_ASYNC_MIGRATE) && is_dirty)
+		    || (mapping && is_unevictable)) {
+			bool migrate_dirty = true;
+			bool is_unmovable;
 
 			/*
-			 * Only pages without mappings or that have a
-			 * ->migrate_folio callback are possible to migrate
-			 * without blocking. However, we can be racing with
-			 * truncation so it's necessary to lock the page
-			 * to stabilise the mapping as truncation holds
-			 * the page lock until after the page is removed
-			 * from the page cache.
+			 * Only folios without mappings or that have
+			 * a ->migrate_folio callback are possible to migrate
+			 * without blocking.
+			 *
+			 * Folios from unmovable mappings are not migratable.
+			 *
+			 * However, we can be racing with truncation, which can
+			 * free the mapping that we need to check. Truncation
+			 * holds the folio lock until after the folio is removed
+			 * from the page cache, so holding it ourselves is
+			 * sufficient.
+			 *
+			 * To avoid taking this folio lock to inspect every
+			 * folio with a mapping for being unmovable, we assume
+			 * every such folio is also unevictable, which is a
+			 * cheaper test. If our assumption goes wrong, it's not
+			 * a bug, just potentially wasted cycles.
 			 */
 			if (!folio_trylock(folio))
 				goto isolate_fail_put;
 
 			mapping = folio_mapping(folio);
-			migrate_dirty = !mapping ||
-					mapping->a_ops->migrate_folio;
+			if ((mode & ISOLATE_ASYNC_MIGRATE) && is_dirty) {
+				migrate_dirty = !mapping ||
+						mapping->a_ops->migrate_folio;
+			}
+			is_unmovable = mapping && mapping_unmovable(mapping);
 			folio_unlock(folio);
-			if (!migrate_dirty)
+			if (!migrate_dirty || is_unmovable)
 				goto isolate_fail_put;
 		}
2 changes: 1 addition & 1 deletion virt/kvm/guest_mem.c
@@ -390,7 +390,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
 	inode->i_size = size;
 	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
 	mapping_set_large_folios(inode->i_mapping);
-	mapping_set_unevictable(inode->i_mapping);
+	/* this also sets the mapping as unevictable */
 	mapping_set_unmovable(inode->i_mapping);
 
 	fd = get_unused_fd_flags(0);
