
Commit fde4770

x-y-z authored and akpm00 committed
mm/huge_memory: refactor after-split (page) cache code
Smatch/coverity checkers report NULL mapping referencing issues [1][2][3] every time the code is modified, because they cannot see that mapping cannot be NULL when a folio is in the page cache at that point in the code. Refactor the code to make this explicit.

Remove "end = -1" for anonymous folios: after the refactoring, end is no longer used by the anonymous-folio handling code.

No functional change is intended.

Link: https://lkml.kernel.org/r/20250718023000.4044406-7-ziy@nvidia.com
Link: https://lore.kernel.org/linux-mm/2afe3d59-aca5-40f7-82a3-a6d976fb0f4f@stanley.mountain/ [1]
Link: https://lore.kernel.org/oe-kbuild/64b54034-f311-4e7d-b935-c16775dbb642@suswa.mountain/ [2]
Link: https://lore.kernel.org/linux-mm/20250716145804.4836-1-antonio@mandelbit.com/ [3]
Link: https://lkml.kernel.org/r/20250718183720.4054515-7-ziy@nvidia.com
Signed-off-by: Zi Yan <ziy@nvidia.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Dan Carpenter <dan.carpenter@linaro.org>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <k.shutemov@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Mariano Pache <npache@redhat.com>
Cc: Mathew Brost <matthew.brost@intel.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent a387156 commit fde4770

File tree

1 file changed (+28, -16 lines)

mm/huge_memory.c

Lines changed: 28 additions & 16 deletions

@@ -3640,7 +3640,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 			ret = -EBUSY;
 			goto out;
 		}
-		end = -1;
 		mapping = NULL;
 		anon_vma_lock_write(anon_vma);
 	} else {
@@ -3793,32 +3792,45 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	 */
 	for (new_folio = folio_next(folio); new_folio != end_folio;
 	     new_folio = next) {
+		unsigned long nr_pages = folio_nr_pages(new_folio);
+
 		next = folio_next(new_folio);
 
 		expected_refs = folio_expected_ref_count(new_folio) + 1;
 		folio_ref_unfreeze(new_folio, expected_refs);
 
 		lru_add_split_folio(folio, new_folio, lruvec, list);
 
-		/* Some pages can be beyond EOF: drop them from cache */
-		if (new_folio->index >= end) {
-			if (shmem_mapping(mapping))
-				nr_shmem_dropped += folio_nr_pages(new_folio);
-			else if (folio_test_clear_dirty(new_folio))
-				folio_account_cleaned(
-					new_folio,
-					inode_to_wb(mapping->host));
-			__filemap_remove_folio(new_folio, NULL);
-			folio_put_refs(new_folio,
-					folio_nr_pages(new_folio));
-		} else if (mapping) {
-			__xa_store(&mapping->i_pages, new_folio->index,
-					new_folio, 0);
-		} else if (swap_cache) {
+		/*
+		 * Anonymous folio with swap cache.
+		 * NOTE: shmem in swap cache is not supported yet.
+		 */
+		if (swap_cache) {
 			__xa_store(&swap_cache->i_pages,
 					swap_cache_index(new_folio->swap),
 					new_folio, 0);
+			continue;
+		}
+
+		/* Anonymous folio without swap cache */
+		if (!mapping)
+			continue;
+
+		/* Add the new folio to the page cache. */
+		if (new_folio->index < end) {
+			__xa_store(&mapping->i_pages, new_folio->index,
+					new_folio, 0);
+			continue;
 		}
+
+		/* Drop folio beyond EOF: ->index >= end */
+		if (shmem_mapping(mapping))
+			nr_shmem_dropped += nr_pages;
+		else if (folio_test_clear_dirty(new_folio))
+			folio_account_cleaned(
+				new_folio, inode_to_wb(mapping->host));
+		__filemap_remove_folio(new_folio, NULL);
+		folio_put_refs(new_folio, nr_pages);
 	}
 	/*
 	 * Unfreeze @folio only after all page cache entries, which
