
Commit a387156

x-y-z authored and akpm00 committed
mm/huge_memory: get frozen folio refcount with folio_expected_ref_count()
Instead of open coding the refcount calculation, use
folio_expected_ref_count() to calculate frozen folio refcount.  Because:

1. __folio_split() does not split a folio with PG_private, so no elevated
   refcount from PG_private;
2. a frozen folio in __folio_split() is fully unmapped, so folio_mapcount()
   in folio_expected_ref_count() is always 0;
3. "(mapping || swap_cache) ? folio_nr_pages(folio)" is taken care of by
   folio_expected_ref_count() too.

Link: https://lkml.kernel.org/r/20250718023000.4044406-6-ziy@nvidia.com
Link: https://lkml.kernel.org/r/20250718183720.4054515-6-ziy@nvidia.com
Signed-off-by: Zi Yan <ziy@nvidia.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: Balbir Singh <balbirs@nvidia.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Antonio Quartulli <antonio@mandelbit.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Dan Carpenter <dan.carpenter@linaro.org>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <k.shutemov@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Mariano Pache <npache@redhat.com>
Cc: Mathew Brost <matthew.brost@intel.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent 714b056 commit a387156

File tree

1 file changed (+5, -7 lines)

mm/huge_memory.c

Lines changed: 5 additions & 7 deletions
```diff
@@ -3731,6 +3731,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (folio_ref_freeze(folio, 1 + extra_pins)) {
 		struct address_space *swap_cache = NULL;
 		struct lruvec *lruvec;
+		int expected_refs;

 		if (folio_order(folio) > 1 &&
 		    !list_empty(&folio->_deferred_list)) {
@@ -3794,11 +3795,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		     new_folio = next) {
 			next = folio_next(new_folio);

-			folio_ref_unfreeze(
-				new_folio,
-				1 + ((mapping || swap_cache) ?
-					     folio_nr_pages(new_folio) :
-					     0));
+			expected_refs = folio_expected_ref_count(new_folio) + 1;
+			folio_ref_unfreeze(new_folio, expected_refs);

 			lru_add_split_folio(folio, new_folio, lruvec, list);

@@ -3828,8 +3826,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		 * Otherwise, a parallel folio_try_get() can grab @folio
 		 * and its caller can see stale page cache entries.
 		 */
-		folio_ref_unfreeze(folio, 1 +
-			((mapping || swap_cache) ? folio_nr_pages(folio) : 0));
+		expected_refs = folio_expected_ref_count(folio) + 1;
+		folio_ref_unfreeze(folio, expected_refs);

 		unlock_page_lruvec(lruvec);
```
