
Commit deccd93

name2965 authored and gregkh committed
mm/hugetlb: add missing hugetlb_lock in __unmap_hugepage_range()
[ Upstream commit 21cc2b5 ]

When restoring a reservation for an anonymous page, we need to check whether we are freeing a surplus page. However, __unmap_hugepage_range() causes a data race because it reads h->surplus_huge_pages without holding hugetlb_lock.

In addition, adjust_reservation is a boolean that indicates whether the reservation for the anonymous pages in each folio should be restored, so it must be reset to false on each round of the loop. However, it is only initialized once, where it is defined. This means that once adjust_reservation is set to true in any iteration, reservations for anonymous pages are restored unconditionally in all subsequent iterations, regardless of the folio's state.

To fix this, add the missing hugetlb_lock, unlock the page_table_lock earlier so that hugetlb_lock is never taken while page_table_lock is held, and reset adjust_reservation to false on each round of the loop.

Link: https://lkml.kernel.org/r/20250823182115.1193563-1-aha310510@gmail.com
Fixes: df7a6d1 ("mm/hugetlb: restore the reservation if needed")
Signed-off-by: Jeongjun Park <aha310510@gmail.com>
Reported-by: syzbot+417aeb05fd190f3a6da9@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=417aeb05fd190f3a6da9
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Breno Leitao <leitao@debian.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Page vs folio differences ]
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
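
For reference, a condensed sketch of the per-iteration locking pattern this commit establishes. It is extracted from the diff below, not the full function body; the surrounding unmap loop, page-table walk, and the later region adjustment are omitted.

	/* Inside the per-page loop of __unmap_hugepage_range(), after this patch (condensed): */
	spin_unlock(ptl);			/* drop page_table_lock before taking hugetlb_lock */

	adjust_reservation = false;		/* reset on every round of the loop */

	spin_lock_irq(&hugetlb_lock);		/* protect the h->surplus_huge_pages read */
	if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
	    folio_test_anon(page_folio(page))) {
		folio_set_hugetlb_restore_reserve(page_folio(page));
		adjust_reservation = true;	/* region adjustment happens after the unlock */
	}
	spin_unlock_irq(&hugetlb_lock);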
1 parent 5d6eeb3 commit deccd93

1 file changed: +6 −3 lines


mm/hugetlb.c

Lines changed: 6 additions & 3 deletions
@@ -5512,7 +5512,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	struct page *page;
 	struct hstate *h = hstate_vma(vma);
 	unsigned long sz = huge_page_size(h);
-	bool adjust_reservation = false;
+	bool adjust_reservation;
 	unsigned long last_addr_mask;
 	bool force_flush = false;
 
@@ -5604,21 +5604,24 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 				       sz);
 		hugetlb_count_sub(pages_per_huge_page(h), mm);
 		hugetlb_remove_rmap(page_folio(page));
+		spin_unlock(ptl);
 
 		/*
 		 * Restore the reservation for anonymous page, otherwise the
 		 * backing page could be stolen by someone.
 		 * If there we are freeing a surplus, do not set the restore
		 * reservation bit.
		 */
+		adjust_reservation = false;
+
+		spin_lock_irq(&hugetlb_lock);
 		if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
 		    folio_test_anon(page_folio(page))) {
 			folio_set_hugetlb_restore_reserve(page_folio(page));
 			/* Reservation to be adjusted after the spin lock */
 			adjust_reservation = true;
 		}
-
-		spin_unlock(ptl);
+		spin_unlock_irq(&hugetlb_lock);
 
 		/*
 		 * Adjust the reservation for the region that will have the
