
mm: try_to_unmap_cluster() should lock_page() before mlocking

A BUG_ON(!PageLocked) was triggered in mlock_vma_page() by Sasha Levin
fuzzing with trinity.  The call site try_to_unmap_cluster() does not lock
the pages other than its check_page parameter (which is already locked).

The BUG_ON in mlock_vma_page() is not documented and its purpose is
somewhat unclear, but apparently it serializes against page migration,
which could otherwise fail to transfer the PG_mlocked flag.  This would
not be fatal, as the page would be eventually encountered again, but
NR_MLOCK accounting would become distorted nevertheless.  This patch adds
a comment to the BUG_ON in mlock_vma_page() and munlock_vma_page() to that
effect.

The call site try_to_unmap_cluster() is fixed so that for page !=
check_page, trylock_page() is attempted (to avoid possible deadlocks as we
already have check_page locked) and mlock_vma_page() is performed only
upon success.  If the page lock cannot be obtained, the page is left
without PG_mlocked, which is again not a problem in the whole unevictable
memory design.

Signed-off-by: Vlastimil Babka <>
Signed-off-by: Bob Liu <>
Reported-by: Sasha Levin <>
Cc: Wanpeng Li <>
Cc: Michel Lespinasse <>
Cc: KOSAKI Motohiro <>
Acked-by: Rik van Riel <>
Cc: David Rientjes <>
Cc: Mel Gorman <>
Cc: Hugh Dickins <>
Cc: Joonsoo Kim <>
Cc: <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
tehcaster authored and torvalds committed Apr 7, 2014
1 parent 3a02576 commit 57e68e9cd65b4b8eb4045a1e0d0746458502554c
Showing with 14 additions and 2 deletions.
  1. +2 −0 mm/mlock.c
  2. +12 −2 mm/rmap.c
mm/mlock.c

@@ -79,6 +79,7 @@ void clear_page_mlock(struct page *page)
  */
 void mlock_vma_page(struct page *page)
 {
+	/* Serialize with page migration */
 	BUG_ON(!PageLocked(page));
 
 	if (!TestSetPageMlocked(page)) {
@@ -174,6 +175,7 @@ unsigned int munlock_vma_page(struct page *page)
 	unsigned int nr_pages;
 	struct zone *zone = page_zone(page);
 
+	/* For try_to_munlock() and to serialize with page migration */
 	BUG_ON(!PageLocked(page));

mm/rmap.c

@@ -1332,9 +1332,19 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
 		BUG_ON(!page || PageAnon(page));
 
 		if (locked_vma) {
-			mlock_vma_page(page);	/* no-op if already mlocked */
-			if (page == check_page)
+			if (page == check_page) {
+				/* we know we have check_page locked */
+				mlock_vma_page(page);
 				ret = SWAP_MLOCK;
+			} else if (trylock_page(page)) {
+				/*
+				 * If we can lock the page, perform mlock.
+				 * Otherwise leave the page alone, it will be
+				 * eventually encountered again later.
+				 */
+				mlock_vma_page(page);
+				unlock_page(page);
+			}
 			continue;	/* don't unmap */
