Commit 83e5bd4
mm/migrate: fix wrongly apply write bit after mkdirty on sparc64
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2168392

This patch is a backport of the following upstream commit:

commit 96a9c28
Author: Peter Xu <peterx@redhat.com>
Date:   Thu Feb 16 10:30:59 2023 -0500

    mm/migrate: fix wrongly apply write bit after mkdirty on sparc64

    Nick Bowler reported another sparc64 breakage after the young/dirty
    persistent work for page migration (per "Link:" below).  That's after
    a similar report [2].

    It turns out page migration was overlooked, and it wasn't failing
    before because page migration was not enabled in the initial report
    test environment.

    David proposed another way [2] to fix this from the sparc64 side, but
    that patch didn't land somehow.  Neither did I check whether there's
    any other arch that has similar issues.

    Let's fix it for now as simple as moving the write bit handling to be
    after dirty, like what we did before.

    Note: this is based on mm-unstable, because the breakage was since 6.1
    and we're at a very late stage of 6.2 (-rc8), so I assume for this
    specific case we should target this at 6.3.

    [1] https://lore.kernel.org/all/20221021160603.GA23307@u164.east.ru/
    [2] https://lore.kernel.org/all/20221212130213.136267-1-david@redhat.com/

    Link: https://lkml.kernel.org/r/20230216153059.256739-1-peterx@redhat.com
    Fixes: 2e34687 ("mm: remember young/dirty bit for page migrations")
    Link: https://lore.kernel.org/all/CADyTPExpEqaJiMGoV+Z6xVgL50ZoMJg49B10LcZ=8eg19u34BA@mail.gmail.com/
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Reported-by: Nick Bowler <nbowler@draconx.ca>
    Acked-by: David Hildenbrand <david@redhat.com>
    Tested-by: Nick Bowler <nbowler@draconx.ca>
    Cc: <regressions@lists.linux.dev>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
1 parent 832d31b commit 83e5bd4
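
To make the ordering issue concrete, below is a small standalone sketch, not kernel code: the bit values and helper names (PTE_WRITE, PTE_DIRTY, toy_mkdirty, toy_wrprotect) are hypothetical, chosen only to model the sparc64 behaviour described above, where dirtying a PTE also sets the hardware write-enable bit. If the write-protect decision is taken before the dirty bit is applied, the mkdirty step silently re-grants write access; handling the write bit after mkdirty, as this patch does, keeps the entry write-protected.

/*
 * Toy model of the ordering problem (NOT kernel code).
 * Models sparc64-like behaviour: marking a PTE dirty also makes it
 * hardware-writable, so a write-protect decision taken before mkdirty
 * can be silently undone.
 */
#include <stdio.h>

#define PTE_WRITE  0x1UL   /* hypothetical hardware write-enable bit */
#define PTE_DIRTY  0x2UL   /* hypothetical dirty bit */

typedef unsigned long pte_t;

/* Dirty implies writable, as on sparc64 hardware PTEs. */
static pte_t toy_mkdirty(pte_t pte)   { return pte | PTE_DIRTY | PTE_WRITE; }
static pte_t toy_wrprotect(pte_t pte) { return pte & ~PTE_WRITE; }

int main(void)
{
	pte_t pte = 0;

	/* Broken order (pre-patch): write bit decided before mkdirty. */
	pte_t broken = toy_mkdirty(toy_wrprotect(pte));

	/* Fixed order (this patch): mkdirty first, then decide writability. */
	pte_t fixed = toy_wrprotect(toy_mkdirty(pte));

	printf("broken order writable? %s\n", (broken & PTE_WRITE) ? "yes (bug)" : "no");
	printf("fixed  order writable? %s\n", (fixed  & PTE_WRITE) ? "yes (bug)" : "no");
	return 0;
}

Running the sketch prints "yes (bug)" for the broken order and "no" for the fixed one, which is exactly why the diff below moves the write-bit handling to the end and adds an explicit wrprotect for non-writable migration entries.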

2 files changed: +6, -2 lines

mm/huge_memory.c

Lines changed: 4 additions & 2 deletions

@@ -3292,15 +3292,17 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	pmde = mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
-	if (is_writable_migration_entry(entry))
-		pmde = maybe_pmd_mkwrite(pmde, vma);
 	if (pmd_swp_uffd_wp(*pvmw->pmd))
 		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
 	if (!is_migration_entry_young(entry))
 		pmde = pmd_mkold(pmde);
 	/* NOTE: this may contain setting soft-dirty on some archs */
 	if (PageDirty(new) && is_migration_entry_dirty(entry))
 		pmde = pmd_mkdirty(pmde);
+	if (is_writable_migration_entry(entry))
+		pmde = maybe_pmd_mkwrite(pmde, vma);
+	else
+		pmde = pmd_wrprotect(pmde);
 
 	if (PageAnon(new)) {
 		rmap_t rmap_flags = RMAP_COMPOUND;

mm/migrate.c

Lines changed: 2 additions & 0 deletions

@@ -214,6 +214,8 @@ static bool remove_migration_pte(struct folio *folio,
 			pte = maybe_mkwrite(pte, vma);
 		else if (pte_swp_uffd_wp(*pvmw.pte))
 			pte = pte_mkuffd_wp(pte);
+		else
+			pte = pte_wrprotect(pte);
 
 		if (folio_test_anon(folio) && !is_readable_migration_entry(entry))
 			rmap_flags |= RMAP_EXCLUSIVE;
