
Commit 1c4664f

xtensa: define update_mmu_tlb function
Before commit f9ce0be ("mm: Cleanup faultaround and finish_fault() codepaths") there was a call to update_mmu_cache in alloc_set_pte that used to invalidate the TLB entry caching the invalid PTE that caused a page fault. That commit removed the call, so the invalid TLB entry now survives, causing repetitive page faults on the CPU that took the initial fault until that entry is eventually evicted. The issue is spotted by the xtensa TLB sanity checker.

Fix it by defining an update_mmu_tlb function that flushes the TLB entry for the faulting address.

Cc: stable@vger.kernel.org # 5.12+
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Parent: a3d0245 · Commit: 1c4664f
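For context on the mechanism: update_mmu_tlb has a generic definition in include/linux/pgtable.h that is a no-op unless the architecture advertises its own version via __HAVE_ARCH_UPDATE_MMU_TLB, which is why the xtensa header change below both declares the function and defines the macro. A minimal sketch of that fallback, paraphrased from the 5.12-era generic header (not part of this commit), looks like this:

/*
 * Paraphrased sketch of the generic fallback in include/linux/pgtable.h:
 * does nothing unless the architecture provides its own update_mmu_tlb
 * and signals that with __HAVE_ARCH_UPDATE_MMU_TLB.
 */
#ifndef __HAVE_ARCH_UPDATE_MMU_TLB
static inline void update_mmu_tlb(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep)
{
	/* Architectures with software-managed TLBs override this. */
}
#define __HAVE_ARCH_UPDATE_MMU_TLB
#endif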

2 files changed: +10 −0 lines

arch/xtensa/include/asm/pgtable.h

Lines changed: 4 additions & 0 deletions
@@ -411,6 +411,10 @@ extern void update_mmu_cache(struct vm_area_struct * vma,
 
 typedef pte_t *pte_addr_t;
 
+void update_mmu_tlb(struct vm_area_struct *vma,
+		    unsigned long address, pte_t *ptep);
+#define __HAVE_ARCH_UPDATE_MMU_TLB
+
 #endif /* !defined (__ASSEMBLY__) */
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG

arch/xtensa/mm/tlb.c

Lines changed: 6 additions & 0 deletions
@@ -162,6 +162,12 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	}
 }
 
+void update_mmu_tlb(struct vm_area_struct *vma,
+		    unsigned long address, pte_t *ptep)
+{
+	local_flush_tlb_page(vma, address);
+}
+
 #ifdef CONFIG_DEBUG_TLB_SANITY
 
 static unsigned get_pte_for_vaddr(unsigned vaddr)
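The new function is called from the generic fault path, not from arch code. As a rough sketch of the call site, paraphrased from the race handling in do_anonymous_page() in mm/memory.c around v5.12 (exact context may differ): when this CPU finds that another thread already installed a PTE while it was handling the fault, it flushes its own stale TLB entry instead of faulting repeatedly.

/*
 * Paraphrased sketch of the caller in mm/memory.c (not part of this
 * commit): under the page table lock, if the PTE is no longer none,
 * someone else resolved the fault; drop our stale TLB entry and bail out.
 */
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
			       &vmf->ptl);
if (!pte_none(*vmf->pte)) {
	update_mmu_tlb(vma, vmf->address, vmf->pte);
	goto unlock;
}

On xtensa this now invalidates the stale entry via local_flush_tlb_page, so only the CPU that took the fault pays the cost; the generic no-op left the entry in place, which is exactly the repetitive-fault behavior the commit message describes.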
