WIP: powerpc: Fix kfence for book3s64 radix mmu
The existing patch uses the DEBUG_PAGEALLOC logic to implement kfence's
mapping/unmapping callback. However, DEBUG_PAGEALLOC is only available
for the hash MMU on book3s64 systems, leading to a crash when running
under radix.

The ppc32 implementation calls generic page handling code that appears
to be safe to call for hash as well. If we let hash use this code, the
kfence KUnit test suite passes.

However, we sometimes crash soon afterwards (under qemu), so something
is not quite right.

Signed-off-by: Joel Stanley <joel@jms.id.au>
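
For context, the test suite referred to above is the in-kernel kfence KUnit
test, which runs at boot when built in. Assuming a kernel tree of this era,
a config fragment along these lines enables it (exact symbol availability
depends on the tree):

```
CONFIG_KUNIT=y
CONFIG_KFENCE=y
CONFIG_KFENCE_KUNIT_TEST=y
```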
shenki committed Mar 12, 2021
1 parent 05885be commit 916a8df9850da81d89e53f0b4b91bbb41dc2923f
Showing 2 changed files with 6 additions and 5 deletions.
```diff
@@ -15,18 +15,20 @@
 #define ARCH_FUNC_PREFIX "."
 #endif
 
+bool hash__kfence_protect_page(unsigned long addr, bool protect);
+
 static inline bool arch_kfence_init_pool(void)
 {
 	return true;
 }
 
-#ifdef CONFIG_PPC64
-bool kfence_protect_page(unsigned long addr, bool protect);
-#else
 static inline bool kfence_protect_page(unsigned long addr, bool protect)
 {
 	pte_t *kpte = virt_to_kpte(addr);
 
+	if (IS_ENABLED(CONFIG_BOOK3S_64) && !radix_enabled())
+		return hash__kfence_protect_page(addr, protect);
+
 	if (protect) {
 		pte_update(&init_mm, addr, kpte, _PAGE_PRESENT, 0, 0);
 		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
@@ -36,6 +38,5 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
 
 	return true;
 }
-#endif
 
 #endif /* __ASM_POWERPC_KFENCE_H */
```
```diff
@@ -1983,7 +1983,7 @@ static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi)
 }
 
 #ifdef CONFIG_KFENCE
-bool kfence_protect_page(unsigned long addr, bool protect)
+bool hash__kfence_protect_page(unsigned long addr, bool protect)
 {
 	unsigned long lmi = __pa(addr) >> PAGE_SHIFT;
 
```