arm64/hugetlb: Implement arm64 specific hugetlb_mask_last_hp
The HugeTLB address ranges are linearly scanned during fork, unmap and
remap operations. When the scan hits a non-present entry, it can skip to
the end of the range mapped by that page table page, which speeds up
linear scanning of large HugeTLB address ranges.

To support this, hugetlb_mask_last_hp() is introduced [1]: when a
non-present entry is encountered, it provides the mask used to advance
the address in the HugeTLB scanning loop past the last huge page mapped
by the associated page table page.
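
For context, the consuming loop in the generic HugeTLB code looks
roughly like the sketch below. It is a minimal approximation of the
series in [1]; the loop shape and the variables mm, h, start and end
are illustrative (taken to come from the surrounding unmap path), not
part of this patch:

	unsigned long sz = huge_page_size(h);
	unsigned long last_addr_mask = hugetlb_mask_last_hp(h);
	unsigned long addr;
	pte_t *ptep;

	for (addr = start; addr < end; addr += sz) {
		ptep = huge_pte_offset(mm, addr, sz);
		if (!ptep) {
			/*
			 * No page table page here: jump to the last huge
			 * page this page table page could map; the loop
			 * increment then steps past the whole range.
			 */
			addr |= last_addr_mask;
			continue;
		}
		/* ... process the present huge page entry ... */
	}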

Since arm64 also supports cont-PTE and cont-PMD sized HugeTLB pages,
this patch implements an arm64-specific hugetlb_mask_last_hp() to
handle them.

[1] https://lore.kernel.org/linux-mm/20220527225849.284839-1-mike.kravetz@oracle.com/

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Baolin Wang authored and intel-lab-lkp committed Jun 16, 2022
1 parent 78c09c0 commit f1309df
20 changes: 20 additions & 0 deletions arch/arm64/mm/hugetlbpage.c
@@ -368,6 +368,26 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
	return NULL;
}

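/*
 * Mask of the last huge-page address coverable by the page table page
 * holding entries of this hstate's size: OR-ing it into a non-present
 * address jumps the linear scan to that page table page's final entry.
 * E.g. with 4K base pages, PMD_SIZE = 2M and PUD_SIZE = 1G, so a
 * PMD-sized hstate returns 1G - 2M = 0x3fe00000; a cont-PTE hstate
 * (CONT_PTE_SIZE = 64K) returns 2M - 64K = 0x1f0000.
 */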
unsigned long hugetlb_mask_last_hp(struct hstate *h)
{
	unsigned long hp_size = huge_page_size(h);

	switch (hp_size) {
	case PUD_SIZE:
		return PGDIR_SIZE - PUD_SIZE;
	case CONT_PMD_SIZE:
		return PUD_SIZE - CONT_PMD_SIZE;
	case PMD_SIZE:
		return PUD_SIZE - PMD_SIZE;
	case CONT_PTE_SIZE:
		return PMD_SIZE - CONT_PTE_SIZE;
	default:
		break;
	}

	return ~0UL;
}

pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
{
	size_t pagesize = 1UL << shift;
