kernel/fork: use maple tree for dup_mmap() during forking
The maple tree was already tracking VMAs in this function thanks to an earlier
commit, but the old vm_next linked-list walk was still being used to iterate
over them.  Change the iterator to a maple tree native iterator and switch to
the maple tree advanced API to avoid multiple walks of the tree during insert
operations.  Unexport the now-unused vma_mas_store() function.
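
As a rough illustration (not from the patch itself), the new walk uses the
maple tree's native iterator in place of the vm_next list.  The helper below
is a minimal sketch assuming the caller holds mmap_read_lock(mm);
walk_vmas_sketch() is an invented name:

#include <linux/maple_tree.h>
#include <linux/mm.h>

/* Sketch only: iterate every VMA in @mm with the maple tree iterator. */
static void walk_vmas_sketch(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	MA_STATE(mas, &mm->mm_mt, 0, 0);

	/* Previously: for (vma = mm->mmap; vma; vma = vma->vm_next) */
	mas_for_each(&mas, vma, ULONG_MAX)
		pr_debug("vma %lx-%lx\n", vma->vm_start, vma->vm_end);
}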

For performance reasons the maple tree nodes are bulk allocated.  The node
calculations are done internally by the tree, using the VMA count and assuming
the worst-case node requirements.  The VM_DONTCOPY flag rules out the most
efficient copy method of the tree, so a bulk loading algorithm is used instead.
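
A minimal sketch of that bulk-loading pattern with the advanced API, assuming
the caller holds the relevant mmap locks (as dup_mmap() does): preallocate
worst-case nodes for the expected entry count once, store each range without
per-store allocations, then return whatever was not consumed.
copy_vmas_bulk() is an invented name, and it stores the source VMA pointers
directly, whereas the real dup_mmap() stores the freshly duplicated VMA (tmp):

/* Sketch only; error handling reduced to the allocation case. */
static int copy_vmas_bulk(struct mm_struct *newmm, struct mm_struct *oldmm)
{
	struct vm_area_struct *vma;
	MA_STATE(new_mas, &newmm->mm_mt, 0, 0);
	MA_STATE(old_mas, &oldmm->mm_mt, 0, 0);
	int ret;

	/* One worst-case node allocation up front instead of one per store. */
	ret = mas_expected_entries(&new_mas, oldmm->map_count);
	if (ret)
		return ret;

	mas_for_each(&old_mas, vma, ULONG_MAX) {
		new_mas.index = vma->vm_start;
		new_mas.last = vma->vm_end - 1;
		mas_store(&new_mas, vma);
		if (mas_is_err(&new_mas)) {
			ret = -ENOMEM;
			break;
		}
	}

	/* Give back any preallocated nodes that were not used. */
	mas_destroy(&new_mas);
	return ret;
}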

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Alexandre Frade <kernel@xanmod.org>
howlett authored and xanmod committed Oct 3, 2022
1 parent 4a99e8a commit 17d2ccb
Showing 2 changed files with 13 additions and 4 deletions.
2 changes: 0 additions & 2 deletions include/linux/mm.h

@@ -2575,8 +2575,6 @@ extern bool arch_has_descending_max_zone_pfns(void);
 /* nommu.c */
 extern atomic_long_t mmap_pages_allocated;
 extern int nommu_shrink_inode_mappings(struct inode *, size_t, size_t);
-/* mmap.c */
-void vma_mas_store(struct vm_area_struct *vma, struct ma_state *mas);
 
 /* interval_tree.c */
 void vma_interval_tree_insert(struct vm_area_struct *node,
15 changes: 13 additions & 2 deletions kernel/fork.c

@@ -588,8 +588,9 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	struct vm_area_struct *mpnt, *tmp, *prev, **pprev;
 	struct rb_node **rb_link, *rb_parent;
 	int retval;
-	unsigned long charge;
+	unsigned long charge = 0;
 	LIST_HEAD(uf);
+	MA_STATE(old_mas, &oldmm->mm_mt, 0, 0);
 	MA_STATE(mas, &mm->mm_mt, 0, 0);
 
 	uprobe_start_dup_mmap();
@@ -625,7 +626,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		goto out;
 
 	prev = NULL;
-	for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
+
+	retval = mas_expected_entries(&mas, oldmm->map_count);
+	if (retval)
+		goto out;
+
+	mas_for_each(&old_mas, mpnt, ULONG_MAX) {
 		struct file *file;
 
 		if (mpnt->vm_flags & VM_DONTCOPY) {
@@ -708,6 +714,8 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		mas.index = tmp->vm_start;
 		mas.last = tmp->vm_end - 1;
 		mas_store(&mas, tmp);
+		if (mas_is_err(&mas))
+			goto fail_nomem_mas_store;
 
 		mm->map_count++;
 		if (!(tmp->vm_flags & VM_WIPEONFORK))
@@ -731,6 +739,9 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 fail_uprobe_end:
 	uprobe_end_dup_mmap();
 	return retval;
+
+fail_nomem_mas_store:
+	unlink_anon_vmas(tmp);
 fail_nomem_anon_vma_fork:
 	mpol_put(vma_policy(tmp));
 fail_nomem_policy:
