net: page pool: rework ppiov life cycle
The page pool tracks how many pages it has allocated and how many were
returned, which serves to refcount the pool itself, so every page/frag
allocated from a pool must eventually come back to it through one of the
appropriate paths, e.g. by calling page_pool_put_page().
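
As a point of reference, here is a minimal sketch of that driver-side
cycle using the upstream page pool API; the page_pool_params values are
illustrative and error handling is trimmed:

#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

static int pp_cycle_example(struct device *dev)
{
	struct page_pool_params params = {
		.order		= 0,
		.pool_size	= 256,		/* illustrative */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool;
	struct page *page;

	pool = page_pool_create(&params);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	page = page_pool_dev_alloc_pages(pool);	/* pages_state_hold_cnt++ */
	if (page)
		/* return it through the pool, to be recycled or released */
		page_pool_put_full_page(pool, page, false);

	/* destruction is deferred until hold_cnt == release_cnt,
	 * i.e. until every page has come back
	 */
	page_pool_destroy(pool);
	return 0;
}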

For normal page pools (i.e. those without a memory provider attached),
it's fine to return a page while something in the stack still holds a
reference to it: in that case we "detach" the page from the pool and rely
on the page refcount to eventually return it to the kernel.
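
That detach decision lives in the put path; heavily condensed, and with
put_or_detach() being a made-up name for illustration, the upstream logic
in __page_pool_put_page() behaves like:

static void put_or_detach(struct page_pool *pool, struct page *page)
{
	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
		/* sole owner: keep the page and recycle it in the pool */
		page_pool_recycle_direct(pool, page);
	} else {
		/* still referenced in the stack: detach from the pool
		 * and let the page refcount return it to the kernel
		 */
		page_pool_return_page(pool, page);
	}
}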

Memory providers are different, at least the ppiov-based ones: they need
all of their buffers to eventually come back. So, apart from custom pp
->release handlers, we currently catch the moment someone puts down a
ppiov and call its memory provider to handle it, i.e.
__page_pool_iov_free().

The first problem is that __page_pool_iov_free() hard-codes the devmem
handling, while other providers need a flexible way to specify their own
callbacks.
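
For orientation, the provider hook involved here has roughly the shape
below, reconstructed from this diff alone; the series defines further
callbacks (e.g. for allocation) that are omitted:

struct pp_memory_provider_ops {
	int	(*init)(struct page_pool *pool);
	/* After this patch: return true if the buffer was fully
	 * released, so the pool can do its release accounting;
	 * false otherwise.
	 */
	bool	(*release_page)(struct page_pool *pool, struct page *page);
};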

The second problem is that it bypasses the generic page pool paths and so
can't do the pp accounting described above correctly. Nor can we safely
rely on page_pool_put_page() having been called beforehand to do that
refcounting, because by then the page pool might already be destroyed,
leaving ppiov->pp pointing to garbage.

The solution is to make the pp ->release callback responsible for
properly recycling its buffers, e.g. doing what __page_pool_iov_free()
did before in the devmem case, while page_pool_iov_put_many() now returns
buffers through the page pool.
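
Under the new contract a provider's ->release_page() would look roughly
like the sketch below; my_mp_free() stands in for whatever
provider-specific freeing is needed (netdev_free_devmem() in the devmem
case) and is not a name from the series:

static bool my_mp_release_page(struct page_pool *pool, struct page *page)
{
	struct page_pool_iov *ppiov = page_to_page_pool_iov(page);

	/* Drop the reference that page_pool_iov_put_many() routed
	 * through the pool; if users still hold references, keep the
	 * buffer alive and do no accounting yet.
	 */
	if (!page_pool_iov_sub_and_test(ppiov, 1))
		return false;

	my_mp_free(ppiov);	/* hypothetical provider-specific free */
	return true;		/* pool now accounts the release */
}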

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
isilence committed Dec 8, 2023
1 parent 7707acb commit 14bd566
Showing 2 changed files with 33 additions and 26 deletions.
13 changes: 9 additions & 4 deletions include/net/page_pool/helpers.h
@@ -137,15 +137,20 @@ static inline void page_pool_iov_get_many(struct page_pool_iov *ppiov,
 	refcount_add(count, &ppiov->refcount);
 }
 
-void __page_pool_iov_free(struct page_pool_iov *ppiov);
+static inline bool page_pool_iov_sub_and_test(struct page_pool_iov *ppiov,
+					      unsigned int count)
+{
+	return refcount_sub_and_test(count, &ppiov->refcount);
+}
 
 static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov,
 					  unsigned int count)
 {
-	if (!refcount_sub_and_test(count, &ppiov->refcount))
-		return;
+	if (count > 1)
+		WARN_ON_ONCE(page_pool_iov_sub_and_test(ppiov, count - 1));
 
-	__page_pool_iov_free(ppiov);
+	page_pool_put_defragged_page(ppiov->pp, page_pool_mangle_ppiov(ppiov),
+				     -1, false);
 }
 
 /* page pool mm helpers */
46 changes: 24 additions & 22 deletions net/core/page_pool.c
@@ -599,6 +599,16 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
 	page_pool_set_dma_addr(page, 0);
 }
 
+static void page_pool_return_provider(struct page_pool *pool, struct page *page)
+{
+	int count;
+
+	if (pool->mp_ops->release_page(pool, page)) {
+		count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
+		trace_page_pool_state_release(pool, page, count);
+	}
+}
+
 /* Disconnects a page (from a page_pool). API users can have a need
  * to disconnect a page (from a page_pool), to allow it to be used as
  * a regular page (that will eventually be returned to the normal
@@ -607,24 +617,22 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
 void page_pool_return_page(struct page_pool *pool, struct page *page)
 {
 	int count;
-	bool put;
 
-	put = true;
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
-		put = pool->mp_ops->release_page(pool, page);
-	else
-		__page_pool_release_page_dma(pool, page);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops) {
+		page_pool_return_provider(pool, page);
+		return;
+	}
+
+	__page_pool_release_page_dma(pool, page);
 
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
 	 */
 	count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
 	trace_page_pool_state_release(pool, page, count);
 
-	if (put) {
-		page_pool_clear_pp_info(page);
-		put_page(page);
-	}
+	page_pool_clear_pp_info(page);
+	put_page(page);
 	/* An optimization would be to call __free_pages(page, pool->p.order)
 	 * knowing page is not part of page-cache (thus avoiding a
 	 * __page_cache_release() call).
@@ -1034,15 +1042,6 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 }
 EXPORT_SYMBOL(page_pool_update_nid);
 
-void __page_pool_iov_free(struct page_pool_iov *ppiov)
-{
-	if (ppiov->pp->mp_ops != &dmabuf_devmem_ops)
-		return;
-
-	netdev_free_devmem(ppiov);
-}
-EXPORT_SYMBOL_GPL(__page_pool_iov_free);
-
 /*** "Dmabuf devmem memory provider" ***/
 
 static int mp_dmabuf_devmem_init(struct page_pool *pool)
@@ -1093,9 +1092,12 @@ static bool mp_dmabuf_devmem_release_page(struct page_pool *pool,
 		return false;
 
 	ppiov = page_to_page_pool_iov(page);
-	page_pool_iov_put_many(ppiov, 1);
-	/* We don't want the page pool put_page()ing our page_pool_iovs. */
-	return false;
+
+	if (!page_pool_iov_sub_and_test(ppiov, 1))
+		return false;
+	netdev_free_devmem(ppiov);
+	/* tell page_pool that the ppiov is released */
+	return true;
 }
 
 const struct pp_memory_provider_ops dmabuf_devmem_ops = {
