Commit dd64b23
page_pool: unlink from napi during destroy
Jesper points out that we must prevent recycling into the cache after page_pool_destroy() is called, because page_pool_destroy() is not synchronized with recycling (some pages may still be outstanding when destroy() gets called).

I assumed this would not happen because NAPI can't be scheduled if its page pool is being destroyed. But I missed the fact that NAPI may get reused. For instance, when the user changes the ring configuration the driver may allocate a new page pool, stop NAPI, swap, start NAPI, and then destroy the old pool. The NAPI is running, so the old page pool will think it can recycle to the cache, but the consumer at that point is the destroy() path, not NAPI.

To avoid extra synchronization, let the drivers do the "unlinking" during the "swap" stage, while NAPI is indeed disabled.

Fixes: 8c48eea ("page_pool: allow caching from safely localized NAPI")
Reported-by: Jesper Dangaard Brouer <jbrouer@redhat.com>
Link: https://lore.kernel.org/all/e8df2654-6a5b-3c92-489d-2fe5e444135f@redhat.com/
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/r/20230419182006.719923-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
1 parent: 4bb7aac

2 files changed: +22 −1 lines

include/net/page_pool.h (5 additions, 0 deletions)

@@ -247,13 +247,18 @@ struct page_pool *page_pool_create(const struct page_pool_params *params);
 struct xdp_mem_info;
 
 #ifdef CONFIG_PAGE_POOL
+void page_pool_unlink_napi(struct page_pool *pool);
 void page_pool_destroy(struct page_pool *pool);
 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
 			   struct xdp_mem_info *mem);
 void page_pool_release_page(struct page_pool *pool, struct page *page);
 void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 			     int count);
 #else
+static inline void page_pool_unlink_napi(struct page_pool *pool)
+{
+}
+
 static inline void page_pool_destroy(struct page_pool *pool)
 {
 }

net/core/page_pool.c (17 additions, 1 deletion)

@@ -839,6 +839,21 @@ void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
 	pool->xdp_mem_id = mem->id;
 }
 
+void page_pool_unlink_napi(struct page_pool *pool)
+{
+	if (!pool->p.napi)
+		return;
+
+	/* To avoid races with recycling and additional barriers make sure
+	 * pool and NAPI are unlinked when NAPI is disabled.
+	 */
+	WARN_ON(!test_bit(NAPI_STATE_SCHED, &pool->p.napi->state) ||
+		READ_ONCE(pool->p.napi->list_owner) != -1);
+
+	WRITE_ONCE(pool->p.napi, NULL);
+}
+EXPORT_SYMBOL(page_pool_unlink_napi);
+
 void page_pool_destroy(struct page_pool *pool)
 {
 	if (!pool)
@@ -847,6 +862,7 @@ void page_pool_destroy(struct page_pool *pool)
 	if (!page_pool_put(pool))
 		return;
 
+	page_pool_unlink_napi(pool);
 	page_pool_free_frag(pool);
 
 	if (!page_pool_release(pool))
@@ -900,7 +916,7 @@ bool page_pool_return_skb_page(struct page *page, bool napi_safe)
 	 * in the same context as the consumer would run, so there's
 	 * no possible race.
 	 */
-	napi = pp->p.napi;
+	napi = READ_ONCE(pp->p.napi);
 	allow_direct = napi_safe && napi &&
 		       READ_ONCE(napi->list_owner) == smp_processor_id();
 