mm: take most of the "maybe" out of folio_maybe_dma_pinned() for anonymous pages

Let's optimize folio_maybe_dma_pinned() for order-0 pages, where we mangle the pincount into the refcount. We only allow pinning anonymous pages that are marked exclusive, and we disallow clearing the exclusive marker while they are pinned. Consequently, there is no valid scenario in which a shared anonymous page could be pinned.

With this change, shared anonymous order-0 pages will never be detected as "maybe pinned", not even when concurrent GUP-fast temporarily marks them pinned.

After this change, we might only get false positives for anonymous pages if:

(1) Concurrent GUP-fast temporarily increased the pin counter before
    decreasing it again: applies to exclusive anonymous order-0 pages and
    compound anonymous pages. Rare.

(2) We have more than 1024 references on an exclusive order-0 page. While
    possible in theory, this should be highly unlikely.

What could go wrong? Not much. Assuming we had pinned a shared anonymous page (a bug), folio_maybe_dma_pinned() would now return "false". However, page_try_dup_anon_rmap() and page_try_share_anon_rmap() essentially do nothing if the exclusive marker is not already set, so we would already have a potential memory corruption on our hands.

So, right now, this change primarily affects mm/vmscan:shrink_page_list(), where we can now swap out pages that have more than 1024 references: for example, simply when many processes share an order-0 page.

Note that once an anonymous page gets unmapped so it can eventually be freed, the page remains anonymous until it is actually freed: when the last reference is gone. Similarly, PageAnonExclusive() is only cleared when the page is actually freed.

Signed-off-by: David Hildenbrand <david@redhat.com>