mm/pagemap: Cleanup PREEMPT_COUNT leftovers
CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Cleanup the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Thomas Gleixner authored and intel-lab-lkp committed Sep 14, 2020
1 parent 1033b04 commit a4a0f54fdd08d95dfe20d684b405db8a47fb61d8
Showing 1 changed file with 1 addition and 3 deletions.
@@ -168,9 +168,7 @@ void release_pages(struct page **pages, int nr);
 static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+	VM_BUG_ON(preemptible());
 	/*
 	 * Preempt must be disabled here - we rely on rcu_read_lock doing
 	 * this for us.
