Commit cdd1636

genirq: Provide irq_fixup_move_pending()
If a CPU goes offline, its interrupts are migrated away, but a pending
interrupt move, which has not yet been made effective, is kept pending
even if the outgoing CPU is the sole target of the pending affinity
mask. Worse, the pending affinity mask is discarded even if it would
contain a valid subset of the online CPUs.

Implement a helper function which allows avoiding these issues.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235444.691345468@linutronix.de
1 parent 1bb0401 commit cdd1636

2 files changed: 35 additions, 0 deletions

include/linux/irq.h

Lines changed: 5 additions & 0 deletions
@@ -491,9 +491,14 @@ extern void irq_migrate_all_off_this_cpu(void);
 #if defined(CONFIG_SMP) && defined(CONFIG_GENERIC_PENDING_IRQ)
 void irq_move_irq(struct irq_data *data);
 void irq_move_masked_irq(struct irq_data *data);
+bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear);
 #else
 static inline void irq_move_irq(struct irq_data *data) { }
 static inline void irq_move_masked_irq(struct irq_data *data) { }
+static inline bool irq_fixup_move_pending(struct irq_desc *desc, bool fclear)
+{
+	return false;
+}
 #endif

 extern int no_irq_affinity;

kernel/irq/migration.c

Lines changed: 30 additions & 0 deletions
@@ -4,6 +4,36 @@

 #include "internals.h"

+/**
+ * irq_fixup_move_pending - Cleanup irq move pending from a dying CPU
+ * @desc:		Interrupt descriptor to clean up
+ * @force_clear:	If set clear the move pending bit unconditionally.
+ *			If not set, clear it only when the dying CPU is the
+ *			last one in the pending mask.
+ *
+ * Returns true if the pending bit was set and the pending mask contains an
+ * online CPU other than the dying CPU.
+ */
+bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear)
+{
+	struct irq_data *data = irq_desc_get_irq_data(desc);
+
+	if (!irqd_is_setaffinity_pending(data))
+		return false;
+
+	/*
+	 * The outgoing CPU might be the last online target in a pending
+	 * interrupt move. If that's the case clear the pending move bit.
+	 */
+	if (cpumask_any_and(desc->pending_mask, cpu_online_mask) >= nr_cpu_ids) {
+		irqd_clr_move_pending(data);
+		return false;
+	}
+	if (force_clear)
+		irqd_clr_move_pending(data);
+	return true;
+}
+
 void irq_move_masked_irq(struct irq_data *idata)
 {
 	struct irq_desc *desc = irq_data_to_desc(idata);
