Commit 8e7b632

x86/irq: Cleanup pending irq move in fixup_irqs()
If a CPU goes offline, its interrupts are migrated away, but an eventually pending interrupt move, which has not yet been made effective, is kept pending even if the outgoing CPU is the sole target of the pending affinity mask. What's worse, the pending affinity mask is discarded even if it would contain a valid subset of the online CPUs.

Use the newly introduced helper to:

 - Discard a pending move when the outgoing CPU is the only target in the
   pending mask.

 - Use the pending mask instead of the affinity mask to find a valid target
   for the CPU if the pending mask intersects with the online CPUs.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235444.774068557@linutronix.de
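The discard-vs-reuse decision described in the two bullet points above can be sketched as a small user-space model. This is an illustrative sketch, not the kernel API: `toy_fixup_move_pending` and `struct toy_desc` are hypothetical names, and a `uint64_t` bitmask (one bit per CPU) stands in for `struct cpumask`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for the relevant bits of an irq descriptor. */
struct toy_desc {
	bool     move_pending;  /* a setaffinity change is still pending */
	uint64_t pending_mask;  /* target CPUs of that pending change    */
};

/*
 * Sketch of the pending-move fixup described above (hypothetical
 * helper): discard the pending move when none of its target CPUs is
 * still online, otherwise report that the pending mask can be reused
 * as the migration target.
 */
static bool toy_fixup_move_pending(struct toy_desc *d, uint64_t online_mask)
{
	if (!d->move_pending)
		return false;

	if (!(d->pending_mask & online_mask)) {
		/* Sole target(s) going offline: the move can never complete. */
		d->move_pending = false;
		return false;
	}
	return true;  /* pending mask still has an online target: reuse it */
}
```

With CPUs 1 and 2 online (`online_mask = 0x6`), a move pending only toward CPU 0 is discarded, while a move pending toward CPUs 0 and 1 is kept because CPU 1 is a valid online target.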
1 parent cdd1636 commit 8e7b632

File tree

1 file changed (+21 −4 lines changed)


arch/x86/kernel/irq.c

Lines changed: 21 additions & 4 deletions
@@ -440,9 +440,9 @@ void fixup_irqs(void)
 	int ret;
 
 	for_each_irq_desc(irq, desc) {
+		const struct cpumask *affinity;
 		int break_affinity = 0;
 		int set_affinity = 1;
-		const struct cpumask *affinity;
 
 		if (!desc)
 			continue;
@@ -454,19 +454,36 @@ void fixup_irqs(void)
 
 		data = irq_desc_get_irq_data(desc);
 		affinity = irq_data_get_affinity_mask(data);
+
 		if (!irq_has_action(irq) || irqd_is_per_cpu(data) ||
 		    cpumask_subset(affinity, cpu_online_mask)) {
+			irq_fixup_move_pending(desc, false);
 			raw_spin_unlock(&desc->lock);
 			continue;
 		}
 
 		/*
-		 * Complete the irq move. This cpu is going down and for
-		 * non intr-remapping case, we can't wait till this interrupt
-		 * arrives at this cpu before completing the irq move.
+		 * Complete an eventually pending irq move cleanup. If this
+		 * interrupt was moved in hard irq context, then the
+		 * vectors need to be cleaned up. It can't wait until this
+		 * interrupt actually happens and this CPU was involved.
 		 */
 		irq_force_complete_move(desc);
 
+		/*
+		 * If there is a setaffinity pending, then try to reuse the
+		 * pending mask, so the last change of the affinity does
+		 * not get lost. If there is no move pending or the pending
+		 * mask does not contain any online CPU, use the current
+		 * affinity mask.
+		 */
+		if (irq_fixup_move_pending(desc, true))
+			affinity = desc->pending_mask;
+
+		/*
+		 * If the mask does not contain an online CPU, break
+		 * affinity and use cpu_online_mask as fall back.
+		 */
 		if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
 			break_affinity = 1;
 			affinity = cpu_online_mask;
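The mask selection in the second hunk above follows a fixed preference order: reuse a pending setaffinity mask if it still has an online target, otherwise keep the current affinity, and only break affinity to `cpu_online_mask` as a last resort. A user-space sketch of that order (hypothetical `pick_migration_mask` helper; `uint64_t` bitmasks stand in for cpumasks):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the migration-mask selection in the hunk above. Order
 * matters: prefer the pending mask so the latest affinity request is
 * not lost, then the current affinity, then fall back to all online
 * CPUs with break_affinity set.
 */
static uint64_t pick_migration_mask(uint64_t affinity, uint64_t pending,
				    bool move_pending, uint64_t online,
				    bool *break_affinity)
{
	uint64_t mask = affinity;

	/* Reuse a pending setaffinity target if any of its CPUs is online. */
	if (move_pending && (pending & online))
		mask = pending;

	/* Mask has no online CPU left: break affinity, use the online mask. */
	*break_affinity = !(mask & online);
	if (*break_affinity)
		mask = online;

	return mask;
}
```

For example, with CPU 0 going offline and CPUs 1 and 2 online (`online = 0x6`): an interrupt affine to CPU 0 with a pending move to CPUs 1–2 migrates to the pending mask without breaking affinity, whereas one affine to CPU 0 with no usable pending mask breaks affinity and falls back to the online mask.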
