
Commits on Apr 16, 2021

  1. irqchip/xilinx: Expose Kconfig option

    Previously the XILINX_INTC config option was hidden and only
    auto-selected on the MicroBlaze platform. However, this IP can also be
    used on other platforms. Allow this option to be user-enabled.
    
    Signed-off-by: Robert Hancock <robert.hancock@calian.com>
    robhancocksed authored and intel-lab-lkp committed Apr 16, 2021

Commits on Apr 10, 2021

  1. genirq: Reduce irqdebug cacheline bouncing

    note_interrupt() increments desc->irq_count for each interrupt, even for
    percpu interrupt handlers, and even when they are handled successfully.
    This causes cacheline bouncing and limits scalability.
    
    Instead of incrementing irq_count every time, only start incrementing it
    after seeing an unhandled irq, which should avoid the cache line
    bouncing in the common path.
    
    This actually should give better consistency in handling misbehaving
    irqs too, because instead of the first unhandled irq arriving at an
    arbitrary point in the irq_count cycle, its arrival will begin the
    irq_count cycle.
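    The counting policy described above can be modelled in a few lines. This
    is a hedged userspace sketch, not the kernel's note_interrupt(): the
    struct and function names are illustrative, but the rule is the one the
    patch implements, i.e. the common path of a successfully handled
    interrupt writes no counter until an unhandled interrupt has started a
    cycle.

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model of the policy: irq_count is only incremented once an
 * unhandled interrupt has been seen, so the hot path of successfully
 * handled interrupts stays free of counter writes (no cacheline bounce). */
struct irq_stats {
	unsigned int irq_count;      /* interrupts since the cycle started */
	unsigned int irqs_unhandled; /* unhandled interrupts seen */
};

static void note_interrupt_model(struct irq_stats *s, bool handled)
{
	/* Common path: handled interrupt, no misbehaviour seen yet.
	 * Return without touching any counter. */
	if (handled && !s->irqs_unhandled)
		return;

	if (!handled)
		s->irqs_unhandled++;
	s->irq_count++;	/* the first unhandled irq begins the cycle */
}
```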
    
    Cédric reports the result of his IPI throughput test:
    
                   Millions of IPIs/s
     -----------   --------------------------------------
                   upstream   upstream   patched
     chips  cpus   default    noirqdebug default (irqdebug)
     -----------   -----------------------------------------
     1      0-15     4.061      4.153      4.084
            0-31     7.937      8.186      8.158
            0-47    11.018     11.392     11.233
            0-63    11.460     13.907     14.022
     2      0-79     8.376     18.105     18.084
            0-95     7.338     22.101     22.266
            0-111    6.716     25.306     25.473
            0-127    6.223     27.814     28.029
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/r/20210402132037.574661-1-npiggin@gmail.com
    npiggin authored and Thomas Gleixner committed Apr 10, 2021
  2. kernel: Initialize cpumask before parsing

    KMSAN complains that new_value at cpumask_parse_user() from
    write_irq_affinity() from irq_affinity_proc_write() is uninitialized.
    
      [  148.133411][ T5509] =====================================================
      [  148.135383][ T5509] BUG: KMSAN: uninit-value in find_next_bit+0x325/0x340
      [  148.137819][ T5509]
      [  148.138448][ T5509] Local variable ----new_value.i@irq_affinity_proc_write created at:
      [  148.140768][ T5509]  irq_affinity_proc_write+0xc3/0x3d0
      [  148.142298][ T5509]  irq_affinity_proc_write+0xc3/0x3d0
      [  148.143823][ T5509] =====================================================
    
    Since bitmap_parse() from cpumask_parse_user() calls find_next_bit(),
    any alloc_cpumask_var() + cpumask_parse_user() sequence has the
    possibility that find_next_bit() accesses an uninitialized cpumask
    variable. Fix this problem by replacing alloc_cpumask_var() with
    zalloc_cpumask_var().
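    A minimal userspace sketch of why the zeroing matters: scanning a bitmap
    only yields defined results when the backing memory is initialized. The
    helper below is an illustrative stand-in for find_next_bit(), not the
    kernel implementation; a zero-initialized array plays the role of
    zalloc_cpumask_var().

```c
#include <assert.h>
#include <limits.h>

#define NBITS 128
#define BITS_PER_WORD (CHAR_BIT * sizeof(unsigned long))

/* Illustrative stand-in for find_next_bit(): returns the index of the
 * first set bit at or after 'start', or 'nbits' if none is set. Reading
 * an uninitialized mask here would be undefined behaviour, which is
 * exactly what KMSAN flagged. */
static unsigned int find_next_set_bit(const unsigned long *mask,
				      unsigned int nbits, unsigned int start)
{
	for (unsigned int i = start; i < nbits; i++)
		if (mask[i / BITS_PER_WORD] & (1UL << (i % BITS_PER_WORD)))
			return i;
	return nbits;	/* no bit set */
}
```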
    
    Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    Link: https://lore.kernel.org/r/20210401055823.3929-1-penguin-kernel@I-love.SAKURA.ne.jp
    Tetsuo Handa authored and Thomas Gleixner committed Apr 10, 2021

Commits on Mar 30, 2021

  1. genirq/irq_sim: Shrink devm_irq_domain_create_sim()

    The custom devres structure manages only a single pointer, which can
    also be achieved by using devm_add_action_or_reset(), making the code
    simpler.
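    The shape of that pattern, sketched in userspace under stated
    assumptions: all names below are illustrative, not the kernel devres
    API. The point is that registering an (action, data) pair replaces a
    dedicated resource struct whose only job was to carry one pointer, and
    the "or_reset" semantics run the action immediately when registration
    fails.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative analogue of devm_add_action_or_reset(): register a cleanup
 * callback with its single data pointer instead of wrapping the pointer
 * in a custom resource structure. */
struct cleanup {
	void (*action)(void *data);
	void *data;
};

/* On registration failure the action runs immediately, mirroring the
 * "or_reset" behaviour, so callers need no error-path cleanup. */
static int add_action_or_reset(struct cleanup *c,
			       void (*action)(void *), void *data)
{
	if (!c) {		/* simulate a failed registration */
		action(data);
		return -1;
	}
	c->action = action;
	c->data = data;
	return 0;
}
```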
    
    [ tglx: Fixed return value handling - found by smatch ]
    
    Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/r/20210301142659.8971-1-brgl@bgdev.pl
    brgl authored and Thomas Gleixner committed Mar 30, 2021

Commits on Mar 25, 2021

  1. drm/i915: Use tasklet_unlock_spin_wait() in __tasklet_disable_sync_once()
    
    The i915 driver has its own tasklet interface which was overlooked in
    the tasklet rework. __tasklet_disable_sync_once() is a wrapper around
    tasklet_unlock_wait(). tasklet_unlock_wait() might sleep, but the i915
    wrapper invokes it from non-preemptible contexts with bottom halves
    disabled.
    
    Use tasklet_unlock_spin_wait() instead which can be invoked from
    non-preemptible contexts.
    
    Fixes: da04474 ("tasklets: Replace spin wait in tasklet_unlock_wait()")
    Reported-by: kernel test robot <oliver.sang@intel.com>
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/r/20210323092221.awq7g5b2muzypjw3@flow
    Sebastian Andrzej Siewior authored and Thomas Gleixner committed Mar 25, 2021

Commits on Mar 22, 2021

  1. irq: Fix typos in comments

    Fix ~36 single-word typos in the IRQ, irqchip and irqdomain code comments.
    
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Ingo Molnar committed Mar 22, 2021

Commits on Mar 19, 2021

  1. genirq/matrix: Prevent allocation counter corruption

    When irq_matrix_free() is called for an unallocated vector, the
    managed_allocated and total_allocated counters get out of sync with the
    real state of the matrix. Later, when the last interrupt is freed, these
    counters will underflow and wrap to UINT_MAX because they are unsigned.
    
    While this is certainly a problem of the calling code, it can be caught
    in the allocator by checking the allocation bit of the to-be-freed
    vector, which simplifies debugging.
    
    An example of the problem described above:
    https://lore.kernel.org/lkml/20210318192819.636943062@linutronix.de/
    
    Add the missing sanity check and emit a warning when it triggers.
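    Both the failure mode and the guard can be shown in a small userspace
    sketch. This is a hedged model, not the kernel's irq_matrix code: the
    struct and function names are illustrative, but the underflow to
    UINT_MAX and the allocation-bit check follow what the changelog
    describes.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Toy model of the matrix allocator's bookkeeping: one allocation bit per
 * vector plus an unsigned counter of allocated vectors. */
struct matrix_model {
	unsigned long alloc_map;	/* one bit per vector */
	unsigned int total_allocated;
};

/* Freeing a vector whose allocation bit is not set is a caller bug;
 * reject it instead of letting the counter underflow (where the kernel
 * patch emits a warning). */
static bool matrix_free_checked(struct matrix_model *m, unsigned int bit)
{
	if (!(m->alloc_map & (1UL << bit)))
		return false;		/* bogus free: would underflow */
	m->alloc_map &= ~(1UL << bit);
	m->total_allocated--;
	return true;
}
```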
    
    Suggested-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/r/20210319111823.1105248-1-vkuznets@redhat.com
    vittyvk authored and Thomas Gleixner committed Mar 19, 2021

Commits on Mar 17, 2021

  1. irq: Simplify condition in irq_matrix_reserve()

    The if condition in irq_matrix_reserve() can be much simpler.
    
    While at it fix a typo in the comment.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/r/20210211070953.5914-1-jgross@suse.com
    jgross1 authored and Thomas Gleixner committed Mar 17, 2021
  2. rcu: Prevent false positive softirq warning on RT

    Soft interrupt disabled sections can legitimately be preempted or
    scheduled out when blocking on a lock on RT enabled kernels, so the RCU
    preempt check warning has to be disabled for RT kernels.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Tested-by: Paul E. McKenney <paulmck@kernel.org>
    Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
    Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309085727.626304079@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  3. tick/sched: Prevent false positive softirq pending warnings on RT

    On RT a task which has soft interrupts disabled can block on a lock and
    schedule out to idle while soft interrupts are pending. This triggers
    the warning in the NOHZ idle code which complains about going idle with
    pending soft interrupts. But as the task is blocked, soft interrupt
    processing is temporarily blocked as well, which means that such a
    warning is a false positive.
    
    To prevent that, check the per-CPU state which indicates that a
    scheduled-out task has soft interrupts disabled.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Tested-by: Paul E. McKenney <paulmck@kernel.org>
    Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309085727.527563866@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  4. softirq: Make softirq control and processing RT aware

    Provide a local lock based serialization for soft interrupts on RT which
    allows the local_bh_disable() protected sections and the servicing of
    soft interrupts to be preemptible.
    
    Provide the necessary inline helpers which allow reusing the bulk of the
    softirq processing code.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Tested-by: Paul E. McKenney <paulmck@kernel.org>
    Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309085727.426370483@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  5. softirq: Move various protections into inline helpers

    To allow reuse of the bulk of softirq processing code for RT and to avoid
    #ifdeffery all over the place, split protections for various code sections
    out into inline helpers so the RT variant can just replace them in one go.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Tested-by: Paul E. McKenney <paulmck@kernel.org>
    Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309085727.310118772@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  6. irqtime: Make accounting correct on RT

    vtime_account_irq() and irqtime_account_irq() base their checks on
    preempt_count(), which fails on RT because preempt_count() does not
    contain the softirq accounting, which is separate on RT.
    
    These checks do not need the full preempt count as they only operate on the
    hard and softirq sections.
    
    Use irq_count() instead, which provides the correct value on both RT and
    non-RT kernels. The compiler is clever enough to fold the masking for
    !RT:
    
           99b:	65 8b 05 00 00 00 00 	mov    %gs:0x0(%rip),%eax
     -     9a2:	25 ff ff ff 7f       	and    $0x7fffffff,%eax
     +     9a2:	25 00 ff ff 00       	and    $0xffff00,%eax
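    The folded constant in the patched disassembly above can be checked
    directly. A minimal sketch, assuming the !RT preempt_count() bit layout
    from include/linux/preempt.h (the mask values below mirror it; the
    function name is illustrative): masking out only the NMI, hardirq and
    softirq fields yields the single `and $0xffff00` shown above.

```c
#include <assert.h>

/* !RT preempt_count() bit layout, as in include/linux/preempt.h:
 * bits 0-7 preemption count, 8-15 softirq, 16-19 hardirq, 20-23 NMI. */
#define SOFTIRQ_MASK	0x0000ff00U
#define HARDIRQ_MASK	0x000f0000U
#define NMI_MASK	0x00f00000U

/* Illustrative model of irq_count(): keep only the interrupt context
 * fields, discarding the preemption count bits. */
static unsigned int irq_count_model(unsigned int preempt_count)
{
	return preempt_count & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_MASK);
}
```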
    
    Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Tested-by: Paul E. McKenney <paulmck@kernel.org>
    Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309085727.153926793@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  7. softirq: Add RT specific softirq accounting

    RT requires the softirq processing and local bottom half disabled
    regions to be preemptible. Using the normal preempt count based
    serialization is therefore not possible because this implicitly disables
    preemption.
    
    RT kernels use a per-CPU local lock to serialize bottom halves. As
    local_bh_disable() can nest, the lock can only be acquired on the
    outermost invocation of local_bh_disable() and released when the nest
    count becomes zero. Tasks which hold the local lock can be preempted, so
    it's required to keep track of the nest count per task.
    
    Add an RT-only counter to task_struct and adjust the relevant macros in
    preempt.h.
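    The nesting rule stated above can be sketched in userspace. This is a
    hedged model, not the kernel's implementation: the struct, function
    names and the `lock_taken` flag standing in for the per-CPU local lock
    are all illustrative. Only the outermost disable acquires the lock, and
    only the matching final enable releases it.

```c
#include <assert.h>

/* Per-task nest count, as the changelog describes for task_struct. */
struct task_model {
	int softirq_disable_cnt;
};

/* Stand-in for the per-CPU local lock serializing bottom halves. */
static int lock_taken;

static void local_bh_disable_model(struct task_model *t)
{
	if (t->softirq_disable_cnt++ == 0)
		lock_taken = 1;		/* outermost: acquire local lock */
}

static void local_bh_enable_model(struct task_model *t)
{
	if (--t->softirq_disable_cnt == 0)
		lock_taken = 0;		/* nest count hit zero: release */
}
```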
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Tested-by: Paul E. McKenney <paulmck@kernel.org>
    Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309085726.983627589@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  8. tasklets: Switch tasklet_disable() to the sleep wait variant

     -- NOT FOR IMMEDIATE MERGING --
    
    Now that all users of tasklet_disable() are invoked from sleepable context,
    convert it to use tasklet_unlock_wait() which might sleep.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084242.726452321@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  9. firewire: ohci: Use tasklet_disable_in_atomic() where required

    tasklet_disable() is invoked in several places. Some of them are in atomic
    context which prevents a conversion of tasklet_disable() to a sleepable
    function.
    
    The atomic callchains are:
    
     ar_context_tasklet()
       ohci_cancel_packet()
         tasklet_disable()
    
     ...
       ohci_flush_iso_completions()
         tasklet_disable()
    
    The invocation of tasklet_disable() from at_context_flush() is always in
    preemptible context.
    
    Use tasklet_disable_in_atomic() for the two invocations in
    ohci_cancel_packet() and ohci_flush_iso_completions().
    
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084242.616379058@linutronix.de
    Sebastian Andrzej Siewior authored and Thomas Gleixner committed Mar 17, 2021
  10. PCI: hv: Use tasklet_disable_in_atomic()

    The hv_compose_msi_msg() callback in irq_chip::irq_compose_msi_msg is
    invoked via irq_chip_compose_msi_msg(), which itself is always invoked from
    atomic contexts from the guts of the interrupt core code.
    
    There is no way to change this without rewriting the whole driver, so
    use tasklet_disable_in_atomic(), which allows tasklet_disable() to be
    made sleepable once the remaining atomic users are addressed.
    
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Wei Liu <wei.liu@kernel.org>
    Acked-by: Bjorn Helgaas <bhelgaas@google.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084242.516519290@linutronix.de
    Sebastian Andrzej Siewior authored and Thomas Gleixner committed Mar 17, 2021
  11. atm: eni: Use tasklet_disable_in_atomic() in the send() callback

    The atmdev_ops::send callback which calls tasklet_disable() is invoked
    with bottom halves disabled from net_device_ops::ndo_start_xmit(). All
    other invocations of tasklet_disable() in this driver happen in
    preemptible context.
    
    Change the send() call to use tasklet_disable_in_atomic() which allows
    tasklet_disable() to be made sleepable once the remaining atomic context
    usage sites are cleaned up.
    
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084242.415583839@linutronix.de
    Sebastian Andrzej Siewior authored and Thomas Gleixner committed Mar 17, 2021
  12. ath9k: Use tasklet_disable_in_atomic()

    All callers of ath9k_beacon_ensure_primary_slot() are preemptible /
    acquire a mutex except for this callchain:
    
      spin_lock_bh(&sc->sc_pcu_lock);
      ath_complete_reset()
      -> ath9k_calculate_summary_state()
         -> ath9k_beacon_ensure_primary_slot()
    
    It's unclear how that can be disentangled, so use
    tasklet_disable_in_atomic() for now. This allows tasklet_disable() to
    become sleepable once the remaining atomic users are cleaned up.
    
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Kalle Valo <kvalo@codeaurora.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084242.313899703@linutronix.de
    Sebastian Andrzej Siewior authored and Thomas Gleixner committed Mar 17, 2021
  13. net: sundance: Use tasklet_disable_in_atomic().

    tasklet_disable() is used in the timer callback. This might be
    disentangled, but without access to the hardware that's a bit risky.
    
    Replace it with tasklet_disable_in_atomic() so tasklet_disable() can be
    changed to a sleep wait once all remaining atomic users are converted.
    
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084242.209110861@linutronix.de
    Sebastian Andrzej Siewior authored and Thomas Gleixner committed Mar 17, 2021
  14. net: jme: Replace link-change tasklet with work

    The link change tasklet disables the tasklets for tx/rx processing while
    updating hw parameters and then enables the tasklets again.
    
    This update can also be pushed into a workqueue where it can be
    performed in preemptible context. This allows tasklet_disable() to
    become sleepable.
    
    Replace the linkch_task tasklet with a work.
    
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084242.106288922@linutronix.de
    Sebastian Andrzej Siewior authored and Thomas Gleixner committed Mar 17, 2021
  15. tasklets: Prevent tasklet_unlock_spin_wait() deadlock on RT

    tasklet_unlock_spin_wait() spin waits for the TASKLET_STATE_SCHED bit in
    the tasklet state to be cleared. This works on !RT nicely because the
    corresponding execution can only happen on a different CPU.
    
    On RT softirq processing is preemptible, therefore a task preempting the
    softirq processing thread can spin forever.
    
    Prevent this by invoking local_bh_disable()/enable() inside the loop. In
    case that the softirq processing thread was preempted by the current task,
    current will block on the local lock which yields the CPU to the preempted
    softirq processing thread. If the tasklet is processed on a different CPU
    then the local_bh_disable()/enable() pair is just a waste of processor
    cycles.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084241.988908275@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  16. tasklets: Replace spin wait in tasklet_kill()

    tasklet_kill() spin waits for TASKLET_STATE_SCHED to be cleared,
    invoking yield() from inside the loop. yield() is an ill-defined
    mechanism and the result might still be wasting CPU cycles in a tight
    loop, which is especially painful in a guest when the CPU running the
    tasklet is scheduled out.
    
    tasklet_kill() is used in teardown paths and not performance critical at
    all. Replace the spin wait with wait_var_event().
    
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084241.890532921@linutronix.de
    Peter Zijlstra authored and Thomas Gleixner committed Mar 17, 2021
  17. tasklets: Replace spin wait in tasklet_unlock_wait()

    tasklet_unlock_wait() spin waits for TASKLET_STATE_RUN to be cleared. This
    is wasting CPU cycles in a tight loop which is especially painful in a
    guest when the CPU running the tasklet is scheduled out.
    
    tasklet_unlock_wait() is invoked from tasklet_kill() which is used in
    teardown paths and not performance critical at all. Replace the spin wait
    with wait_var_event().
    
    There are no users of tasklet_unlock_wait() which are invoked from atomic
    contexts. The usage in tasklet_disable() has been replaced temporarily with
    the spin waiting variant until the atomic users are fixed up and will be
    converted to the sleep wait variant later.
    
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084241.783936921@linutronix.de
    Peter Zijlstra authored and Thomas Gleixner committed Mar 17, 2021
  18. tasklets: Use spin wait in tasklet_disable() temporarily

    To ease the transition use spin waiting in tasklet_disable() until all
    usage sites from atomic context have been cleaned up.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084241.685352806@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  19. tasklets: Provide tasklet_disable_in_atomic()

    Replacing the spin wait loops in tasklet_unlock_wait() with
    wait_var_event() is not possible as a handful of tasklet_disable()
    invocations are happening in atomic context. All other invocations are in
    teardown paths which can sleep.
    
    Provide tasklet_disable_in_atomic() and tasklet_unlock_spin_wait() to
    convert the few atomic use cases over, which allows changing
    tasklet_disable() and tasklet_unlock_wait() in a later step.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084241.563164193@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  20. tasklets: Use static inlines for stub implementations

    Inlines exist for a reason.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084241.407702697@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  21. tasklets: Replace barrier() with cpu_relax() in tasklet_unlock_wait()

    A barrier() in a tight loop which waits for something to happen on a remote
    CPU is a pointless exercise. Replace it with cpu_relax() which allows HT
    siblings to make progress.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210309084241.249343366@linutronix.de
    Thomas Gleixner committed Mar 17, 2021
  22. softirq: s/BUG/WARN_ONCE/ on tasklet SCHED state not set

    Replace BUG() with WARN_ONCE() on wrong tasklet state, in order to:
    
     - increase the verbosity / aid in debugging
     - avoid fatal/unrecoverable state
    
    Suggested-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
    Signed-off-by: Eugeniu Rosca <erosca@de.adit-jv.com>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Link: https://lore.kernel.org/r/20210317102012.32399-1-erosca@de.adit-jv.com
    dirkbehme authored and Ingo Molnar committed Mar 17, 2021

Commits on Mar 16, 2021

  1. genirq: Fix typos and misspellings in comments

    No functional change.
    
    Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/r/20210316100205.23492-1-krzysztof.kozlowski@canonical.com
    krzk authored and Thomas Gleixner committed Mar 16, 2021
  2. tasklet: Remove tasklet_kill_immediate

    Ever since RCU was converted to softirq, it has no users.
    
    Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Link: https://lore.kernel.org/r/20210306213658.12862-1-dave@stgolabs.net
    Davidlohr Bueso authored and Thomas Gleixner committed Mar 16, 2021

Commits on Mar 6, 2021

  1. genirq: Add IRQF_NO_AUTOEN for request_irq/nmi()

    Many drivers don't want interrupts enabled automatically via
    request_irq(). They handle this in one of two ways:
    
    (1)
      irq_set_status_flags(irq, IRQ_NOAUTOEN);
      request_irq(dev, irq...);
    
    (2)
      request_irq(dev, irq...);
      disable_irq(irq);
    
    The code in the second variant is silly and unsafe. In the small time
    gap between request_irq() and disable_irq(), interrupts can still
    arrive.
    
    The code in the first variant is safe, though suboptimal.
    
    Add a new IRQF_NO_AUTOEN flag which can be handed in by drivers to
    request_irq() and request_nmi(). It prevents the automatic enabling of the
    requested interrupt/nmi in the same safe way as #1 above. With that the
    various usage sites of #1 and #2 above can be simplified and corrected.
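    The difference in semantics can be modelled in userspace. This is a
    hedged sketch, not the kernel's request_irq(): the flag value, struct
    and function names below are illustrative. The point is that with the
    flag, the request itself leaves the line disabled, so there is no window
    in which an interrupt can fire before a separate disable_irq() call.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag value; the real IRQF_NO_AUTOEN lives in
 * include/linux/interrupt.h with a different encoding. */
#define IRQF_NO_AUTOEN	(1U << 0)

struct irq_model {
	bool enabled;
};

/* Model of the request: without the flag, the interrupt is enabled as
 * part of the request, which is exactly the unsafe gap of pattern (2). */
static int request_irq_model(struct irq_model *irq, unsigned int flags)
{
	irq->enabled = !(flags & IRQF_NO_AUTOEN);
	return 0;
}

/* The driver enables the line later, once it is ready to handle it. */
static void enable_irq_model(struct irq_model *irq)
{
	irq->enabled = true;
}
```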
    
    Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Cc: dmitry.torokhov@gmail.com
    Link: https://lore.kernel.org/r/20210302224916.13980-2-song.bao.hua@hisilicon.com
    Barry Song authored and Ingo Molnar committed Mar 6, 2021
  2. Linux 5.12-rc2

    torvalds committed Mar 6, 2021
  3. Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
    
    Pull rdma fixes from Jason Gunthorpe:
     "Nothing special here, though Bob's regression fixes for rxe would have
      made it before the rc cycle had there not been such strong winter
      weather!
    
       - Fix corner cases in the rxe reference counting cleanup that are
         causing regressions in blktests for SRP
    
       - Two kdoc fixes so W=1 is clean
    
       - Missing error return in error unwind for mlx5
    
       - Wrong lock type nesting in IB CM"
    
    * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
      RDMA/rxe: Fix errant WARN_ONCE in rxe_completer()
      RDMA/rxe: Fix extra deref in rxe_rcv_mcast_pkt()
      RDMA/rxe: Fix missed IB reference counting in loopback
      RDMA/uverbs: Fix kernel-doc warning of _uverbs_alloc
      RDMA/mlx5: Set correct kernel-doc identifier
      IB/mlx5: Add missing error code
      RDMA/rxe: Fix missing kconfig dependency on CRYPTO
      RDMA/cm: Fix IRQ restore in ib_send_cm_sidr_rep
    torvalds committed Mar 6, 2021
  4. Merge tag 'gcc-plugins-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
    
    Pull gcc-plugins fixes from Kees Cook:
     "Tiny gcc-plugin fixes for v5.12-rc2. These issues are small but have
      been reported a couple times now by static analyzers, so best to get
      them fixed to reduce the noise. :)
    
       - Fix coding style issues (Jason Yan)"
    
    * tag 'gcc-plugins-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
      gcc-plugins: latent_entropy: remove unneeded semicolon
      gcc-plugins: structleak: remove unneeded variable 'ret'
    torvalds committed Mar 6, 2021