
Commits on Jun 8, 2021

  1. net: ena: re-organize code to improve readability

    Restructure some ethtool functions into switch-case blocks to make them
    more uniform with other similar functions.
    Also restructure variable declarations to follow the reverse x-mas tree
    convention.
    
    Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021
  2. net: ena: Use dev_alloc() in RX buffer allocation

    Use dev_alloc() when allocating RX buffers instead of specifying the
    allocation flags explicitly. This results in the same behaviour with
    less code.
    
    Also move the page allocation and its DMA mapping into a function. This
    creates a logical block, which may help in understanding the code.
    
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021
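    A helper of the kind described above might look roughly like the
    following kernel-style sketch (the helper name, the use of
    dev_alloc_page(), and the error handling are illustrative assumptions,
    not the actual ENA code):

```
/* Hypothetical sketch: allocate an RX page and DMA-map it in one place,
 * so the RX refill path deals with a single logical block. */
static struct page *ena_alloc_map_page(struct ena_ring *rx_ring,
				       dma_addr_t *dma)
{
	struct page *page;

	/* dev_alloc_page() already supplies suitable allocation flags */
	page = dev_alloc_page();
	if (unlikely(!page))
		return NULL;

	*dma = dma_map_page(rx_ring->dev, page, 0, ENA_PAGE_SIZE,
			    DMA_BIDIRECTIONAL);
	if (unlikely(dma_mapping_error(rx_ring->dev, *dma))) {
		__free_page(page);
		return NULL;
	}

	return page;
}
```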
  3. net: ena: aggregate doorbell common operations into a function

    ena_ring_tx_doorbell() is introduced to ring the doorbell and
    increase the driver's corresponding stat.
    
    Signed-off-by: Ido Segev <idose@amazon.com>
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021
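    The aggregated helper could be sketched as below (the function and
    field names follow the ENA driver's conventions, but treat the details
    as an assumption rather than the exact patch):

```
/* Sketch: ring the TX doorbell and bump the corresponding stat together,
 * instead of open-coding both at every call site. */
static void ena_ring_tx_doorbell(struct ena_ring *tx_ring)
{
	ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
	ena_increase_stat(&tx_ring->tx_stats.doorbells, 1, &tx_ring->syncp);
}
```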
  4. net: ena: fix RST format in ENA documentation file

    The documentation file used to be written in markdown format but was
    converted to reStructuredText (rst).
    
    The converted file doesn't comply with RST format requirements, which
    results in hard-to-read text.
    
    This patch fixes the formatting of the file. The patch also:
    * Highlights and emphasizes some lines to improve readability
    * Rephrases some hard-to-understand text
    * Updates outdated function descriptions
    * Removes the TSO description, which falsely claims the driver supports it
    
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021
  5. net: ena: Remove module param and change message severity

    Remove the module param 'debug', which allows specifying the message
    level of the driver. This value can instead be specified using the
    ethtool command.
    Also reduce the message level of the LLQ support print to a warning,
    since it is not an indication of an error.
    
    Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021
  6. net: ena: add jiffies of last napi call to stats

    There are instances when we want to know, for debugging, when napi
    was last called.

    On stuck or heavily loaded CPUs, the ena napi handler might not be
    called for a long period of time. This stat can help us determine
    how much time has passed since the last execution of napi.
    
    Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021
  7. net: ena: use build_skb() in RX path

    This patch converts the RX path to use build_skb() for packets larger
    than copybreak (set to 256 by default). This function makes the first
    descriptor's page the linear part of the sk_buff struct buffer.

    Also remove the SKB description from the README, since most of it is no
    longer relevant and the parts that are left don't add information.
    
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021
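    The copybreak decision described above can be sketched as follows
    (the variable and helper names are illustrative, not the exact driver
    code):

```
/* Sketch of the RX path decision: copy small packets, build_skb() the rest */
if (len <= rx_ring->rx_copybreak) {
	/* small packet: memcpy into a freshly allocated linear skb */
	skb = ena_rx_copy_skb(rx_ring, buf_addr, len);
} else {
	/* large packet: the first descriptor's page becomes the skb's
	 * linear part, avoiding the copy */
	skb = build_skb(page_address(page), ENA_PAGE_SIZE);
}
```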
  8. net: ena: Improve error logging in driver

    Add prints to improve logging of driver's errors.
    
    Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021
  9. net: ena: Remove unused code

    The ENA_DEFAULT_MIN_RX_BUFF_ALLOC_SIZE macro, the
    ena_xdp_queues_present() function and the SUSPEND_RESUME enums aren't
    used in the driver, and so are not needed.
    
    Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
    Signed-off-by: Gal Pressman <galpress@amazon.com>
    Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021
  10. net: ena: optimize data access in fast-path code

    This patch tweaks several small places to improve data access in the
    fast path:
    
    * Remove duplicates of the first_interrupt flag and surround it with
      WRITE_ONCE/READ_ONCE macros:

      The flag is used to detect HW misbehaviour in its interrupt
      communication with the driver. The flag is set when an interrupt is
      received and is used in the health check function
      (ena_timer_service()) to help it find irregularities.
    
    * Reorder some fields in ena_napi struct to take better advantage of
      cache access pattern.
    
    * Move XDP TX queue number to a variable to save its calculation for
      every packet.
    
    * Use likely() in a condition to improve branch prediction
    
    The 'first_interrupt' and 'interrupt_masked' flags were moved to reside
    in the same cache line as the first fields of the 'napi' struct. This
    placement ensures that all memory accessed during the upper-half handler
    resides in the same cache line (napi_schedule_irqoff() only accesses the
    'state' and 'poll_list' fields, which are at the beginning of the napi
    struct).
    
    Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
    Signed-off-by: Shay Agroskin <shayagr@amazon.com>
    ShayAgros authored and intel-lab-lkp committed Jun 8, 2021

Commits on Jun 7, 2021

  1. Merge branch 'page_pool-recycling'

    Matteo Croce says:
    
    ====================
    page_pool: recycle buffers
    
    This is a respin of [1]
    
    This patchset shows the plans for allowing page_pool to handle and
    maintain DMA map/unmap of the pages it serves to the driver. For this
    to work a return hook in the network core is introduced.
    
    The overall purpose is to simplify drivers, by providing a page
    allocation API that does recycling, such that each driver doesn't have
    to reinvent its own recycling scheme. Using page_pool in a driver
    does not require implementing XDP support, but it makes it trivially
    easy to do so. Instead of allocating buffers specifically for SKBs
    we now allocate a generic buffer and either wrap it on an SKB
    (via build_skb) or create an XDP frame.
    The recycling code leverages the XDP recycle APIs.
    
    The Marvell mvpp2 and mvneta drivers are used in this patchset to
    demonstrate how to use the API, and tested on a MacchiatoBIN
    and EspressoBIN boards respectively.
    
    Please let this go in on a future -rc1 to allow enough time
    for wider testing.
    
    v7 -> v8:
    - use page->lru.next instead of page->index for pfmemalloc
    - remove conditional include
    - rework page_pool_return_skb_page() so to have less conversions
      between page and addresses, and call compound_head() only once
    - move some code from skb_free_head() to a new helper skb_pp_recycle()
    - misc fixes
    
    v6 -> v7:
    - refresh patches against net-next
    - remove a redundant call to virt_to_head_page()
    - update mvneta benchmarks
    
    v5 -> v6:
    - preserve pfmemalloc bit when setting signature
    - fix typo in mvneta
    - rebase on net-next with the new cache
    - don't clear the skb->pp_recycle in pskb_expand_head()
    
    v4 -> v5:
    - move the signature so it doesn't alias with page->mapping
    - use an invalid pointer as magic
    - incorporate Matthew Wilcox's changes for pfmemalloc pages
    - move the __skb_frag_unref() changes to a preliminary patch
    - refactor some cpp directives
    - only attempt recycling if skb->head_frag
    - clear skb->pp_recycle in pskb_expand_head()
    
    v3 -> v4:
    - store a pointer to page_pool instead of xdp_mem_info
    - drop a patch which reduces xdp_mem_info size
    - do the recycling in the page_pool code instead of xdp_return
    - remove some unused headers include
    - remove some useless forward declaration
    
    v2 -> v3:
    - added missing SOBs
    - CCed the MM people
    
    v1 -> v2:
    - fix a commit message
    - avoid setting pp_recycle multiple times on mvneta
    - squash two patches to avoid breaking bisect
    ====================
    
    Signed-off-by: David S. Miller <davem@davemloft.net>
    davem330 committed Jun 7, 2021
  2. mvneta: recycle buffers

    Use the new recycling API for page_pool.
    In a drop rate test, the packet rate increased by 10%,
    from 296 Kpps to 326 Kpps.
    
    perf top on a stock system shows:
    
    Overhead  Shared Object     Symbol
      23.66%  [kernel]          [k] __pi___inval_dcache_area
      22.85%  [mvneta]          [k] mvneta_rx_swbm
       7.54%  [kernel]          [k] kmem_cache_alloc
       6.49%  [kernel]          [k] eth_type_trans
       3.94%  [kernel]          [k] dev_gro_receive
       3.91%  [kernel]          [k] __netif_receive_skb_core
       3.91%  [kernel]          [k] kmem_cache_free
       3.76%  [kernel]          [k] page_pool_release_page
       3.56%  [kernel]          [k] free_unref_page
       2.40%  [kernel]          [k] build_skb
       1.49%  [kernel]          [k] skb_release_data
       1.45%  [kernel]          [k] __alloc_pages_bulk
       1.30%  [kernel]          [k] page_frag_free
    
    And this is the same output with recycling enabled:
    
    Overhead  Shared Object     Symbol
      26.41%  [kernel]          [k] __pi___inval_dcache_area
      25.00%  [mvneta]          [k] mvneta_rx_swbm
       8.14%  [kernel]          [k] kmem_cache_alloc
       6.84%  [kernel]          [k] eth_type_trans
       4.44%  [kernel]          [k] __netif_receive_skb_core
       4.38%  [kernel]          [k] kmem_cache_free
       4.16%  [kernel]          [k] dev_gro_receive
       3.21%  [kernel]          [k] page_pool_put_page
       2.41%  [kernel]          [k] build_skb
       1.82%  [kernel]          [k] skb_release_data
       1.61%  [kernel]          [k] napi_gro_receive
       1.25%  [kernel]          [k] page_pool_refill_alloc_cache
       1.16%  [kernel]          [k] __netif_receive_skb_list_core
    
    We can see that page_pool_release_page(), free_unref_page() and
    __alloc_pages_bulk() are no longer on top of the list when receiving
    traffic.
    
    The test was done with mausezahn on the TX side with 64 byte raw
    ethernet frames.
    
    Signed-off-by: Matteo Croce <mcroce@microsoft.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    teknoraver authored and davem330 committed Jun 7, 2021
  3. mvpp2: recycle buffers

    Use the new recycling API for page_pool.
    In a drop rate test, the packet rate is almost doubled,
    from 1110 Kpps to 2128 Kpps.
    
    perf top on a stock system shows:
    
    Overhead  Shared Object     Symbol
      34.88%  [kernel]          [k] page_pool_release_page
       8.06%  [kernel]          [k] free_unref_page
       6.42%  [mvpp2]           [k] mvpp2_rx
       6.07%  [kernel]          [k] eth_type_trans
       5.18%  [kernel]          [k] __netif_receive_skb_core
       4.95%  [kernel]          [k] build_skb
       4.88%  [kernel]          [k] kmem_cache_free
       3.97%  [kernel]          [k] kmem_cache_alloc
       3.45%  [kernel]          [k] dev_gro_receive
       2.73%  [kernel]          [k] page_frag_free
       2.07%  [kernel]          [k] __alloc_pages_bulk
       1.99%  [kernel]          [k] arch_local_irq_save
       1.84%  [kernel]          [k] skb_release_data
       1.20%  [kernel]          [k] netif_receive_skb_list_internal
    
    With packet rate stable at 1100 Kpps:
    
    tx: 0 bps 0 pps rx: 532.7 Mbps 1110 Kpps
    tx: 0 bps 0 pps rx: 532.6 Mbps 1110 Kpps
    tx: 0 bps 0 pps rx: 532.4 Mbps 1109 Kpps
    tx: 0 bps 0 pps rx: 532.1 Mbps 1109 Kpps
    tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps
    tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps
    
    And this is the same output with recycling enabled:
    
    Overhead  Shared Object     Symbol
      12.91%  [kernel]          [k] eth_type_trans
      12.54%  [mvpp2]           [k] mvpp2_rx
       9.67%  [kernel]          [k] build_skb
       9.63%  [kernel]          [k] __netif_receive_skb_core
       8.44%  [kernel]          [k] page_pool_put_page
       8.07%  [kernel]          [k] kmem_cache_free
       7.79%  [kernel]          [k] kmem_cache_alloc
       6.86%  [kernel]          [k] dev_gro_receive
       3.19%  [kernel]          [k] skb_release_data
       2.41%  [kernel]          [k] netif_receive_skb_list_internal
       2.18%  [kernel]          [k] page_pool_refill_alloc_cache
       1.76%  [kernel]          [k] napi_gro_receive
       1.61%  [kernel]          [k] kfree_skb
       1.20%  [kernel]          [k] dma_sync_single_for_device
       1.16%  [mvpp2]           [k] mvpp2_poll
       1.12%  [mvpp2]           [k] mvpp2_read
    
    With packet rate above 2100 Kpps:
    
    tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
    tx: 0 bps 0 pps rx: 1021 Mbps 2127 Kpps
    tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
    tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
    tx: 0 bps 0 pps rx: 1022 Mbps 2128 Kpps
    tx: 0 bps 0 pps rx: 1022 Mbps 2129 Kpps
    
    The major performance increase is explained by the fact that the most CPU
    consuming functions (page_pool_release_page, page_frag_free and
    free_unref_page) are no longer called on a per packet basis.
    
    The test was done by sending to the macchiatobin 64 byte ethernet frames
    with an invalid ethertype, so the packets are dropped early in the RX path.
    
    Signed-off-by: Matteo Croce <mcroce@microsoft.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    teknoraver authored and davem330 committed Jun 7, 2021
  4. page_pool: Allow drivers to hint on SKB recycling

    Up to now several high speed NICs have custom mechanisms of recycling
    the allocated memory they use for their payloads.
    Our page_pool API already has recycling capabilities that are always
    used when we are running in 'XDP mode'. So let's tweak the API and the
    kernel network stack slightly and allow the recycling to happen even
    during the standard operation.
    The API doesn't take into account 'split page' policies used by those
    drivers currently, but can be extended once we have users for that.
    
    The idea is to be able to intercept the packet on skb_release_data().
    If it's a buffer coming from our page_pool API recycle it back to the
    pool for further usage or just release the packet entirely.
    
    To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
    a field in struct page (page->pp) to store the page_pool pointer.
    Storing the information in page->pp allows us to recycle both SKBs and
    their fragments.
    We could have skipped the skb bit entirely, since identical information
    can be derived from struct page. However, in an effort to affect the
    free path as little as possible, reading a single bit in the skb, which
    is already in cache, is better than trying to derive the identical
    information from the data stored in the page.
    
    The driver or page_pool has to take care of the sync operations on its
    own during buffer recycling, since the buffer is never unmapped after
    opting in to the recycling.
    
    Since the gain on the drivers depends on the architecture, we are not
    enabling recycling by default when the page_pool API is used in a
    driver. In order to enable recycling, the driver must call
    skb_mark_for_recycle() to store the information we need for recycling
    in page->pp and enable the recycling bit, or page_pool_store_mem_info()
    for a fragment.
    
    Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Co-developed-by: Matteo Croce <mcroce@microsoft.com>
    Signed-off-by: Matteo Croce <mcroce@microsoft.com>
    Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    apalos authored and davem330 committed Jun 7, 2021
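    An opted-in RX path might look roughly like this sketch (the
    three-argument skb_mark_for_recycle() form follows this patch's
    description; the surrounding variable names are assumptions):

```
/* Sketch: build the skb on a page_pool page, then opt in to recycling so
 * skb_release_data() returns the page to the pool instead of freeing it. */
skb = build_skb(page_address(page), PAGE_SIZE);
if (skb)
	skb_mark_for_recycle(skb, page, pool);  /* sets skb->pp_recycle and
						 * stores the pool in page->pp */
```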
  5. skbuff: add a parameter to __skb_frag_unref

    This is a prerequisite patch; the next one enables recycling of
    skbs and fragments. Add an extra argument to __skb_frag_unref() to
    handle recycling, and update the current users of the function
    accordingly.
    
    Signed-off-by: Matteo Croce <mcroce@microsoft.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    teknoraver authored and davem330 committed Jun 7, 2021
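    The signature change can be summarized as below (a sketch of the
    described change; existing callers are updated mechanically):

```
/* Before */ void __skb_frag_unref(skb_frag_t *frag);
/* After  */ void __skb_frag_unref(skb_frag_t *frag, bool recycle);

/* Existing callers simply pass false to keep the old behaviour: */
__skb_frag_unref(&shinfo->frags[i], false);
```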
  6. mm: add a signature in struct page

    This is needed by the page_pool to avoid recycling a page not allocated
    via page_pool.
    
    The page->signature field is aliased to page->lru.next and
    page->compound_head, but it can't be set by mistake because the
    signature value is a bad pointer, and can't trigger a false positive
    in PageTail() because the last bit is 0.
    
    Co-developed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Signed-off-by: Matteo Croce <mcroce@microsoft.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    teknoraver authored and davem330 committed Jun 7, 2021
  7. net: moxa: Use devm_platform_get_and_ioremap_resource()

    Use devm_platform_get_and_ioremap_resource() to simplify
    code and avoid a null-ptr-deref by checking 'res' in it.
    
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yang Yingliang authored and davem330 committed Jun 7, 2021
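    The simplification looks roughly like this (a sketch of the generic
    pattern, not the moxa driver's exact code):

```
/* Before: two steps, and 'res' may be dereferenced while NULL */
res  = platform_get_resource(pdev, IORESOURCE_MEM, 0);
base = devm_ioremap_resource(&pdev->dev, res);

/* After: one helper that also returns 'res' and validates it internally */
base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
if (IS_ERR(base))
	return PTR_ERR(base);
```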
  8. l2tp: Fix spelling mistakes

    Fix some spelling mistakes in comments:
    negociated  ==> negotiated
    dont  ==> don't
    
    Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Zheng Yongjun authored and davem330 committed Jun 7, 2021
  9. net/ncsi: Fix spelling mistakes

    Fix some spelling mistakes in comments:
    constuct  ==> construct
    chanels  ==> channels
    Detination  ==> Destination
    
    Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Zheng Yongjun authored and davem330 committed Jun 7, 2021
  10. ipv4: Fix spelling mistakes

    Fix some spelling mistakes in comments:
    Dont  ==> Don't
    timout  ==> timeout
    incomming  ==> incoming
    necesarry  ==> necessary
    substract  ==> subtract
    
    Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Zheng Yongjun authored and davem330 committed Jun 7, 2021
  11. netlabel: Fix spelling mistakes

    Fix some spelling mistakes in comments:
    Interate  ==> Iterate
    sucess  ==> success
    
    Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
    Acked-by: Paul Moore <paul@paul-moore.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Zheng Yongjun authored and davem330 committed Jun 7, 2021
  12. net: micrel: check return value after calling platform_get_resource()

    It will cause a null-ptr-deref if platform_get_resource() returns NULL,
    so we need to check the return value.
    
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yang Yingliang authored and davem330 committed Jun 7, 2021
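    The missing check follows the usual pattern (a generic sketch, not the
    micrel driver's exact code):

```
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
	return -EINVAL;  /* without this, the later use of 'res'
			  * is a null-ptr-deref */
```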
  13. net: mvpp2: check return value after calling platform_get_resource()

    It will cause a null-ptr-deref if platform_get_resource() returns NULL,
    so we need to check the return value.
    
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yang Yingliang authored and davem330 committed Jun 7, 2021
  14. net: ethernet: bgmac: Use devm_platform_ioremap_resource_byname

    Use the devm_platform_ioremap_resource_byname() helper instead of
    calling platform_get_resource_byname() and devm_ioremap_resource()
    separately.
    
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yang Yingliang authored and davem330 committed Jun 7, 2021
  15. net: enetc: Use devm_platform_get_and_ioremap_resource()

    Use devm_platform_get_and_ioremap_resource() to simplify
    code.
    
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yang Yingliang authored and davem330 committed Jun 7, 2021
  16. net: macb: Use devm_platform_get_and_ioremap_resource()

    Use devm_platform_get_and_ioremap_resource() to simplify
    code.
    
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yang Yingliang authored and davem330 committed Jun 7, 2021
  17. net: bcmgenet: check return value after calling platform_get_resource()

    It will cause a null-ptr-deref if platform_get_resource() returns NULL,
    so we need to check the return value.
    
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Acked-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yang Yingliang authored and davem330 committed Jun 7, 2021
  18. net: tulip: Remove the repeated declaration

    Function 'pnic2_lnk_change' is declared twice, so remove the
    repeated declaration.
    
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Jakub Kicinski <kuba@kernel.org>
    Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    zhangshk authored and davem330 committed Jun 7, 2021
  19. net: mscc: ocelot: check return value after calling platform_get_resource()
    
    It will cause a null-ptr-deref if platform_get_resource() returns NULL,
    so we need to check the return value.
    
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yang Yingliang authored and davem330 committed Jun 7, 2021
  20. Merge branch 'hns3-error-handling'

    Guangbin Huang says:
    
    ====================
    net: hns3: refactors and decouples the error handling logic
    
    This patchset refactors and decouples the error handling logic from the
    reset logic; it is a preparatory patchset for the RAS feature. It mainly
    ensures that the reset logic remains independent of the error handling
    logic, which guarantees that the common miscellaneous MSI-X interrupt is
    re-enabled quickly.
    ====================
    
    Signed-off-by: David S. Miller <davem@davemloft.net>
    davem330 committed Jun 7, 2021
  21. net: hns3: remove now redundant logic related to HNAE3_UNKNOWN_RESET

    Earlier patches have decoupled the MSI-X conveyed error handling
    and recovery logic. This earlier concept code is no longer required.
    
    Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
    Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
    Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com>
    Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yufeng Mo authored and davem330 committed Jun 7, 2021
  22. net: hns3: add scheduling logic for error handling task

    Error handling & recovery is done in the context of the reset task, which
    gets scheduled from the misc interrupt handler in the existing code. But
    since error handling has been moved to a new task, that task should get
    scheduled from the interrupt handler instead of the reset task.
    
    Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com>
    Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
    Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
    Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    zhangjiaran authored and davem330 committed Jun 7, 2021
  23. net: hns3: add a separate error handling task

    Error handling and recovery logic are intertwined. Error handling (i.e.
    error identification, clearing error sources and initiation of recovery)
    is done in the context of the reset task. If certain hardware errors are
    delivered during driver init time, they can cause driver init/loading
    to fail.

    Introduce a separate error handling task to ensure the following:

    1. The reset logic remains independent of the error handling logic.
    2. hclge_errhand_task_schedule() is added to schedule error recovery
    tasks. This will ensure that the common miscellaneous MSI-X interrupt
    is re-enabled quickly.
    
    Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com>
    Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
    Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
    Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    zhangjiaran authored and davem330 committed Jun 7, 2021
  24. qed: Fix duplicate included linux/kernel.h

    Clean up the following includecheck warning:
    
    ./drivers/net/ethernet/qlogic/qed/qed_nvmetcp_fw_funcs.h: linux/kernel.h
    is included more than once.
    
    No functional change.
    
    Reported-by: Abaci Robot <abaci@linux.alibaba.com>
    Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Jiapeng Chong authored and davem330 committed Jun 7, 2021
  25. Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
    
    Tony Nguyen says:
    
    ====================
    100GbE Intel Wired LAN Driver Updates 2021-06-07
    
    This series contains updates to virtchnl header file and ice driver.
    
    Brett adds capability bits to virtchnl to specify whether a primary or
    secondary MAC address is being requested and adds the implementation to
    ice. He also adds storing of the VF MAC address so that it will be
    preserved across VM reboots, and refactors VF queue configuration to
    remove the expectation that configuration be done all at once.
    
    Krzysztof refactors ice_setup_rx_ctx() to remove configuration not
    related to Rx context into a new function, ice_vsi_cfg_rxq().
    
    Liwei Song extends the wait time for the global config timeout.
    
    Salil Mehta refactors code in ice_vsi_set_num_qs() to remove an
    unnecessary call when the user has requested a specific number of Rx or
    Tx queues.
    
    Jesse converts define macros to static inlines for NOP configurations.
    
    Jake adds messaging when devlink fails to read device capabilities and
    when pldmfw cannot find the requested firmware. Adds a wait for reset
    completion when reporting devlink info and reinitializes NVM during
    rebuild to ensure values are current.
    
    Ani adds detection and reporting of modules exceeding supported power
    levels and changes an error message to a debug message.
    
    Paul fixes a clang warning for deadcode.DeadStores.
    ====================
    
    Signed-off-by: David S. Miller <davem@davemloft.net>
    davem330 committed Jun 7, 2021