commit e83ca34
Author: Aya Levin
Committer: Brian Maly
Date: Feb 22, 2022
net/mlx5e: Fix page DMA map/unmap attributes
The driver initiates the DMA sync, hence it may skip the CPU sync. Add
DMA_ATTR_SKIP_CPU_SYNC as an input attribute to both dma_map_page and
dma_unmap_page to avoid a redundant sync with the CPU.
When the device is forced to work with SWIOTLB, the extra sync can cause
data corruption: the driver unmaps the whole page while the hardware
used only part of the bounce buffer, so the sync overwrites the entire
page with a bounce buffer that only partially contains real data.
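As an illustrative sketch only (not the verbatim upstream diff; the
exact hunks live in the mlx5 RX path, and variable names below are
assumed), the fix pattern is to switch from the plain DMA API calls to
their _attrs variants:

```c
/* Hypothetical sketch of the fix pattern. dma_map_page(dev, p, off,
 * sz, dir) behaves like dma_map_page_attrs(dev, p, off, sz, dir, 0);
 * passing DMA_ATTR_SKIP_CPU_SYNC suppresses the implicit CPU/bounce-
 * buffer sync, which the driver performs explicitly when needed.
 */
dma_addr_t addr = dma_map_page_attrs(pdev, page, 0, PAGE_SIZE,
				     rq->buff.map_dir,
				     DMA_ATTR_SKIP_CPU_SYNC);
/* ... */
dma_unmap_page_attrs(pdev, addr, PAGE_SIZE, rq->buff.map_dir,
		     DMA_ATTR_SKIP_CPU_SYNC);
```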

Fixes: bc77b24 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
Fixes: db05815 ("net/mlx5e: Add XSK zero-copy support")
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
(cherry picked from commit 0b7cfa4)

Conflicts:
    drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c

The corresponding file is not present in the UEK6 code base; the
equivalent file in UEK6 is
    drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.c

The XDP path will be addressed in a follow-up patch, given the urgency
of fixing the panic. It was verified that XSK zero-copy mode works well
with SWIOTLB (no panic is seen with an XSK workload) even without the
DMA_ATTR_SKIP_CPU_SYNC flag, albeit with a slight CPU-cycle cost for
syncing the bounce buffers.

Unlike the XDP path, the data corruption is only seen in the normal skb
path, because a page may be shared by multiple skb frags that belong to
different network packets. If one of the packets sharing the page
happens to be consumed and then recycled ahead of another, data
corruption occurs in SWIOTLB mode: the dma_unmap_page() call that syncs
the SWIOTLB bounce buffer back overwrites the memory of the other
packets still sharing the page. The XDP umem case, by contrast, is
single-producer/single-consumer, and a page is used exclusively by one
umem buffer for a single packet. There is no page sharing between
packet buffers, and the dma_unmap_page() call takes place only when the
XDP socket is closed. The dma_map_page() call is not relevant either,
as it is only used to prepare the buffer.

Orabug: 33382242

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Qing Huang <qing.huang@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>