ethdev: new Rx/Tx offloads API
This patch checks whether a requested offload is valid: any
requested offload must be supported in the device capabilities.
An offload is disabled by default unless it is set in the parameter
dev_conf->[rt]xmode.offloads to rte_eth_dev_configure() or
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If any offload is enabled in rte_eth_dev_configure() by the application,
it is enabled on all queues, no matter whether it is of per-queue or
per-port type and no matter whether it is set or cleared in
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If a per-queue offload hasn't been enabled in rte_eth_dev_configure(),
it can be enabled or disabled for an individual queue in
rte_eth_[rt]x_queue_setup().
A newly added offload is one which hasn't been enabled in
rte_eth_dev_configure() and is requested to be enabled in
rte_eth_[rt]x_queue_setup(); it must be of per-queue type,
otherwise an error log is triggered.
The underlying PMD must be aware that the offloads passed to its
specific queue_setup() function carry only those newly added
offloads of per-queue type.

This patch performs this checking in a common way in the rte_ethdev
layer, to avoid duplicating the same checks in each underlying PMD.
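The common check described above can be sketched in plain C. This is a minimal model, not the actual rte_ethdev code: the flag values and the `validate_queue_offloads()` helper are hypothetical, but the bitmask rule is the one the patch enforces — offloads newly requested at queue setup (i.e. not already enabled at configure time) must all fall within the per-queue capabilities.

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical offload flag values, for illustration only. */
#define OFFLOAD_CHECKSUM  (1ULL << 0)   /* per-queue capable */
#define OFFLOAD_SCATTER   (1ULL << 1)   /* per-queue capable */
#define OFFLOAD_VLAN      (1ULL << 2)   /* pure per-port */

/* Sketch of the common ethdev-layer check: any offload requested at
 * queue setup that was not already enabled in dev_configure() is a
 * "newly added" offload and must be per-queue capable. */
static int
validate_queue_offloads(uint64_t port_offloads,      /* enabled at configure */
                        uint64_t queue_offloads,     /* requested at queue setup */
                        uint64_t queue_offload_capa) /* per-queue capabilities */
{
	uint64_t new_offloads = queue_offloads & ~port_offloads;

	if (new_offloads & ~queue_offload_capa) {
		/* In the real ethdev layer this triggers an error log. */
		fprintf(stderr,
			"offloads 0x%" PRIx64 " are not per-queue capable\n",
			new_offloads & ~queue_offload_capa);
		return -1;
	}
	return 0;
}
```

Repeating a port-level offload at queue setup is harmless (it is masked out of `new_offloads`), which is why applications need not repeat pure per-port offloads.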

This patch assumes that all PMDs in 18.05-rc2 have already been
converted to the offload API defined in 17.11. It also assumes
that all PMDs can return correct offload capabilities
in rte_eth_dev_infos_get().

At the beginning of the [rt]x_queue_setup() of the underlying PMD,
add offloads = [rt]xconf->offloads |
dev->data->dev_conf.[rt]xmode.offloads; to keep the same behavior as
the offload API defined in 17.11, so that upper applications are not
broken by the offload API change.
A PMD can use the fact that the input [rt]xconf->offloads carries only
the newly added per-queue offloads to do some optimization or code
changes on top of this patch.
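That merge line can be modeled as a one-liner. The flag values below are hypothetical placeholders; the OR itself is exactly what each PMD's queue setup now does with `[rt]xconf->offloads` and `dev_conf.[rt]xmode.offloads`:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag values, for illustration only. */
#define RX_OFFLOAD_CHECKSUM (1ULL << 0)
#define RX_OFFLOAD_SCATTER  (1ULL << 1)

/* Sketch of the merge a PMD performs at the start of its queue setup:
 * port-level offloads enabled at configure time are combined with the
 * newly added per-queue offloads requested for this queue. */
static uint64_t
effective_rx_offloads(uint64_t port_offloads, uint64_t rxconf_offloads)
{
	return rxconf_offloads | port_offloads;
}
```

Because the merge is a plain OR, a queue can add per-queue offloads but can never clear an offload enabled at the port level.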

Signed-off-by: Wei Dai <wei.dai@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Davyalwaysin authored and Ferruh Yigit committed May 14, 2018
1 parent df428ce commit a4996bd
Showing 33 changed files with 261 additions and 1,396 deletions.
34 changes: 27 additions & 7 deletions doc/guides/prog_guide/poll_mode_drv.rst
Expand Up @@ -296,17 +296,37 @@ described in the mbuf API documentation and in the in :ref:`Mbuf Library
Per-Port and Per-Queue Offloads
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In the DPDK offload API, offloads are divided into per-port and per-queue offloads.
In the DPDK offload API, offloads are divided into per-port and per-queue offloads as follows:

* A per-queue offload can be enabled on one queue and disabled on another queue at the same time.
* A pure per-port offload is one supported by the device but not of per-queue type.
* A pure per-port offload can't be enabled on one queue and disabled on another queue at the same time.
* A pure per-port offload must be enabled or disabled on all queues at the same time.
* Any offload is of per-queue or pure per-port type, but can't be both on the same device.
* Port capabilities = per-queue capabilities + pure per-port capabilities.
* Any supported offload can be enabled on all queues.

The different offloads capabilities can be queried using ``rte_eth_dev_info_get()``.
The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all per-queue offload capabilities.
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offload capabilities.
Supported offloads can be either per-port or per-queue.

Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
Per-port offload configuration is set using ``rte_eth_dev_configure``.
Per-queue offload configuration is set using ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
To enable per-port offload, the offload should be set on both device configuration and queue setup.
In case of a mixed configuration the queue setup shall return with an error.
To enable per-queue offload, the offload can be set only on the queue setup.
Offloads which are not enabled are disabled by default.
Any offload requested by an application must be within the device capabilities.
Any offload is disabled by default unless it is set in the parameter
``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` or
``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()``.

If any offload is enabled in ``rte_eth_dev_configure()`` by an application,
it is enabled on all queues, no matter whether it is of per-queue or
per-port type and no matter whether it is set or cleared in
``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()``.

If a per-queue offload hasn't been enabled in ``rte_eth_dev_configure()``,
it can be enabled or disabled in ``rte_eth_[rt]x_queue_setup()`` for an individual queue.
A newly added offload in ``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()`` input by the application
is one which hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
in ``rte_eth_[rt]x_queue_setup()``. It must be of per-queue type, otherwise an error log is triggered.
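The rules above can be modeled in a few lines of plain C. This is a sketch, not the DPDK API: the ``port_model`` struct, queue count, and flag values are hypothetical, but it captures the semantics that configure-time offloads apply to every queue and queue setup can only add on top of them.

```c
#include <assert.h>
#include <stdint.h>

#define NB_QUEUES 4

/* Hypothetical model of a port's offload state. */
struct port_model {
	uint64_t port_offloads;              /* set at dev_configure() time */
	uint64_t queue_offloads[NB_QUEUES];  /* effective per-queue result */
};

/* Queue setup can add per-queue offloads for one queue, but an offload
 * enabled at the port level stays enabled on every queue regardless of
 * what the queue-level request says. */
static void
model_queue_setup(struct port_model *p, int q, uint64_t requested)
{
	p->queue_offloads[q] = requested | p->port_offloads;
}
```

For example, a port-level offload requested as cleared at queue setup remains enabled, while a per-queue offload requested on one queue is enabled on that queue only.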

For an application to use the Tx offloads API it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in ``rte_eth_txconf`` struct.
In such cases it is not required to set other flags in ``txq_flags``.
Expand Down
8 changes: 8 additions & 0 deletions doc/guides/rel_notes/release_18_05.rst
Expand Up @@ -369,6 +369,14 @@ API Changes
* ``rte_flow_create()`` API count action now requires the ``struct rte_flow_action_count``.
* ``rte_flow_query()`` API parameter changed from action type to action structure.

* ethdev: changes to offload API

A pure per-port offload no longer needs to be repeated in ``[rt]x_conf->offloads`` to
``rte_eth_[rt]x_queue_setup()``. Now any offload enabled in ``rte_eth_dev_configure()``
can't be disabled by ``rte_eth_[rt]x_queue_setup()``. Any newly added offload which has
not been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled in
``rte_eth_[rt]x_queue_setup()`` must be of per-queue type, otherwise an error log is triggered.


ABI Changes
-----------
Expand Down
5 changes: 4 additions & 1 deletion drivers/net/avf/avf_rxtx.c
Expand Up @@ -435,9 +435,12 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
uint16_t i, base, bsf, tc_mapping;
uint64_t offloads;

PMD_INIT_FUNC_TRACE();

offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;

if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
nb_desc > AVF_MAX_RING_DESC ||
nb_desc < AVF_MIN_RING_DESC) {
Expand Down Expand Up @@ -474,7 +477,7 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->free_thresh = tx_free_thresh;
txq->queue_id = queue_idx;
txq->port_id = dev->data->port_id;
txq->offloads = tx_conf->offloads;
txq->offloads = offloads;
txq->tx_deferred_start = tx_conf->tx_deferred_start;

/* Allocate software ring */
Expand Down
17 changes: 0 additions & 17 deletions drivers/net/bnxt/bnxt_ethdev.c
Expand Up @@ -501,25 +501,8 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
uint64_t tx_offloads = eth_dev->data->dev_conf.txmode.offloads;
uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;

if (tx_offloads != (tx_offloads & BNXT_DEV_TX_OFFLOAD_SUPPORT)) {
PMD_DRV_LOG
(ERR,
"Tx offloads requested 0x%" PRIx64 " supported 0x%x\n",
tx_offloads, BNXT_DEV_TX_OFFLOAD_SUPPORT);
return -ENOTSUP;
}

if (rx_offloads != (rx_offloads & BNXT_DEV_RX_OFFLOAD_SUPPORT)) {
PMD_DRV_LOG
(ERR,
"Rx offloads requested 0x%" PRIx64 " supported 0x%x\n",
rx_offloads, BNXT_DEV_RX_OFFLOAD_SUPPORT);
return -ENOTSUP;
}

bp->rx_queues = (void *)eth_dev->data->rx_queues;
bp->tx_queues = (void *)eth_dev->data->tx_queues;

Expand Down
50 changes: 5 additions & 45 deletions drivers/net/cxgbe/cxgbe_ethdev.c
Expand Up @@ -366,31 +366,15 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
{
struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
struct adapter *adapter = pi->adapter;
uint64_t unsupported_offloads, configured_offloads;
uint64_t configured_offloads;
int err;

CXGBE_FUNC_TRACE();
configured_offloads = eth_dev->data->dev_conf.rxmode.offloads;
if (!(configured_offloads & DEV_RX_OFFLOAD_CRC_STRIP)) {
dev_info(adapter, "can't disable hw crc strip\n");
configured_offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
}

unsupported_offloads = configured_offloads & ~CXGBE_RX_OFFLOADS;
if (unsupported_offloads) {
dev_err(adapter, "Rx offloads 0x%" PRIx64 " are not supported. "
"Supported:0x%" PRIx64 "\n",
unsupported_offloads, (uint64_t)CXGBE_RX_OFFLOADS);
return -ENOTSUP;
}

configured_offloads = eth_dev->data->dev_conf.txmode.offloads;
unsupported_offloads = configured_offloads & ~CXGBE_TX_OFFLOADS;
if (unsupported_offloads) {
dev_err(adapter, "Tx offloads 0x%" PRIx64 " are not supported. "
"Supported:0x%" PRIx64 "\n",
unsupported_offloads, (uint64_t)CXGBE_TX_OFFLOADS);
return -ENOTSUP;
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_CRC_STRIP;
}

if (!(adapter->flags & FW_QUEUE_BOUND)) {
Expand Down Expand Up @@ -440,23 +424,14 @@ int cxgbe_dev_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id)
int cxgbe_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
uint16_t queue_idx, uint16_t nb_desc,
unsigned int socket_id,
const struct rte_eth_txconf *tx_conf)
const struct rte_eth_txconf *tx_conf __rte_unused)
{
struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
struct adapter *adapter = pi->adapter;
struct sge *s = &adapter->sge;
struct sge_eth_txq *txq = &s->ethtxq[pi->first_qset + queue_idx];
int err = 0;
unsigned int temp_nb_desc;
uint64_t unsupported_offloads;

unsupported_offloads = tx_conf->offloads & ~CXGBE_TX_OFFLOADS;
if (unsupported_offloads) {
dev_err(adapter, "Tx offloads 0x%" PRIx64 " are not supported. "
"Supported:0x%" PRIx64 "\n",
unsupported_offloads, (uint64_t)CXGBE_TX_OFFLOADS);
return -ENOTSUP;
}

dev_debug(adapter, "%s: eth_dev->data->nb_tx_queues = %d; queue_idx = %d; nb_desc = %d; socket_id = %d; pi->first_qset = %u\n",
__func__, eth_dev->data->nb_tx_queues, queue_idx, nb_desc,
Expand Down Expand Up @@ -553,7 +528,7 @@ int cxgbe_dev_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
uint16_t queue_idx, uint16_t nb_desc,
unsigned int socket_id,
const struct rte_eth_rxconf *rx_conf,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
Expand All @@ -565,21 +540,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
unsigned int temp_nb_desc;
struct rte_eth_dev_info dev_info;
unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
uint64_t unsupported_offloads, configured_offloads;

configured_offloads = rx_conf->offloads;
if (!(configured_offloads & DEV_RX_OFFLOAD_CRC_STRIP)) {
dev_info(adapter, "can't disable hw crc strip\n");
configured_offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
}

unsupported_offloads = configured_offloads & ~CXGBE_RX_OFFLOADS;
if (unsupported_offloads) {
dev_err(adapter, "Rx offloads 0x%" PRIx64 " are not supported. "
"Supported:0x%" PRIx64 "\n",
unsupported_offloads, (uint64_t)CXGBE_RX_OFFLOADS);
return -ENOTSUP;
}

dev_debug(adapter, "%s: eth_dev->data->nb_rx_queues = %d; queue_idx = %d; nb_desc = %d; socket_id = %d; mp = %p\n",
__func__, eth_dev->data->nb_rx_queues, queue_idx, nb_desc,
Expand Down
16 changes: 0 additions & 16 deletions drivers/net/dpaa/dpaa_ethdev.c
Expand Up @@ -177,14 +177,6 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();

/* Rx offloads validation */
if (~(dev_rx_offloads_sup | dev_rx_offloads_nodis) & rx_offloads) {
DPAA_PMD_ERR(
"Rx offloads non supported - requested 0x%" PRIx64
" supported 0x%" PRIx64,
rx_offloads,
dev_rx_offloads_sup | dev_rx_offloads_nodis);
return -ENOTSUP;
}
if (dev_rx_offloads_nodis & ~rx_offloads) {
DPAA_PMD_WARN(
"Rx offloads non configurable - requested 0x%" PRIx64
Expand All @@ -193,14 +185,6 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}

/* Tx offloads validation */
if (~(dev_tx_offloads_sup | dev_tx_offloads_nodis) & tx_offloads) {
DPAA_PMD_ERR(
"Tx offloads non supported - requested 0x%" PRIx64
" supported 0x%" PRIx64,
tx_offloads,
dev_tx_offloads_sup | dev_tx_offloads_nodis);
return -ENOTSUP;
}
if (dev_tx_offloads_nodis & ~tx_offloads) {
DPAA_PMD_WARN(
"Tx offloads non configurable - requested 0x%" PRIx64
Expand Down
16 changes: 0 additions & 16 deletions drivers/net/dpaa2/dpaa2_ethdev.c
Expand Up @@ -309,14 +309,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();

/* Rx offloads validation */
if (~(dev_rx_offloads_sup | dev_rx_offloads_nodis) & rx_offloads) {
DPAA2_PMD_ERR(
"Rx offloads non supported - requested 0x%" PRIx64
" supported 0x%" PRIx64,
rx_offloads,
dev_rx_offloads_sup | dev_rx_offloads_nodis);
return -ENOTSUP;
}
if (dev_rx_offloads_nodis & ~rx_offloads) {
DPAA2_PMD_WARN(
"Rx offloads non configurable - requested 0x%" PRIx64
Expand All @@ -325,14 +317,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}

/* Tx offloads validation */
if (~(dev_tx_offloads_sup | dev_tx_offloads_nodis) & tx_offloads) {
DPAA2_PMD_ERR(
"Tx offloads non supported - requested 0x%" PRIx64
" supported 0x%" PRIx64,
tx_offloads,
dev_tx_offloads_sup | dev_tx_offloads_nodis);
return -ENOTSUP;
}
if (dev_tx_offloads_nodis & ~tx_offloads) {
DPAA2_PMD_WARN(
"Tx offloads non configurable - requested 0x%" PRIx64
Expand Down
19 changes: 0 additions & 19 deletions drivers/net/e1000/em_ethdev.c
Expand Up @@ -454,29 +454,10 @@ eth_em_configure(struct rte_eth_dev *dev)
{
struct e1000_interrupt *intr =
E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct rte_eth_dev_info dev_info;
uint64_t rx_offloads;
uint64_t tx_offloads;

PMD_INIT_FUNC_TRACE();
intr->flags |= E1000_FLAG_NEED_LINK_UPDATE;

eth_em_infos_get(dev, &dev_info);
rx_offloads = dev->data->dev_conf.rxmode.offloads;
if ((rx_offloads & dev_info.rx_offload_capa) != rx_offloads) {
PMD_DRV_LOG(ERR, "Some Rx offloads are not supported "
"requested 0x%" PRIx64 " supported 0x%" PRIx64,
rx_offloads, dev_info.rx_offload_capa);
return -ENOTSUP;
}
tx_offloads = dev->data->dev_conf.txmode.offloads;
if ((tx_offloads & dev_info.tx_offload_capa) != tx_offloads) {
PMD_DRV_LOG(ERR, "Some Tx offloads are not supported "
"requested 0x%" PRIx64 " supported 0x%" PRIx64,
tx_offloads, dev_info.tx_offload_capa);
return -ENOTSUP;
}

PMD_INIT_FUNC_TRACE();

return 0;
Expand Down
