dpif-netdev: Allow PMD auto load balance with cross-numa.
Previously, auto load balance did not trigger a reassignment when
there was any cross-numa polling, as an rxq could be polled from a
different numa after reassignment, which could invalidate the estimates.

In the case where there is only one numa with pmds available, the
same numa will always poll before and after reassignment, so estimates
are valid. Allow PMD auto load balance to trigger a reassignment in
this case.

Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: 0-day Robot <robot@bytheb.org>
kevintraynor authored and ovsrobot committed Mar 15, 2021
1 parent cdaa7e0 commit bc91edd
Showing 2 changed files with 19 additions and 6 deletions.
9 changes: 6 additions & 3 deletions Documentation/topics/dpdk/pmd.rst
@@ -237,9 +237,12 @@ If not set, the default variance improvement threshold is 25%.

 .. note::

-    PMD Auto Load Balancing doesn't currently work if queues are assigned
-    cross NUMA as actual processing load could get worse after assignment
-    as compared to what dry run predicts.
+    PMD Auto Load Balancing doesn't request a reassignment if queues are
+    assigned cross NUMA and there are multiple NUMA nodes available for
+    reassignment. This is because reassignment to a different NUMA node could
+    lead to an unpredictable change in processing cycles required for a queue.
+    However, if there is only one cross NUMA node available then a dry run and
+    possible request to reassign may continue as normal.

The minimum time between 2 consecutive PMD auto load balancing iterations can
also be configured by::
16 changes: 13 additions & 3 deletions lib/dpif-netdev.c
@@ -4887,6 +4887,12 @@ struct rr_numa {
     bool idx_inc;
 };

+static size_t
+rr_numa_list_count(struct rr_numa_list *rr)
+{
+    return hmap_count(&rr->numas);
+}
+
 static struct rr_numa *
 rr_numa_list_lookup(struct rr_numa_list *rr, int numa_id)
 {
@@ -5599,10 +5605,14 @@ get_dry_run_variance(struct dp_netdev *dp, uint32_t *core_list,
     for (int i = 0; i < n_rxqs; i++) {
         int numa_id = netdev_get_numa_id(rxqs[i]->port->netdev);
         numa = rr_numa_list_lookup(&rr, numa_id);
+        /* If there is no available pmd on the local numa but there is only one
+         * numa for cross-numa polling, we can estimate the dry run. */
+        if (!numa && rr_numa_list_count(&rr) == 1) {
+            numa = rr_numa_list_next(&rr, NULL);
+        }
         if (!numa) {
-            /* Abort if cross NUMA polling. */
-            VLOG_DBG("PMD auto lb dry run."
-                     " Aborting due to cross-numa polling.");
+            VLOG_DBG("PMD auto lb dry run. Aborting due to "
+                     "multiple numa nodes available for cross-numa polling.");
             goto cleanup;
         }

