
hv_netvsc: common detach logic
Make a common function for detaching the internals of the device
during changes to MTU and RSS. Make sure no more packets are
transmitted and all packets have been received before doing device
teardown.

Change the wait logic to be common and use usleep_range().
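
As a rough illustration of that wait, the sketch below polls per-channel
send counters with usleep_range(). The helper name and the example
structure are assumptions for illustration only, not the code this
commit adds (the real driver tracks queue_sends per channel in
net_device->chan_table[]).

/*
 * Minimal sketch only: a drain wait in the spirit of the description
 * above.  example_channel and example_wait_until_empty() are
 * hypothetical names, not part of the hv_netvsc driver.
 */
#include <linux/atomic.h>
#include <linux/delay.h>	/* usleep_range() */

struct example_channel {
	atomic_t queue_sends;	/* sends posted but not yet completed */
};

/* Poll until every channel has completed its outstanding sends.
 * usleep_range() sleeps, so this must not be called from atomic
 * context; sleeping gives the host time to post the remaining
 * completions.
 */
static void example_wait_until_empty(struct example_channel *chans,
				     unsigned int num_chans)
{
	unsigned int i;

	for (i = 0; i < num_chans; i++)
		while (atomic_read(&chans[i].queue_sends) > 0)
			usleep_range(1000, 2000);
}

In general, usleep_range() suits short waits like this better than
msleep(): it is hrtimer based, and the min/max range lets the kernel
coalesce the wakeup with other pending timers.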

Change the transmit enabling logic so that transmit queues are
disabled while the lower device is being changed, and re-enabled only
after the sub-channels are set up. This avoids the case where a packet
could be sent while a sub-channel was not yet initialized.
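
That queue sequencing might look roughly like the sketch below. This is
not the commit's actual code path; example_change_mtu() and
example_recreate_lower_device() are hypothetical names standing in for
the driver's detach/re-attach sequence, with error handling and
rollback omitted.

#include <linux/netdevice.h>

/* Hypothetical helper standing in for tearing down and re-creating the
 * RNDIS/VMBus device with the new settings.
 */
int example_recreate_lower_device(struct net_device *ndev, int new_mtu);

static int example_change_mtu(struct net_device *ndev, int new_mtu)
{
	int ret;

	/* Keep the stack from handing us new packets while the lower
	 * device is torn down and re-created.
	 */
	netif_tx_disable(ndev);

	ret = example_recreate_lower_device(ndev, new_mtu);
	if (ret)
		return ret;

	/* Wake the transmit queues only after all sub-channels are
	 * open, so no packet can be queued to an uninitialized
	 * sub-channel.
	 */
	netif_tx_wake_all_queues(ndev);
	return 0;
}

netif_tx_disable() stops every transmit queue under the queue lock, so
new packets stop flowing into the driver before the teardown starts.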

Fixes: 8195b13 ("hv_netvsc: fix deadlock on hotplug")
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
shemminger authored and davem330 committed Mar 22, 2018
1 parent 0ef58b0 commit 7b2ee50
Showing 4 changed files with 173 additions and 143 deletions.
1 change: 0 additions & 1 deletion drivers/net/hyperv/hyperv_net.h
@@ -212,7 +212,6 @@ void netvsc_channel_cb(void *context);
 int netvsc_poll(struct napi_struct *napi, int budget);
 
 void rndis_set_subchannel(struct work_struct *w);
-bool rndis_filter_opened(const struct netvsc_device *nvdev);
 int rndis_filter_open(struct netvsc_device *nvdev);
 int rndis_filter_close(struct netvsc_device *nvdev);
 struct netvsc_device *rndis_filter_device_add(struct hv_device *dev,
20 changes: 11 additions & 9 deletions drivers/net/hyperv/netvsc.c
@@ -555,8 +555,6 @@ void netvsc_device_remove(struct hv_device *device)
 		= rtnl_dereference(net_device_ctx->nvdev);
 	int i;
 
-	cancel_work_sync(&net_device->subchan_work);
-
 	netvsc_revoke_buf(device, net_device);
 
 	RCU_INIT_POINTER(net_device_ctx->nvdev, NULL);
@@ -643,14 +641,18 @@ static void netvsc_send_tx_complete(struct netvsc_device *net_device,
 	queue_sends =
 		atomic_dec_return(&net_device->chan_table[q_idx].queue_sends);
 
-	if (net_device->destroy && queue_sends == 0)
-		wake_up(&net_device->wait_drain);
+	if (unlikely(net_device->destroy)) {
+		if (queue_sends == 0)
+			wake_up(&net_device->wait_drain);
+	} else {
+		struct netdev_queue *txq = netdev_get_tx_queue(ndev, q_idx);
 
-	if (netif_tx_queue_stopped(netdev_get_tx_queue(ndev, q_idx)) &&
-	    (hv_ringbuf_avail_percent(&channel->outbound) > RING_AVAIL_PERCENT_HIWATER ||
-	    queue_sends < 1)) {
-		netif_tx_wake_queue(netdev_get_tx_queue(ndev, q_idx));
-		ndev_ctx->eth_stats.wake_queue++;
+		if (netif_tx_queue_stopped(txq) &&
+		    (hv_ringbuf_avail_percent(&channel->outbound) > RING_AVAIL_PERCENT_HIWATER ||
+		    queue_sends < 1)) {
+			netif_tx_wake_queue(txq);
+			ndev_ctx->eth_stats.wake_queue++;
+		}
 	}
 }
 
