
Commit dcfe4d1

Chaitanya Kulkarni authored and gregkh committed
nvmet-tcp: fix race between ICReq handling and queue teardown
commit 5293a88 upstream.

nvmet_tcp_handle_icreq() updates queue->state after sending an
Initialization Connection Response (ICResp), but it does so without
serializing against target-side queue teardown.

If an NVMe/TCP host sends an Initialization Connection Request (ICReq)
and immediately closes the connection, target-side teardown may start in
softirq context before io_work drains the already buffered ICReq. In
that case, nvmet_tcp_schedule_release_queue() sets queue->state to
NVMET_TCP_Q_DISCONNECTING and drops the queue reference under
state_lock. If io_work later processes that ICReq,
nvmet_tcp_handle_icreq() can still overwrite the state back to
NVMET_TCP_Q_LIVE. That defeats the DISCONNECTING-state guard in
nvmet_tcp_schedule_release_queue() and allows a later socket state
change to re-enter teardown and issue a second kref_put() on an already
released queue.

The ICResp send failure path has the same problem. If teardown has
already moved the queue to DISCONNECTING, a send error can still
overwrite the state with NVMET_TCP_Q_FAILED, again reopening the window
for a second teardown path to drop the queue reference.

Fix this by serializing both post-send state transitions with state_lock
and bailing out if teardown has already started. Use -ESHUTDOWN as an
internal sentinel for that bail-out path rather than propagating it as a
transport error like -ECONNRESET. Keep nvmet_tcp_socket_error() setting
rcv_state to NVMET_TCP_RECV_ERR before honoring that sentinel so
receive-side parsing stays quiesced until the existing release path
completes.

Fixes: c46a6465bac2 ("nvmet-tcp: add NVMe over TCP target driver")
Cc: stable@vger.kernel.org
Reported-by: Shivam Kumar <skumar47@syr.edu>
Tested-by: Shivam Kumar <kumar.shivam43666@gmail.com>
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent e835249 commit dcfe4d1

1 file changed

Lines changed: 26 additions & 0 deletions


drivers/nvme/target/tcp.c

@@ -398,6 +398,19 @@ static void nvmet_tcp_build_pdu_iovec(struct nvmet_tcp_cmd *cmd)
 
 static void nvmet_tcp_fatal_error(struct nvmet_tcp_queue *queue)
 {
+	/*
+	 * Keep rcv_state at RECV_ERR even for the internal -ESHUTDOWN path.
+	 * nvmet_tcp_handle_icreq() can return -ESHUTDOWN after the ICReq has
+	 * already been consumed and queue teardown has started.
+	 *
+	 * If nvmet_tcp_data_ready() or nvmet_tcp_write_space() queues
+	 * nvmet_tcp_io_work() again before nvmet_tcp_release_queue_work()
+	 * cancels it, the queue must not keep that old receive state.
+	 * Otherwise the next nvmet_tcp_io_work() run can reach
+	 * nvmet_tcp_done_recv_pdu() and try to handle the same ICReq again.
+	 *
+	 * That is why queue->rcv_state needs to be updated before we return.
+	 */
 	queue->rcv_state = NVMET_TCP_RECV_ERR;
 	if (queue->nvme_sq.ctrl)
 		nvmet_ctrl_fatal_error(queue->nvme_sq.ctrl);
@@ -922,11 +935,24 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
 	iov.iov_len = sizeof(*icresp);
 	ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
 	if (ret < 0) {
+		spin_lock_bh(&queue->state_lock);
+		if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
+			spin_unlock_bh(&queue->state_lock);
+			return -ESHUTDOWN;
+		}
 		queue->state = NVMET_TCP_Q_FAILED;
+		spin_unlock_bh(&queue->state_lock);
 		return ret; /* queue removal will cleanup */
 	}
 
+	spin_lock_bh(&queue->state_lock);
+	if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
+		spin_unlock_bh(&queue->state_lock);
+		/* Tell nvmet_tcp_socket_error() teardown is in progress. */
+		return -ESHUTDOWN;
+	}
 	queue->state = NVMET_TCP_Q_LIVE;
+	spin_unlock_bh(&queue->state_lock);
 	nvmet_prepare_receive_pdu(queue);
 	return 0;
 }
