net/rds: RDS connection does not reconnect after CQ access violation error

The sequence that leads to this state is as follows.

1) First we see a CQ error logged.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784371] mlx4_core
0000:46:00.0: CQ access violation on CQN 000419 syndrome=0x2
vendor_error_syndrome=0x0

2) That is followed by the drop of the associated RDS connection.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784403] RDS/IB: connection
<192.168.54.43,192.168.54.1,0> dropped due to 'qp event'

3) We don't get the WR_FLUSH_ERRs for the posted receive buffers after that.

4) RDS is stuck in rds_ib_conn_shutdown while shutting down that connection.

crash64> bt 62577
PID: 62577  TASK: ffff88143f045400  CPU: 4   COMMAND: "kworker/u224:1"
 #0 [ffff8813663bbb58] __schedule at ffffffff816ab68b
 #1 [ffff8813663bbbb0] schedule at ffffffff816abca7
 #2 [ffff8813663bbbd0] schedule_timeout at ffffffff816aee71
 #3 [ffff8813663bbc80] rds_ib_conn_shutdown at ffffffffa041f7d1 [rds_rdma]
 #4 [ffff8813663bbd10] rds_conn_shutdown at ffffffffa03dc6e2 [rds]
 #5 [ffff8813663bbdb0] rds_shutdown_worker at ffffffffa03e2699 [rds]
 #6 [ffff8813663bbe00] process_one_work at ffffffff8109cda1
 #7 [ffff8813663bbe50] worker_thread at ffffffff8109d92b
 #8 [ffff8813663bbec0] kthread at ffffffff810a304b
 #9 [ffff8813663bbf50] ret_from_fork at ffffffff816b0752
crash64>

It was stuck forever here in rds_ib_conn_shutdown:

                /* quiesce tx and rx completion before tearing down */
                while (!wait_event_timeout(rds_ib_ring_empty_wait,
                                rds_ib_ring_empty(&ic->i_recv_ring) &&
                                (atomic_read(&ic->i_signaled_sends) == 0),
                                msecs_to_jiffies(5000))) {

                        /* Try to reap pending RX completions every 5 secs */
                        if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
                                spin_lock_bh(&ic->i_rx_lock);
                                rds_ib_rx(ic);
                                spin_unlock_bh(&ic->i_rx_lock);
                        }
                }

The recv ring was not empty:
w_alloc_ptr = 560
w_free_ptr  = 256

This is what Mellanox had to say:
When a CQ moves to the error state (e.g. due to a CQ overrun or CQ access
violation), the FW will generate an async event to notify of this error. The
QPs that try to access this CQ will also be put into the error state, but
they will not be flushed, since we must not post CQEs to a broken CQ. A QP
that tries to access the CQ will also issue an async catastrophic-error
event.

In summary, we cannot wait for any more WR_FLUSH_ERRs in that state.

Orabug: 29180452

Reviewed-by: Rama Nichanamatlu <rama.nichanamatlu@oracle.com>
Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
Venkat Venkatsubra authored and gerd-rausch committed Jan 23, 2020
1 parent fbbedf3 commit 964cad6
Showing 2 changed files with 20 additions and 6 deletions.
net/rds/ib.h (1 addition, 0 deletions):

--- a/net/rds/ib.h
+++ b/net/rds/ib.h
@@ -53,6 +53,7 @@
 #define NUM_RDS_RECV_SG (PAGE_ALIGN(RDS_MAX_FRAG_SIZE) / PAGE_SIZE)
 
 #define RDS_IB_CLEAN_CACHE 1
+#define RDS_IB_CQ_ERR 2
 
 #define RDS_IB_DEFAULT_FREG_PORT_NUM 1
 #define RDS_CM_RETRY_SEQ_EN BIT(7)
net/rds/ib_cm.c (19 additions, 6 deletions):

--- a/net/rds/ib_cm.c
+++ b/net/rds/ib_cm.c
@@ -325,6 +325,7 @@ void rds_ib_cm_connect_complete(struct rds_connection *conn, struct rdma_cm_event
 
 	ic->i_sl = ic->i_cm_id->route.path_rec->sl;
 	atomic_set(&ic->i_cq_quiesce, 0);
+	ic->i_flags &= ~RDS_IB_CQ_ERR;
 
 	/*
 	 * Init rings and fill recv. this needs to wait until protocol negotiation
@@ -444,8 +445,15 @@ static void rds_ib_cm_fill_conn_param(struct rds_connection *conn,
 
 static void rds_ib_cq_event_handler(struct ib_event *event, void *data)
 {
-	rdsdebug("event %u (%s) data %p\n",
+	struct rds_connection *conn = data;
+	struct rds_ib_connection *ic = conn->c_transport_data;
+
+	pr_info("RDS/IB: event %u (%s) data %p\n",
 		 event->event, rds_ib_event_str(event->event), data);
+
+	ic->i_flags |= RDS_IB_CQ_ERR;
+	if (waitqueue_active(&rds_ib_ring_empty_wait))
+		wake_up(&rds_ib_ring_empty_wait);
 }
 
 static void rds_ib_cq_comp_handler_fastreg(struct ib_cq *cq, void *context)
@@ -1452,11 +1460,15 @@ void rds_ib_conn_path_shutdown(struct rds_conn_path *cp)
 
 	/* quiesce tx and rx completion before tearing down */
 	while (!wait_event_timeout(rds_ib_ring_empty_wait,
-			rds_ib_ring_empty(&ic->i_recv_ring) &&
-			(atomic_read(&ic->i_signaled_sends) == 0) &&
-			(atomic_read(&ic->i_fastreg_wrs) ==
-			 RDS_IB_DEFAULT_FREG_WR),
-			msecs_to_jiffies(5000))) {
+			(rds_ib_ring_empty(&ic->i_recv_ring) &&
+			 (atomic_read(&ic->i_signaled_sends) == 0) &&
+			 (atomic_read(&ic->i_fastreg_wrs) ==
+			  RDS_IB_DEFAULT_FREG_WR)) ||
+			(ic->i_flags & RDS_IB_CQ_ERR),
+			msecs_to_jiffies(5000))) {
+
+		if (ic->i_flags & RDS_IB_CQ_ERR)
+			break;
 
 		/* Try to reap pending RX completions every 5 secs */
 		if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
@@ -1470,6 +1482,7 @@ void rds_ib_conn_path_shutdown(struct rds_conn_path *cp)
 	tasklet_kill(&ic->i_rtasklet);
 
 	atomic_set(&ic->i_cq_quiesce, 1);
+	ic->i_flags &= ~RDS_IB_CQ_ERR;
 
 	/* first destroy the ib state that generates callbacks */
 	if (ic->i_cm_id->qp)
