mon/OSDMonitor: drop stale failure_info
failure_info keeps strong references to the MOSDFailure messages
sent by osds or peon monitors. whenever the monitor starts handling
an MOSDFailure message, it registers the message in its OpTracker,
and the failure report message is unregistered when the monitor acks
it, either by canceling it or by replying to the reporters with a new
osdmap marking the target osd down. but if this never happens, the
failure reports just pile up in the OpTracker, the monitor considers
them slow ops, and they are reported as a SLOW_OPS health warning.

in theory, it does not take long to mark an unresponsive osd down if
we have enough reporters. but there is a chance that a reporter fails
to cancel its report before it reboots, and the monitor also fails
to collect enough reports to mark the target osd down. in that case
the target osd never gets an osdmap marking it down, so it won't send
an alive message to the monitor to fix this.

in this change, we check for stale failure info in tick() and simply
drop the stale reports, so that the messages can be released and
marked "done".

Fixes: https://tracker.ceph.com/issues/47380
Signed-off-by: Kefu Chai <kchai@redhat.com>
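
To make the lifecycle described above concrete, here is a minimal, self-contained sketch of the tick-driven cleanup. It is illustrative only, not the Ceph code: the FailureInfo struct, the plain std::map, the std::chrono clock, and the 20 s / 600 s thresholds are stand-ins for Ceph's failure_info_t, OpTracker, utime_t, and config machinery, and should be treated as assumptions.

// Minimal model of the tick-driven cleanup; types and thresholds are
// illustrative stand-ins, not the Ceph implementation.
#include <chrono>
#include <iostream>
#include <map>
#include <set>

using Clock = std::chrono::steady_clock;

struct FailureInfo {                  // stand-in for Ceph's failure_info_t
  Clock::time_point failed_since;     // when the first report arrived
  std::set<int> reporters;            // osds that reported the failure
};

std::map<int, FailureInfo> failure_info;   // target osd -> pending reports

// Called periodically (think tick()): entries pending longer than
// grace + stale are dropped so their tracked messages can be released.
void drop_stale_failures(Clock::time_point now,
                         Clock::duration grace,   // ~osd_heartbeat_grace (assumed)
                         Clock::duration stale)   // ~osd_heartbeat_stale (assumed)
{
  for (auto p = failure_info.begin(); p != failure_info.end(); ) {
    const auto& [target_osd, fi] = *p;
    if (now - fi.failed_since >= grace + stale) {
      std::cout << "dropping stale failure_info for osd." << target_osd
                << " from " << fi.reporters.size() << " reporters\n";
      p = failure_info.erase(p);      // forget the stale reports
    } else {
      ++p;                            // keep waiting for more reporters
    }
  }
}

int main() {
  auto now = Clock::now();
  failure_info[3] = {now - std::chrono::seconds(700), {1, 2}};  // stale
  failure_info[5] = {now - std::chrono::seconds(10), {4}};      // still fresh
  drop_stale_failures(now, std::chrono::seconds(20), std::chrono::seconds(600));
  std::cout << failure_info.size() << " entry left\n";          // prints "1 entry left"
}

Running it drops the entry that has been pending for ~700 s and keeps the fresh one, which mirrors what the patch makes check_failures() do on each pass.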
tchaikov committed Mar 19, 2021
1 parent 6e512b2 commit a124ee8
Showing 2 changed files with 20 additions and 1 deletion.
src/mon/OSDMonitor.cc (19 additions, 1 deletion)
@@ -3181,8 +3181,15 @@ bool OSDMonitor::check_failures(utime_t now)
     auto& [target_osd, fi] = *p;
     if (can_mark_down(target_osd)) {
       found_failure |= check_failure(now, target_osd, fi);
+      ++p;
+    } else if (is_failure_stale(now, fi)) {
+      dout(10) << " dropping stale failure_info for osd." << target_osd
+               << " from " << fi.reporters.size() << " reporters"
+               << dendl;
+      p = failure_info.erase(p);
+    } else {
+      ++p;
     }
-    ++p;
   }
   return found_failure;
 }
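
The reshuffled loop above is the standard erase-while-iterating pattern for node-based containers: std::map::erase(iterator) invalidates the erased iterator but returns the one following it, so the increment moves into the branches instead of running unconditionally at the bottom of the loop (hence the dropped trailing ++p). A tiny standalone illustration with a generic std::map rather than the OSDMonitor types:

#include <iostream>
#include <map>
#include <string>

int main() {
  std::map<int, std::string> m{{1, "keep"}, {2, "drop"}, {3, "keep"}};
  for (auto p = m.begin(); p != m.end(); ) {
    if (p->second == "drop") {
      p = m.erase(p);   // erase() returns the iterator to the next element
    } else {
      ++p;              // only advance when nothing was erased
    }
  }
  std::cout << m.size() << "\n";  // prints 2
}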
@@ -3286,6 +3293,17 @@ bool OSDMonitor::check_failure(utime_t now, int target_osd, failure_info_t& fi)
   return false;
 }
 
+bool OSDMonitor::is_failure_stale(utime_t now, failure_info_t& fi) const
+{
+  // if it takes too long to either cancel the report to mark the osd down,
+  // some reporters must have failed to cancel their reports. let's just
+  // forget these reports.
+  const utime_t failed_for = now - fi.get_failed_since();
+  auto heartbeat_grace = cct->_conf.get_val<int64_t>("osd_heartbeat_grace");
+  auto heartbeat_stale = cct->_conf.get_val<int64_t>("osd_heartbeat_stale");
+  return failed_for >= (heartbeat_grace + heartbeat_stale);
+}
+
 void OSDMonitor::force_failure(int target_osd, int by)
 {
   // already pending failure?
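
is_failure_stale() derives its cutoff from two existing options instead of adding a new one: a pending report is considered stale once it has been outstanding for at least osd_heartbeat_grace + osd_heartbeat_stale seconds. The sketch below restates that predicate with utime_t reduced to plain seconds; the 20 s and 600 s values are what I take to be the defaults of those options and should be treated as assumptions:

#include <cstdint>

// Assumed defaults for the two options the patch reads via get_val<int64_t>().
constexpr int64_t heartbeat_grace = 20;    // osd_heartbeat_grace, seconds (assumed)
constexpr int64_t heartbeat_stale = 600;   // osd_heartbeat_stale, seconds (assumed)

// Same comparison as is_failure_stale(), with the time delta as plain seconds.
constexpr bool stale_after(int64_t failed_for_secs) {
  return failed_for_secs >= heartbeat_grace + heartbeat_stale;
}

int main() {
  static_assert(!stale_after(600));   // still inside the window, keep tracking
  static_assert(stale_after(620));    // dropped on the next check_failures() pass
  return 0;
}

With those values, a report that was never canceled or acked stops being tracked, and stops surfacing as a SLOW_OPS warning, roughly ten minutes after it was first received.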
src/mon/OSDMonitor.h (1 addition, 0 deletions)
@@ -237,6 +237,7 @@ class OSDMonitor : public PaxosService,
   bool check_failures(utime_t now);
   bool check_failure(utime_t now, int target_osd, failure_info_t& fi);
   utime_t get_grace_time(utime_t now, int target_osd, failure_info_t& fi) const;
+  bool is_failure_stale(utime_t now, failure_info_t& fi) const;
   void force_failure(int target_osd, int by);
 
   bool _have_pending_crush();
