mon: handle monitor lag when killing mgrs #18268

Merged: 1 commit merged into ceph:master on Nov 1, 2017

Conversation

@jcsp (Contributor) commented Oct 12, 2017

This is a shameless copy of what MDSMonitor does to handle the same situation.

Fixes: http://tracker.ceph.com/issues/20629
Signed-off-by: John Spray <john.spray@redhat.com>

```cpp
dout(4) << __func__ << ": resetting beacon timeouts due to mon delay "
           "(slow election?) of " << now - last_tick << " seconds" << dendl;
for (auto &i : last_beacon) {
  i.second = now;
```
@tchaikov (Contributor) commented Oct 13, 2017

I don't think we should be so generous as to update all mgrs' beacon timestamps. Instead, I think we might want to use `last_tick + mgr_tick_period - mgr_beacon_grace` as the cutoff value, to offset some of the time lost through our own fault (slowness).
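
For concreteness, here is a minimal sketch of the cutoff computation this comment proposes, using plain std::chrono stand-ins rather than Ceph's utime_t; the config values and the lag-detection threshold are assumptions for illustration, not the actual MgrMonitor code:

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// Assumed values standing in for the mgr_tick_period and
// mgr_beacon_grace config options discussed in this thread.
constexpr auto mgr_tick_period  = std::chrono::seconds(2);
constexpr auto mgr_beacon_grace = std::chrono::seconds(30);

// Beacons older than the returned cutoff would mark a mgr as failed.
Clock::time_point compute_cutoff(Clock::time_point now,
                                 Clock::time_point last_tick) {
  if (now - last_tick > mgr_tick_period * 2) {
    // Mon was laggy: start the grace window from where the mon should
    // have ticked, so mgrs are not blamed for the mon's own slowness.
    return last_tick + mgr_tick_period - mgr_beacon_grace;
  }
  // Normal case: anything not heard from within the grace is stale.
  return now - mgr_beacon_grace;
}
```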

@jcsp (Contributor, Author) replied:

Hmm, the idea is that if we have gone slow, there are probably MgrBeacon messages queued up on peons waiting to be sent through -- resetting the last_beacons to now means we then give them a full mgr_beacon_grace to forward those beacons (assuming the beacons should arrive soon, now that the mons are back online).

If we instead used an offset of (now - last_tick) - mgr_tick_period to account for the slowness with a specific number of seconds, we would only be accounting for it this time through tick(), and would be strict again the next time through tick() -- the effect would be to allow only mgr_tick_period for the next beacons to arrive, instead of mgr_beacon_grace.

Because we see this behaviour on systems that are slow/laggy, I think it's better to use the more generous approach, to allow for peons that are also being slow to forward messages after the cluster recovers (they're swapping a lot, or something).
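
To make the timeline concrete, here is a rough sketch of the behaviour being defended here, with simplified stand-in types and assumed config values (this is not the actual MgrMonitor::tick() code):

```cpp
#include <chrono>
#include <map>
#include <string>

using Clock = std::chrono::steady_clock;

constexpr auto mgr_tick_period  = std::chrono::seconds(2);   // assumed
constexpr auto mgr_beacon_grace = std::chrono::seconds(30);  // assumed

void tick(Clock::time_point now,
          Clock::time_point &last_tick,
          std::map<std::string, Clock::time_point> &last_beacon) {
  if (now - last_tick > mgr_tick_period * 2) {
    // Mon delay (slow election?): reset every beacon timestamp, as in
    // the diff above, so each mgr gets a full mgr_beacon_grace to get
    // its queued beacon through once the mons are responsive again.
    for (auto &i : last_beacon) {
      i.second = now;
    }
  }
  last_tick = now;

  const auto cutoff = now - mgr_beacon_grace;
  for (const auto &i : last_beacon) {
    if (i.second < cutoff) {
      // mgr missed its grace window: it would be marked failed here
    }
  }
}
```

Because the reset persists in last_beacon rather than being a one-shot adjustment to the cutoff, the full grace still applies on every subsequent tick(), which is the distinction being made above.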

@tchaikov (Contributor) commented Nov 1, 2017
Ahh, right! Thanks for your explanation -- makes sense to me. The next time the laggy mon ticks, the innocent mgr(s) wouldn't survive with the fix I proposed, but they will with yours.

@tchaikov merged commit f225a32 into ceph:master on Nov 1, 2017
@tchaikov (Contributor) commented Nov 1, 2017

I am adding backport=luminous to the tracker ticket, as I think luminous also suffers from this problem.
