Unresponsive leader and lost DOWNs #30

msadkov opened this Issue Dec 4, 2012 · 8 comments



msadkov commented Dec 4, 2012


I ran into an issue where a globally registered name is not automatically cleaned up by gproc if the name owner process crashes while the leader is unresponsive. My guess is that this happens due to a lost DOWN notification. Steps to reproduce:

start two nodes dev1 and dev2 (dev1 is the leader):

(dev1@> application:start(gproc).
(dev1@> nodes().

(dev2@> application:start(gproc).
(dev2@> nodes().
(dev2@> gproc_dist:get_leader().

start a process on dev2 which registers a global name:

(dev2@> spawn(fun() -> gproc:add_global_name(foobar), timer:sleep(10000) end).

within the 10-second timeout, suspend dev1 by hitting CTRL+Z
registration is there:

(dev2@> gproc:select({g,n}, [{'_', [], ['$$']}]).

wait for dev1 to disappear from nodes:

(dev2@> nodes().

registration is still there:

(dev2@> gproc:select({g,n}, [{'_', [], ['$$']}]).

gproc refuses to register it:

(dev2@> gproc:add_global_name(foobar).
** exception error: bad argument
     in function  gproc:add_global_name/1
        called as gproc:add_global_name(foobar)

where/1 filters it out, since it's a dead local pid:

(dev2@> gproc:where({n,g,foobar}).
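(For anyone following along, the filtering described here can be illustrated roughly like this. This is a simplified sketch of the idea, not gproc's actual source: a name only resolves if its local pid is still alive.)

```erlang
%% Sketch (hypothetical helper, not gproc code): a registered pid on the
%% local node is only returned if it is still alive; a dead local pid is
%% filtered out as undefined. Remote pids cannot be checked cheaply and
%% are returned as-is.
where_sketch(Pid) when node(Pid) =:= node() ->
    case is_process_alive(Pid) of
        true  -> Pid;
        false -> undefined
    end;
where_sketch(Pid) ->
    Pid.
```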

is this behavior a bug or feature? is there a good way to cope with it?

Thank you!

uwiger commented Dec 5, 2012

It's certainly not a feature! :)

I'm looking into it.

uwiger commented Dec 5, 2012

I have some ideas, but the trickiest part of the problem is that a netsplit occurs. Only a few of the gen_leader versions (e.g. garret-smith/gen_leader_revival) have some support for netsplits, and at least when I try this scenario with garret-smith's version, it doesn't seem to do the right thing.

However, a few things come to mind:

  • gen_leader's recovery is further delayed after the node ping times out, unless -kernel dist_auto_connect is set to once or never. The reason is that each message to the unresponsive node triggers a new connection attempt, which then hangs for a while.
  • Until we have solid netsplit handling in both gen_leader and gproc, I recommend that you set up your own higher-level supervision. One way to do this is to set -kernel dist_auto_connect once, as mentioned above, then have a process on each node that periodically sends a UDP message to the other known (but not necessarily connected) nodes. If you receive a UDP message from a node that's not in the nodes() list, you have a netsplit situation. If you have no better strategy available, you can then restart the nodes that make up one of the 'islands'.
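The UDP heartbeat scheme described above could be sketched like this (hypothetical module, port number, and 5-second interval are all my own choices, not part of gproc or this thread):

```erlang
-module(split_detector).
-export([start/2]).

%% Sketch of the higher-level supervision idea: periodically send a UDP
%% ping to all known (but not necessarily connected) nodes. If we hear
%% from a node that is NOT in nodes(), we are likely in a netsplit.
start(Port, KnownNodes) ->
    spawn(fun() ->
        {ok, Sock} = gen_udp:open(Port, [binary, {active, true}]),
        loop(Sock, Port, KnownNodes)
    end).

loop(Sock, Port, KnownNodes) ->
    %% ping every known node's host
    [gen_udp:send(Sock, host_of(N), Port, term_to_binary({ping, node()}))
     || N <- KnownNodes, N =/= node()],
    receive
        {udp, Sock, _Ip, _InPort, Bin} ->
            case binary_to_term(Bin) of
                {ping, From} ->
                    case lists:member(From, nodes()) of
                        true  -> ok;  %% connected; all is well
                        false ->      %% reachable by UDP but not connected
                            error_logger:warning_msg(
                              "possible netsplit: ~p~n", [From])
                    end;
                _Other -> ok
            end
    after 5000 -> ok
    end,
    loop(Sock, Port, KnownNodes).

%% Extract the host part of a node name such as 'dev1@myhost'.
host_of(Node) ->
    [_Name, Host] = string:tokens(atom_to_list(Node), "@"),
    Host.
```

What to do on detection is policy: as suggested above, restarting the nodes of one 'island' is the fallback if no better conflict-resolution strategy is available.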
norton commented Dec 5, 2012

@msadkov There exists an application to detect network splits for mnesia and hibari. It would need some customisation for gproc. Nevertheless, it might be of help to you.

The application is here => https://github.com/hibari/partition-detector

The admin documentation is here => http://hibari.github.com/hibari-doc/hibari-sysadmin-guide.en.html#partition-detector

msadkov commented Dec 5, 2012

@uwiger @norton thank you for your replies! I'm aware that gen_leader/gproc can't handle netsplits, so -kernel dist_auto_connect was already set to once in this case (I should have mentioned that in my first post, sorry). That said, this situation doesn't look like a netsplit, but rather like a node going down (not immediately, but after a timeout), right? And after the unresponsive node disappears from the nodes list (which, as I understand it, means the connection was terminated), there is no timeout involved anymore, since I can get an immediate DOWN message by calling erlang:monitor on the dead pid still sitting in gproc's ets table.
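(For reference, the monitor behavior mentioned here is standard Erlang: calling erlang:monitor/2 on an already-dead local pid delivers a DOWN message immediately, with reason noproc. A plain shell session illustrates it:)

```erlang
1> Pid = spawn(fun() -> ok end).   %% process exits immediately
2> timer:sleep(100).               %% make sure it is dead
3> erlang:monitor(process, Pid).   %% monitor the dead pid
4> receive {'DOWN', _Ref, process, Pid, Reason} -> Reason
4> after 1000 -> no_down_received
4> end.
noproc
```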

uwiger commented Dec 5, 2012

I'm making some progress getting gproc to heal after netsplits, as well as doing proper monitoring. I don't have a solution for handling conflicts yet, and I need to fix some regression bugs. I'll keep you informed.

msadkov commented Dec 5, 2012

Thank you!

uwiger commented May 24, 2013

Going through the issues list, closing out issues. This one is not yet resolved, so I'll leave it open. Sorry about the delay.

uwiger commented May 29, 2014

Closing this issue. Feel free to try out the locks_leader branch which should handle netsplits more robustly.

@uwiger uwiger closed this May 29, 2014