Unresponsive leader and lost DOWNs #30
Comments
It's certainly not a feature! :) I'm looking into it.
I have some ideas, but the trickiest part of the problem is that a netsplit occurs. Only a few of the gen_leader versions (e.g. garret-smith/gen_leader_revival) have some support for netsplits, and at least when I try this scenario with garret-smith's version, it doesn't seem to do the right thing. However, a few things come to mind:
@msadkov There exists an application to detect network splits for Mnesia and Hibari. It would need some customisation for gproc; nevertheless, it might be of help to you.
The application is here: https://github.com/hibari/partition-detector
The admin documentation is here: http://hibari.github.com/hibari-doc/hibari-sysadmin-guide.en.html#partition-detector
@uwiger @norton thank you for your replies! I'm aware of gen_leader/gproc not being able to handle net splits, so ...
I'm making some progress getting gproc to heal after netsplits, as well as doing proper monitoring. I don't have a solution for handling conflicts yet, and I need to fix some regression bugs. I'll keep you informed.
Thank you!
Going through the issues list, closing out issues. This one is not yet resolved, so I'll leave it open. Sorry about the delay.
Closing this issue. Feel free to try out the locks_leader branch, which should handle netsplits more robustly.
Hello,
I ran into an issue where a globally registered name is not automatically cleaned up by gproc if the name's owner process crashes while the leader is unresponsive. My guess is that this happens due to a lost DOWN notification. Steps to reproduce:
start two nodes dev1 and dev2 (dev1 is the leader):
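The exact commands aren't included in this copy; a minimal sketch of the kind of setup meant, going by gproc's documented gproc_dist setting (paths, hostnames and shell prompts are placeholders; gproc also has to be started on dev1):

    $ erl -sname dev1 -pa ebin -gproc gproc_dist all
    $ erl -sname dev2 -pa ebin -gproc gproc_dist all

    (dev2@localhost)1> net_adm:ping('dev1@localhost').   %% connect the nodes
    pong
    (dev2@localhost)2> application:start(gproc).          %% and on dev1 as well
    ok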
start a process on dev2 which registers a global name:
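The original snippet is missing here; something along these lines would match the description. The name the_name and the pid are purely illustrative, and the process exits after the 10 seconds mentioned in the next step:

    (dev2@localhost)3> Pid = spawn(fun() ->
                                       gproc:reg({n, g, the_name}),
                                       receive after 10000 -> exit(crash) end
                                   end).
    <0.88.0>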
within the 10-second timeout, send dev1 to the background by hitting CTRL+Z
registration is there:
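Presumably checked with gproc:where/1; illustrative output:

    (dev2@localhost)4> gproc:where({n, g, the_name}).
    <0.88.0>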
wait for dev1 to disappear from nodes():
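Once the net tick times out, dev1 is gone:

    (dev2@localhost)5> nodes().
    []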
registration is still there:
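Since where/1 hides the dead pid (see the last step below), the stale entry was presumably observed some other way; one option is to look at gproc's local ETS table directly (exact record layout omitted, and it may vary between versions):

    (dev2@localhost)6> ets:tab2list(gproc).
    %% the entry for {n,g,the_name} still points at the now-dead <0.88.0>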
gproc refuses to register it:
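Presumably something like this, with the registration attempt failing because the stale entry still owns the name:

    (dev2@localhost)7> gproc:reg({n, g, the_name}).
    ** exception error: bad argument
         in function  gproc:reg/1
         ...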
where/1 filters it out since it's a local pid:
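That is, the name looks free through the normal API even though it is still taken internally; illustrative call:

    (dev2@localhost)8> gproc:where({n, g, the_name}).
    undefined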
Is this behavior a bug or a feature? Is there a good way to cope with it?
Thank you!