Clusterwide lock mode #47
This architecture would need to perform some kind of durable state management that can handle nodes and workers going down, including the master. I'm sceptical that this would be orders of magnitude faster than the same thing implemented in Postgres, though I have no data to back up that feeling. How do you intend the state management and failure detection to work with this design?
Nope, no durable state management is required. If a node goes down, the dispatcher receives a down message and simply retries the job on a new node.
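To make the retry-on-down behaviour concrete, here is a minimal Python simulation of the mechanism described above. This is an illustrative sketch, not Rihanna's actual code: the `Dispatcher` class, `dispatch`, and `node_down` names are all hypothetical, and the Erlang monitor/DOWN machinery is reduced to a plain method call.

```python
# Hypothetical sketch: a dispatcher tracks which node each job was sent
# to; on a "down" message for a node it re-dispatches that node's
# in-flight jobs to the surviving nodes (at-least-once semantics).

class Dispatcher:
    def __init__(self, nodes):
        self.nodes = list(nodes)   # currently-alive nodes
        self.in_flight = {}        # node -> set of job ids sent to it

    def dispatch(self, job_id):
        # Pick a node (trivial placement for illustration only).
        node = self.nodes[hash(job_id) % len(self.nodes)]
        self.in_flight.setdefault(node, set()).add(job_id)
        return node

    def node_down(self, node):
        # Stand-in for an Erlang DOWN message: assume none of the
        # node's jobs completed and retry them on the remaining nodes.
        self.nodes.remove(node)
        for job_id in self.in_flight.pop(node, set()):
            self.dispatch(job_id)

d = Dispatcher(["a", "b"])
d.dispatch(1)
d.dispatch(2)
d.node_down("a")
# Every job is now tracked against a surviving node; no durable
# state was needed to recover.
assert set().union(*d.in_flight.values()) == {1, 2}
```

Note that a job that actually finished on the dead node just before the DOWN message would be dispatched again, which is exactly the duplicate-execution possibility discussed later in the thread.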
What happens when the global lock process goes down? What happens when a node is isolated from the global lock process by a network partition?
First scenario: a new singleton will be booted, which will re-acquire the global lock and start reading jobs again. Some jobs may be executed twice. Second scenario: Erlang's built-in monitoring will detect the network partition, interpret that node as down, and assume none of the jobs it was running have been executed. The global lock process will re-dispatch those jobs to a node that is alive. Some jobs may be executed twice.
What happens to the workers? Are they all killed by the exit from the global lock process? In cloud environments network partitions are common (Erlang was designed for more reliable networks), so this may cause some disruption. I'm not sure how fast global links are; it would be cool to test this. In the network partition situation, if we're using global processes, we'll end up with at least two nodes running the global lock process. Would this be safe? If we're still running the same SQL query against the database it would be, but I'm unsure if that was the intention. All sounds fun so far :) I'd suggest that (at some point) it'd be worth doing some preliminary benchmarking so we can get a better understanding.
Workers on the partitioned node may continue to run, which is why some jobs may execute twice. There was always a conscious design choice in Rihanna to guarantee at-least-once execution, hence this failure mode. We will never have two nodes running the global lock process, because Postgres will only ever grant the advisory lock once. If a netsplit produces two master nodes, one of them will fail to take the lock and simply do nothing.
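The "only granted once" property above is what Postgres session-level advisory locks (`pg_try_advisory_lock`) provide: the lock is handed to at most one session, and later attempts return false until it is released. A minimal Python simulation of that semantics, with hypothetical session names:

```python
# Illustrative simulation of Postgres session-level advisory locks:
# at most one session holds the lock for a given key at a time.
# (Real Postgres advisory locks are also re-entrant per session and
# counted; this sketch simplifies that.)

class AdvisoryLocks:
    def __init__(self):
        self.holders = {}  # key -> session currently holding the lock

    def try_lock(self, session, key):
        if key in self.holders:
            return self.holders[key] == session  # re-acquire by holder
        self.holders[key] = session
        return True

    def unlock(self, session, key):
        if self.holders.get(key) == session:
            del self.holders[key]

locks = AdvisoryLocks()
assert locks.try_lock("master_1", 42) is True
# A second would-be master (e.g. after a netsplit) fails to take it
# and simply does nothing:
assert locks.try_lock("master_2", 42) is False
locks.unlock("master_1", 42)
assert locks.try_lock("master_2", 42) is True
```

Because the database is the single arbiter, the "two masters after a netsplit" case degrades safely: the loser never reads jobs.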
It's the same guarantee, but the likelihood of multiple delivery would increase substantially; that's something to document well.
Would there be an additional database lock then? If that's the case we wouldn't even need the same iterative query. I feel like there wouldn't actually be that much code shared with the current Rihanna.
Yes, the probability of multiple execution will be slightly higher, but not unmanageably so. Unexpected netsplits and/or node deaths are not that common, especially if a graceful exit with job draining is implemented. I think this is probably an unavoidable cost of the increased throughput. As for the lock, I'm not sure you have fully understood my original proposal. In this new scenario, there will be one and exactly one advisory lock, taken by one global dispatcher. Workers will not be required to take any locks at all. It will not need the same query; it can work with a simple lockless query.
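The thread does not spell out the lockless query, but the idea above can be sketched: since exactly one dispatcher holds the advisory lock, it is the sole reader and can fetch a batch of ready jobs with a plain query, with no per-job locking. Everything below (the SQL text, `dispatch_batch`, the table and column names) is a hypothetical illustration, not Rihanna's actual schema or code.

```python
# Hypothetical lockless read loop. The single lock-holding dispatcher
# might run something like this query (illustrative only):
READ_BATCH_SQL = """
SELECT id, payload
  FROM jobs
 WHERE state = 'ready'
 ORDER BY enqueued_at
 LIMIT %s
"""

def dispatch_batch(fetch_ready, send_to_node, batch_size=100):
    """fetch_ready(n) -> list of ready jobs; send_to_node(job) sends one.

    No FOR UPDATE / advisory lock per job: single-reader means no
    contention on the jobs table.
    """
    jobs = fetch_ready(batch_size)
    for job in jobs:
        send_to_node(job)
    return len(jobs)

# Stand-in for a database, purely to show the control flow:
backlog = [{"id": i} for i in range(250)]
sent = []
n = dispatch_batch(
    lambda k: [backlog.pop(0) for _ in range(min(k, len(backlog)))],
    sent.append,
)
assert n == 100 and len(backlog) == 150 and sent[0]["id"] == 0
```

This is where the design diverges from the current Rihanna: the per-job row locking that makes the iterative query necessary disappears once readership is serialised through the one advisory lock.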
I think that if the dispatcher/lock process dies, we want to brutally kill workers rather than let them finish gracefully; otherwise multiple delivery is guaranteed.
I see, much clearer now :)
See #46 for discussion.
This would bring enhanced performance when run on a single Erlang cluster.