
Clusterwide lock mode #47

Open
samsondav opened this issue Dec 15, 2018 · 9 comments
Labels
enhancement New feature or request

Comments

@samsondav
Owner

See #46 for discussion.

This would bring enhanced performance when run on a single Erlang cluster.

@samsondav added the enhancement label on Dec 15, 2018
@lpil
Collaborator

lpil commented Dec 23, 2018

This architecture would need to perform some kind of durable state management that can handle nodes and workers going down, including the master. I'm sceptical that this would be orders of magnitude faster than the same thing implemented in Postgres, though I've no data to back up my feeling.

How do you intend the state management and failure detection to work with this design?

@samsondav
Owner Author

Nope, no durable state management is required. If a node goes down, the dispatcher receives a DOWN message and simply retries the job on a new node.
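
A minimal sketch of what that could look like (module and function names here, like ClusterDispatcher and Worker.run/1, are illustrative rather than Rihanna's actual API):

```elixir
# Hypothetical sketch only: a global dispatcher spawns a worker on some node,
# monitors it, and retries the job elsewhere if the worker dies abnormally.
# A node going down shows up as a :DOWN message with reason :noconnection.
defmodule ClusterDispatcher do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  def dispatch(job), do: GenServer.cast(__MODULE__, {:dispatch, job})

  @impl true
  def init(_opts), do: {:ok, %{running: %{}}}

  @impl true
  def handle_cast({:dispatch, job}, state) do
    node = Enum.random([Node.self() | Node.list()])
    # Worker.run/1 is a stand-in for whatever executes the job and marks it done.
    pid = Node.spawn(node, fn -> Worker.run(job) end)
    ref = Process.monitor(pid)
    {:noreply, put_in(state.running[ref], {job, node})}
  end

  @impl true
  def handle_info({:DOWN, ref, :process, _pid, reason}, state) do
    {{job, _node}, running} = Map.pop(state.running, ref)

    # Anything other than a normal exit (crash, kill, node disconnect) means we
    # assume the job did not complete and simply retry it on another node.
    unless reason == :normal, do: dispatch(job)

    {:noreply, %{state | running: running}}
  end
end
```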

@lpil
Collaborator

lpil commented Dec 28, 2018

What happens when the global lock process goes down?

What happens when a node is isolated from the global lock process by a network partition?

@samsondav
Owner Author

First scenario:

A new singleton will be booted, which will re-acquire the global lock and start reading jobs again. Some jobs may be executed twice.

Second scenario:

Erlang's built-in monitoring will detect the network partition, treat that node as down, and assume that none of the jobs it was running have been executed. The global lock process will re-dispatch those jobs to a node that is alive. Some jobs may be executed twice.
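
As a rough sketch of the first scenario (purely illustrative; SingletonDispatcher and the state layout are assumptions, not actual Rihanna code): each node supervises the dispatcher, but it is registered under a single global name, so whichever node wins the race becomes the new singleton and re-acquires the lock in its init:

```elixir
# Hypothetical sketch: only one registration under the :global name succeeds
# at a time, so there is at most one active singleton. When it dies, the
# supervisors on the other nodes race to start a replacement, which
# re-acquires the advisory lock before reading jobs again.
defmodule SingletonDispatcher do
  use GenServer

  def start_link(opts) do
    case GenServer.start_link(__MODULE__, opts, name: {:global, __MODULE__}) do
      {:ok, pid} ->
        {:ok, pid}

      {:error, {:already_started, pid}} ->
        # Another node already runs the singleton. Linking to it and returning
        # {:ok, pid} means this node's supervisor will notice when it dies,
        # restart this child, and race for the global name again.
        Process.link(pid)
        {:ok, pid}
    end
  end

  @impl true
  def init(opts) do
    # Re-acquire the cluster-wide advisory lock before polling for jobs; the
    # previous holder's DB session died with it, releasing the lock.
    send(self(), :acquire_lock)
    {:ok, %{conn: Keyword.fetch!(opts, :conn), singleton?: false}}
  end
end
```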

@lpil
Collaborator

lpil commented Dec 28, 2018

What happens to the workers? Are they all killed by the exit from the global lock process? In cloud environments network partitions are common (Erlang was designed for more reliable networks), so this may cause some disruption. I'm not sure how fast global links are; it would be cool to test this.

In the network partition situation, if we're using global processes we'll end up with at least two nodes running the global lock process. Would this be safe? If we're still running the same SQL query against the database it would be, but I'm unsure if that was the intention.

All sounds fun so far :) I'd suggest that (at some point) it'd be worth doing some preliminary benchmarking so we can get a better understanding.

@samsondav
Owner Author

samsondav commented Dec 28, 2018

Workers on the partitioned node may continue to run, which is why some jobs may execute twice. It was always a conscious design choice in Rihanna to guarantee at-least-once execution, hence this failure mode.

We will never have two nodes running the global lock process, because postgres will only ever grant the advisory lock once. In the event of a netsplit and two master nodes occurring, one of them will fail to take the lock and simply do nothing.
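
For concreteness, the lock check inside such a dispatcher could be a callback along these lines (a sketch only; it assumes a dedicated Postgrex connection in the state, as in the earlier sketch, and uses an arbitrary lock key):

```elixir
# Sketch of the "only one holder" behaviour. The advisory lock is session-level
# and tied to the dispatcher's own DB connection; whichever dispatcher gets
# `true` becomes the singleton, the other simply does nothing and retries later.
# The lock key (42) and retry interval are placeholders; :poll_jobs handling
# is not shown.
def handle_info(:acquire_lock, %{conn: conn} = state) do
  case Postgrex.query!(conn, "SELECT pg_try_advisory_lock($1)", [42]) do
    %Postgrex.Result{rows: [[true]]} ->
      # We are the singleton: start reading jobs.
      send(self(), :poll_jobs)
      {:noreply, %{state | singleton?: true}}

    %Postgrex.Result{rows: [[false]]} ->
      # Another dispatcher holds the lock; do nothing and check again later.
      Process.send_after(self(), :acquire_lock, 5_000)
      {:noreply, state}
  end
end
```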

@lpil
Collaborator

lpil commented Dec 28, 2018

It's the same guarantee, but the likelihood of multiple delivery would increase substantially, so it's one to document well.

We will never have two nodes running the global lock process, because postgres will only ever grant the advisory lock once. In the event of a netsplit and two master nodes occurring, one of them will fail to take the lock and simply do nothing.

Would there be an additional database lock then?

If that's the case we wouldn't even need the same iterative query. I feel like there wouldn't actually be that much code shared with the current Rihanna.

@samsondav
Owner Author

samsondav commented Dec 28, 2018

Yes, the probability of multiple executions will be slightly higher, but not unmanageably so. Unexpected netsplits and/or node deaths are not that common, especially if a graceful exit with job draining is implemented. I think this is probably an unavoidable cost of increased throughput.

As for the lock, I'm not sure you have fully understood my original proposal. In this new scenario, there will be one and exactly one advisory lock taken by one global dispatcher. Workers will not be required to take any locks at all.

It will not need the same query; it can work with a simple lockless SELECT with a LIMIT, which would be hyper fast. There will be some code shared around enqueuing and deleting jobs. I imagine it being implemented as a separate dispatcher module, so the user can choose which one they boot in their supervision tree.

e.g. MultiDispatcher or SingletonDispatcher
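
A rough sketch of both pieces (table, column, and module names are illustrative, not Rihanna's actual schema or API):

```elixir
# With a single dispatcher holding the only lock, reading jobs needs no
# per-row locking at all: a plain ORDER BY ... LIMIT is enough.
defmodule SingletonDispatcher.Poller do
  # Fetch the next batch of ready jobs on the dispatcher's connection.
  def ready_jobs(conn, limit) do
    %Postgrex.Result{rows: rows} =
      Postgrex.query!(
        conn,
        "SELECT id, term FROM rihanna_jobs ORDER BY id ASC LIMIT $1",
        [limit]
      )

    rows
  end
end

# The user then picks which dispatcher to boot in their supervision tree,
# e.g. (hypothetical child specs):
#
#     children = [
#       {Rihanna.SingletonDispatcher, []}   # proposed clusterwide mode
#       # {Rihanna.MultiDispatcher, []}     # current per-job-lock mode
#     ]
```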

@lpil
Collaborator

lpil commented Dec 28, 2018

especially if a graceful exit with job draining is implemented.

I think that in the event of the dispatcher/lock death we want to brutally kill workers rather than killing them gracefully; otherwise multiple delivery is guaranteed.
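
One possible shape for that (a sketch, with Worker.run/1 as a placeholder): spawn each worker linked to the dispatcher, with only the dispatcher trapping exits, so a dispatcher crash propagates over the links and kills in-flight workers outright. These clauses would replace the monitor-based ones in the earlier sketch:

```elixir
# Sketch: the dispatcher traps exits so a crashing worker doesn't take it down,
# but workers are linked to the dispatcher, so if the dispatcher (and thus the
# lock holder) dies abnormally, every in-flight worker is killed immediately
# rather than being allowed to finish and double-execute.
def init(_opts) do
  Process.flag(:trap_exit, true)
  {:ok, %{running: %{}}}
end

def handle_cast({:dispatch, job}, state) do
  node = Enum.random([Node.self() | Node.list()])
  pid = Node.spawn_link(node, fn -> Worker.run(job) end)
  {:noreply, put_in(state.running[pid], job)}
end

# Because we trap exits, a dead worker arrives here as an :EXIT message; an
# abnormal exit (or node disconnect) means the job is retried, a :normal exit
# means it finished.
def handle_info({:EXIT, pid, reason}, state) do
  {job, running} = Map.pop(state.running, pid)
  unless reason == :normal, do: GenServer.cast(self(), {:dispatch, job})
  {:noreply, %{state | running: running}}
end
```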

As for the lock, I'm not sure you have fully understood my original proposal. In this new scenario, there will be one and exactly one advisory lock taken by one global dispatcher. Workers will not be required to take any locks at all.

I see, much clearer now :)
