Allow for custom job ID #264
I can't see the reason for specifying a custom job ID within a queue.
And I really do see a reason, and I really hope this will be implemented!
Please implement this!! 👍
This can be done easily by checking whether a daily job has already been executed, either before putting it into beanstalkd or after reserving it. And why not?
Allowing custom job IDs would let me implement a sort of HA and failover at the library level:
Unless I'm missing something and the current protocol allows for better solutions. PS: Adding a job with an ID that already exists should trigger an error.
How do you maintain your job IDs without a central point of failure? In general, in a distributed system you cannot have a job which runs exactly once (http://bravenewgeek.com/you-cannot-have-exactly-once-delivery/). Beanstalkd provides at-most-once delivery and we do not want to change that. You can get at-least-once with more than one beanstalkd and some distributed locking or similar (some people use memcached or a db for that purpose).
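A minimal sketch of the distributed-locking approach mentioned above. The `LockStore` class here is an in-memory stand-in for memcached, whose real `add` command only succeeds if the key is absent; the function names are illustrative, not part of any beanstalkd client API.

```python
class LockStore:
    """In-memory stand-in for memcached's add-if-absent semantics."""

    def __init__(self):
        self._keys = {}

    def add(self, key, value):
        # Like memcached `add`: returns True only for the first writer.
        if key in self._keys:
            return False
        self._keys[key] = value
        return True


def process_once(job_key, payload, locks, handler):
    """Run `handler` for a job only if no other consumer claimed it first.

    With the same job enqueued on several beanstalkd servers for
    redundancy, every consumer that reserves a copy calls this; only
    the consumer that wins the lock actually executes the handler.
    """
    if not locks.add("job-lock:%s" % job_key, "claimed"):
        return False  # another consumer already handled this job
    handler(payload)
    return True
```

Note this gives at-least-once processing overall (a consumer can still crash after winning the lock but before finishing), which is consistent with the guarantee described above.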
Maintaining job IDs is out of the scope of this ticket. What I need is to be able to specify my own job ID.
@rzajac Could you elaborate a little on this? I'm not exactly sure about your use-case.
@JensRantil I explained it a little bit here: #264 (comment) |
This feature request breaks backward compatibility (BC) with protocol v1.x.
@rzajac Ah, sorry. Missed that. Thanks! I'm going to be the devil's advocate here and shoot down some of the use cases :-) @aight8 wrote:
There are various approaches to regular cronjobs:
Valid point. A workaround is to store the job ID in another datastore.
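A hedged sketch of that workaround: keep a mapping from your own custom ID to the beanstalkd-assigned ID in an external datastore. `FakeBeanstalk` stands in for a real client whose `put` returns the server-assigned job ID, and the plain dict stands in for Redis, memcached, or a database; none of these names come from an actual library.

```python
class FakeBeanstalk:
    """Minimal stand-in for a beanstalkd client: put() returns a new server ID."""

    def __init__(self):
        self._next_id = 0

    def put(self, body):
        self._next_id += 1
        return self._next_id


def put_with_custom_id(custom_id, body, client, id_map):
    """Enqueue `body` and remember the server ID under our own key.

    Raises if the custom ID was already used, matching the behaviour
    requested earlier in the thread (adding a job with an ID that
    already exists should trigger an error).
    """
    if custom_id in id_map:
        raise KeyError("job id already exists: %s" % custom_id)
    server_id = client.put(body)
    id_map[custom_id] = server_id
    return server_id
```

With a real datastore the existence check and the insert should be one atomic operation (e.g. an add-if-absent write), otherwise two producers can race past the check.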
Not an argument. Carry on. ;) @rzajac wrote:
I really don't think this is a good idea. Doing double writes independently to two queues is bound to eventually make them diverge and end up in different states. There are all sorts of race conditions. Examples: a TTL times out on one queue but not on the other. Another problem is that you currently can't reserve a specific job. You can delete a specific job, but then you can't be sure that no other consumer has reserved it, etc. The real solution here would be to use something like Zookeeper's ZAB or, probably even better, the Raft algorithm: all writes would go through a master and a majority would need to acknowledge each state change. This would obviously introduce complexity, new failure modes and additional latency to every operation.
@rzajac @JensRantil I've also run into this. |
This would also help with self-throttling the job on the client side :) Simply checking whether the job is already there would allow us to avoid sending another one, or to increase the delay time instead.
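The self-throttling idea above can be sketched like this, again with illustrative names only: `queued` is an external already-queued marker (in reality something like Redis or memcached), and `put` stands in for a beanstalkd client's put-with-delay call; nothing here is part of the beanstalkd protocol.

```python
def put_or_backoff(job_key, body, queued, put, base_delay=0, max_delay=300):
    """Skip duplicate puts; grow the delay on repeated attempts instead.

    `queued` maps job_key -> current delay in seconds.
    `put(body, delay)` enqueues the job and returns its ID.
    """
    if job_key in queued:
        # Already queued: back off instead of sending a duplicate.
        queued[job_key] = min(max(queued[job_key], 1) * 2, max_delay)
        return None
    queued[job_key] = base_delay
    return put(body, base_delay)
```

The consumer would clear the `queued` entry once it finishes the job, so the next producer attempt goes through normally.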
Okay, I'm going to close this issue as a no-go. Reasons are as follows:
Please open a new issue describing your use-case if you believe it can't be worked around using the above approaches.
This would make it possible to implement failovers.