
[Feature] Support for running jobs in child processes #488

manast opened this issue Apr 6, 2017 · 0 comments
manast commented Apr 6, 2017

Job processors are often long-running and synchronous. The current design does not allow these kinds of processors to run properly because of the locking mechanism: a processor that blocks the event loop cannot renew its lock. However, we can spawn a child process and run the job there instead. If the same child process is reused for every job (and we will need as many child processes as the concurrency level), the overhead will be minimal. As an added benefit, if the child process dies for any reason (memory leaks or any other cause), the worker will still be able to spawn a replacement child and continue working normally.

bradvogel added a commit to mixmaxhq/bull that referenced this issue Apr 23, 2017
…r the following reasons:

* A lot of people complain about jobs being double processed currently. So there must be a lot of poorly written job processor code out there that stalls the event loop :). Or folks, like me, forget that we're running our code on tiny instances in the cloud where the CPU is so limited that a tiny bit of JS work will max the CPU. A 30sec timeout would give a bit more buffer. At least until we figure out a generic solution like OptimalBits#488.
* An expired lock (due to event loop stalling) is quite fatal now that we check the lock prior to moving the job to the completed or failed state (previously we would still move it even if there wasn't a lock). So if a long-running job (let's say 2min) stalls the event loop for even just 5sec, the job can never complete at that point. It might still continue processing, but another worker would likely have picked it up as a stalled job and processed it again. Or, if it doesn't happen to get picked up as stalled, when it finally completes it still won't be moved to completed, because it lost the lock at some point.
* The tradeoff is that it will take longer for jobs to be considered 'stalled'. So instead of waiting max 5sec to find out whether a job has stalled, we'd wait max 30sec. I think this is generally OK and that most people aren't running jobs that are that time-sensitive. Actual stalled jobs [due to process crashes] should be extremely rare anyway.

This also sets the stalledInterval to 30sec since it doesn't do much good to have it run more frequently than the lock timeout. It's also slightly expensive to run in Redis as it iterates all jobs in the 'active' queue (see moveUnlockedJobsToWait.lua), so it'd be nice to run this less often anyways.
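The timeouts discussed above can be expressed through the queue's settings object. (A sketch assuming bull's v3-style `settings` options; values are the ones proposed in the commit message.)

```javascript
const Queue = require('bull');

const queue = new Queue('my queue', {
  settings: {
    lockDuration: 30000,    // 30sec lock: tolerates short event-loop stalls
    stalledInterval: 30000, // no point checking for stalled jobs more often
                            // than the lock timeout, and the check iterates
                            // the whole 'active' list in Redis
  },
});
```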
@manast manast added this to the v3.1.0 milestone May 24, 2017
manast pushed a commit that referenced this issue Mar 21, 2020
jtassin pushed a commit to jtassin/bull that referenced this issue Jul 3, 2020