Threaded sandboxed jobs, how does it work? #1745
In my code, I figured it out: it was an issue with blocking code and not handling failures correctly. So it works as it should :) But I would very much like an explanation of what exactly is meant by "Jobs will not stall". If a job does not stall, then what exactly happens in a scenario where the job would or should have stalled?
Hi! You need to use a sandboxed processor. The main page of Bull.js shows an example of this.
Right, but how does that prevent the job from stalling? Will Bull terminate the process if it's not responding after a certain amount of time?
@fjeddy jobs stall when they are very CPU intensive and the Node.js event loop is too busy to renew the job lock. Since sandboxed processors run in a separate process, they do not block the master process's event loop.
So what it does is prevent the master process from stalling (which is quite obvious); it does not prevent the job itself from stalling, as the description claims.
No, it prevents the job from stalling, which in Bull terms means that no other worker will pick up that same job at the same time.
I'm having an issue where jobs running in sandboxed child processes are all running at the same time. Normally this wouldn't be a problem, but the jobs in question use the same subset of files to perform a build, and they therefore cannot run at the same time.
> You can run blocking code without affecting the queue (jobs will not stall).
What exactly does "jobs will not stall" mean here? I interpret this as the concurrency limit being ignored, but how can that be? I've read in several issues that child processes are spawned and kept for re-use, but never above the maximum concurrency limit. So when I have no concurrency limit set (it defaults to 1), how come all my child processes are running at the same time? Am I misunderstanding how something works?
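To make the expectation concrete, here is a minimal sketch, in plain Node with no Bull APIs, of what a concurrency limit of 1 is generally understood to mean: each task starts only after the previous one settles. This is an illustration of the semantics being asked about, not Bull's implementation:

```javascript
// A tiny concurrency-1 runner: tasks are chained so at most one
// runs at a time, in the order they were added.
function makeSerialQueue() {
  let tail = Promise.resolve();
  return function add(task) {
    const result = tail.then(task);
    // Chain past failures so one failed task doesn't block the rest.
    tail = result.catch(function () {});
    return result;
  };
}
```

With this runner, a slow build task added first finishes before a fast task added second ever starts, which is the serialization the poster expected from a concurrency of 1.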