The default fork-on-job behavior makes my database connection pools useless, which adds unnecessary load on both the "work horse" machine and the database machine.
Since I know I don't need to sandbox the environment for each job, I worked around it with a trick like this:
```python
import os

from rq import Connection, Worker

class NoforkWorker(Worker):
    def fork_and_perform_job(self, job):
        # Run the job in the current process instead of forking a work horse.
        self.main_work_horse(job)
        self._horse_pid = os.getpid()

    def main_work_horse(self, job):
        self._is_horse = True
        success = self.perform_job(job)
        self._is_horse = False

# Provide queue names to listen to as arguments to this script,
# similar to rqworker
with Connection():
    w = NoforkWorker([job_queue])
    w.work()
```
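For context, the kind of setup this keeps alive is a module-level connection pool that is created once per worker process and then reused across jobs; with the default forking worker, every job's child process would rebuild it. A minimal sketch (not from my actual code, assuming psycopg2 with a hypothetical DSN and job function):

```python
import psycopg2.pool

# Hypothetical DSN; the pool is created once when the worker process imports
# this module, and stays warm across jobs under NoforkWorker.
pool = psycopg2.pool.SimpleConnectionPool(1, 5, dsn="dbname=app user=app")

def my_job(user_id):
    # Checks out an already-open connection instead of reconnecting per job.
    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()
    finally:
        pool.putconn(conn)
```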
The effect is significant: the load average on the work-horse machine dropped from 30+ to about 1, and the database machine's CPU usage went from 10-20% down to about 4%.
Would I break anything by doing it this way?
@nvie explained the reasoning behind the initial choice of using `os.fork` here.
We are in the process of reworking some of the internals to support different worker classes. However, it's a complicated job and progress has been slow. You can see the full discussion on this issue.
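For anyone hitting this today and wanting a non-forking worker without subclassing: recent RQ releases ship a `SimpleWorker` class that executes jobs in the worker process itself rather than forking. A minimal sketch, assuming a recent RQ version and a local Redis instance:

```python
from redis import Redis
from rq import Queue
from rq.worker import SimpleWorker

redis_conn = Redis()  # assumed local Redis
queue = Queue("default", connection=redis_conn)

# SimpleWorker runs each job in this process, so pools and other
# process-level state survive across jobs.
worker = SimpleWorker([queue], connection=redis_conn)
worker.work()
```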