Resque uses forking to control memory usage and bloat, and I think it would be a good idea to have a forking worker that applies the same principle.
I think that's a critical feature of resque which might prohibit me from switching to beanstalkd/backburner. Have you received reports/issues of users having this type of problem?
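For context, the fork-per-job pattern being described could be sketched roughly like this (a simplified illustration of the idea, not Resque's actual code):

```ruby
# Simplified sketch of the fork-per-job pattern (illustrative only, not
# Resque's real implementation). Each job runs in a short-lived child
# process, so any memory the job allocates is reclaimed by the OS when
# the child exits, keeping the long-running parent's footprint flat.
def work_one_job(job)
  if (pid = fork)
    Process.wait(pid) # parent: block until the child finishes
  else
    job.call          # child: run the job in isolation...
    exit!(0)          # ...and exit immediately, skipping at_exit hooks
  end
end

# Usage: memory allocated inside the job never bloats the parent.
work_one_job(-> { puts "processing job in pid #{Process.pid}" })
```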
Yes, a few people have mentioned running into this issue, though I haven't hit it yet in my own applications. It really depends on the types of jobs you are running. So far I just create multiple workers (separate processes) and have them process jobs, similar to delayed_job: https://github.com/collectiveidea/delayed_job
I am not convinced that a forking worker is necessary for most use cases. That said, I am interested in having both a threaded and a forking worker available so people can choose based on their needs.
What do you think of this? I kind of like using subclassing here to create workers, because it requires less code in each individual worker and makes the usage of workers more explicit.
If you like it, can you try porting the forking_thread_worker to this format?
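To illustrate the subclassing idea (class and method names here are assumptions for the sketch, not Backburner's actual API): a base worker defines how a single job is executed, and each variant overrides only the execution strategy.

```ruby
# Hypothetical sketch of subclass-based workers (names are assumptions,
# not Backburner's real API). The base class runs a job in-process;
# subclasses override only the part that differs.
class SimpleWorker
  # Run one job in the current process.
  def perform(job)
    job.call
  end
end

class ForkingWorker < SimpleWorker
  # Same interface, but each job runs in a disposable child process,
  # Resque-style, so the parent's memory footprint stays flat.
  def perform(job)
    if (pid = fork)
      Process.wait(pid) # parent: wait for the child to finish
    else
      super             # child: reuse the base class's execution
      exit!(0)          # exit without running at_exit hooks
    end
  end
end
```

Because callers only ever see `perform`, choosing between a threaded, forking, or in-process worker becomes a matter of which subclass you instantiate.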
@nesquena yes. My only problem is with tests. I'm not good at this yet, so I'll write only the basic ones.
Good reference for a simple forking worker: michaeldwan/stalker@3862676 by @michaeldwan
FYI, I've got a start on this here:
I still need to write a test before I send a pull request.
@danielfarrell Awesome, thanks Daniel. Let me know when the tests are ready and I will merge and update all the documentation.
Closed in 0.3.1, thanks to @danielfarrell.