Inspired by Apache's worker model. The multiprocess mode should spawn a number of slave processes that are assigned collector jobs by a master scheduler. The model should even allow for remote worker processes, for distributed collection.
How to launch remote worker processes is out of scope for this task, but should be considered when designing the communication between the scheduler process and the workers.
I would like one more feature out of this: I want workers to kill themselves (only to be restarted) after having run a configurable number of jobs.
Python isn't particularly keen on releasing memory back to the OS once allocated, causing some of the workers (especially ones processing topo jobs) to grow heavily in memory usage over time.
This is analogous to Apache's MaxRequestsPerChild, and will offer simple protection against inadvertent "memory leaks".
Imported from Launchpad using lp2gh.