We currently have a priority system in place across all tasks.
We also have a basic limit on the number of workers per server.
However, some tasks are more expensive (CPU, memory) than others and might either need to run completely on their own on a server, or the other workers on that server should only fetch very lightweight (low-cost) tasks in parallel to avoid overloading the server.
How do we manage this?
One idea is to add a cost score (0-100) config to each task: if a worker has already fetched a 60-cost task (high memory or CPU), any other worker on the same server can only fetch tasks with a cost < 40.
The cost per server can easily be summed up if it is stored in the processes table.
This way a server can't be overloaded as easily as it currently can, e.g. when two `composer update` jobs run at once, each using ~2 GB of memory and close to full CPU, competing with each other and often leaving jobs unfinished after the timeout, which forces a re-schedule.
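The proposed cost-score gating could look roughly like this. A minimal sketch in Python (the names `can_fetch` and `SERVER_BUDGET` are illustrative, not existing plugin API); it assumes a total budget of 100 per server and lets the running costs sum up to exactly that budget:

```python
# Hypothetical sketch: each task class declares a cost (0-100); a worker
# may only fetch a task if the summed cost of tasks already running on
# the same server, plus the new task's cost, stays within the budget.

SERVER_BUDGET = 100  # assumed total capacity per server


def can_fetch(running_costs, task_cost, budget=SERVER_BUDGET):
    """Return True if a task of `task_cost` still fits next to the
    costs of tasks currently running on this server."""
    return sum(running_costs) + task_cost <= budget


# A 60-cost task is already running: only tasks up to cost 40 fit.
print(can_fetch([60], 40))   # fits
print(can_fetch([60], 41))   # would overload the server
# An idle server can take even the heaviest task.
print(can_fetch([], 100))
```

In the real implementation, `running_costs` would be the summed cost column from the processes table for the current server, queried at fetch time.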
Other ideas? Or does this make sense?
It basically looks for a "rate" config on the task class to determine how many of the same job can run simultaneously and with how much time in between.
if (property_exists($this->{$taskName}, 'rate')) {
$this->_taskConf[$taskName]['rate'] = $this->{$taskName}->rate;
}
and
if (array_key_exists('rate', $task) && $tmp['job_type'] && array_key_exists($tmp['job_type'], $this->rateHistory)) {
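As I read it, the `rate` check boils down to something like the following sketch (Python, with illustrative names `may_run`/`record_run` and a simplified interpretation of `rateHistory` as "seconds since the last run of the same job type"; the actual plugin semantics may differ):

```python
import time

# Hypothetical sketch of the "rate" check: a task class with a `rate`
# config may only start if the last run of the same job type was at
# least `rate` seconds ago. Note that this history is kept per worker,
# which is exactly the limitation discussed in the issue.

rate_history = {}  # job_type -> timestamp of the last fetch (per worker!)


def may_run(job_type, rate_seconds, now=None):
    """Return True if job_type was not started within the last rate_seconds."""
    now = time.time() if now is None else now
    last = rate_history.get(job_type)
    return last is None or now - last >= rate_seconds


def record_run(job_type, now=None):
    """Remember when this worker last fetched a job of this type."""
    rate_history[job_type] = time.time() if now is None else now
```

Because `rate_history` lives in the worker process, two workers on the same server can still start the same job type back to back, which is why a server- or queue-wide store would be needed.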
But that is yet another topic: which tasks can run in parallel, and which ones always need to run in sequence.
This rate history also seems to be not only server-specific but even worker-specific, and should probably be removed or refactored as well.