distributed.scheduler.worker-saturation does not seem to make a difference. Having fewer or more than 5 tasks (the threshold for is_rootish) does not seem to have an impact either (as long as there are fewer tasks than threads).
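For reference, the knob can be set programmatically (a sketch; the key name is real, but note this setting only affects queuing of root-ish tasks, which is consistent with it making no difference here):

```python
import dask

# "inf" disables root-task queuing entirely; the default is a float
# (1.1) that limits how many root-ish tasks a worker can be assigned.
# Non-root tasks like the ones in this issue are unaffected either way.
dask.config.set({"distributed.scheduler.worker-saturation": "inf"})
print(dask.config.get("distributed.scheduler.worker-saturation"))
```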
Uncommenting root = 1, thus making the tasks with resources actually root (not just root-ish), makes the issue disappear.
After increasing the number of tasks from 6 to 100, this is what I see on the dashboard:
Essentially, once a dependency is in memory, its dependents are forced to be scheduled onto that worker; only work stealing allows us to redistribute tasks afterwards. I have a WIP branch open where I change this behavior, but the impact on performance is highly nontrivial (based on the A/B tests), and I dropped further investigation due to lack of time.
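The shape of graph being discussed can be seen in a minimal sketch using dask.delayed (task bodies here are placeholders): a single task's result fans out to many dependents, and on a distributed cluster those dependents are all initially scheduled onto the worker holding that result.

```python
from dask import delayed

@delayed
def load():
    # single "root" task; once computed, its result sits on one worker
    return list(range(100))

@delayed
def process(data, i):
    # many dependents of that one in-memory result; these are the
    # non-root tasks that end up pinned to the worker holding `data`
    return data[i]

root = load()
parts = [process(root, i) for i in range(6)]
total = delayed(sum)(parts)
print(total.compute(scheduler="synchronous"))  # → 15
```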
Thanks for locating this issue in the code. It is understandable that work stealing may hurt performance in some situations while being desirable in others; I would suggest offering multiple scheduler strategies to suit different users' needs.
From https://dask.discourse.group/t/only-1-worker-is-running-when-the-dag-is-forking/2192
Non-root tasks that declare resources are not distributed evenly across the cluster; instead they pile up on a single worker.
Expected: 4s
Actual: 8s
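A minimal reproduction along these lines (a sketch, not the reporter's exact script; the worker counts, the "slot" resource name, and the task durations are assumptions):

```python
import time

def work(dep, i):
    # stand-in for a 1-second, resource-bound task
    time.sleep(1)
    return i

def main():
    # imports kept local so work() above is inspectable/testable
    # without a running cluster
    from dask.distributed import Client, LocalCluster

    # two workers, each advertising 2 units of a custom "slot" resource
    cluster = LocalCluster(n_workers=2, threads_per_worker=2,
                           resources={"slot": 2})
    client = Client(cluster)

    # one root task; holding its result in memory is what makes the
    # tasks below "non-root" dependents
    root = client.submit(lambda: 1, pure=False)

    # six dependents, each requesting one "slot"; balanced scheduling
    # would use both workers, but they pile up on the worker holding root
    start = time.time()
    futures = [client.submit(work, root, i, resources={"slot": 1},
                             pure=False) for i in range(6)]
    print(client.gather(futures), f"elapsed: {time.time() - start:.1f}s")

    client.close()
    cluster.close()

if __name__ == "__main__":
    main()
```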