This repository has been archived by the owner on Feb 10, 2021. It is now read-only.
This system is straightforward and effective, but there are other options that we could consider, such as adaptively scaling up or down based on load or on memory needs.
This might be kind of a dumb question, but how well can Dask predict a priori the memory requirements of a task before it is dispatched to a worker? If Dask had 10 workers with 1 GB of RAM each, would it know that some operations in the graph required a 10 GB worker?
Dask only knows the size of a task's result after that task has computed. It would expand the network on the fly as it saw total memory use increasing while it still had more tasks to compute.
In normal operation Dask assumes that all tasks can run on all workers unless the user explicitly states otherwise (doc page, issue for expansion), so Dask would fail on the 10 GB task in the presence of only 1 GB workers (unless we were to implement the issue linked above).
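As a hypothetical sketch of the expansion proposed in that linked issue (the function name and the idea of a per-task memory estimate are illustrative here, not actual Dask scheduler internals): a placement check could refuse to dispatch a task whose estimated memory footprint exceeds every worker's capacity, rather than failing mid-execution.

```python
def pick_worker(task_mem_estimate, worker_capacities):
    """Return the index of the smallest worker that can hold the task,
    or None if no worker is large enough (signal to scale up or error out).

    task_mem_estimate: estimated task memory need, in GB
    worker_capacities: per-worker available memory, in GB
    """
    candidates = [i for i, cap in enumerate(worker_capacities)
                  if cap >= task_mem_estimate]
    if not candidates:
        return None
    # Prefer the tightest fit to keep large workers free for large tasks.
    return min(candidates, key=lambda i: worker_capacities[i])

# Ten 1 GB workers cannot host a 10 GB task:
workers = [1.0] * 10
assert pick_worker(10.0, workers) is None
# A 0.5 GB task fits on any of them:
assert pick_worker(0.5, workers) is not None
```

The hard part in practice, as noted above, is that the memory estimate itself is generally unknown before the task runs.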
The motivation for different deployment policies would be to remove the need for users to think about the size of their cluster, both in terms of scaling out, and in terms of cleaning up afterwards.
Currently, deployment is handled by manually scaling the cluster up and down.
There was some preliminary work done a while ago in this direction described here: http://matthewrocklin.com/blog/work/2016/09/22/cluster-deployments
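Dask's distributed scheduler has since grown this idea as its adaptive deployment mechanism (`cluster.adapt(minimum=..., maximum=...)`). As a small self-contained sketch of the load-based half of such a policy (the function name and the tasks-per-worker ratio are invented for illustration, not Dask's actual heuristic):

```python
def target_workers(pending_tasks, tasks_per_worker=2, minimum=1, maximum=100):
    """Suggest a cluster size from the current task backlog.

    Scale up when the backlog outgrows the workers, scale down toward
    `minimum` when the cluster is idle; never exceed `maximum`.
    """
    want = -(-pending_tasks // tasks_per_worker)  # ceiling division
    return max(minimum, min(maximum, want))

assert target_workers(0) == 1          # idle cluster shrinks to the minimum
assert target_workers(25) == 13        # ceil(25 / 2) workers requested
assert target_workers(10**6) == 100    # capped at the maximum
```

A memory-based policy, as discussed above, would add a second signal (total memory use across workers) and take the larger of the two suggested sizes.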