
Deployment Policies #6

Open
mrocklin opened this issue Oct 13, 2016 · 2 comments

Comments

@mrocklin (Member)

Currently, deployment is handled by manually scaling the cluster up and down:

jobids = cluster.start_workers(10)   # launch ten worker jobs; returns their job ids
cluster.stop_workers(jobids)         # later, tear those same workers down

This system is straightforward and effective, but there are other options that we could consider, such as adaptively scaling up or down based on load or on memory needs.
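
To make that concrete, here is a rough sketch of what such an adaptive policy loop might look like, reusing the start_workers/stop_workers API above. The scheduler attributes (ready, workers) are illustrative placeholders, not the actual distributed API:

import time

def adapt(cluster, scheduler, minimum=1, maximum=100, interval=3):
    # Toy policy: grow while there are more ready tasks than workers,
    # shrink when nothing is waiting.  `scheduler.ready` and
    # `scheduler.workers` are placeholder names for whatever the
    # scheduler actually exposes.
    jobids = []
    while True:
        n_workers = len(scheduler.workers)
        n_pending = len(scheduler.ready)
        if n_pending > n_workers and n_workers < maximum:
            # start enough workers to cover the backlog, capped at maximum
            jobids.extend(cluster.start_workers(
                min(n_pending, maximum) - n_workers))
        elif n_pending == 0 and n_workers > minimum:
            # nothing queued: retire workers down to the minimum
            excess = jobids[:n_workers - minimum]
            cluster.stop_workers(excess)
            jobids = jobids[len(excess):]
        time.sleep(interval)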

Some preliminary work in this direction was done a while ago and is described here: http://matthewrocklin.com/blog/work/2016/09/22/cluster-deployments

@davidr

davidr commented Oct 17, 2016

This might be kind of a dumb question, but how well can Dask predict, a priori, the memory requirements of a task before it is dispatched to a worker? If Dask had 10 workers with 1 GB of RAM each, would it know that some operations in the graph required 10 GB workers?

@mrocklin (Member, Author)

Dask only knows the size of a task's result after the task has computed. It would expand the network on the fly as it saw total memory use increasing while it still had more tasks to compute.

In normal operation, Dask assumes that all tasks can run on all workers unless the user explicitly states otherwise (doc page, issue for expansion), so Dask would fail on the 10 GB task given only 1 GB workers (unless we were to implement the issue linked above).
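
A sketch of the memory-driven expansion described above might look like the following; worker_bytes and memory_limit are stand-ins for whatever per-worker metrics the scheduler actually exposes, not real attribute names:

def scale_on_memory(cluster, scheduler, target=0.6, batch=5):
    # Add a batch of workers whenever aggregate memory use crosses a
    # threshold.  `worker_bytes` (bytes stored per worker) and
    # `memory_limit` (capacity per worker) are illustrative names.
    used = sum(scheduler.worker_bytes.values())
    capacity = sum(w.memory_limit for w in scheduler.workers.values())
    if capacity and used / capacity > target:
        return cluster.start_workers(batch)
    return []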

The motivation for different deployment policies would be to remove the need for users to think about the size of their cluster, both in terms of scaling out and in terms of cleaning up afterwards.
