This tutorial was last given at SciPy 2017 in Austin, Texas. A video is available online.
Dask provides multi-core execution on larger-than-memory datasets.
We can think of dask at a high and a low level:
- High level collections: Dask provides high-level Array, Bag, and DataFrame collections that mimic NumPy, lists, and Pandas but can operate in parallel on datasets that don't fit into main memory. Dask's high-level collections are alternatives to NumPy and Pandas for large datasets.
- Low level schedulers: Dask provides dynamic task schedulers that execute task graphs in parallel. These execution engines power the high-level collections mentioned above but can also power custom, user-defined workloads. These schedulers are low-latency (around 1ms) and work hard to run computations in a small memory footprint. Dask's schedulers are an alternative to direct use of threading or multiprocessing libraries in complex cases, and to other task scheduling systems like Luigi or IPython parallel. A short sketch contrasting the two levels follows this list.
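As a minimal illustration of the two levels (this sketch is not taken from the tutorial notebooks; the file pattern accounts.*.csv and the column names name and amount are placeholders):

import dask
import dask.dataframe as dd

# High-level collection: a pandas-like API over many partitions.
df = dd.read_csv("accounts.*.csv")
totals = df.groupby("name").amount.sum()  # lazy: this only builds a task graph
print(totals.compute())                   # a scheduler executes the graph in parallel

# Low level: wrap plain Python functions into a custom task graph.
@dask.delayed
def inc(x):
    return x + 1

@dask.delayed
def add(x, y):
    return x + y

total = add(inc(1), inc(2))  # nothing has executed yet
print(total.compute())       # the scheduler runs the graph and prints 5

In both cases nothing runs until .compute() is called, at which point one of Dask's schedulers executes the graph.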
Different users operate at different levels, but it is useful to understand both. This tutorial will alternate between high-level use of dask.dataframe (even sections) and low-level use of dask graphs and schedulers (odd sections).
You should clone this repository
git clone http://github.com/dask/dask-tutorial
and then install necessary packages.
a) Create a conda environment (preferred)
In the repo directory
conda env create -f environment.yml
conda activate dask-tutorial
b) Install into an existing environment
You will need the following core libraries
conda install numpy pandas h5py Pillow matplotlib scipy toolz pytables snakeviz dask distributed
You may find the following libraries helpful for some exercises
pip install graphviz
c) Use Dockerfile
You can build a docker image out of the provided Dockerfile.
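For example, from the repo directory (the image tag dask-tutorial below is just an illustrative choice):

docker build -t dask-tutorial .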
Graphviz on Windows
You may need to do the following
- conda install -c conda-forge graphviz
- conda install -c conda-forge python-graphviz
Prepare artificial data.
From the repo directory
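Assuming the repository's data-preparation script is named prep.py (check the repo if the name differs in your checkout), run:

python prep.py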
Overview - dask's place in the universe
Delayed - the single-function way to parallelize general python code
1x. Lazy - some of the principles behind lazy execution, for the interested.
Bag - the first high-level collection: a generalized iterator for use with a functional programming style and to clean messy data.
Array - blocked numpy-like functionality with a collection of numpy arrays spread across your cluster.
Dataframe - parallelized operations on many pandas dataframes spread across your cluster.
Distributed - Dask's scheduler for clusters, with details of how to view the UI.
Advanced Distributed - further details on distributed computing, including how to debug.
Dataframe Storage - efficient ways to read and write dataframes to disk.
Machine Learning - applying dask to machine-learning problems.