
Overload HLG values method #4918

Merged — 4 commits merged into dask:master on Jun 13, 2019
Conversation

@jrbourbeau (Member) commented Jun 11, 2019

This PR overloads the built-in Mapping.values() method for HighLevelGraphs. The current values() method calls the graph's __getitem__() method once per key which, for large-ish graphs, can be slow (see https://github.com/dask/distributed/blob/d202e6253ed8ddc7919d0d4f128d88954e9859b8/distributed/client.py#L4336 for a relevant use case).

The values() method implemented in this PR instead concatenates the values of each HLG layer, avoiding repeated calls to HighLevelGraph.__getitem__().
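
To make the difference concrete, here is a minimal runnable sketch of the idea (SimpleHLG below is a hypothetical stand-in for HighLevelGraph, not the real class): the default Mapping.values() goes through __getitem__() once per key, while the overloaded version walks each layer's task dict directly.

import toolz
from collections.abc import Mapping

class SimpleHLG(Mapping):
    def __init__(self, layers):
        self.layers = layers  # maps layer name -> plain dict of tasks

    def __getitem__(self, key):
        # The default Mapping.values() lands here once per key,
        # scanning the layers on every lookup
        for layer in self.layers.values():
            if key in layer:
                return layer[key]
        raise KeyError(key)

    def __iter__(self):
        return toolz.unique(toolz.concat(self.layers.values()))

    def __len__(self):
        return len(set(toolz.concat(self.layers.values())))

    def values(self):
        # Overloaded: concatenate each layer's values directly,
        # with no __getitem__() calls at all
        return toolz.concat(layer.values() for layer in self.layers.values())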

Timing the following example

In [1]: import dask.array as da
   ...:
   ...: a = da.random.randint(0, 10, size=10_000, chunks=10)
   ...:
   ...: # Create more layers for the HighLevelGraph
   ...: for i in range(10):
   ...:     a = a + 1

on the current master branch yields

In [2]: %timeit -n 5 list(a.dask.values())
81.8 ms ± 4.44 ms per loop (mean ± std. dev. of 7 runs, 5 loops each)

while with the changes in this PR we see roughly a 15× speedup:

In [2]: %timeit -n 5 list(a.dask.values())
5.6 ms ± 2.68 ms per loop (mean ± std. dev. of 7 runs, 5 loops each)

  • Tests added / passed
  • Passes flake8 dask
@jcrist (Member) left a comment:

Overall this seems fine to me.


@@ -153,6 +153,9 @@ def items(self):
     def __iter__(self):
         return toolz.unique(toolz.concat(self.layers.values()))
 
+    def values(self):
+        return toolz.concat(layer.values() for layer in self.layers.values())
@jcrist (Member) commented Jun 11, 2019

This seems fine to me in practice, but it does create a structure different from a dict_values view, which means it has different properties. We should at least return a list instead of a generator, but I'm not sure if there's a better structure to use.
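
For example, with a plain dict (a quick illustration of the differing properties, not HighLevelGraph itself):

d = {"a": 1, "b": 2}

view = d.values()                  # dict_values view
assert 1 in view and 1 in view     # repeated membership tests work
assert len(view) == 2              # and so does len()

gen = iter(d.values())             # a bare iterator supports neither
assert 1 in gen                    # consumes the iterator up to the match
assert 1 not in gen                # a second test resumes after the match
# len(gen) would raise TypeError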


@jcrist (Member) commented Jun 11, 2019

We should probably also do this for keys and items?


@mrocklin (Member) commented Jun 11, 2019

Perhaps we can just implement items and have the others come for free?

Why a list rather than a generator?


@jcrist (Member) commented Jun 11, 2019

Because a generator doesn't fulfill the contract of values/keys/items (e.g. item in d.values()). That's why Python 2 also had itervalues/iterkeys/iteritems.


@jrbourbeau (Member, Author) commented Jun 11, 2019

@jcrist am I missing something? It looks like item in d.values() is fulfilled:

In [1]: import dask.array as da

In [2]: a = da.ones(10, chunks=2)

In [3]: hlg = a.dask

In [4]: list(hlg.values())[0] in hlg.values()
Out[4]: True


@jrbourbeau (Member, Author) commented Jun 11, 2019

@mrocklin to your point, the current items method already avoids HighLevelGraph.__getitem__(), so it could be used as-is to build the keys and values logic, e.g. something like

def values(self):
    for _, val in self.items():
        yield val
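
(Note this still returns a generator rather than a concrete container; the list-returning versions described below are what ultimately landed.)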


@jcrist (Member) commented Jun 12, 2019

Looks like membership tests do work on itertools.chain objects, but only by consuming the generator until a match is found, leading to poor performance and odd behavior (a second test resumes from where the first one stopped):

In [34]: res = toolz.concat([[1, 2, 3], [4], [5, 6, 7]])

In [35]: 1 in res
Out[35]: True

In [36]: 1 in res
Out[36]: False


@jrbourbeau (Member, Author) commented Jun 12, 2019

I updated items to return a concrete list instead of a generator, and implemented keys and values methods that build on the items method. At this point, I think the values/keys/items contract should be satisfied (except for laziness).
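
Roughly, the updated methods look like the following (a sketch based on the description above, not necessarily the exact code that landed):

def items(self):
    items = []
    seen = set()
    for layer in self.layers.values():
        for key, value in layer.items():
            if key not in seen:  # a key may appear in several layers
                seen.add(key)
                items.append((key, value))
    return items

def keys(self):
    return [key for key, _ in self.items()]

def values(self):
    return [value for _, value in self.items()]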

Here are benchmark results using the original example HLG:

On the current master branch:

In [2]: %timeit -n 10 list(a.dask.keys())
4.8 ms ± 1.31 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [3]: %timeit -n 10 list(a.dask.values())
78.2 ms ± 295 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [4]: %timeit -n 10 list(a.dask.items())
8.09 ms ± 300 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

With the changes in this PR:

In [2]: %timeit -n 10 a.dask.keys()
7.58 ms ± 1.41 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [3]: %timeit -n 10 a.dask.values()
7.32 ms ± 592 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [4]: %timeit -n 10 a.dask.items()
6.57 ms ± 328 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
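
(That's roughly a 10× speedup for values(), from ~78 ms down to ~7 ms; keys() gets somewhat slower and items() slightly faster, which is expected now that all three build concrete lists up front instead of returning lazy iterators.)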

@jcrist when you get a chance to take a look at this, any additional comments would be appreciated.


@jcrist (Member) commented Jun 12, 2019

Overall this looks good to me. Could you add a small test for keys and items (unless that's already covered somewhere else in the test file)?


@jcrist (Member) commented Jun 13, 2019

LGTM. Merging.


@jcrist merged commit 46aef58 into dask:master Jun 13, 2019
2 checks passed

@jrbourbeau deleted the hlg-values branch Jun 13, 2019
TomAugspurger added a commit to TomAugspurger/dask that referenced this issue Jun 13, 2019
commit 66531ba
Author: jakirkham <jakirkham@gmail.com>
Date:   Thu Jun 13 12:13:55 2019 -0400

    Drop size 0 arrays in concatenate (dask#4167)

    * Test `da.concatenate` with size 0 array

    Make sure that `da.concatenate` does not include empty arrays in the
    result as they don't contribute any data.

    * Drop size 0 arrays from `da.concatenate`

    If any of the arrays passed to `da.concatenate` has a size of 0, then it
    won't contribute anything to the array created by concatenation. As such
    make sure to drop any size 0 arrays from the sequence of arrays to
    concatenate before proceeding.

    * Handle dtype and all 0 size case

    * Cast inputs with asarray

    * Coerce all arrays to concatenate to the same type

    * Drop obsoleted type handling code

    * Comment on why arrays are being dropped

    * Use `np.promote_types` for parity w/old behavior

    * Handle endianness during type promotion

    * Construct empty array of right type

    Avoids the need to cast later and the addition of another node to the
    graph.

    * Promote types in `concatenate` using `_meta`

    There was some left over type promotion code for the arrays to
    concatenate using their `dtype`s. However this should now use the
    `_meta` information instead since that is available.

    * Ensure `concatenate` is working on Dask Arrays

    * Raise `ValueError` if `concatenate` gets no arrays

    NumPy will raise if no arrays are provided to concatenate as it is
    unclear what to do. This adds a similar exception for Dask Arrays. Also
    this short circuits handling unusual cases later. Plus raises a clearer
    exception than one might see if this weren't raised.

    * Test `concatenate` raises when no arrays are given

    * Determine the concatenated array's shape

    Needed to handle the case where all arrays have trivial shapes.

    * Handle special sequence cases together

    * Update dask/array/core.py

    Co-Authored-By: James Bourbeau <jrbourbeau@users.noreply.github.com>

    * Drop outdated comment

    * Assume valid `_meta` in `concatenate`

    Simplifies the `_meta` handling logic in `concatenate` to assume that
    `_meta` is valid. As all arguments have been coerced to Dask Arrays,
    this is a reasonable assumption to make.

commit 46aef58
Author: James Bourbeau <jrbourbeau@users.noreply.github.com>
Date:   Thu Jun 13 11:04:47 2019 -0500

    Overload HLG values method (dask#4918)

    * Overload HLG values method

    * Return lists for keys, values, and items

    * Add tests for keys and items

commit f9cd802
Author: mcsoini <33124051+mcsoini@users.noreply.github.com>
Date:   Thu Jun 13 18:03:55 2019 +0200

    Merge dtype warning (dask#4917)

    * add test covering the merge column dtype mismatch warning

    * for various merge types: checks that the resulting dataframe
      has either no nans or that a UserWarning has been thrown

    * Add warning for mismatches between column data types

    * fixes issue dask#4574
    * Warning is thrown if the on-columns of left and right have
      different dtypes

    * flake8 fixes

    * fixes

    * use asciitable for warning string

commit c400691
Author: Hugo <hugovk@users.noreply.github.com>
Date:   Thu Jun 13 17:38:37 2019 +0300

    Docs: Drop support for Python 2.7 (dask#4932)

commit 985cdf2
Author: Benjamin Zaitlen <quasiben@users.noreply.github.com>
Date:   Thu Jun 13 10:38:15 2019 -0400

    Groupby Covariance/Correlation (dask#4889)

commit 6e8c1b7
Author: Jim Crist <jcrist@users.noreply.github.com>
Date:   Wed Jun 12 15:55:11 2019 -0500

    Drop Python 2.7 (dask#4919)

    * Drop Python 2.7

    Drops Python 2.7 from our `setup.py`, and from our test matrix. We don't
    drop any of the compatibility fixes (yet), but won't be adding new ones.

    * fixup

commit 7a9cfaf
Author: Ian Bolliger <bolliger32@gmail.com>
Date:   Wed Jun 12 11:44:26 2019 -0700

    keep index name with to_datetime (dask#4905)

    * keep index name with to_datetime

    * allow users to pass meta

    * Update dask/dataframe/core.py

    put meta as explicit kwarg

    Co-Authored-By: Matthew Rocklin <mrocklin@gmail.com>

    * Update dask/dataframe/core.py

    remove meta kwargs.pop

    Co-Authored-By: Matthew Rocklin <mrocklin@gmail.com>

    * remove test for index

    * allow index

commit abc86d3
Author: jakirkham <jakirkham@gmail.com>
Date:   Wed Jun 12 14:20:59 2019 -0400

    Raise ValueError if concatenate is given no arrays (dask#4927)

    * Raise `ValueError` if `concatenate` gets no arrays

    NumPy will raise if no arrays are provided to concatenate as it is
    unclear what to do. This adds a similar exception for Dask Arrays. Also
    this short circuits handling unusual cases later. Plus raises a clearer
    exception than one might see if this weren't raised.

    * Test `concatenate` raises when no arrays are given

commit ce2f866
Author: jakirkham <jakirkham@gmail.com>
Date:   Wed Jun 12 14:09:35 2019 -0400

    Promote types in `concatenate` using `_meta` (dask#4925)

    * Promote types in `concatenate` using `_meta`

    There was some left over type promotion code for the arrays to
    concatenate using their `dtype`s. However this should now use the
    `_meta` information instead since that is available.

    * Ensure `concatenate` is working on Dask Arrays
TomAugspurger added a commit to TomAugspurger/dask that referenced this issue Jun 17, 2019
commit 255cc5b
Author: Justin Waugh <justin@unsupervised.com>
Date:   Mon Jun 17 08:18:26 2019 -0600

    Map Dask Series to Dask Series (dask#4872)

    * index-test needed fix

    * single-partition-error

    * added code to make it work

    * add tests

    * delete some comments

    * remove seed set

    * updated tests

    * remove sort_index and add tests

commit f7d73f8
Author: Matthew Rocklin <mrocklin@gmail.com>
Date:   Mon Jun 17 15:22:35 2019 +0200

    Further relax Array meta checks for Xarray (dask#4944)

    Our checks in slicing were causing issues for Xarray, which has some
    unsliceable array types.  Additionally, this centralizes a bit of logic
    from blockwise into meta_from_array

    * simplify slicing meta code with meta_from_array

commit 4f97be6
Author: Peter Andreas Entschev <peter@entschev.com>
Date:   Mon Jun 17 15:21:15 2019 +0200

    Expand *_like_safe usage (dask#4946)

commit abe9e28
Author: Peter Andreas Entschev <peter@entschev.com>
Date:   Mon Jun 17 15:19:24 2019 +0200

    Defer order/casting einsum parameters to NumPy implementation (dask#4914)

commit 76f55fd
Author: Matthew Rocklin <mrocklin@gmail.com>
Date:   Mon Jun 17 09:28:07 2019 +0200

    Remove numpy warning in moment calculation (dask#4921)

    Previously we would divide by 0 in meta calculations for dask array
    moments, which would raise a Numpy RuntimeWarning to users.

    Now we avoid that situation, though we may also want to investigate a
    more thorough solution.

commit c437e63
Author: Matthew Rocklin <mrocklin@gmail.com>
Date:   Sun Jun 16 10:42:16 2019 +0200

    Fix meta_from_array to support Xarray test suite (dask#4938)

    Fixes pydata/xarray#3009

commit d8ff4c4
Author: jakirkham <jakirkham@gmail.com>
Date:   Fri Jun 14 10:35:00 2019 -0400

    Add a diagnostics extra (includes bokeh) (dask#4924)

    * Add a diagnostics extra (includes bokeh)

    * Bump bokeh minimum to 0.13.0

    * Add to `test_imports`

commit 773f775
Author: btw08 <45699730+btw08@users.noreply.github.com>
Date:   Fri Jun 14 14:34:34 2019 +0000

    4809 fix extra cr (dask#4935)

    * added test that fails to demonstrate the issue in 4809

    * modified open_files/OpenFile to accept a newline parameter, similar to io.TextIOWrapper or the builtin open on py3. Pass newline='' to open_files when preparing to write csv files.

    Fixed dask#4809

    * modified newline documentation to follow convention

    * added blank line to make test_csv.py flake8-compliant

commit 419d27e
Author: Peter Andreas Entschev <peter@entschev.com>
Date:   Fri Jun 14 15:18:42 2019 +0200

    Minor meta construction cleanup in concatenate (dask#4937)

commit 1f821f4
Author: Bruce Merry <bmerry@ska.ac.za>
Date:   Fri Jun 14 12:49:59 2019 +0200

    Cache chunk boundaries for integer slicing (dask#4923)

    This is an alternative to dask#4909, to implement dask#4867.

    Instead of caching in the class as in dask#4909, use functools.lru_cache.
    This unfortunately has a fixed cache size rather than a cache entry
    stored with each array, but simplifies the code as it is not necessary
    to pass the cached value from the Array class down through the call tree
    to the point of use.

    A quick benchmark shows that the result for indexing a single value from
    a large array is similar to that from dask#4909, i.e., around 10x faster for
    constructing the graph.

    This only applies the cache in `_slice_1d`, so should be considered a
    proof-of-concept.

    * Move cached_cumsum to dask/array/slicing.py

    It can't go in dask/utils.py because the top level is not supposed to
    depend on numpy.

    * cached_cumsum: index cache by both id and hash

    The underlying _cumsum is first called with _HashIdWrapper, which will
    hit (very cheaply) if we've seen this tuple object before. If not, it
    will call itself again without the wrapper, which will hit (but at a
    higher cost for tuple.__hash__) if we've seen the same value before but
    in a different tuple object.

    * Apply cached_cumsum in more places

Labels: none · Projects: none · Linked issues: none
3 participants