Fix rechunk with chunksize of -1 in a dict #3469
Conversation
@@ -195,8 +195,9 @@ def blockshape_dict_to_tuple(old_chunks, d):
     shape = tuple(map(sum, old_chunks))
     new_chunks = list(old_chunks)
     for k, v in d.items():
-        div = shape[k] // v
-        mod = shape[k] % v
+        if v == -1:
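The hunk above special-cases `-1` before computing the division and remainder. A minimal self-contained sketch of the helper with that fix applied (the chunk-layout arithmetic here is an illustration, not a verbatim copy of dask's source):

```python
def blockshape_dict_to_tuple(old_chunks, d):
    # old_chunks: tuple of per-axis chunk tuples, e.g. ((2, 2), (3, 3))
    # d: maps axis index -> desired uniform chunk size for that axis
    shape = tuple(map(sum, old_chunks))
    new_chunks = list(old_chunks)
    for k, v in d.items():
        if v == -1:
            v = shape[k]  # -1 means: one chunk spanning the whole axis
        div = shape[k] // v
        mod = shape[k] % v
        # div full-size chunks, plus a remainder chunk if the axis
        # length is not an exact multiple of v
        new_chunks[k] = (v,) * div + ((mod,) if mod else ())
    return tuple(new_chunks)

print(blockshape_dict_to_tuple(((2, 2), (3, 3)), {1: -1}))  # ((2, 2), (6,))
```

Without the `v == -1` branch, `shape[k] // -1` produces a negative chunk count, which is why rechunking with `{axis: -1}` previously failed.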
Are there other values that we want to use here, like None? Or does that mean don't change anything? Also, do you have the time to add this behavior to the docstring?
Updated the docstrings.
We didn't discuss any alternatives when I added this back in #2689, but in pydata/xarray#2103 @crusaderky suggests supporting np.inf. That seems pretty reasonable to me, too.
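To make the semantics under discussion concrete, here is a hypothetical normalizer (the name `normalize_chunk_request` and the `None` behavior are assumptions for illustration, not dask's API): `-1`, and per the suggestion above `np.inf`, would both mean "one chunk over the full axis", while `None` would leave the axis unchanged.

```python
import math

def normalize_chunk_request(v, axis_len, current):
    # current: the axis's existing chunk size, kept when v is None
    if v is None:
        return current            # None: don't change this axis
    if v == -1 or v == math.inf:  # -1 / inf: single full-axis chunk
        return axis_len
    return v                      # otherwise use the requested size

print(normalize_chunk_request(-1, 10, 5))  # 10
```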
Thanks for the doctest change. I should have been more clear though. It would be useful to update the docstring of user-facing
Not necessary for the PR (this is an improvement as it stands) but it would be helpful.
Indeed -- I updated the docstring for the rechunk() function but not the equivalent method.
Thanks @shoyer !
* Fix rechunk with chunksize of -1 in a dict
* Docstring updates

(cherry picked from commit 4756b59)
* Fix rechunk with chunksize of -1 in a dict (#3469): fix rechunk with chunksize of -1 in a dict; docstring updates (cherry picked from commit 4756b59)
* Faster slice_1d in dask.array (#3479): uses numpy and binary search to accelerate slicing performance when there are many chunks along a dimension; see also benchmarks in dask/dask-benchmarks#15 (cherry picked from commit 7c41958)
* einsum split_every parameter (#3472): da.einsum split_every support; check for invalid einsum parameters (cherry picked from commit 5826ae0)
* Pandas 0.23.0 compat (#3499): compatibility for result_type in apply; avoid deprecated Index.summary; avoid ambiguous column warning; catch deprecation warning; catch indexing warnings; handle rolling warnings; changelog fix (cherry picked from commit 390fc14)
* Updated changelog
* Removed distributed master install
* RLS: 0.17.5
flake8 dask
This should fix using a chunk-size of -1 via xarray's .chunk() method (pydata/xarray#2103).