The Canvas.raster method up- or downsamples xarray DataArrays, which may themselves wrap a NumPy or dask array. Since dask arrays are often larger than fits in memory at once, it would be great if the resampling could be applied in a distributed, out-of-core manner. This requires loading the chunks one by one, up- or downsampling each, and then stitching them back together. Technically it should be possible to do this with the new xr.apply_ufunc helper, but there are details I haven't fully thought through. For example, during downsampling I believe the chunks need some overlap, so that the aggregation at the edge of one chunk can reach into the neighboring chunk; a similar solution may be needed for correct interpolation during upsampling.
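As a starting point, xr.apply_ufunc with dask="parallelized" maps a function over each chunk of a dask-backed DataArray independently. A minimal sketch (hypothetical, not datashader code; elementwise only, so it sidesteps the overlap and output-size questions a real resampler would face):

```python
# A minimal sketch: apply a function chunk by chunk over a dask-backed
# DataArray with xr.apply_ufunc (elementwise, so shapes are unchanged).
import numpy as np
import xarray as xr

arr = np.arange(16.0).reshape(4, 4)
data = xr.DataArray(arr, dims=("y", "x")).chunk({"y": 2, "x": 2})

# dask="parallelized" wraps the lambda so it runs lazily on each chunk;
# output_dtypes tells xarray the result dtype without computing anything.
doubled = xr.apply_ufunc(
    lambda block: block * 2,
    data,
    dask="parallelized",
    output_dtypes=[float],
)
out = doubled.compute()  # chunks processed independently, then reassembled
```

A real resampler changes the output shape, which with apply_ufunc means declaring core dimensions and output sizes (in current xarray versions, via dask_gufunc_kwargs); that is exactly where the open questions above come in.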
This is a significant chunk of work, but it is a well-defined task and would be very useful for very large arrays.
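The overlap requirement described above is roughly what dask.array.map_overlap provides: each chunk is handed a halo of data from its neighbors before the per-chunk function runs, and the halo is trimmed afterwards. A minimal sketch, using a 3x3 mean filter as an illustrative stand-in aggregation (not Canvas.raster's actual resampling logic):

```python
# A minimal sketch of the chunk-overlap idea with dask.array.map_overlap.
# The 3x3 mean filter stands in for an aggregation whose window can cross
# chunk boundaries.
import numpy as np
import dask.array as da
from scipy.ndimage import uniform_filter

arr = np.arange(64.0).reshape(8, 8)
x = da.from_array(arr, chunks=(4, 4))

# depth=1 gives each chunk a one-pixel halo copied from its neighbors, so
# a 3x3 window at a chunk edge sees the adjacent chunk's data; the halo
# is trimmed from the output automatically. boundary="reflect" pads the
# global edges, where no neighboring chunk exists.
smoothed = x.map_overlap(
    lambda block: uniform_filter(block, size=3),
    depth=1,
    boundary="reflect",
    dtype=float,
)
result = smoothed.compute()  # evaluated chunk by chunk, out of core
```

Away from the global boundary, the result matches filtering the whole array in one piece, which is the correctness property the stitched-together resampling would need.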