forked from pydata/xarray

Merge remote-tracking branch 'upstream/master' into map-blocks-schema
* upstream/master: (39 commits)
  Pint support for DataArray (pydata#3643)
  Apply blackdoc to the documentation (pydata#4012)
  ensure Variable._repr_html_ works (pydata#3973)
  Fix handling of abbreviated units like msec (pydata#3998)
  full_like: error on non-scalar fill_value (pydata#3979)
  Fix some code quality and bug-risk issues (pydata#3999)
  DOC: add pandas.DataFrame.to_xarray (pydata#3994)
  Better chunking error messages for zarr backend (pydata#3983)
  Silence sphinx warnings (pydata#3990)
  Fix distributed tests on upstream-dev (pydata#3989)
  Add multi-dimensional extrapolation example and mention different behavior of kwargs in interp (pydata#3956)
  keep attrs in interpolate_na (pydata#3970)
  actually use preformatted text in the details summary (pydata#3978)
  facetgrid: Ensure that colormap params are only determined once. (pydata#3915)
  RasterioDeprecationWarning (pydata#3964)
  Empty line missing for DataArray.assign_coords doc (pydata#3963)
  New coords to existing dim (doc) (pydata#3958)
  implement a more threadsafe call to colorbar (pydata#3944)
  Fix wrong order of coordinate converted from pd.series with MultiIndex (pydata#3953)
  Updated list of core developers (pydata#3943)
  ...
dcherian committed Apr 30, 2020
2 parents 66fe4c4 + 3820fb7 commit 085ce9a
Showing 63 changed files with 3,112 additions and 1,199 deletions.
18 changes: 18 additions & 0 deletions .deepsource.toml
@@ -0,0 +1,18 @@
version = 1

test_patterns = [
"*/tests/**",
"*/test_*.py"
]

exclude_patterns = [
"doc/**",
"ci/**"
]

[[analyzers]]
name = "python"
enabled = true

[analyzers.meta]
runtime_version = "3.x.x"
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.md
@@ -27,7 +27,7 @@ assignees: ''

#### Versions

-<details><summary>Output of `xr.show_versions()`</summary>
+<details><summary>Output of <tt>xr.show_versions()</tt></summary>

<!-- Paste the output of xr.show_versions() here -->
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -1,6 +1,6 @@
<!-- Feel free to remove check-list items that aren't relevant to your change -->

-- [ ] Fixes #xxxx
+- [ ] Closes #xxxx
- [ ] Tests added
- [ ] Passes `isort -rc . && black . && mypy . && flake8`
- [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
5 changes: 3 additions & 2 deletions .pre-commit-config.yaml
@@ -5,13 +5,14 @@ repos:
    rev: 4.3.21-2
    hooks:
      - id: isort
+       files: .+\.py$
  # https://github.com/python/black#version-control-integration
  - repo: https://github.com/python/black
    rev: stable
    hooks:
      - id: black
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v2.2.3
+  - repo: https://gitlab.com/pycqa/flake8
+    rev: 3.7.9
    hooks:
      - id: flake8
  - repo: https://github.com/pre-commit/mirrors-mypy
21 changes: 11 additions & 10 deletions azure-pipelines.yml
@@ -20,6 +20,8 @@ jobs:
        conda_env: py37
      py38:
        conda_env: py38
+     py38-all-but-dask:
+       conda_env: py38-all-but-dask
      py38-upstream-dev:
        conda_env: py38
        upstream_dev: true
@@ -32,16 +34,15 @@
  steps:
  - template: ci/azure/unit-tests.yml

-# excluded while waiting for https://github.com/conda-forge/libwebp-feedstock/issues/26
-# - job: MacOSX
-#   strategy:
-#     matrix:
-#       py38:
-#         conda_env: py38
-#   pool:
-#     vmImage: 'macOS-10.15'
-#   steps:
-#   - template: ci/azure/unit-tests.yml
+- job: MacOSX
+  strategy:
+    matrix:
+      py38:
+        conda_env: py38
+  pool:
+    vmImage: 'macOS-10.15'
+  steps:
+  - template: ci/azure/unit-tests.yml

- job: Windows
  strategy:
44 changes: 44 additions & 0 deletions ci/requirements/py38-all-but-dask.yml
@@ -0,0 +1,44 @@
name: xarray-tests
channels:
  - conda-forge
dependencies:
  - python=3.8
  - black
  - boto3
  - bottleneck
  - cartopy
  - cdms2
  - cfgrib
  - cftime
  - coveralls
  - flake8
  - h5netcdf
  - h5py
  - hdf5
  - hypothesis
  - isort
  - lxml # Optional dep of pydap
  - matplotlib
  - mypy=0.761 # Must match .pre-commit-config.yaml
  - nc-time-axis
  - netcdf4
  - numba
  - numpy
  - pandas
  - pint
  - pip
  - pseudonetcdf
  - pydap
  - pynio
  - pytest
  - pytest-cov
  - pytest-env
  - rasterio
  - scipy
  - seaborn
  - setuptools
  - sparse
  - toolz
  - zarr
  - pip:
      - numbagg
4 changes: 4 additions & 0 deletions doc/api-hidden.rst
@@ -18,6 +18,8 @@
   Dataset.any
   Dataset.argmax
   Dataset.argmin
+  Dataset.idxmax
+  Dataset.idxmin
   Dataset.max
   Dataset.min
   Dataset.mean
@@ -160,6 +162,8 @@
   DataArray.any
   DataArray.argmax
   DataArray.argmin
+  DataArray.idxmax
+  DataArray.idxmin
   DataArray.max
   DataArray.min
   DataArray.mean
4 changes: 4 additions & 0 deletions doc/api.rst
@@ -181,6 +181,8 @@ Computation
   :py:attr:`~Dataset.any`
   :py:attr:`~Dataset.argmax`
   :py:attr:`~Dataset.argmin`
+  :py:attr:`~Dataset.idxmax`
+  :py:attr:`~Dataset.idxmin`
   :py:attr:`~Dataset.max`
   :py:attr:`~Dataset.mean`
   :py:attr:`~Dataset.median`
@@ -365,6 +367,8 @@ Computation
   :py:attr:`~DataArray.any`
   :py:attr:`~DataArray.argmax`
   :py:attr:`~DataArray.argmin`
+  :py:attr:`~DataArray.idxmax`
+  :py:attr:`~DataArray.idxmin`
   :py:attr:`~DataArray.max`
   :py:attr:`~DataArray.mean`
   :py:attr:`~DataArray.median`
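
``idxmax`` and ``idxmin`` are the entries added here; a quick sketch of what they return, as the label-based counterparts of ``argmax``/``argmin`` (the array is illustrative):

.. ipython:: python

    import xarray as xr

    da = xr.DataArray([0, 2, 1], coords={"x": ["a", "b", "c"]}, dims="x")
    # argmax returns the integer position, idxmax the coordinate label
    da.argmax().item(), da.idxmax().item()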
62 changes: 32 additions & 30 deletions doc/combining.rst
@@ -4,11 +4,12 @@ Combining data
--------------

.. ipython:: python
-   :suppress:
+    :suppress:

    import numpy as np
    import pandas as pd
    import xarray as xr
+
    np.random.seed(123456)

* For combining datasets or data arrays along a single dimension, see concatenate_.
@@ -28,11 +29,10 @@ that dimension:

.. ipython:: python

-    arr = xr.DataArray(np.random.randn(2, 3),
-                       [('x', ['a', 'b']), ('y', [10, 20, 30])])
+    arr = xr.DataArray(np.random.randn(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])])
    arr[:, :1]
    # this resembles how you would use np.concatenate
-    xr.concat([arr[:, :1], arr[:, 1:]], dim='y')
+    xr.concat([arr[:, :1], arr[:, 1:]], dim="y")
In addition to combining along an existing dimension, ``concat`` can create a
new dimension by stacking lower dimensional arrays together:
@@ -41,30 +41,30 @@

    arr[0]
    # to combine these 1d arrays into a 2d array in numpy, you would use np.array
-    xr.concat([arr[0], arr[1]], 'x')
+    xr.concat([arr[0], arr[1]], "x")

If the second argument to ``concat`` is a new dimension name, the arrays will
be concatenated along that new dimension, which is always inserted as the first
dimension:

.. ipython:: python

-    xr.concat([arr[0], arr[1]], 'new_dim')
+    xr.concat([arr[0], arr[1]], "new_dim")

The second argument to ``concat`` can also be an :py:class:`~pandas.Index` or
:py:class:`~xarray.DataArray` object as well as a string, in which case it is
used to label the values along the new dimension:

.. ipython:: python

-    xr.concat([arr[0], arr[1]], pd.Index([-90, -100], name='new_dim'))
+    xr.concat([arr[0], arr[1]], pd.Index([-90, -100], name="new_dim"))

Of course, ``concat`` also works on ``Dataset`` objects:

.. ipython:: python

-    ds = arr.to_dataset(name='foo')
-    xr.concat([ds.sel(x='a'), ds.sel(x='b')], 'x')
+    ds = arr.to_dataset(name="foo")
+    xr.concat([ds.sel(x="a"), ds.sel(x="b")], "x")
:py:func:`~xarray.concat` has a number of options which provide deeper control
over which variables are concatenated and how it handles conflicting variables
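
For instance, a minimal sketch of one of those options, ``data_vars`` (part of :py:func:`~xarray.concat`'s documented signature; the toy datasets ``ds_a`` and ``ds_b`` are illustrative):

.. ipython:: python

    ds_a = xr.Dataset({"foo": ("x", [1, 2]), "const": 42}, {"x": [0, 1]})
    ds_b = xr.Dataset({"foo": ("x", [3, 4]), "const": 42}, {"x": [2, 3]})
    # data_vars="minimal" concatenates only the variables that already
    # contain the concat dimension; "const" is checked for equality and
    # carried through unchanged instead of being broadcast along "x"
    xr.concat([ds_a, ds_b], dim="x", data_vars="minimal")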
@@ -84,16 +84,16 @@ To combine variables and coordinates between multiple ``DataArray`` and/or

.. ipython:: python

-    xr.merge([ds, ds.rename({'foo': 'bar'})])
-    xr.merge([xr.DataArray(n, name='var%d' % n) for n in range(5)])
+    xr.merge([ds, ds.rename({"foo": "bar"})])
+    xr.merge([xr.DataArray(n, name="var%d" % n) for n in range(5)])

If you merge another dataset (or a dictionary including data array objects), by
default the resulting dataset will be aligned on the **union** of all index
coordinates:

.. ipython:: python

-    other = xr.Dataset({'bar': ('x', [1, 2, 3, 4]), 'x': list('abcd')})
+    other = xr.Dataset({"bar": ("x", [1, 2, 3, 4]), "x": list("abcd")})
    xr.merge([ds, other])
This ensures that ``merge`` is non-destructive. ``xarray.MergeError`` is raised
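
That refusal can be sketched as follows (using ``merge``'s default ``compat="no_conflicts"``; the scalar datasets are illustrative):

.. ipython:: python
    :okexcept:

    # the same variable with two different, non-missing values cannot be
    # merged without discarding one of them, so xarray raises MergeError
    xr.merge([xr.Dataset({"a": 0}), xr.Dataset({"a": 1})])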
@@ -116,7 +116,7 @@ used in the :py:class:`~xarray.Dataset` constructor:

.. ipython:: python

-    xr.Dataset({'a': arr[:-1], 'b': arr[1:]})
+    xr.Dataset({"a": arr[:-1], "b": arr[1:]})

.. _combine:

@@ -131,8 +131,8 @@ are filled with ``NaN``. For example:

.. ipython:: python

-    ar0 = xr.DataArray([[0, 0], [0, 0]], [('x', ['a', 'b']), ('y', [-1, 0])])
-    ar1 = xr.DataArray([[1, 1], [1, 1]], [('x', ['b', 'c']), ('y', [0, 1])])
+    ar0 = xr.DataArray([[0, 0], [0, 0]], [("x", ["a", "b"]), ("y", [-1, 0])])
+    ar1 = xr.DataArray([[1, 1], [1, 1]], [("x", ["b", "c"]), ("y", [0, 1])])
    ar0.combine_first(ar1)
    ar1.combine_first(ar0)
@@ -152,7 +152,7 @@ variables with new values:

.. ipython:: python

-    ds.update({'space': ('space', [10.2, 9.4, 3.9])})
+    ds.update({"space": ("space", [10.2, 9.4, 3.9])})
However, dimensions are still required to be consistent between different
Dataset variables, so you cannot change the size of a dimension unless you
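
Concretely, a minimal sketch of that size-consistency constraint (the dataset ``ds_u`` here is illustrative):

.. ipython:: python
    :okexcept:

    ds_u = xr.Dataset({"a": ("x", [1, 2])})
    # "x" already has size 2 via "a", so a size-3 variable conflicts
    ds_u.update({"b": ("x", [1, 2, 3])})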
@@ -170,7 +170,7 @@ syntax:

.. ipython:: python

-    ds['baz'] = xr.DataArray([9, 9, 9, 9, 9], coords=[('x', list('abcde'))])
+    ds["baz"] = xr.DataArray([9, 9, 9, 9, 9], coords=[("x", list("abcde"))])
    ds.baz
Equals and identical
@@ -193,16 +193,16 @@ object:

.. ipython:: python

-    arr.identical(arr.rename('bar'))
+    arr.identical(arr.rename("bar"))

:py:attr:`~xarray.Dataset.broadcast_equals` does a more relaxed form of equality
check that allows variables to have different dimensions, as long as values
are constant along those new dimensions:

.. ipython:: python

-    left = xr.Dataset(coords={'x': 0})
-    right = xr.Dataset({'x': [0, 0, 0]})
+    left = xr.Dataset(coords={"x": 0})
+    right = xr.Dataset({"x": [0, 0, 0]})
    left.broadcast_equals(right)
Like pandas objects, two xarray objects are still equal or identical if they have
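
A minimal sketch of the NaN-aware comparison being described here (the array is illustrative):

.. ipython:: python

    da = xr.DataArray([1.0, np.nan], dims="x")
    # element-wise ``==`` treats NaN as unequal, but ``equals`` considers
    # NaN in matching positions to be the same
    da.equals(da.copy())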
@@ -231,9 +231,9 @@ coordinates as long as any non-missing values agree or are disjoint:

.. ipython:: python

-    ds1 = xr.Dataset({'a': ('x', [10, 20, 30, np.nan])}, {'x': [1, 2, 3, 4]})
-    ds2 = xr.Dataset({'a': ('x', [np.nan, 30, 40, 50])}, {'x': [2, 3, 4, 5]})
-    xr.merge([ds1, ds2], compat='no_conflicts')
+    ds1 = xr.Dataset({"a": ("x", [10, 20, 30, np.nan])}, {"x": [1, 2, 3, 4]})
+    ds2 = xr.Dataset({"a": ("x", [np.nan, 30, 40, 50])}, {"x": [2, 3, 4, 5]})
+    xr.merge([ds1, ds2], compat="no_conflicts")
Note that due to the underlying representation of missing values as floating
point numbers (``NaN``), variable data type is not always preserved when merging
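
For instance, a small sketch of that dtype promotion (the integer datasets are illustrative):

.. ipython:: python

    int1 = xr.Dataset({"b": ("x", [1, 2])}, {"x": [1, 2]})
    int2 = xr.Dataset({"b": ("x", [3, 4])}, {"x": [3, 4]})
    # aligning on the union of "x" inserts NaN fills, so the merged
    # integer variable comes back as float64
    xr.merge([int1, int2])["b"].dtype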
@@ -273,10 +273,12 @@ datasets into a doubly-nested list, e.g:

.. ipython:: python

-    arr = xr.DataArray(name='temperature', data=np.random.randint(5, size=(2, 2)), dims=['x', 'y'])
+    arr = xr.DataArray(
+        name="temperature", data=np.random.randint(5, size=(2, 2)), dims=["x", "y"]
+    )
    arr
    ds_grid = [[arr, arr], [arr, arr]]
-    xr.combine_nested(ds_grid, concat_dim=['x', 'y'])
+    xr.combine_nested(ds_grid, concat_dim=["x", "y"])
:py:func:`~xarray.combine_nested` can also be used to explicitly merge datasets
with different variables. For example if we have 4 datasets, which are divided
@@ -286,10 +288,10 @@ we wish to use ``merge`` instead of ``concat``:

.. ipython:: python

-    temp = xr.DataArray(name='temperature', data=np.random.randn(2), dims=['t'])
-    precip = xr.DataArray(name='precipitation', data=np.random.randn(2), dims=['t'])
+    temp = xr.DataArray(name="temperature", data=np.random.randn(2), dims=["t"])
+    precip = xr.DataArray(name="precipitation", data=np.random.randn(2), dims=["t"])
    ds_grid = [[temp, precip], [temp, precip]]
-    xr.combine_nested(ds_grid, concat_dim=['t', None])
+    xr.combine_nested(ds_grid, concat_dim=["t", None])
:py:func:`~xarray.combine_by_coords` is for combining objects which have dimension
coordinates which specify their relationship to and order relative to one
Expand All @@ -302,8 +304,8 @@ coordinates, not on their position in the list passed to ``combine_by_coords``.
.. ipython:: python
    :okwarning:

-    x1 = xr.DataArray(name='foo', data=np.random.randn(3), coords=[('x', [0, 1, 2])])
-    x2 = xr.DataArray(name='foo', data=np.random.randn(3), coords=[('x', [3, 4, 5])])
+    x1 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [0, 1, 2])])
+    x2 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [3, 4, 5])])
    xr.combine_by_coords([x2, x1])
These functions can be used by :py:func:`~xarray.open_mfdataset` to open many
