Bump to 0.3.1 (#321)
* Bumps the package to 0.3.1
* Updates the dependencies in the conda env yml files
* Updates the pre-commit hook versions
* Addresses `mypy` warnings
  * Adds `type: ignore` inline comments to silence mypy warnings related to xarray; these can be addressed in the future (see the sketch after this list)
  * Removes unused `type: ignore` comments based on the latest version of mypy
* Updates the changelog with 0.3.1 and uses contributors' names with links to their GitHub user pages
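For reference, a minimal sketch (not part of this commit) of how mypy surfaces the stale suppressions mentioned above; the file and the call are hypothetical, while `--warn-unused-ignores` is a standard mypy flag:

    # demo.py -- hypothetical module; check with `mypy --warn-unused-ignores demo.py`.
    # When the call type-checks cleanly, mypy reports the suppression itself:
    #     error: Unused "type: ignore" comment
    import xarray as xr

    ds = xr.merge([xr.Dataset(), xr.Dataset()])  # type: ignore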
tomvothecoder committed Aug 18, 2022
1 parent d3511aa commit e38fe72
Showing 14 changed files with 218 additions and 111 deletions.
10 changes: 5 additions & 5 deletions .pre-commit-config.yaml
@@ -4,14 +4,14 @@ fail_fast: true

 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.1.0
+    rev: v4.3.0
     hooks:
       - id: trailing-whitespace
       - id: end-of-file-fixer
       - id: check-yaml

   - repo: https://github.com/psf/black
-    rev: 22.3.0
+    rev: 22.6.0
     hooks:
       - id: black
@@ -30,15 +30,15 @@ repos:
       additional_dependencies: [flake8-isort]

   - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v0.961
+    rev: v0.971
     hooks:
       - id: mypy
         args: ["--config=setup.cfg"]
         additional_dependencies:
           [
-            dask==2022.6.1,
+            dask==2022.7.1,
             numpy==1.22.4,
             pandas==1.4.3,
-            xarray==2022.3.0,
+            xarray==2022.6.0,
             types-python-dateutil==2.8.19,
           ]
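For context, rev bumps like the ones above are typically generated with `pre-commit autoupdate` and then verified with `pre-commit run --all-files`; whether that workflow was used here is an assumption.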
204 changes: 149 additions & 55 deletions HISTORY.rst

Large diffs are not rendered by default.

4 changes: 3 additions & 1 deletion README.rst
@@ -42,7 +42,9 @@

 xCDAT is an extension of `xarray`_ for climate data analysis on structured grids. It serves as a spiritual successor to the Community Data Analysis Tools (`CDAT`_) library.

-The goal of xCDAT is to provide climate domain features and general utilities in xarray, which includes porting some core CDAT functionalities. xCDAT leverages several powerful libraries in the xarray ecosystem (e.g., `xESMF`_ and `cf_xarray`_) to deliver robust APIs. The xCDAT core team is aiming to provide a maintainable and extensible package that serves the needs of the climate community in the long-term.
+The goal of xCDAT is to provide generalizable climate domain features and general utilities in xarray, which includes porting some core CDAT functionalities. xCDAT leverages several powerful libraries in the xarray ecosystem (e.g., `xESMF`_ and `cf_xarray`_) to deliver robust APIs. The xCDAT core team is aiming to provide a maintainable and extensible package that serves the needs of the climate community in the long-term.
+
+A major design philosophy of xCDAT is streamlining the user experience while developing code to analyze climate data. This means reducing the complexity and number of lines required to achieve certain features with xarray.

 .. _xarray: https://github.com/pydata/xarray
 .. _CDAT: https://github.com/CDAT/cdat
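The "fewer lines" claim in the paragraph added above is concrete in the accessor APIs. A minimal sketch of assumed usage against the xcdat 0.3.x public API (the file path and variable name are hypothetical):

    import xcdat

    # Opens the file, decodes CF or non-CF time, and adds missing bounds.
    ds = xcdat.open_dataset("tas_Amon_example.nc")

    # Weighted seasonal group averages in one call; with plain xarray this
    # needs hand-built time-length weights plus groupby bookkeeping.
    ds_season = ds.temporal.group_average("tas", freq="season")

    # Area-weighted spatial average over the latitude and longitude axes.
    ds_global = ds.spatial.average("tas", axis=["X", "Y"])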
1 change: 1 addition & 0 deletions conda-env/ci.yml
@@ -4,6 +4,7 @@ channels:
   - conda-forge
   - defaults
 dependencies:
+  # ==================
   # Base
   # ==================
   - python >=3.8
41 changes: 23 additions & 18 deletions conda-env/dev.yml
@@ -1,50 +1,55 @@
 # Conda xcdat development environment
 name: xcdat_dev
 channels:
   - conda-forge
   - defaults
 dependencies:
   # ==================
   # Base
   # ==================
   # NOTE: If versions are updated, also `additional_dependencies` list for mypy in `.pre-commit-config.yaml`
   - python=3.9.13 # TODO: Update to >=3.10 once sphinxcontrib-napoleon supports it.
-  - pip=22.1.2
-  - cf_xarray=0.7.2
-  - cftime=1.6.0
-  - dask=2022.6.1
+  - pip=22.2.2
+  - cf_xarray=0.7.4
+  - cftime=1.6.1
+  - dask=2022.8.0
   - esmpy=8.2.0
-  - netcdf4=1.5.8
-  - numba=0.55.2 # TODO: Remove this pin once `numba` is properly patched with `numpy` compatability.
+  - netcdf4=1.6.0
+  - numba=0.55.2 # TODO: Remove this pin once `numba` is properly patched with `numpy` compatibility.
   - numpy=1.22.4
   - pandas=1.4.3
-  - xarray=2022.3.0
+  - xarray=2022.6.0
   - xesmf=0.6.3
   - python-dateutil=2.8.2
   - types-python-dateutil=2.8.19
   # ==================
   # Documentation
   # ==================
   - sphinx=4.5.0
   - sphinxcontrib-napoleon=0.7
-  - sphinx-autosummary-accessors=2022-4-0
-  - sphinx-book-theme=0.3.2
+  - sphinx-autosummary-accessors=2022.4.0
+  - sphinx-book-theme=0.3.3
   - sphinx-copybutton=0.5.0
   - nbsphinx=0.8.9
-  - pandoc=2.18
+  - pandoc=2.19
   # ==================
   # Quality Assurance
   # ==================
-  # If versions are updated, also update 'rev' in `.pre-commit.config.yaml`
-  - black=22.3.0
-  - flake8=4.0.1
-  - flake8-isort=4.1.1
+  # NOTE: If versions are updated, also update 'rev' in `.pre-commit.config.yaml`
+  - black=22.6.0
+  - flake8=5.0.4
+  - flake8-isort=4.2.0
   - isort=5.10.1
-  - mypy=0.961
-  - pre-commit=2.19.0
+  - mypy=0.971
+  - pre-commit=2.20.0
   # ==================
   # Testing
   # ==================
   - pytest=7.1.2
   - pytest-cov=3.0.0
   # ==================
   # Developer Tools
   # ==================
   - matplotlib=3.5.2
-  - jupyterlab=3.4.3
+  - jupyterlab=3.4.5
   - tbump=6.9.0
 prefix: /opt/miniconda3/envs/xcdat_dev
24 changes: 13 additions & 11 deletions conda-env/readthedocs.yml
@@ -1,32 +1,34 @@
 # Conda xcdat readthedocs environment
 name: xcdat_rtd
 channels:
   - conda-forge
   - defaults
 dependencies:
   # ==================
   # Base
   # ==================
   # NOTE: If versions are updated, also `additional_dependencies` list for mypy in `.pre-commit-config.yaml`
   - python=3.9.13 # TODO: Update to >=3.10 once sphinxcontrib-napoleon supports it.
-  - pip=22.1.2
-  - cf_xarray=0.7.2
-  - cftime=1.6.0
-  - dask=2022.6.1
+  - pip=22.2.2
+  - cf_xarray=0.7.4
+  - cftime=1.6.1
+  - dask=2022.8.0
   - esmpy=8.2.0
-  - netcdf4=1.5.8
-  - numba=0.55.2 # TODO: Remove this pin once `numba` is properly patched with `numpy` compatability.
+  - netcdf4=1.6.0
+  - numba=0.55.2 # TODO: Remove this pin once `numba` is properly patched with `numpy` compatibility.
   - numpy=1.22.4
   - pandas=1.4.3
-  - xarray=2022.3.0
+  - xarray=2022.6.0
   - xesmf=0.6.3
   - python-dateutil=2.8.2
   - types-python-dateutil=2.8.19
   # ==================
   # Documentation
   # ==================
   - sphinx=4.5.0
   - sphinxcontrib-napoleon=0.7
-  - sphinx-autosummary-accessors=2022-4-0
-  - sphinx-book-theme=0.3.2
+  - sphinx-autosummary-accessors=2022.4.0
+  - sphinx-book-theme=0.3.3
   - sphinx-copybutton=0.5.0
   - nbsphinx=0.8.9
-  - pandoc=2.18
+  - pandoc=2.19
 prefix: /opt/miniconda3/envs/xcdat_rtd
4 changes: 3 additions & 1 deletion docs/index.rst
@@ -3,7 +3,9 @@ xCDAT: Xarray Climate Data Analysis Tools

 xCDAT is an extension of `xarray`_ for climate data analysis on structured grids. It serves as a spiritual successor to the Community Data Analysis Tools (`CDAT`_) library.

-The goal of xCDAT is to provide climate domain features and general utilities in xarray, which includes porting some core CDAT functionalities. xCDAT leverages several powerful libraries in the xarray ecosystem (e.g., `xESMF`_ and `cf_xarray`_) to deliver robust APIs. The xCDAT core team is aiming to provide a maintainable and extensible package that serves the needs of the climate community in the long-term.
+The goal of xCDAT is to provide generalizable climate domain features and general utilities in xarray, which includes porting some core CDAT functionalities. xCDAT leverages several powerful libraries in the xarray ecosystem (e.g., `xESMF`_ and `cf_xarray`_) to deliver robust APIs. The xCDAT core team is aiming to provide a maintainable and extensible package that serves the needs of the climate community in the long-term.
+
+A major design philosophy of xCDAT is streamlining the user experience while developing code to analyze climate data. This means reducing the complexity and number of lines required to achieve certain features with xarray.

 .. _xarray: https://github.com/pydata/xarray
 .. _CDAT: https://github.com/CDAT/cdat
2 changes: 1 addition & 1 deletion setup.py
@@ -35,6 +35,6 @@
     test_suite="tests",
     tests_require=test_requires,
     url="https://github.com/xCDAT/xcdat",
-    version="0.3.0",
+    version="0.3.1",
     zip_safe=False,
 )
2 changes: 1 addition & 1 deletion tbump.toml
@@ -2,7 +2,7 @@
 github_url = "https://github.com/xCDAT/xcdat"

 [version]
-current = "0.3.0"
+current = "0.3.1"

 # Example of a semver regexp.
 # Make sure this matches current_version before
2 changes: 1 addition & 1 deletion xcdat/__init__.py
@@ -19,4 +19,4 @@
 from xcdat.temporal import TemporalAccessor  # noqa: F401
 from xcdat.utils import compare_datasets  # noqa: F401

-__version__ = "0.3.0"
+__version__ = "0.3.1"
4 changes: 2 additions & 2 deletions xcdat/axis.py
@@ -305,8 +305,8 @@ def _align_lon_to_360(dataset: xr.Dataset, p_meridian_index: np.ndarray) -> xr.D
     # Create a Dataset with longitude data vars and merge it to the Dataset
     # without longitude data vars.
     ds_lon = xr.Dataset(data_vars={**lon_vars, lon_bounds.name: lon_bounds})
-    ds_no_lon = ds.get([v for v in ds.data_vars if lon.name not in ds[v].dims])
-    ds = xr.merge((ds_no_lon, ds_lon))  # type: ignore
+    ds_no_lon = ds.get([v for v in ds.data_vars if lon.name not in ds[v].dims])  # type: ignore
+    ds = xr.merge((ds_no_lon, ds_lon))
     return ds


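A standalone sketch of the split-and-merge pattern used above (and again in `_drop_incomplete_djf` further down), written in plain xarray with hypothetical variable names:

    import numpy as np
    import xarray as xr

    ds = xr.Dataset(
        data_vars={
            "tas": (("time", "lon"), np.zeros((2, 4))),
            "site_label": (("site",), np.array(["a", "b"])),
        },
        coords={"time": [0, 1], "lon": [0.0, 90.0, 180.0, 270.0], "site": [0, 1]},
    )

    # Split data vars by whether they carry the longitude dimension so that
    # longitude-only operations do not touch (or broadcast onto) other vars.
    ds_lon = ds.get([v for v in ds.data_vars if "lon" in ds[v].dims])
    ds_no_lon = ds.get([v for v in ds.data_vars if "lon" not in ds[v].dims])

    # ... operate on ds_lon only, then recombine the two halves.
    ds_merged = xr.merge((ds_no_lon, ds_lon))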
10 changes: 5 additions & 5 deletions xcdat/dataset.py
@@ -99,12 +99,12 @@ def open_dataset(
         cf_compliant_time: Optional[bool] = _has_cf_compliant_time(path)
         # xCDAT attempts to decode non-CF compliant time coordinates.
         if cf_compliant_time is False:
-            ds = xr.open_dataset(path, decode_times=False, **kwargs)
+            ds = xr.open_dataset(path, decode_times=False, **kwargs)  # type: ignore
             ds = decode_non_cf_time(ds)
         else:
-            ds = xr.open_dataset(path, decode_times=True, **kwargs)
+            ds = xr.open_dataset(path, decode_times=True, **kwargs)  # type: ignore
     else:
-        ds = xr.open_dataset(path, decode_times=False, **kwargs)
+        ds = xr.open_dataset(path, decode_times=False, **kwargs)  # type: ignore

     ds = _postprocess_dataset(ds, data_var, center_times, add_bounds, lon_orient)

@@ -225,7 +225,7 @@ def open_mfdataset(
         decode_times=decode_times,
         data_vars=data_vars,
         preprocess=preprocess,
-        **kwargs,
+        **kwargs,  # type: ignore
     )
     ds = _postprocess_dataset(ds, data_var, center_times, add_bounds, lon_orient)

@@ -489,7 +489,7 @@ def _has_cf_compliant_time(paths: Paths) -> Optional[bool]:
     compliance.
     """
     first_path = _get_first_path(paths)
-    ds = xr.open_dataset(first_path, decode_times=False)
+    ds = xr.open_dataset(first_path, decode_times=False)  # type: ignore

     if ds.cf.dims.get("T") is None:
         return None
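For context on the `decode_times` branches annotated above, a minimal sketch of the decode path (assumed 0.3.x usage; the file path and its time units are hypothetical):

    import xarray as xr
    from xcdat.dataset import decode_non_cf_time, open_dataset

    # Units such as "months since 2000-01-01" are not CF compliant, so xarray
    # cannot decode them directly; open raw, then decode afterward.
    ds_raw = xr.open_dataset("non_cf_time.nc", decode_times=False)
    ds_decoded = decode_non_cf_time(ds_raw)

    # Equivalent single call: open_dataset runs the compliance check itself.
    ds = open_dataset("non_cf_time.nc")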
3 changes: 2 additions & 1 deletion xcdat/spatial.py
@@ -647,7 +647,8 @@ def _combine_weights(self, axis_weights: AxisWeights) -> xr.DataArray:
         (``axis``) in the region.
         """
         region_weights = reduce((lambda x, y: x * y), axis_weights.values())
-        region_weights.name = "_".join(sorted(region_weights.coords.keys())) + "_wts"
+        coord_keys = sorted(region_weights.coords.keys())  # type: ignore
+        region_weights.name = "_".join(coord_keys) + "_wts"  # type: ignore

         return region_weights

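A standalone sketch mirroring the weight combination in `_combine_weights` above (plain xarray/functools rather than a call into xcdat; the per-axis weights are hypothetical):

    from functools import reduce

    import xarray as xr

    axis_weights = {
        "Y": xr.DataArray([0.25, 0.5, 0.25], dims="lat", coords={"lat": [-45.0, 0.0, 45.0]}),
        "X": xr.DataArray([0.5, 0.5], dims="lon", coords={"lon": [0.0, 180.0]}),
    }

    # The element-wise product broadcasts the 1-D axis weights into a
    # (lat, lon) matrix of regional weights.
    region_weights = reduce(lambda x, y: x * y, axis_weights.values())

    # Sorted coordinate names yield "lat_lon_wts", as in the code above.
    region_weights.name = "_".join(sorted(region_weights.coords.keys())) + "_wts"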
18 changes: 9 additions & 9 deletions xcdat/temporal.py
@@ -806,8 +806,8 @@ def _drop_incomplete_djf(self, dataset: xr.Dataset) -> xr.Dataset:
         # method concatenates the time dimension to non-time dimension data
         # vars, which is not a desired behavior.
         ds = dataset.copy()
-        ds_time = ds.get([v for v in ds.data_vars if self._dim in ds[v].dims])
-        ds_no_time = ds.get([v for v in ds.data_vars if self._dim not in ds[v].dims])
+        ds_time = ds.get([v for v in ds.data_vars if self._dim in ds[v].dims])  # type: ignore
+        ds_no_time = ds.get([v for v in ds.data_vars if self._dim not in ds[v].dims])  # type: ignore

         start_year, end_year = (
             ds[self._dim].dt.year.values[0],

@@ -817,12 +817,12 @@ def _drop_incomplete_djf(self, dataset: xr.Dataset) -> xr.Dataset:
         for year_month in incomplete_seasons:
             try:
                 coord_pt = ds.loc[dict(time=year_month)][self._dim][0]
-                ds_time = ds_time.where(ds_time[self._dim] != coord_pt, drop=True)  # type: ignore
+                ds_time = ds_time.where(ds_time[self._dim] != coord_pt, drop=True)
                 self._time_bounds = ds_time[self._time_bounds.name]
             except (KeyError, IndexError):
                 continue

-        ds_final = xr.merge((ds_time, ds_no_time))  # type: ignore
+        ds_final = xr.merge((ds_time, ds_no_time))

         return ds_final

@@ -920,17 +920,17 @@ def _group_average(self, data_var: xr.DataArray) -> xr.DataArray:
         if self._weighted:
             self._weights = self._get_weights()
             dv *= self._weights
-            dv = self._group_data(dv).sum()  # type: ignore
+            dv = self._group_data(dv).sum()
         else:
-            dv = self._group_data(dv).mean()  # type: ignore
+            dv = self._group_data(dv).mean()

         # After grouping and aggregating the data variable values, the
         # original time dimension is replaced with the grouped time dimension.
         # For example, grouping on "year_season" replaces the time dimension
         # with "year_season". This dimension needs to be renamed back to
         # the original time dimension name before the data variable is added
         # back to the dataset so that the CF compliant name is maintained.
-        dv = dv.rename({self._labeled_time.name: self._dim})  # type: ignore
+        dv = dv.rename({self._labeled_time.name: self._dim})

         # After grouping and aggregating, the grouped time dimension's
         # attributes are removed. Xarray's `keep_attrs=True` option only keeps
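A short sketch of the rename-back step described in the comment above (plain xarray; the names are illustrative):

    import numpy as np
    import pandas as pd
    import xarray as xr

    dv = xr.DataArray(
        np.arange(6.0),
        dims="time",
        coords={"time": pd.date_range("2000-01-01", periods=6, freq="MS")},
        name="tas",
    )

    # Grouping by a derived label swaps the "time" dim for the label's name...
    grouped = dv.groupby(dv.time.dt.year).mean()  # result dims: ("year",)

    # ...so the result is renamed back to keep the CF compliant dim name.
    grouped = grouped.rename({"year": "time"})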
@@ -988,11 +988,11 @@ def _get_weights(self) -> xr.DataArray:
         time_lengths = time_lengths.astype(np.float64)

         grouped_time_lengths = self._group_data(time_lengths)
-        weights: xr.DataArray = grouped_time_lengths / grouped_time_lengths.sum()  # type: ignore
+        weights: xr.DataArray = grouped_time_lengths / grouped_time_lengths.sum()
         weights.name = f"{self._dim}_wts"

         # Validate the sum of weights for each group is 1.0.
-        actual_sum = self._group_data(weights).sum().values  # type: ignore
+        actual_sum = self._group_data(weights).sum().values
         expected_sum = np.ones(len(grouped_time_lengths.groups))
         np.testing.assert_allclose(actual_sum, expected_sum)
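The validation above asserts that the normalized time-length weights within each group sum to 1.0. A standalone numpy sketch of that invariant, using hypothetical day counts for one DJF group:

    import numpy as np

    # Day counts for Dec, Jan, Feb in a non-leap DJF season.
    time_lengths = np.array([31.0, 31.0, 28.0])

    # Dividing by the group total yields weights that sum to exactly 1.0.
    weights = time_lengths / time_lengths.sum()
    np.testing.assert_allclose(weights.sum(), 1.0)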
