[PyOpenSci] Reviewer #2 comments #1335

Closed · 19 tasks done
Zeitsperre opened this issue Mar 27, 2023 · 4 comments

Zeitsperre commented Mar 27, 2023

Originally posted by @jmunroe in pyOpenSci/software-submission#73 (comment)

This issue is a tickbox summary of comments from the reviewer that seemed addressable in the near term.

The foreword to the review:

Thanks so much for your patience with me. I have now gone through the package carefully following the PyOpenSci review instructions. This package is an incredibly thorough treatment of climate-based metrics and I found it a real pleasure to review. A few issues I identified are presented below, but none of them are serious, and I recommend this package be accepted by PyOpenSci.

I hope that, with acceptance by PyOpenSci (and hopefully by JOSS as well), xclim will continue to be extended and adopted by a wide international community of climate scientists and professionals. This is an important body of work that should empower those trying to understand, mitigate, and adapt to our changing climate.

Documentation

  • Package installation instructions

    The README does not include explicit installation instructions. It does refer to the package documentation, which does contain installation instructions.

TJS: This is addressed in #1338

  • Descriptive links to all vignettes. If the package is small, there may only be a need for one vignette which could be placed in the README.md file.

    Descriptive links to all vignettes are not in the README, but they are easily findable in the included package documentation.

TJS: This is addressed in #1338

  • I am not aware of another Python package that does exactly the same thing as xclim in the scientific ecosystem. There are potentially related packages (such as MetPy for analyzing weather data, or climpred for computing metrics for earth system forecasts) in the broader ecosystem that could be referenced in the documentation, just in case a potential user would be better served by a different project.

    Another potential Python package to reference is NCAR's GeoCAT (which is part of NCAR's overall strategy to migrate away from NCL to Python). Historically, my understanding is that non-Python tools like NCL were more widely used to do similar things to xclim.

    All three of these packages (MetPy, climpred, and GeoCAT) are built on the same xarray/dask foundations as xclim.

TJS: This is addressed in #1338

Functionality

  • Automated tests: Tests cover essential functions of the package and a reasonable range of inputs and conditions. All tests pass on the local machine.

    I tried to verify locally with pytest on Python 3.10. There were a few failed tests in a local dev environment (33 failed, 1359 passed). I have not attempted to debug any of these failing tests.

    I also note that on GitHub CI, with more recent fixes than the tag 0.40.0 I was reviewing, it appears the entire testing suite passes without failures.

    I am more suspicious of my own development environment than xclim itself and I am satisfied if the testing suite passes on GitHub actions, as it appears to do.

    $ pytest xclim --numprocesses=logical --durations=10 --cov=xclim --cov-report=term-missing

    
    =============================================== short test summary info ================================================
    FAILED xclim/testing/tests/test_atmos.py::TestWaterBudget::test_convert_units - AttributeError: 'Dataset' object has no attribute 'rsus'. Did you mean: 'rsds'?
    FAILED xclim/testing/tests/test_atmos.py::TestWaterBudget::test_nan_values - AttributeError: 'Dataset' object has no attribute 'rsus'. Did you mean: 'rsds'?
    FAILED xclim/testing/tests/test_atmos.py::TestUTCI::test_universal_thermal_climate_index - AttributeError: 'Dataset' object has no attribute 'rsus'. Did you mean: 'rsds'?
    FAILED xclim/testing/tests/test_atmos.py::TestPotentialEvapotranspiration::test_convert_units - AttributeError: 'Dataset' object has no attribute 'rsus'. Did you mean: 'rsds'?
    FAILED xclim/testing/tests/test_atmos.py::TestPotentialEvapotranspiration::test_nan_values - AttributeError: 'Dataset' object has no attribute 'rsus'. Did you mean: 'rsds'?
    FAILED xclim/testing/tests/test_atmos.py::test_wind_chill_index - AssertionError:
    FAILED xclim/testing/tests/test_cffwis.py::TestCFFWIS::test_fire_weather_ufunc_overwintering - AssertionError:
    FAILED xclim/testing/tests/test_ffdi.py::TestFFDI::test_ffdi_indicators[xlim-True] - KeyError: 'rh'
    FAILED xclim/testing/tests/test_ffdi.py::TestFFDI::test_ffdi_indicators[xlim-False] - KeyError: 'rh'
    FAILED xclim/testing/tests/test_ffdi.py::TestFFDI::test_ffdi_indicators[discrete-True] - KeyError: 'rh'
    FAILED xclim/testing/tests/test_ffdi.py::TestFFDI::test_ffdi_indicators[discrete-False] - KeyError: 'rh'
    FAILED xclim/testing/tests/test_indices.py::TestTG::test_simple[tg_mean-283.1391] - AssertionError:
    FAILED xclim/testing/tests/test_indices.py::TestTG::test_simple[tg_min-266.1117] - AssertionError:
    FAILED xclim/testing/tests/test_indices.py::TestTG::test_simple[tg_max-292.125] - AssertionError:
    FAILED xclim/testing/tests/test_precip.py::TestDaysWithSnow::test_simple - AssertionError:
    FAILED xclim/testing/tests/test_precip.py::test_days_over_precip_doy_thresh - AssertionError:
    FAILED xclim/testing/tests/test_precip.py::test_days_over_precip_thresh - AssertionError:
    FAILED xclim/testing/tests/test_precip.py::test_days_over_precip_thresh__seasonal_indexer - AssertionError:
    FAILED xclim/testing/tests/test_precip.py::test_fraction_over_precip_doy_thresh - AssertionError:
    FAILED xclim/testing/tests/test_precip.py::test_fraction_over_precip_thresh - AssertionError:
    FAILED xclim/testing/tests/test_precip.py::test_dry_spell - AssertionError:
    FAILED xclim/testing/tests/test_precip.py::test_dry_spell_frequency_op - RuntimeError: NetCDF: HDF error
    FAILED xclim/testing/tests/test_run_length.py::test_run_bounds_data[True] - AssertionError:
    FAILED xclim/testing/tests/test_run_length.py::test_keep_longest_run_data[True] - AssertionError:
    FAILED xclim/testing/tests/test_run_length.py::test_run_bounds_data[False] - AssertionError:
    FAILED xclim/testing/tests/test_run_length.py::test_keep_longest_run_data[False] - AssertionError:
    FAILED xclim/testing/tests/test_snow.py::TestSndMaxDoy::test_no_snow - AssertionError:
    FAILED xclim/testing/tests/test_temperature.py::TestWarmSpellDurationIndex::test_warm_spell_duration_index - AssertionError:
    FAILED xclim/testing/tests/test_temperature.py::TestFreezeThawSpell::test_freezethaw_spell_frequency - AssertionError:
    FAILED xclim/testing/tests/test_temperature.py::TestFreezeThawSpell::test_freezethaw_spell_mean_length - AssertionError:
    FAILED xclim/testing/tests/test_temperature.py::TestFreezeThawSpell::test_freezethaw_spell_max_length - AssertionError:
    FAILED xclim/testing/tests/test_temperature.py::test_corn_heat_units - AssertionError:
    FAILED xclim/testing/tests/test_sdba/test_properties.py::TestProperties::test_spatial_correlogram - AssertionError:
    =================== 33 failed, 1359 passed, 67 skipped, 2 xfailed, 96 warnings in 341.70s (0:05:41) ====================
    
    

TJS: This is occurring because we made significant changes to our xclim-testdata repository in recent versions. I realize now that this is breaking because we aren't tagging explicit versions/commits of the testdata that are guaranteed to work. I'm thinking that we might want to start doing that from now on, rather than always pointing at master. @aulemahal, what do you think?
Update: This is addressed in #1339

  • Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)

Both pylint and black are configured through GitHub Actions for CI.

TJS: pylint is configured but we do not currently pass those compliance checks (run with allowed failure). If the amount of effort to get us passing is reasonable, I'll attempt to get this working.


Review Comments

Installation notes
  • Following the instructions at the xclim documentation under Installation, I created a separate conda environment to install the required dependencies:
conda create -n my_xclim_env python=3.8 --file=environment.yml
conda activate my_xclim_env
pip install ".[dev]"

And there I hit my first issue:

CondaValueError: could not parse 'name: xclim' in: environment.yml

The fix (at least for conda 22.11.1) is that --file is an option to pass to conda env create, not conda create (i.e. conda env create -n my_xclim_env --file=environment.yml, with the Python version then coming from environment.yml). This needs to be fixed in the install instructions.

TJS: This is addressed in #1338

  • The instructions refer to 'Extra dependencies' such as flox, SBCK, eigen, eofs, pybind. Since I used conda's environment.yml, the extras eofs, eigen, pybind11 were already included.

I confess I tend to get confused when there is the option of using either environment.yml or requirements_*.txt files. So, I skipped the instructions following 'Extra Dependencies' in the documentation.
I assume there must be situations when I should and should not install these extra dependencies, but as a new user of the package, I don't know what those situations are yet.
Since these installation instructions are right near the top of the documentation, perhaps it would be better for the maintainers to make those choices for me? For example, I am now wondering "should I be installing flox?". Since it is 'highly recommended', would it not make more sense to have it as part of the default instructions?

TJS: This is addressed in #1338

Basic Usage
  • I installed Jupyter lab and created a notebook to test the instructions given under the Basic Usage section of the documentation. The very first example is
# ds = xr.open_dataset("your_file.nc")
ds = open_dataset("ERA5/daily_surface_cancities_1990-1993.nc")
ds.tas

My initial reading of this code made me think that this ERA5 dataset was something I needed to first download locally (I did not distinguish between xr.open_dataset and open_dataset in my very first glance at the code).
After some review, I see now that there is a companion GitHub repo with testing data, and that the xclim.testing API automatically makes a locally cached copy of this file. I think it would be clearer if this very first example were written out as

# ds = xr.open_dataset("your_file.nc")
ds = xclim.testing.open_dataset("ERA5/daily_surface_cancities_1990-1993.nc")
ds.tas

so that it is clear that open_dataset is a utility method of xclim's testing framework.

TJS: This is addressed in #1338

  • I liked how this initial documentation quickly oriented the user to the differences between 'indicators' and 'indices', which, especially for a new user, could be confusing.

In the example of Health checks and metadata attributes, there is a typo:

gdd = xclim.atmos.growing_degree_days(tas=ds6h.tas, thresh="10.0 degC", freq="MS")

should be

gdd = xclim.atmos.growing_degree_days(tas=ds6h.air, thresh="10.0 degC", freq="MS")

TJS: This is addressed in #1338

  • The final basic usage examples on Graphics could be improved by adding some descriptions in the text of what those three visualizations are actually showing. This could be as brief as taking the comment from each code snippet and using it as a text description.

While in-code comments are generally fine, these last few examples on graphics feel tacked on given the strong narrative text established in the beginning of the Basic Usage section of the documentation.

TJS: This is addressed in #1338

Examples
Workflow Examples

Minor spelling error in the docs:

  • Finally, xarray is tightly in*(t)*egrated with dask, a package that can automatically parallelize operations.

TJS: This is addressed in #1338

  • Under Subsetting and selecting data with xarray, the documentation reads:

Usually, xclim users are encouraged to use the subsetting utilities of the clisops package. Here, we will reduce the size of our data using the methods implemented in xarray

This is confusing because, as the first example workflow, the user has not yet been shown how to use the clisops package. Should there be a sub-subsection immediately before, such as Subsetting and selecting data with clisops, to demonstrate that recommended workflow?

TJS: This is addressed in #1338
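
For reference, a minimal sketch of what such a clisops-based subsection might show (the bounding-box values here are illustrative, and subset_bbox is assumed from clisops' core subsetting utilities):

    from clisops.core.subset import subset_bbox

    # Reduce the dataset to a longitude/latitude bounding box
    # (bounds here are illustrative; any region of interest works).
    ds_sub = subset_bbox(ds, lon_bnds=[-75, -70], lat_bnds=[44, 46])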

  • Under Different ways of resampling, a matplotlib style is set
# import plotting stuff
import matplotlib.pyplot as plt

%matplotlib inline
plt.style.use("seaborn")
plt.rcParams["figure.figsize"] = (11, 5)

that leads to the warning

/tmp/ipykernel_7039/887583071.py:5: MatplotlibDeprecationWarning: The seaborn styles shipped by Matplotlib are deprecated since 3.6, as they no longer correspond to the styles shipped by seaborn. However, they will remain available as 'seaborn-v0_8-<style>'. Alternatively, directly use the seaborn API instead.
  plt.style.use("seaborn")

I think the offending line should be changed to

plt.style.use("seaborn-v0_8")

(and elsewhere in the documentation where seaborn styles are used)

TJS: This is addressed in #1338

  • Since the example is supposed to contrast the differences between the two sampling methods, I think they should be on the same colorbar scale. There may be a more general solution for this with matplotlib, but explicitly setting the range is a quick fix:
hw_before.sel(time="2010-07-01").plot(vmin=0, vmax=7)
plt.title("Resample, then run length")
plt.figure()
hw_after.sel(time="2010-07-01").plot(vmin=0, vmax=7)
plt.title("Run length, then resample")

TJS: This is addressed in #1338

  • Under Spatially varying thresholds, the code comments showing the thresholds
# The tasmin threshold is 15°C for the northern half of the domain and 20°C for the southern half.
# (notice that the lat coordinate is in decreasing order : from north to south)
thresh_tasmin = xr.DataArray(
    [7] * 24 + [11] * 24, dims=("lat",), coords={"lat": ds5.lat}, attrs={"units": "°C"}
)
# The tasmax threshold is 16°C for the western half of the domain and 19°C for the eastern half.
thresh_tasmax = xr.DataArray(
    [17] * 24 + [21] * 24, dims=("lon",), coords={"lon": ds5.lon}, attrs={"units": "°C"}
)

don't appear to match the values used in the code. I assume the code comments just need to be updated.

PB: This is addressed in #1338

Ensemble-Reduction Techniques
  • Only because the utility of the rest of the package is so high, I was a bit confused by the need to manually create a 2D array of criteria (values) and realizations (runs/simulations). While I think I understand why all dimensions/variables, other than the realization id, need to be flattened into a 2D array to apply an ensemble reduction technique, would it be possible to create a helper function that iterates through all non-realization 'dimensions' and all variables to generalize the code snippet given below?
# Create 2d xr.DataArray containing criteria values
crit = None
for h in ds_crit.horizon:
    for v in ds_crit.data_vars:
        if crit is None:
            crit = ds_crit[v].sel(horizon=h)
        else:
            crit = xr.concat((crit, ds_crit[v].sel(horizon=h)), dim="criteria")
crit.name = "criteria"

Is this "criteria" array effectively the equivalent of creating a feature matrix used in data science?

TJS: This is addressed in #1341
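
For illustration, a sketch of how the helper introduced there might replace the manual loop above (the make_criteria name comes from the PR description further below; the exact signature is assumed):

    from xclim import ensembles

    # Stack all variables and non-realization dimensions of ds_crit into a
    # single 2D (realization, criteria) array, as the reduction functions expect.
    crit = ensembles.make_criteria(ds_crit)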

Statistical Downscaling and Bias-Adjustment
  • The content is clear and well explained. Just a couple of very small typos that could be fixed:

A more complex example could have bias distribution varying strongly across months. To perform the adjustment with different factors for each months, one can pass group='time.month'. Moreover, to reduce the risk of sharp change in the adjustment at the interface of the months, interp='linear' can be passed to adjust and the adjustment factors will be interpolated linearly. Ex: the factors for the 1st of May will be the average of those for April and those for May.
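
As a hedged illustration of the passage above (ref, hist, and sim are illustrative daily DataArrays, and EmpiricalQuantileMapping stands in for any xclim.sdba adjustment object):

    from xclim import sdba

    # Train monthly adjustment factors, then interpolate them linearly
    # between months when applying the adjustment.
    QM = sdba.EmpiricalQuantileMapping.train(ref, hist, group="time.month")
    scen = QM.adjust(sim, interp="linear")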

TJS: Many typos and grammatical errors have been addressed in #1338

  • In the following notebook on advanced SDBA:

The previous notebook covered the most common utilities of xclim.sdba for conventional cases

TJS: Many typos and grammatical errors have been addressed in #1338

Zeitsperre added the standards / conventions, information, and docs labels Mar 27, 2023
Zeitsperre added this to the v0.42 milestone Mar 27, 2023
Zeitsperre self-assigned this Mar 27, 2023
aulemahal (Collaborator) commented:

Nice!
I didn't read everything, but indeed, back in the day @tlvu warned us about using another git repo for the testing data.
On one hand, I'm not convinced we need to be able to run tests on older versions. In theory, at the time of release, all tests were passing, no? On the other hand, I realize we use the testing data in the notebooks, and not being able to reproduce those seems more problematic to me.

Thus indeed, I guess that tagging a testdata version would help solve this!
(Long, I tagged you in case you have any advice. This comment refers to the first box of the Functionality section above.)


tlvu commented Mar 28, 2023

My previous worries about splitting the testdata from the code: Ouranosinc/xclim-testdata#1 (comment)

So tagging the testdata should solve this reproducibility issue.

But to ensure a smooth dev workflow, the code should allow overriding the tag with a branch name. During the dev cycle, both the testdata and the code will most probably move together. Without the override capability, we would have to continuously tag the testdata so it can be used with the code, and this can get tedious.

However, with this tag override capability, we must not forget to tag the final version of the testdata and bump that tag on the code side before merging. The tag should be the default value when no override is used.
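
A minimal Python sketch of that override logic (names here are illustrative, not xclim's actual implementation):

    import os

    # Default to a pinned xclim-testdata tag; allow overriding with a
    # branch name via an environment variable during development.
    DEFAULT_TESTDATA_REF = "v2023.3.0"  # illustrative tag name

    def testdata_ref() -> str:
        return os.getenv("XCLIM_TESTDATA_BRANCH", DEFAULT_TESTDATA_REF)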

Zeitsperre mentioned this issue Mar 30, 2023
Zeitsperre (Collaborator, Author) commented:

@tlvu

Thanks for the suggestion on how best to proceed on this. We now have a testing-data tagging scheme and some GitHub Actions to prevent us from accidentally breaking it on a development branch. It's nothing fancy, but we should be able to more easily test older versions of xclim going forward.

aulemahal added a commit that referenced this issue Mar 31, 2023
### Pull Request Checklist:
- [x] This PR addresses an already opened issue (for bug fixes /
features)
    - This PR implements a suggestion made in #1335 
- [x] Tests for the changes have been added (for bug fixes / features)
- [x] (If applicable) Documentation has been added / updated (for bug
fixes / features)
- [x] CHANGES.rst has been updated (with summary of main changes)
- [x] Link to issue (:issue:`number`) and pull request (:pull:`number`)
has been added

### What kind of change does this PR introduce?

* Add a `make_criteria` helper to reshape datasets into 2D criteria
arrays, as expected by the ensemble reduction functions.

EDIT: Also made this function a bit more complicated so that it accepts
datasets with variables of different shapes and still preserves the
coordinates.

### Does this PR introduce a breaking change?
No.

### Other information:
~I need to add a test and update the history.~
Zeitsperre (Collaborator, Author) commented:

Except for #1342, we managed to address all major comments in under a work-week. Nicely done, team!
