
Commit

Merge 7f91109 into 51bb58a
aulemahal committed Dec 9, 2022
2 parents 51bb58a + 7f91109 commit 387ae65
Showing 2 changed files with 24 additions and 48 deletions.
10 changes: 4 additions & 6 deletions HISTORY.rst
@@ -9,6 +9,9 @@ Contributors to this version: Trevor James Smith (:user:`Zeitsperre`), Pascal Bo
New features and enhancements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Virtual modules can add variables to ``xclim.core.utils.VARIABLES`` through the new `variables` section of the yaml files. (:issue:`1129`, :pull:`1231`).
+* ``xclim.core.units.convert_units_to`` can now perform automatic conversions based on the standard name of the input when needed. (:issue:`1205`, :pull:`1206`).
+  - Conversion from amount (thickness) to flux (rate), using ``amount2rate`` and ``rate2amount``.
+  - Conversion from amount to thickness for liquid water quantities, using the new ``amount2lwethickness`` and ``lwethickness2amount``. This is similar to the implicit transformations enabled by the "hydro" unit context.
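The arithmetic behind these two conversions can be illustrated with a plain-Python sketch. This is an assumed, simplified rendering for daily precipitation, not xclim's implementation (which works through `pint` units and CF standard names); the constants and helper names are hypothetical.

```python
SECONDS_PER_DAY = 86400.0
WATER_DENSITY = 1000.0  # kg m-3, assumed for "liquid water equivalent"

def amount_to_rate(amount_kg_m2, step_s=SECONDS_PER_DAY):
    """Amount accumulated over one step (kg m-2) -> mean flux (kg m-2 s-1)."""
    return amount_kg_m2 / step_s

def amount_to_lwe_thickness_mm(amount_kg_m2):
    """Amount (kg m-2) -> liquid-water-equivalent thickness (mm).

    1 kg of liquid water over 1 m2 forms a layer 1/1000 m = 1 mm thick,
    so the two quantities are numerically identical.
    """
    return amount_kg_m2 / WATER_DENSITY * 1000.0

rate = amount_to_rate(8.64)                   # kg m-2 s-1, from one day's total
thickness = amount_to_lwe_thickness_mm(8.64)  # mm
```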

Breaking changes
^^^^^^^^^^^^^^^^
@@ -30,12 +33,7 @@ Bug fixes
* The `make docs` Makefile recipe was failing with an esoteric error. This has been resolved by splitting the `linkcheck` and `docs` steps into separate actions. (:issue:`1248`, :pull:`1251`).
* The setup step for `pytest` needed to be addressed due to the fact that files were being accessed/modified by multiple tests at a time, causing segmentation faults in some tests. This has been resolved by splitting functions into those that fetch or generate test data (under `xclim.testing.tests.data`) and the fixtures that supply accessors to them (under `xclim.testing.tests.conftest`). (:issue:`1238`, :pull:`1254`).
* Relaxed the expected output for ``test_spatial_analogs[friedman_rafsky]`` to support expected results from `scikit-learn` 1.2.0.

-New features and enhancements
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-* ``xclim.core.units.convert_units_to`` can now perform automatic conversions based on the standard name of the input when needed. (:issue:`1205`, :pull:`1206`).
-  - Conversion from amount (thickness) to flux (rate), using ``amount2rate`` and ``rate2amount``.
-  - Conversion from amount to thickness for liquid water quantities, using the new ``amount2lwethickness`` and ``lwethickness2amount``. This is similar to the implicit transformations enabled by the "hydro" unit context.
* The MBCn example in documentation has been fixed to properly imitate the source. (:issue:`1249`, :pull:`1250`).

Internal changes
^^^^^^^^^^^^^^^^
62 changes: 20 additions & 42 deletions docs/notebooks/sdba.ipynb
@@ -510,7 +510,7 @@
"##### Stack the variables to multivariate arrays and standardize them\n",
"The standardization process ensures that the mean and standard deviation of each column (variable) are 0 and 1, respectively.\n",
"\n",
-"`hist` and `sim` are standardized together so the two series are coherent. We keep the mean and standard deviation to be reused when we build the result."
+"`scenh` and `scens` are standardized together so the two series are coherent. As we'll see further, we do not need to keep the mean and standard deviation as we only keep the rank order information from the `NpdfTransform` output."
]
},
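To make the joint standardization concrete, here is a toy sketch in plain Python. The `standardize` helper is hypothetical and only imitates the idea of `sdba.processing.standardize` (values shifted and scaled by one mean and standard deviation); it is not xclim's actual API.

```python
import statistics

def standardize(values, mean=None, std=None):
    """Return (standardized values, mean, std); a sketch, not xclim's API."""
    mean = statistics.fmean(values) if mean is None else mean
    std = statistics.pstdev(values) if std is None else std
    return [(v - mean) / std for v in values], mean, std

hist = [10.0, 12.0, 14.0]
sim = [13.0, 15.0, 17.0]

# Standardize the concatenation so both periods share one mean/std;
# this keeps the relative shift between hist and sim intact.
both, mean, std = standardize(hist + sim)
hist_std, sim_std = both[: len(hist)], both[len(hist):]
```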
{
@@ -527,9 +527,9 @@
"# Standardize\n",
"ref, _, _ = sdba.processing.standardize(ref)\n",
"\n",
-"allsim, savg, sstd = sdba.processing.standardize(xr.concat((scenh, scens), \"time\"))\n",
-"hist = allsim.sel(time=scenh.time)\n",
-"sim = allsim.sel(time=scens.time)"
+"allsim_std, _, _ = sdba.processing.standardize(xr.concat((scenh, scens), \"time\"))\n",
+"scenh_std = allsim_std.sel(time=scenh.time)\n",
+"scens_std = allsim_std.sel(time=scens.time)"
]
},
{
@@ -553,21 +553,17 @@
"with set_options(sdba_extra_output=True):\n",
"    out = sdba.adjustment.NpdfTransform.adjust(\n",
"        ref,\n",
-"        hist,\n",
-"        sim,\n",
+"        scenh_std,\n",
+"        scens_std,\n",
"        base=sdba.QuantileDeltaMapping,  # Use QDM as the univariate adjustment.\n",
"        base_kws={\"nquantiles\": 20, \"group\": \"time\"},\n",
"        n_iter=20,  # perform 20 iterations\n",
"        n_escore=1000,  # only send 1000 points to the escore metric (it is really slow)\n",
"    )\n",
"\n",
-"scenh = out.scenh.rename(time_hist=\"time\")  # Bias-adjusted historical period\n",
-"scens = out.scen  # Bias-adjusted future period\n",
-"extra = out.drop_vars([\"scenh\", \"scen\"])\n",
-"\n",
-"# Un-standardize (add the mean and the std back)\n",
-"scenh = sdba.processing.unstandardize(scenh, savg, sstd)\n",
-"scens = sdba.processing.unstandardize(scens, savg, sstd)"
+"scenh_npdft = out.scenh.rename(time_hist=\"time\")  # Bias-adjusted historical period\n",
+"scens_npdft = out.scen  # Bias-adjusted future period\n",
+"extra = out.drop_vars([\"scenh\", \"scen\"])"
]
},
{
@@ -587,8 +583,8 @@
"metadata": {},
"outputs": [],
"source": [
-"scenh = sdba.processing.reordering(hist, scenh, group=\"time\")\n",
-"scens = sdba.processing.reordering(sim, scens, group=\"time\")"
+"scenh = sdba.processing.reordering(scenh_npdft, scenh, group=\"time\")\n",
+"scens = sdba.processing.reordering(scens_npdft, scens, group=\"time\")"
]
},
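Conceptually, this reordering step is a Schaake-shuffle-like operation: the adjusted values are re-ranked to follow the temporal sequence of the multivariate output. The helper below is a hypothetical one-dimensional sketch of that idea, not the `sdba.processing.reordering` implementation.

```python
def reorder_like(values, template):
    """Give `values` the rank order of `template`.

    A sketch of the Schaake-shuffle-style idea behind
    ``sdba.processing.reordering``; not xclim's implementation.
    """
    sorted_vals = sorted(values)
    # indices of `template`, from its smallest element to its largest
    order = sorted(range(len(template)), key=lambda i: template[i])
    out = [None] * len(values)
    for rank, i in enumerate(order):
        out[i] = sorted_vals[rank]  # slot i receives the value of matching rank
    return out

# The largest template entry (index 1) receives the largest value:
print(reorder_like([3.0, 1.0, 2.0], [10, 30, 20]))  # → [1.0, 3.0, 2.0]
```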
{
@@ -607,7 +603,7 @@
"source": [
"##### There we are!\n",
"\n",
-"Let's trigger all the computations. Here we write the data to disk and use `compute=False` in order to trigger the whole computation tree only once. There seems to be no way in xarray to do the same with a `load` call."
+"Let's trigger all the computations. The use of `dask.compute` allows the three DataArrays to be computed at the same time, avoiding repeating the common steps."
]
},
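The point about avoiding repeated common steps can be illustrated with a toy stand-in (plain Python, not dask): when several outputs are requested in one call, the ancestor task shared by their graphs runs only once. All names below are hypothetical.

```python
# Toy stand-in for dask's graph merging: requesting all outputs together
# lets the shared expensive step execute a single time.
calls = {"common": 0}

def common_step():
    """The expensive ancestor shared by all three outputs."""
    calls["common"] += 1
    return [i * 0.5 for i in range(10)]

def compute_together(*funcs):
    """Evaluate every func on the shared intermediate, computed once."""
    cache = {}
    def shared():
        if "value" not in cache:
            cache["value"] = common_step()
        return cache["value"]
    return tuple(f(shared()) for f in funcs)

total, largest, smallest = compute_together(sum, max, min)
print(calls["common"])  # → 1  (three separate evaluations would need 3 runs)
```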
{
@@ -619,16 +615,10 @@
"from dask import compute\n",
"from dask.diagnostics import ProgressBar\n",
"\n",
-"tasks = [\n",
-"    scenh.isel(location=2).to_netcdf(\"mbcn_scen_hist_loc2.nc\", compute=False),\n",
-"    scens.isel(location=2).to_netcdf(\"mbcn_scen_sim_loc2.nc\", compute=False),\n",
-"    extra.escores.isel(location=2)\n",
-"    .to_dataset()\n",
-"    .to_netcdf(\"mbcn_escores_loc2.nc\", compute=False),\n",
-"]\n",
"\n",
"with ProgressBar():\n",
-"    compute(tasks)"
+"    scenh, scens, escores = compute(\n",
+"        scenh.isel(location=2), scens.isel(location=2), extra.escores.isel(location=2)\n",
+"    )"
]
},
{
@@ -644,8 +634,6 @@
"metadata": {},
"outputs": [],
"source": [
-"scenh = xr.open_dataset(\"mbcn_scen_hist_loc2.nc\")\n",
-"\n",
"fig, ax = plt.subplots()\n",
"\n",
"dref.isel(location=2).tasmax.plot(ax=ax, label=\"Reference\")\n",
@@ -661,20 +649,10 @@
"metadata": {},
"outputs": [],
"source": [
-"escores = xr.open_dataarray(\"mbcn_escores_loc2.nc\")\n",
-"diff_escore = escores.differentiate(\"iterations\")\n",
-"diff_escore.plot()\n",
-"plt.title(\"Difference of the subsequent e-scores.\")\n",
-"plt.ylabel(\"E-scores difference\")"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"diff_escore"
+"escores.plot()\n",
+"plt.title(\"E-scores for each iteration.\")\n",
+"plt.xlabel(\"iteration\")\n",
+"plt.ylabel(\"E-score\")"
]
},
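For reference, the e-score plotted above is an energy-distance-style statistic between the adjusted and reference samples. The function below is a hedged one-dimensional sketch of that family of statistics; xclim's multivariate `escore` differs in its details.

```python
def energy_distance(xs, ys):
    """Sample energy distance between two 1-D samples.

    A sketch of the statistic family behind the notebook's escore
    diagnostic; xclim's multivariate implementation differs.
    """
    def mean_pairwise(a, b):
        # average absolute difference over all cross pairs
        return sum(abs(u - v) for u in a for v in b) / (len(a) * len(b))
    return 2 * mean_pairwise(xs, ys) - mean_pairwise(xs, xs) - mean_pairwise(ys, ys)

# Identical samples give 0; the score grows as the distributions diverge.
print(energy_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```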
{
@@ -701,7 +679,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.9.6"
+"version": "3.10.6"
},
"toc": {
"base_numbering": 1,
