Feature #2583 ecnt (#2825)
* Unrelated to #2583, fix typo in code comments.

* Per #2583, add hooks to write 3 new ECNT columns for observation error data.

* Per #2583, make error messages about mis-matched array lengths more informative.

* Per #2583, switch to more concise variable naming conventions of ign_oerr_cnv, ign_oerr_cor, and dawid_seb.

* Per #2583, fix typo to enable compilation

* Per #2583, define the 5 new ECNT column names.

* Per #2583, add 5 new columns to the ECNT table in the Ensemble-Stat chapter

* Per #2583, update stat_columns.cc to write these 5 new ECNT columns

* Per #2583, update ECNTInfo class to compute the 5 new ECNT statistics.

* Per #2583, update stat-analysis to parse the 5 new ECNT columns.

* Per #2583, update aggregate_stat logic for 5 new ECNT columns.

* Per #2583, update PairDataEnsemble logic for 5 new ECNT columns

* Per #2583, update vx_statistics library with obs_error handling logic for the 5 new ECNT columns

* Per #2583, changes to make it compile

* Per #2583, changes to make it compile

* Per #2583, switch to a consistent ECNT column naming convention with OERR at the end. Using IGN_CONV_OERR and IGN_CORR_OERR.

* Per #2583, define ObsErrorEntry::variance() with a call to the dist_var() utility function.

* Per #2583, update PairDataEnsemble::compute_pair_vals() to compute the 5 new stats with the correct inputs.

* Per #2583, add DEBUG(10) log messages about computing these new stats.

* Per #2583, update Stat-Analysis to compute these 5 new stats from the ORANK line type.

* Per #2583, whitespace and comments.

* Per #2583, update the User's Guide.

* Per #2583, remove the DS_ADD_OERR and DS_MULT_OERR ECNT columns and rename DS_OERR as DSS, since observation error is not actually involved in its computation.

* Per #2583, minor update to Appendix C

* Per #2583, rename ECNT line type statistic DSS to IDSS.

* Per #2583, fix a couple of typos

* Per #2583, more error checking.

* Per #2583, remove the ECNT IDSS column since it's just 2*pi*IGN, the existing ignorance score, and only provides meaningful information when combined with the other Dawid-Sebastiani statistics that have already been removed.

* Per #2583, add Eric's documentation of these new stats to Appendix C. Along the way, update the DOI links in the references based on this APA style guide: https://apastyle.apa.org/style-grammar-guidelines/references/dois-urls#:~:text=Include%20a%20DOI%20for%20all,URL%2C%20include%20only%20the%20DOI.

* Per #2583, fix new equations with embedded underscores for PDF by defining both html and pdf formatting options.

* Per #2583, update the ign_conv_oerr equation to include a 2*pi multiplier for consistency with the existing ignorance score. Also, fix the documented equations.

* Per #2583, remove log file that was inadvertently added on this branch.

* Per #2583, simplify ObsErrorEntry::variance() implementation. For the distribution type of NONE, return a variance of 0.0 rather than bad data, as discussed with @michelleharrold and @JeffBeck-NOAA on 3/8/2024.
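
For reference, the following is a minimal sketch of the variance logic described in the bullets above. It is not the MET source: the enum, the parameter meanings, and the signature of the dist_var() utility mentioned in the commit messages are assumptions made for illustration.

```cpp
// Sketch only: names, parameters, and signatures are assumptions based on
// the commit messages above, not the actual MET declarations.
#include <cmath>
#include <iostream>

enum class DistType { None, Normal, Uniform, Lognormal };

// Hypothetical stand-in for the dist_var() utility: variance of the
// configured error distribution given its two parameters.
double dist_var(DistType type, double p1, double p2) {
   switch(type) {
      case DistType::Normal:    return p1 * p1;                        // p1 = sigma
      case DistType::Uniform:   return (p2 - p1) * (p2 - p1) / 12.0;   // U(p1, p2)
      case DistType::Lognormal: return (std::exp(p2 * p2) - 1.0) *
                                       std::exp(2.0 * p1 + p2 * p2);   // p1, p2 = mu, sigma
      default:                  return 0.0;
   }
}

struct ObsErrorEntry {
   DistType dist_type  = DistType::None;
   double   dist_parm1 = 0.0;
   double   dist_parm2 = 0.0;

   // For a distribution type of NONE, report a variance of 0.0 rather
   // than bad data; otherwise defer to dist_var().
   double variance() const {
      return (dist_type == DistType::None) ?
             0.0 : dist_var(dist_type, dist_parm1, dist_parm2);
   }
};

int main() {
   ObsErrorEntry e;
   e.dist_type  = DistType::Normal;
   e.dist_parm1 = 1.5;   // sigma
   std::cout << "obs error variance = " << e.variance() << "\n";   // 2.25
   return 0;
}
```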

---------

Co-authored-by: MET Tools Test Account <met_test@seneca.rap.ucar.edu>
JohnHalleyGotway and MET Tools Test Account authored Mar 14, 2024
1 parent a6f7646 commit 108a895
Showing 22 changed files with 390 additions and 135 deletions.
2 changes: 1 addition & 1 deletion data/table_files/met_header_columns_V12.0.txt
@@ -19,7 +19,7 @@ V12.0 : STAT : PJC : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID
V12.0 : STAT : PRC : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL (N_THRESH) THRESH_[0-9]* PODY_[0-9]* POFD_[0-9]*
V12.0 : STAT : PSTD : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL (N_THRESH) BASER BASER_NCL BASER_NCU RELIABILITY RESOLUTION UNCERTAINTY ROC_AUC BRIER BRIER_NCL BRIER_NCU BRIERCL BRIERCL_NCL BRIERCL_NCU BSS BSS_SMPL THRESH_[0-9]*
V12.0 : STAT : ECLV : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL BASER VALUE_BASER (N_PTS) CL_[0-9]* VALUE_[0-9]*
V12.0 : STAT : ECNT : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL N_ENS CRPS CRPSS IGN ME RMSE SPREAD ME_OERR RMSE_OERR SPREAD_OERR SPREAD_PLUS_OERR CRPSCL CRPS_EMP CRPSCL_EMP CRPSS_EMP CRPS_EMP_FAIR SPREAD_MD MAE MAE_OERR BIAS_RATIO N_GE_OBS ME_GE_OBS N_LT_OBS ME_LT_OBS
V12.0 : STAT : ECNT : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL N_ENS CRPS CRPSS IGN ME RMSE SPREAD ME_OERR RMSE_OERR SPREAD_OERR SPREAD_PLUS_OERR CRPSCL CRPS_EMP CRPSCL_EMP CRPSS_EMP CRPS_EMP_FAIR SPREAD_MD MAE MAE_OERR BIAS_RATIO N_GE_OBS ME_GE_OBS N_LT_OBS ME_LT_OBS IGN_CONV_OERR IGN_CORR_OERR
V12.0 : STAT : RPS : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL N_PROB RPS_REL RPS_RES RPS_UNC RPS RPSS RPSS_SMPL RPS_COMP
V12.0 : STAT : RHIST : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL (N_RANK) RANK_[0-9]*
V12.0 : STAT : PHIST : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL BIN_SIZE (N_BIN) BIN_[0-9]*
53 changes: 42 additions & 11 deletions docs/Users_Guide/appendixC.rst

Large diffs are not rendered by default.

10 changes: 9 additions & 1 deletion docs/Users_Guide/ensemble-stat.rst
@@ -66,7 +66,9 @@ The climatological distribution is also used for the RPSS. The forecast RPS stat
Ensemble Observation Error
--------------------------

In an attempt to ameliorate the effect of observation errors on the verification of forecasts, a random perturbation approach has been implemented. A great deal of user flexibility has been built in, but the methods detailed in :ref:`Candille and Talagrand (2008) <Candille-2008>`. can be replicated using the appropriate options. The user selects a distribution for the observation error, along with parameters for that distribution. Rescaling and bias correction can also be specified prior to the perturbation. Random draws from the distribution can then be added to either, or both, of the forecast and observed fields, including ensemble members. Details about the effects of the choices on verification statistics should be considered, with many details provided in the literature (*e.g.* :ref:`Candille and Talagrand, 2008 <Candille-2008>`; :ref:`Saetra et al., 2004 <Saetra-2004>`; :ref:`Santos and Ghelli, 2012 <Santos-2012>`). Generally, perturbation makes verification statistics better when applied to ensemble members, and worse when applied to the observations themselves.
In an attempt to ameliorate the effect of observation errors on the verification of forecasts, a random perturbation approach has been implemented. A great deal of user flexibility has been built in, but the methods detailed in :ref:`Candille and Talagrand (2008) <Candille-2008>` can be replicated using the appropriate options. Additional probabilistic measures that include observational uncertainty recommended by :ref:`Ferro, 2017 <Ferro-2017>` are also provided.

Observation error information can be defined directly in the Ensemble-Stat configuration file or through a more flexible observation error table lookup. The user selects a distribution for the observation error, along with parameters for that distribution. Rescaling and bias correction can also be specified prior to the perturbation. Random draws from the distribution can then be added to either, or both, of the forecast and observed fields, including ensemble members. Details about the effects of the choices on verification statistics should be considered, with many details provided in the literature (*e.g.* :ref:`Candille and Talagrand, 2008 <Candille-2008>`; :ref:`Saetra et al., 2004 <Saetra-2004>`; :ref:`Santos and Ghelli, 2012 <Santos-2012>`). Generally, perturbation makes verification statistics better when applied to ensemble members, and worse when applied to the observations themselves.

Normal and uniform are common choices for the observation error distribution. The uniform distribution provides the benefit of being bounded on both sides, thus preventing the perturbation from taking on extreme values. Normal is the most common choice for observation error. However, the user should realize that with the very large samples typical in NWP, some large outliers will almost certainly be introduced with the perturbation. For variables that are bounded below by 0, and that may have inconsistent observation errors (e.g. larger errors with larger measurements), a lognormal distribution may be selected. Wind speeds and precipitation measurements are the most common of this type of NWP variable. The lognormal error perturbation prevents measurements of 0 from being perturbed, and applies larger perturbations when measurements are larger. This is often the desired behavior in these cases, but this distribution can also lead to some outliers being introduced in the perturbation step.
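
The short sketch below illustrates the perturbation step described above. It is illustrative only, not the MET implementation: the distribution parameters and the multiplicative handling of the lognormal case are assumptions chosen to mirror the behavior described in this section.

.. code-block:: cpp

   // Illustrative sketch (not the MET source): draw a random perturbation
   // from a configured error distribution and apply it to an observed value.
   #include <iostream>
   #include <random>

   enum class ErrDist { Normal, Uniform, Lognormal };

   double perturb(double obs, ErrDist dist, double p1, double p2,
                  std::mt19937 &rng) {
      switch(dist) {
         case ErrDist::Normal: {     // additive draw from N(p1, p2)
            std::normal_distribution<double> d(p1, p2);
            return obs + d(rng);
         }
         case ErrDist::Uniform: {    // additive draw from U(p1, p2), bounded
            std::uniform_real_distribution<double> d(p1, p2);
            return obs + d(rng);
         }
         case ErrDist::Lognormal: {  // multiplicative draw: zero stays zero and
                                     // larger values receive larger perturbations
            std::lognormal_distribution<double> d(p1, p2);
            return obs * d(rng);
         }
      }
      return obs;
   }

   int main() {
      std::mt19937 rng(42);
      double wind_obs = 7.5;   // e.g. a wind speed observation in m/s
      std::cout << perturb(wind_obs, ErrDist::Lognormal, 0.0, 0.25, rng) << "\n";
      return 0;
   }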

@@ -647,6 +649,12 @@ The format of the STAT and ASCII output of the Ensemble-Stat tool are described
* - 49
- ME_LT_OBS
- The Mean Error of the ensemble values less than or equal to their observations
* - 50
- IGN_CONV_OERR
- Error-convolved logarithmic scoring rule (i.e. ignorance score) from Equation 5 of :ref:`Ferro, 2017 <Ferro-2017>`
* - 51
- IGN_CORR_OERR
- Error-corrected logarithmic scoring rule (i.e. ignorance score) from Equation 7 of :ref:`Ferro, 2017 <Ferro-2017>`
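
As a rough sketch of what these two columns represent (the authoritative definitions are given in Appendix C and :ref:`Ferro, 2017 <Ferro-2017>`), assume a Gaussian ensemble forecast with mean :math:`\mu` and variance :math:`\sigma^2`, an observation :math:`y`, and an observation error variance :math:`\omega^2` derived from the configured observation error distribution. The two scores then take approximately the form

.. math:: \mathrm{IGN\_CONV\_OERR} = \frac{1}{2}\ln\left(2\pi(\sigma^2+\omega^2)\right) + \frac{(y-\mu)^2}{2(\sigma^2+\omega^2)}

.. math:: \mathrm{IGN\_CORR\_OERR} = \frac{1}{2}\ln\left(2\pi\sigma^2\right) + \frac{(y-\mu)^2-\omega^2}{2\sigma^2}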

.. _table_ES_header_info_es_out_RPS:

135 changes: 88 additions & 47 deletions docs/Users_Guide/refs.rst
@@ -14,11 +14,18 @@ References

| Ahijevych, D., E. Gilleland, B.G. Brown, and E.E. Ebert, 2009: Application of
| spatial verification methods to idealized and NWP-gridded precipitation forecasts.
| *Weather and Forecasting*, 24 (6), 1485 - 1497, doi: 10.1175/2009WAF2222298.1.
| *Weather and Forecasting*, 24 (6), 1485 - 1497.
| doi: https://doi.org/10.1175/2009WAF2222298.1
|
.. _Barker-1991:
.. _Andersen-1996:

| Anderson JL., 1996: A method for producing and evaluating probabilistic forecasts
| from ensemble model integrations. *J. Clim.* 9: 1518-1530.
| doi: `https://doi.org/10.1175/1520-0442(1996)009<1518:AMFPAE>2.0.CO;2 <https://doi.org/10.1175/1520-0442(1996)009\<1518:AMFPAE\>2.0.CO;2>`_
|
.. _Barker-1991:

| Barker, T. W., 1991: The relationship between spread and forecast error in
| extended-range forecasts. *Journal of Climate*, 4, 733-742.
@@ -29,14 +36,14 @@
| Bradley, A.A., S.S. Schwartz, and T. Hashino, 2008: Sampling Uncertainty
| and Confidence Intervals for the Brier Score and Brier Skill Score.
| *Weather and Forecasting*, 23, 992-1006.
|
|
.. _Brill-2009:

| Brill, K. F., and F. Mesinger, 2009: Applying a general analytic method
| for assessing bias sensitivity to bias-adjusted threat and equitable
| threat scores. *Weather and Forecasting*, 24, 1748-1754.
|
|
.. _Brown-2007:

@@ -49,32 +56,47 @@
| http://ams.confex.com/ams/pdfpapers/124856.pdf.
|
.. _Bröcker-2007:

| Bröcker J, Smith LA., 2007: Scoring probabilistic forecasts: The importance
| of being proper. *Weather Forecasting*, 22, 382-388.
| doi: https://doi.org/10.1175/WAF966.1
|
.. _Buizza-1997:

| Buizza, R., 1997: Potential forecast skill of ensemble prediction and spread
| and skill distributions of the ECMWF ensemble prediction system. *Monthly*
| *Weather Review*,125, 99-119.
|
| *Weather Review*, 125, 99-119.
|
.. _Bullock-2016:

| Bullock, R., T. Fowler, and B. Brown, 2016: Method for Object-Based
| Diagnostic Evaluation. *NCAR Technical Note* NCAR/TN-532+STR, 66 pp.
|
|
.. _Candille-2007:

| Candille G, Côté C, Houtekamer PL, Pellerin G, 2007: Verification of an
| ensemble prediction system against observations. *Mon. Weather Rev.*
| 135: 2688-2699.
| doi: https://doi.org/10.1175/MWR3414.1
|
.. _Candille-2008:

| Candille, G., and O. Talagrand, 2008: Impact of observational error on the
| validation of ensemble prediction systems. *Quarterly Journal of the Royal*
| *Meteorological Society* 134: 959-971.
|
|
.. _Casati-2004:

| Casati, B., G. Ross, and D. Stephenson, 2004: A new intensity-scale approach
| for the verification of spatial precipitation forecasts. *Meteorological*
| *Applications* 11, 141-154.
|
|
.. _Davis-2006:

@@ -86,37 +108,45 @@
| Davis, C.A., B.G. Brown, and R.G. Bullock, 2006b: Object-based verification
| of precipitation forecasts, Part II: Application to convective rain systems.
| *Monthly Weather Review*, 134, 1785-1795.
|
|
.. _Dawid-1984:

| Dawid, A.P., 1984: Statistical theory: The prequential approach. *Journal of*
| *the Royal Statistical Society* A147, 278-292.
|
|
.. _Ebert-2008:

| Ebert, E.E., 2008: Fuzzy verification of high-resolution gridded forecasts:
| a review and proposed framework. *Meteorological Applications,* 15, 51-64.
|
| a review and proposed framework. *Meteorological Applications*, 15, 51-64.
|
.. _Eckel-2012:

| Eckel, F. A., M.S. Allen, M. C. Sittel, 2012: Estimation of Ambiguity in
| Ensemble Forecasts. *Weather Forecasting,* 27, 50-69.
| doi: http://dx.doi.org/10.1175/WAF-D-11-00015.1
| Ensemble Forecasts. *Weather Forecasting*, 27, 50-69.
| doi: https://doi.org/10.1175/WAF-D-11-00015.1
|
.. _Efron-2007:

| Efron, B. 2007: Correlation and large-scale significance testing. *Journal*
| of the American Statistical Association,* 102(477), 93-103.
| of the American Statistical Association*, 102(477), 93-103.
|
.. _Epstein-1969:

| Epstein, E. S., 1969: A scoring system for probability forecasts of ranked categories.
| *J. Appl. Meteor.*, 8, 985-987, 10.1175/1520-0450(1969)008<0985:ASSFPF>2.0.CO;2.
| *J. Appl. Meteor.*, 8, 985-987.
| doi: `https://doi.org/10.1175/1520-0450(1969)008<0985:ASSFPF>2.0.CO;2 <https://doi.org/10.1175/1520-0450(1969)008\<0985:ASSFPF\>2.0.CO;2>`_
|
.. _Ferro-2017:

| Ferro C. A. T., 2017: Measuring forecast performance in the presence of observation error.
| *Q. J. R. Meteorol. Soc.*, 143 (708), 2665-2676.
| doi: https://doi.org/10.1002/qj.3115
|
.. _Gilleland-2010:
@@ -129,29 +159,32 @@

| Gilleland, E., 2017: A new characterization in the spatial verification
| framework for false alarms, misses, and overall patterns.
| *Weather and Forecasting*, 32 (1), 187 - 198, doi: 10.1175/WAF-D-16-0134.1.
| *Weather and Forecasting*, 32 (1), 187 - 198.
| doi: https://doi.org/10.1175/WAF-D-16-0134.1
|
.. _Gilleland_PartI-2020:

| Gilleland, E., 2020: Bootstrap methods for statistical inference.
| Part I: Comparative forecast verification for continuous variables.
| *Journal of Atmospheric and Oceanic Technology*, 37 (11), 2117 - 2134,
| doi: 10.1175/JTECH-D-20-0069.1.
| *Journal of Atmospheric and Oceanic Technology*, 37 (11), 2117 - 2134.
| doi: https://doi.org/10.1175/JTECH-D-20-0069.1
|
.. _Gilleland_PartII-2020:

| Gilleland, E., 2020: Bootstrap methods for statistical inference.
| Part II: Extreme-value analysis. *Journal of Atmospheric and Oceanic*
| *Technology*, 37 (11), 2135 - 2144, doi: 10.1175/JTECH-D-20-0070.1.
| *Technology*, 37 (11), 2135 - 2144.
| doi: https://doi.org/10.1175/JTECH-D-20-0070.1
|
.. _Gilleland-2021:

| Gilleland, E., 2021: Novel measures for summarizing high-resolution forecast
| performance. *Advances in Statistical Climatology, Meteorology and Oceanography*,
| 7 (1), 13 - 34, doi: 10.5194/ascmo-7-13-2021.
| 7 (1), 13 - 34.
| doi: https://doi.org/10.5194/ascmo-7-13-2021
|
.. _Gneiting-2004:
@@ -161,7 +194,7 @@
| *Minimum CRPS Estimation*. Technical Report no. 449, Department of
| Statistics, University of Washington. Available at
| http://www.stat.washington.edu/www/research/reports/
|
|
.. _Haiden-2012:

@@ -175,41 +208,41 @@

| Hamill, T. M., 2001: Interpretation of rank histograms for verifying ensemble
| forecasts. *Monthly Weather Review*, 129, 550-560.
|
|
.. _Hersbach-2000:

| Hersbach, H., 2000: Decomposition of the Continuous Ranked Probability Score
| for Ensemble Prediction Systems. *Weather and Forecasting*, 15, 559-570.
|
|
.. _Jolliffe-2012:

| Jolliffe, I.T., and D.B. Stephenson, 2012: *Forecast verification. A*
| *practitioner's guide in atmospheric science.* Wiley and Sons Ltd, 240 pp.
|
|
.. _Knaff-2003:

| Knaff, J.A., M. DeMaria, C.R. Sampson, and J.M. Gross, 2003: Statistical,
| Five-Day Tropical Cyclone Intensity Forecasts Derived from Climatology
| and Persistence. *Weather and Forecasting,* Vol. 18 Issue 2, p. 80-92.
|
| and Persistence. *Weather and Forecasting*, Vol. 18 Issue 2, p. 80-92.
|
.. _Mason-2004:

| Mason, S. J., 2004: On Using "Climatology" as a Reference Strategy
| in the Brier and Ranked Probability Skill Scores. *Monthly Weather Review*,
| 132, 1891-1895.
|
|
.. _Mason-2008:

| Mason, S. J., 2008: Understanding forecast verification statistics.
| *Meteor. Appl.*, 15, 31-40, doi: 10.1002/met.51.
| *Meteor. Appl.*, 15, 31-40.
| doi: https://doi.org/10.1002/met.51
|

.. _Mittermaier-2014:

| Mittermaier, M., 2014: A strategy for verifying near-convection-resolving
@@ -220,21 +253,20 @@

| Mood, A. M., F. A. Graybill and D. C. Boes, 1974: *Introduction to the*
| *Theory of Statistics*, McGraw-Hill, 299-338.
|
|
.. _Murphy-1969:

| Murphy, A.H., 1969: On the ranked probability score. *Journal of Applied*
| *Meteorology and Climatology*, 8 (6), 988 - 989,
| doi: 10.1175/1520-0450(1969)008<0988:OTPS>2.0.CO;2.
| doi: `https://doi.org/10.1175/1520-0450(1969)008<0988:OTPS>2.0.CO;2 <https://doi.org/10.1175/1520-0450(1969)008\<0988:OTPS\>2.0.CO;2>`_
|
.. _Murphy-1987:

| Murphy, A.H., and R.L. Winkler, 1987: A general framework for forecast
| verification. *Monthly Weather Review*, 115, 1330-1338.
|
|
.. _North-2022:

@@ -256,7 +288,7 @@
| Roberts, N.M., and H.W. Lean, 2008: Scale-selective verification of rainfall
| accumulations from high-resolution forecasts of convective events.
| *Monthly Weather Review*, 136, 78-97.
|
|
.. _Rodwell-2010:

@@ -273,19 +305,26 @@
| https://www.ecmwf.int/node/14595
|
.. _Röpnack-2013:

| Röpnack A, Hense A, Gebhardt C, Majewski D., 2013: Bayesian model verification
| of NWP ensemble forecasts. *Mon. Weather Rev.* 141: 375–387.
| doi: https://doi.org/10.1175/MWR-D-11-00350.1
|
.. _Saetra-2004:

| Saetra O., H. Hersbach, J-R Bidlot, D. Richardson, 2004: Effects of
| Saetra Ø., H. Hersbach, J-R Bidlot, D. Richardson, 2004: Effects of
| observation errors on the statistics for ensemble spread and
| reliability. *Monthly Weather Review* 132: 1487-1501.
| reliability. *Monthly Weather Review*, 132: 1487-1501.
|
.. _Santos-2012:

| Santos C. and A. Ghelli, 2012: Observational probability method to assess
| ensemble precipitation forecasts. *Quarterly Journal of the Royal*
| *Meteorological Society* 138: 209-221.
|
|
.. _Schwartz-2017:

@@ -298,38 +337,40 @@

| Stephenson, D.B., 2000: Use of the "Odds Ratio" for diagnosing
| forecast skill. *Weather and Forecasting*, 15, 221-232.
|
|
.. _Stephenson-2008:

| Stephenson, D.B., B. Casati, C.A.T. Ferro, and C.A. Wilson, 2008: The extreme
| dependency score: A non-vanishing measure for forecasts of rare events.
| *Meteorological Applications* 15, 41-50.
|
|
.. _Todter-2012:
.. _Tödter-2012:

| Tödter, J. and B. Ahrens, 2012: Generalization of the Ignorance Score:
| Continuous ranked version and its decomposition. *Monthly Weather Review*,
| 140 (6), 2005 - 2017, doi: 10.1175/MWR-D-11-00266.1.
| 140 (6), 2005 - 2017.
| doi: https://doi.org/10.1175/MWR-D-11-00266.1
|
.. _Weniger-2016:

| Weniger, M., F. Kapp, and P. Friederichs, 2016: Spatial Verification Using
| Wavelet Transforms: A Review. *Quarterly Journal of the Royal*
| *Meteorological Society*, 143, 120-136.
|
|
.. _Wilks-2010:

| Wilks, D.S. 2010: Sampling distributions of the Brier score and Brier skill
| score under serial dependence. *Quarterly Journal of the Royal*
| *Meteorological Society*, 136, 2109-2118. doi:10.1002/qj.709
|
| *Meteorological Society*, 136, 2109-2118.
| doi: https://doi.org/10.1002/qj.709
|
.. _Wilks-2011:

| Wilks, D., 2011: *Statistical methods in the atmospheric sciences.*
| Elsevier, San Diego.
|
|