
Metrics for model evaluation

Methods commonly used to evaluate model performance include:

  • Mean absolute error (MAE)

$$\mathrm{MAE}=\frac{1}{N} \sum_{i=1}^{N}\left|y_{i}-\hat{y}_{i}\right|$$

where $N$ is the number of observations, $y_{i}$ the observed (actual) value and $\hat{y}_{i}$ the model's prediction (the same notation is used below unless indicated otherwise).

  • Mean bias error (MBE)

$$\mathrm{MBE}=\frac{1}{N} \sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)$$

  • Mean square error (MSE)

$$\mathrm{MSE}=\frac{1}{N} \sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}$$

  • Root mean square error (RMSE)

$$\mathrm{RMSE}=\sqrt{\frac{1}{N} \sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}}$$

  • Coefficient of determination (R2)

$$R^{2}= 1-\frac{\mathrm{MSE}(\text{model})}{\mathrm{MSE}(\text{baseline})}$$

$$\mathrm{MSE}(\text{baseline})= \frac{1}{N} \sum_{i=1}^{N}\left(y_{i}-\overline{y}\right)^{2}$$

where $\overline{y}$ is the mean of the observed $y_{i}$.
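
A minimal sketch of how these metrics could be computed with NumPy. This code is not part of the original document; the function names and example arrays are illustrative only.

.. code-block:: python

    import numpy as np

    def mae(y, y_hat):
        """Mean absolute error."""
        return np.mean(np.abs(y - y_hat))

    def mbe(y, y_hat):
        """Mean bias error; with the y - y_hat convention used above,
        a positive value means the model underestimates on average."""
        return np.mean(y - y_hat)

    def mse(y, y_hat):
        """Mean square error."""
        return np.mean((y - y_hat) ** 2)

    def rmse(y, y_hat):
        """Root mean square error."""
        return np.sqrt(mse(y, y_hat))

    def r2(y, y_hat):
        """Coefficient of determination: 1 - MSE(model) / MSE(baseline),
        where the baseline predicts the mean of the observations."""
        return 1.0 - mse(y, y_hat) / np.mean((y - np.mean(y)) ** 2)

    # Hypothetical example values
    y_obs = np.array([2.1, 3.4, 4.0, 5.2])
    y_mod = np.array([2.0, 3.6, 3.8, 5.5])
    print(mae(y_obs, y_mod), mbe(y_obs, y_mod), rmse(y_obs, y_mod), r2(y_obs, y_mod))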

Combining these metrics with plots (e.g. scatter, time series) allows identification of periods when a model performs well or poorly relative to observations. It should be remembered that both the model (e.g. parameters, forcing data) and the evaluation observations have errors.
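
As an illustration of this point, the sketch below (not from the original document; the synthetic series are purely illustrative) plots modelled against observed values as a time series and as a scatter plot with a 1:1 line, which makes it easy to spot periods where the model drifts from the observations.

.. code-block:: python

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical hourly series: observations and model output
    t = np.arange(240)
    rng = np.random.default_rng(0)
    y_obs = 10 + 5.0 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)
    y_mod = 10 + 4.5 * np.sin(2 * np.pi * t / 24 - 0.2)

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))

    # Time series: where do the two curves diverge?
    ax1.plot(t, y_obs, label="observed")
    ax1.plot(t, y_mod, label="modelled")
    ax1.set_xlabel("hour")
    ax1.legend()

    # Scatter plot with 1:1 line: systematic over/underestimation shows as offset
    ax2.scatter(y_obs, y_mod, s=10)
    ax2.plot([y_obs.min(), y_obs.max()], [y_obs.min(), y_obs.max()], "k--")
    ax2.set_xlabel("observed")
    ax2.set_ylabel("modelled")

    plt.tight_layout()
    plt.show()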