
Benchmark Metrics

This is an R package containing specifically designed metrics to assess and benchmark Land Surface and Vegetation Models against observations. The metrics can also be applied to any simulation-versus-observation comparison.

Metrics

There are 8 metrics, used for different types of comparison:

  • Normalised Mean Error and Normalised Mean Squared Error (NME, NMSE) for most comparisons (including gridded, temporal and site-based comparisons); see the sketch after this list
  • Manhattan Metric and Square Chord Distance (MM, SCD) for "item" comparisons (e.g. fractional cover of different land-surface types)
  • Discrete versions of MM and SCD (DMM, DSCD) for item comparisons where each cell/site takes a single, discrete value and items are described by their affinity to traits
  • Mean Phase Difference (MPD) for comparisons of oscillating processes (e.g. seasonal or diurnal variations)
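
As a rough illustration of the first bullet, the sketch below computes NME by hand on toy data, assuming the standard definition (the sum of absolute model-to-observation errors, normalised by the observations' mean absolute deviation). This is a toy reimplementation for intuition only, not the package's NME function:

## Hand-rolled NME on toy data (definition assumed for illustration)
obs <- c(0.2, 0.5, 0.9, 0.4)    # observations
mod <- c(0.25, 0.4, 1.0, 0.35)  # model output
nme <- sum(abs(mod - obs)) / sum(abs(mean(obs) - obs))
nme  # under this definition, scores below 1 beat the "mean" null model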

Interpretation

Metric scores are easy to interpret for model evaluation and inter-model comparison. To aid evaluation further, each score can be compared to "null" model benchmarks: a "mean" model based on the temporal or spatial mean value of the observations, and a "random" model produced by bootstrap resampling of the observations.
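
To make the "random" null model concrete, here is a minimal sketch of bootstrap resampling under the same assumed NME definition, reusing obs from the toy example above:

## Illustrative "random" null model: score many bootstrap resamples of obs
set.seed(42)
null_scores <- replicate(1000, {
    rand_mod <- sample(obs, length(obs), replace = TRUE)
    sum(abs(rand_mod - obs)) / sum(abs(mean(obs) - obs))
})
mean(null_scores)  # a model scoring well below this is doing better than chance

A model whose score sits below the bulk of null_scores is performing better than random resampling of the observations.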

How To Use

To install the package, first make sure you have R/RStudio running and that you have installed and loaded devtools:

install.packages('devtools')
library(devtools)

Then run the following commands to install and load Benchmark Metrics:

install_github('douglask3/benchmarkmetrics/benchmarkMetrics')
library(benchmarkMetrics)

To see help files and examples of how to use the package:

?benchmarkMetrics

Info on the main functions can be found using:

?NME
?MM
?DMM
?MPD
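
As a sketch of what a call might look like, the snippet below passes observations and a model simulation to NME. The argument order here is an assumption for illustration, so consult ?NME for the actual signature:

## Hypothetical call (argument order assumed; see ?NME for the real interface)
score <- NME(obs, mod)
score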

More Info

Click here for the documentation.
