scoringutils: Utilities for Scoring and Assessing Predictions
This package is designed to help with assessing the quality of predictions. It provides a collection of proper scoring rules and metrics that can be accessed independently or collectively to score predictions automatically. It also includes metrics that go beyond the scope of existing packages like scoringRules, e.g. for evaluating bias or for assessing calibration using the probability integral transform (PIT). Through the function eval_forecasts it provides functionality to score forecasts automatically and conveniently using data.table.
For a quick overview, have a look at the package vignette.
Predictions can be either probabilistic forecasts (generally predictive samples generated by Markov chain Monte Carlo procedures), quantile forecasts or point forecasts. The predictions and the true values can be either continuous, integer, or binary.
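The probability integral transform evaluates each observation under its predictive distribution; if the forecasts are well calibrated, the resulting PIT values are approximately uniform on [0, 1]. As a language-agnostic illustration (not the package's R implementation), the PIT can be estimated from predictive samples like this:

```python
import numpy as np

def pit_values(samples, observations):
    """Empirical PIT: the fraction of predictive samples at or below
    each observation.

    samples: array of shape (n_forecasts, n_samples) of predictive draws
    observations: array of shape (n_forecasts,) of observed values
    """
    samples = np.asarray(samples, dtype=float)
    obs = np.asarray(observations, dtype=float)
    # Estimate P(X <= y) from the predictive samples of each forecast
    return (samples <= obs[:, None]).mean(axis=1)

# Forecasts drawn from the true data-generating process are calibrated,
# so their PIT values should look roughly uniform.
rng = np.random.default_rng(0)
obs = rng.normal(size=1000)
samples = rng.normal(size=(1000, 500))
pit = pit_values(samples, obs)
```

A histogram of `pit` that is far from flat indicates miscalibration, e.g. a U-shape suggests the forecasts are too narrow.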
Installation
Install the stable version from CRAN using
install.packages("scoringutils")

New features not yet available on CRAN can be accessed via
{drat}:
install.packages("drat")
drat:::add("epiforecasts")
install.packages("scoringutils")

The version of the package under active development can also be installed with:

remotes::install_github("epiforecasts/scoringutils")

Supported scores and metrics
pit
bias
sharpness
crps
dss
brier_score
interval_score
More information on the different metrics and examples can be found in the package vignette.
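To give a sense of what these metrics compute, here is a sketch of the interval score for a central prediction interval, written in Python for illustration only (the package itself implements interval_score in R; the function name and arguments below are this sketch's own):

```python
import numpy as np

def interval_score(lower, upper, observed, alpha):
    """Interval score for a central (1 - alpha) prediction interval.

    The score is the interval width plus, if the observation falls
    outside the interval, a penalty of 2/alpha per unit of overshoot.
    Lower scores are better.
    """
    lower, upper, observed = map(np.asarray, (lower, upper, observed))
    width = upper - lower
    penalty_low = 2 / alpha * np.clip(lower - observed, 0, None)
    penalty_high = 2 / alpha * np.clip(observed - upper, 0, None)
    return width + penalty_low + penalty_high

# Example with a 90% interval [0, 10] (alpha = 0.1):
inside = interval_score(0, 10, 5, 0.1)    # observation inside: score is just the width
outside = interval_score(0, 10, 12, 0.1)  # observation outside: width plus 2/alpha per unit overshoot
```

Narrow intervals are rewarded, but only insofar as they still cover the observations, which is what makes the score proper.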