Scoring forecasts directly
Nikos Bosse
2024-01-05
Source: vignettes/scoring-forecasts-directly.Rmd
A variety of metrics and scoring rules can also be accessed directly through the scoringutils package. The following gives an overview of (most of) the implemented metrics.
Bias
The function bias_sample() determines bias from predictive Monte Carlo samples, automatically recognising whether forecasts are continuous or integer valued.
For continuous forecasts, bias is measured as
\[B_t (P_t, x_t) = 1 - 2 \cdot P_t (x_t)\]
where \(P_t\) is the empirical cumulative distribution function of the predictive samples for the true value \(x_t\). Computationally, \(P_t (x_t)\) is simply calculated as the fraction of predictive samples that are smaller than the true value \(x_t\).
For integer valued forecasts, bias is measured as
\[B_t (P_t, x_t) = 1 - (P_t (x_t) + P_t (x_t + 1))\]
to adjust for the integer nature of the forecasts. In both cases, bias can assume values between -1 and 1 and is 0 for an ideal forecast.
library("scoringutils")

## integer valued forecasts
true_values <- rpois(30, lambda = 1:30)
predictions <- replicate(200, rpois(n = 30, lambda = 1:30))
bias_sample(true_values, predictions)
#> [1] -0.060 0.510 0.740 0.690 0.830 0.920 0.335 -0.720 0.145 -0.645
#> [11] 0.320 0.955 -0.560 -0.250 0.610 -0.405 -0.215 0.695 -0.285 -0.595
#> [21] -0.830 -0.560 0.560 -0.800 -0.060 -0.470 0.470 -0.675 -0.225 -0.210
## continuous forecasts
true_values <- rnorm(30, mean = 1:30)
predictions <- replicate(200, rnorm(30, mean = 1:30))
bias_sample(true_values, predictions)
#> [1] 0.86 -0.21 -0.70 -0.74 -0.28 0.16 -0.57 0.20 0.92 0.02 0.18 -0.98
#> [13] 0.42 -0.55 -0.41 0.31 0.83 0.02 -0.89 0.69 -0.22 -0.64 -0.21 -0.03
#> [25] -0.36 -0.89 0.52 -0.91 0.14 -0.92
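To see how the formula maps onto the samples, here is a minimal by-hand sketch for the first continuous forecast: the empirical CDF evaluated at the true value is the fraction of predictive samples below it, which is then plugged into \(1 - 2 \cdot P_t(x_t)\). (Exactly how bias_sample() handles ties internally may differ, which does not matter for continuous forecasts.)
## by-hand sketch of the continuous bias for the first forecast
p <- mean(predictions[1, ] < true_values[1]) # empirical CDF at the true value
1 - 2 * p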
Sharpness
Sharpness is the ability of the model to generate predictions within a narrow range. It is a data-independent measure, and is purely a feature of the forecasts themselves.
Sharpness / dispersion of predictive samples corresponding to one single true value is measured as the normalised median of the absolute deviation from the median of the predictive samples. For details, see ?stats::mad.
predictions <- replicate(200, rpois(n = 30, lambda = 1:30))
mad_sample(predictions)
#> [1] 1.4826 1.4826 1.4826 1.4826 1.4826 2.9652 2.9652 2.9652 2.9652 2.9652
#> [11] 2.9652 2.9652 3.7065 4.4478 4.4478 2.9652 2.9652 4.4478 4.4478 4.4478
#> [21] 4.4478 4.4478 4.4478 5.1891 4.4478 5.9304 4.4478 5.9304 4.4478 5.9304
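As a rough by-hand check (assuming mad_sample() applies stats::mad to each row of the sample matrix, which is consistent with the output above), the dispersion of the first target's predictive samples is:
## normalised MAD of the predictive samples for the first target
## (stats::mad scales by the constant 1.4826 by default)
mad(predictions[1, ])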
Calibration
Calibration or reliability of forecasts is the ability of a model to correctly identify its own uncertainty in making predictions. In a model with perfect calibration, the observed data at each time point look as if they came from the predictive probability distribution at that time.
Equivalently, one can inspect the probability integral transform of the predictive distribution at time t,
\[u_t = F_t (x_t)\]
where \(x_t\) is the observed data point at time \(t\) in \(t_1, \dots, t_n\), with \(n\) being the number of forecasts, and \(F_t\) is the (continuous) predictive cumulative probability distribution at time \(t\). If the true probability distribution of outcomes at time \(t\) is \(G_t\), then the forecasts \(F_t\) are said to be ideal if \(F_t = G_t\) at all times \(t\). In that case, the probabilities \(u_t\) are distributed uniformly.
In the case of discrete outcomes such as incidence counts, the PIT is no longer uniform even when forecasts are ideal. In that case a randomised PIT can be used instead:
\[u_t = P_t(k_t - 1) + v \cdot (P_t(k_t) - P_t(k_t - 1))\]
where \(k_t\) is the observed count, \(P_t(x)\) is the predictive cumulative probability of observing incidence \(x\) at time \(t\), \(P_t (-1) = 0\) by definition, and \(v\) is standard uniform and independent of \(k_t\). If \(P_t\) is the true cumulative probability distribution, then \(u_t\) is standard uniform.
The function checks whether integer or continuous forecasts were provided. It then applies the (randomised) probability integral transform and tests the values \(u_t\) for uniformity using the Anderson-Darling test.
As a rule of thumb, there is no evidence to suggest a forecasting model is miscalibrated if the p-value is greater than 0.1, some evidence that it is miscalibrated if \(0.01 < p < 0.1\), and good evidence that it is miscalibrated if \(p \le 0.01\). It should be noted, though, that uniformity of the PIT is a necessary but not sufficient condition for calibration. The test also only works given sufficient samples; otherwise the null hypothesis will often be rejected outright.
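As an illustration, PIT values can be computed from predictive samples and tested for uniformity along the lines of the sketch below. This assumes a pit_sample() function as provided by scoringutils and the ad.test() function from the goftest package; exact function names and arguments may differ between package versions.
## sketch: PIT values for integer forecasts, tested for uniformity
## (assumes pit_sample() and the goftest package are available)
true_values <- rpois(30, lambda = 1:30)
predictions <- replicate(200, rpois(n = 30, lambda = 1:30))
pit_values <- pit_sample(true_values, predictions)
goftest::ad.test(pit_values, null = "punif")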
Continuous Ranked Probability Score (CRPS)
Wrapper around the crps_sample() function from the scoringRules package. For more information, see the documentation of the scoringRules package. The function can be used for continuous as well as integer valued forecasts. Smaller values are better.
true_values <- rpois(30, lambda = 1:30)
predictions <- replicate(200, rpois(n = 30, lambda = 1:30))
crps_sample(true_values, predictions)
#> [1] 0.619075 1.141925 0.421800 1.187925 1.862175 1.421125 3.175625 0.707225
#> [9] 0.821000 2.202550 1.489675 4.230350 2.068600 1.846400 2.033150 1.006725
#> [17] 4.382625 5.714400 1.308225 1.644700 1.016875 7.485375 1.172850 4.020650
#> [25] 6.572950 2.435500 2.027250 4.798200 5.996925 1.304075
Dawid-Sebastiani Score (DSS)
Wrapper around the dss_sample() function from the scoringRules package. For more information, see the documentation of the scoringRules package. The function can be used for continuous as well as integer valued forecasts. Smaller values are better.
true_values <- rpois(30, lambda = 1:30)
predictions <- replicate(200, rpois(n = 30, lambda = 1:30))
dss_sample(true_values, predictions)
#> [1] 1.165659 1.050672 2.036363 2.468454 2.593678 3.293437 3.313360 3.274400
#> [9] 5.060960 5.099363 2.418170 2.622987 3.025972 2.883681 2.717724 3.393027
#> [17] 4.039027 3.280330 3.730624 2.984937 5.068618 4.988829 6.597516 3.188194
#> [25] 3.406496 4.382078 3.318902 3.673912 5.901138 3.555209
Log Score
Wrapper around the logs_sample() function from the scoringRules package. For more information, see the documentation of the scoringRules package. The function should not be used for integer valued forecasts: while log scores are in principle possible for integer valued forecasts, they require a kernel density estimate, which is not well defined for discrete values. Smaller values are better.
true_values <- rnorm(30, mean = 1:30)
predictions <- replicate(200, rnorm(n = 30, mean = 1:30))
logs_sample(true_values, predictions)
#> [1] 1.0237927 2.2212938 0.9748835 1.5016227 2.3900393 3.1304078 2.4975716
#> [8] 1.0549976 1.0984040 1.0540025 1.0810217 1.1917151 1.3401613 2.2647809
#> [15] 1.2568646 1.1724414 1.3442432 1.7490976 1.1767289 1.0240906 1.1313794
#> [22] 2.3427133 1.0749628 1.4782878 2.5371978 1.1889431 1.0348537 0.9774932
#> [29] 1.2378228 0.9858892
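To illustrate the kernel density idea, the following is a rough by-hand sketch for the first forecast: minus the log of a kernel density estimate of the predictive samples, evaluated at the observed value. The kernel and bandwidth used internally by scoringRules may differ, so this is only an approximation of the actual calculation.
## rough sketch: -log of a kernel density estimate at the observed value
dens <- density(predictions[1, ])
-log(approx(dens$x, dens$y, xout = true_values[1])$y)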
Brier Score
The Brier score is a proper scoring rule that assesses the accuracy of probabilistic binary predictions. The outcomes can be either 0 or 1, and the predictions must be probabilities that the true outcome will be 1.
The Brier score is then computed as the mean squared error between the probabilistic predictions and the true outcomes:
\[\text{Brier score} = \frac{1}{N} \sum_{t = 1}^{N} (\text{prediction}_t - \text{outcome}_t)^2\]
As the output below shows, brier_score() returns the squared error for each individual forecast; averaging these values gives the overall Brier score.
true_values <- sample(c(0, 1), size = 30, replace = TRUE)
predictions <- runif(n = 30, min = 0, max = 1)
brier_score(true_values, predictions)
#> [1] 0.865540531 0.045168467 0.086694747 0.595718911 0.416585683 0.633915594
#> [7] 0.014313674 0.926126211 0.211062581 0.019926024 0.080794303 0.530490349
#> [13] 0.164082353 0.373946098 0.410966854 0.237093996 0.094151585 0.857520865
#> [19] 0.179130686 0.703889627 0.487750648 0.253234251 0.118782626 0.990205614
#> [25] 0.539813342 0.655227998 0.178944720 0.004635267 0.067633383 0.056118898
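Consistent with the output above, the individual values can be reproduced by hand as the elementwise squared error; taking their mean gives the overall Brier score from the formula:
## elementwise squared error and overall Brier score
(predictions - true_values)^2
mean((predictions - true_values)^2)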
Interval Score
The interval score is a proper scoring rule to score quantile predictions, following Gneiting and Raftery (2007). Smaller values are better.
The score is computed as
\[ \text{score} = (\text{upper} - \text{lower}) + \\ \frac{2}{\alpha} \cdot (\text{lower} - \text{true\_value}) \cdot \mathbf{1}(\text{true\_value} < \text{lower}) + \\ \frac{2}{\alpha} \cdot (\text{true\_value} - \text{upper}) \cdot \mathbf{1}(\text{true\_value} > \text{upper})\]
where \(\mathbf{1}()\) is the indicator function and \(\alpha\) is the decimal value that indicates how much probability mass lies outside the prediction interval. To improve usability, the user is asked to provide an interval range in percentage terms, i.e. interval_range = 90 (percent) for a 90 percent prediction interval. Correspondingly, the user would have to provide the 5% and 95% quantiles (the corresponding \(\alpha\) would then be 0.1). No specific distribution is assumed, but the range has to be symmetric (i.e. you can't use the 0.1 quantile as the lower bound and the 0.7 quantile as the upper). Setting weigh = TRUE will weigh the score by \(\frac{\alpha}{2}\) such that the interval score converges to the CRPS for an increasing number of quantiles.
true_values <- rnorm(30, mean = 1:30)
interval_range <- 90
alpha <- (100 - interval_range) / 100
lower <- qnorm(alpha / 2, rnorm(30, mean = 1:30))
upper <- qnorm((1 - alpha / 2), rnorm(30, mean = 1:30))
interval_score(
true_values = true_values,
lower = lower,
upper = upper,
interval_range = interval_range
)
#> [1] 0.2151940 0.2650117 0.1705533 0.1184143 0.2780883 0.1354288 0.1879721
#> [8] 2.1592026 0.1197901 0.1386311 0.3213405 0.2312974 0.2421270 0.1896443
#> [15] 0.2119109 0.1461521 0.1761693 0.2396483 0.1973792 0.1040551 0.2130877
#> [22] 0.2509228 0.1283601 0.2474427 0.1107187 1.3530117 0.8432551 0.2077642
#> [29] 0.4716714 0.1720388
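For comparison, the formula can also be evaluated by hand. The values in the output above are roughly \(\frac{\alpha}{2}\) times the unweighted score, which suggests the weighing described above is applied by default; multiplying the unweighted result by \(\frac{\alpha}{2}\) should therefore correspond to what interval_score() returned.
## by-hand evaluation of the (unweighted) interval score formula
unweighted <- (upper - lower) +
  2 / alpha * (lower - true_values) * (true_values < lower) +
  2 / alpha * (true_values - upper) * (true_values > upper)
unweighted

## weighted version (multiplied by alpha / 2)
unweighted * alpha / 2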