Do Pairwise Comparison for one Set of Forecasts
Source: R/pairwise-comparisons.R
pairwise_comparison_one_group.Rd
This function does the pairwise comparison for one set of forecasts, but with multiple models involved. It gets called from pairwise_comparison(). pairwise_comparison() splits the data into arbitrary subgroups specified by the user (e.g. if pairwise comparisons should be done separately for different forecast targets) and the actual pairwise comparison for each subgroup is then handled by pairwise_comparison_one_group(). To do the comparison between two models over a subset of common forecasts, it in turn calls compare_two_models().
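As a rough illustration of that split-then-delegate flow (a simplified sketch, not the package's actual internals), the logic could look like the following. The helper name split_and_compare is hypothetical, and the exact way the worker is called is an assumption based on the arguments documented below.

```r
library(scoringutils)
library(data.table)

# Hypothetical sketch of the flow described above: split the scores by the
# grouping columns (dropping "model"), then run the one-group pairwise
# comparison on each subset and bind the results back together.
split_and_compare <- function(scores, metric = "auto", baseline = NULL,
                              by = "model", ...) {
  split_cols <- setdiff(by, "model")  # "model" is removed before splitting
  if (length(split_cols) == 0) {
    return(pairwise_comparison_one_group(
      scores, metric = metric, baseline = baseline, by = by, ...
    ))
  }
  subsets <- split(scores, by = split_cols)  # data.table split by grouping columns
  results <- lapply(
    subsets, pairwise_comparison_one_group,
    metric = metric, baseline = baseline, by = by, ...
  )
  rbindlist(results, idcol = "grouping")
}
```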
Arguments
- scores
A data.table of scores as produced by score().
- metric
A character vector of length one with the metric to do the comparison on. The default is "auto", meaning that either "interval_score", "crps", or "brier_score" will be selected where available. See available_metrics() for a list of available metrics.
- baseline
A character vector of length one that denotes the baseline model against which to compare the other models.
- by
A character vector with the names of columns present in the input data.frame. by determines how pairwise comparisons will be computed: you will get a relative skill score for every grouping level determined by by. If, for example, by = c("model", "location"), you will get a separate relative skill score for every model in every location. Internally, the data.frame is split according to by (with "model" removed before splitting) and the pairwise comparisons are computed separately for each of the resulting data.frames. See the usage sketch after this list.
- ...
Additional arguments for the comparison between two models. See compare_two_models() for more information.
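For context, here is a minimal usage sketch of the user-facing pairwise_comparison() wrapper that delegates to this function. The example_quantile data set, its target_type column, and the "EuroCOVIDhub-baseline" model name are assumptions taken from the package's bundled example data and may differ in your installation.

```r
library(scoringutils)

# Score the bundled example forecasts (assumed example data set).
scores <- score(example_quantile)

# Pairwise comparisons computed separately for every model/target_type
# combination, relative to an assumed baseline model.
pairwise <- pairwise_comparison(
  scores,
  by = c("model", "target_type"),
  metric = "interval_score",
  baseline = "EuroCOVIDhub-baseline"
)

head(pairwise)
```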