Prediction performance on the OOB set is evaluated using various measures for classification problems.

Usage
evaluation(x, y, plot = FALSE)

Arguments

x

true class labels

y

predicted class labels

plot

logical; if TRUE, a discrimination plot and a reliability plot are shown for each class

Value

A list with one element per evaluation measure, except for the cs element, which is itself a list of class-specific evaluation measures.

Details

The currently supported evaluation measures include discriminatory measures such as log loss, AUC, and PDI; macro-averaged PPV (Precision)/Sensitivity (Recall)/F1-score; accuracy (equivalent to micro-averaged PPV/Sensitivity/F1-score); Matthews correlation coefficient (and its micro-averaged analog); Kappa; G-mean; and class-specific accuracy/PPV/NPV/Sensitivity/Specificity/F1-score/MCC/Kappa/G-mean.
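To illustrate the macro-averaged measures, here is a minimal sketch of how per-class PPV, Sensitivity, and F1-score can be computed from a confusion matrix and then macro-averaged. This is an illustrative computation only, not the package's internal implementation; the toy labels are made up for the example.

# Hedged sketch: macro-averaged PPV/Sensitivity/F1 from a confusion matrix
truth <- factor(c("A", "A", "B", "B", "C"))
pred  <- factor(c("A", "B", "B", "B", "C"), levels = levels(truth))
cm <- table(truth, pred)            # rows = true class, columns = predicted class
ppv  <- diag(cm) / colSums(cm)      # per-class precision (of predicted positives)
sens <- diag(cm) / rowSums(cm)      # per-class recall (of actual positives)
f1   <- 2 * ppv * sens / (ppv + sens)
macro_f1 <- mean(f1, na.rm = TRUE)  # unweighted average over classes

Macro-averaging weights every class equally regardless of its prevalence, whereas accuracy (the micro-averaged analog) is dominated by the majority classes.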

Author

Derek Chiu

Examples

data(hgsc)
class <- factor(attr(hgsc, "class.true"))
set.seed(1)
training.id <- sample(seq_along(class), replace = TRUE)
test.id <- which(!seq_along(class) %in% training.id)
mod <- classification(hgsc[training.id, ], class[training.id], "xgboost")
pred <- prediction(mod, hgsc, class, test.id)
evaluation(class[test.id], pred)
#> $logloss
#> [1] 0.9125752
#> 
#> $auc
#> [1] 0.9372867
#> 
#> $pdi
#> [1] 0.8493226
#> 
#> $accuracy
#> [1] 0.7540107
#> 
#> $macro_ppv
#> [1] 0.7578747
#> 
#> $macro_npv
#> [1] 0.9177646
#> 
#> $macro_sensitivity
#> [1] 0.758613
#> 
#> $macro_specificity
#> [1] 0.9176079
#> 
#> $macro_f1
#> [1] 0.7560955
#> 
#> $mcc
#> [1] 0.6726097
#> 
#> $kappa
#> [1] 0.6714537
#> 
#> $gmean
#> [1] 0.7548308
#> 
#> $cs
#>    accuracy.DIF.C4    accuracy.IMM.C2    accuracy.MES.C1    accuracy.PRO.C5 
#>          0.8235294          0.8770053          0.9251337          0.8823529 
#>         ppv.DIF.C4         ppv.IMM.C2         ppv.MES.C1         ppv.PRO.C5 
#>          0.6851852          0.7045455          0.8125000          0.8292683 
#>         npv.DIF.C4         npv.IMM.C2         npv.MES.C1         npv.PRO.C5 
#>          0.8796992          0.9300699          0.9640288          0.8972603 
#> sensitivity.DIF.C4 sensitivity.IMM.C2 sensitivity.MES.C1 sensitivity.PRO.C5 
#>          0.6981132          0.7560976          0.8863636          0.6938776 
#> specificity.DIF.C4 specificity.IMM.C2 specificity.MES.C1 specificity.PRO.C5 
#>          0.8731343          0.9109589          0.9370629          0.9492754 
#>          f1.DIF.C4          f1.IMM.C2          f1.MES.C1          f1.PRO.C5 
#>          0.6915888          0.7294118          0.8478261          0.7555556 
#>         mcc.DIF.C4         mcc.IMM.C2         mcc.MES.C1         mcc.PRO.C5 
#>          0.5680571          0.6506338          0.7996339          0.6835707 
#>       kappa.DIF.C4       kappa.IMM.C2       kappa.MES.C1       kappa.PRO.C5 
#>          0.5680084          0.6499552          0.7983051          0.6788948 
#>       gmean.DIF.C4       gmean.IMM.C2       gmean.MES.C1       gmean.PRO.C5 
#>          0.7807347          0.8299240          0.9113608          0.8115916 
#>