coverforest.metrics.classification_coverage_score
- coverforest.metrics.classification_coverage_score(y_true, y_pred, *, labels=None, sample_weight=None)
Compute the empirical coverage for classification prediction sets.
The coverage score measures the proportion of true labels that are included in the prediction sets.
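Concretely, writing C(x_i) for the prediction set of sample i and w_i for the sample weights (all equal when sample_weight is None), the score corresponds to the weighted proportion below. The notation is a sketch for intuition, not taken from the library's documentation:

$$
\mathrm{coverage} = \frac{\sum_{i=1}^{n} w_i \,\mathbf{1}\{y_i \in C(x_i)\}}{\sum_{i=1}^{n} w_i}
$$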
- Parameters:
y_true (array-like of shape (n_samples,)) – Ground truth (correct) labels.
y_pred (tuple, list or array-like of shape (n_samples, n_classes)) – Binary matrix indicating the predicted set for each sample, where 1 indicates the class is included in the prediction set and 0 indicates it is not.
labels (array-like of shape (n_classes,), default=None) – List of labels in the same order as the columns of y_pred.
sample_weight (array-like of shape (n_samples,), default=None) – Sample weights. If None, then samples are equally weighted.
- Returns:
score – Returns the empirical coverage, i.e., the proportion of true labels included in the prediction sets, weighted by sample_weight. Best value is 1 and worst value is 0.
- Return type:
float
Examples
>>> import numpy as np
>>> from coverforest.metrics import classification_coverage_score
>>> y_true = [0, 1, 2]
>>> y_pred = np.array([[1, 0, 1], [0, 0, 1], [0, 0, 1]])
>>> labels = [0, 1, 2]
>>> classification_coverage_score(y_true, y_pred, labels=labels)
0.66...
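For intuition, the value above can be reproduced by hand. The sketch below is not the library's implementation; manual_coverage is a hypothetical helper that assumes, as in the docstring, that labels gives the class value for each column of y_pred:

import numpy as np

def manual_coverage(y_true, y_pred, labels, sample_weight=None):
    # Map each class value to its column index in y_pred.
    col = {label: j for j, label in enumerate(labels)}
    # covered[i] is 1 if the true label of sample i appears in its prediction set.
    covered = np.array(
        [y_pred[i, col[label]] for i, label in enumerate(y_true)], dtype=float
    )
    # Weighted proportion of covered samples (equal weights when sample_weight is None).
    return np.average(covered, weights=sample_weight)

y_true = [0, 1, 2]
y_pred = np.array([[1, 0, 1], [0, 0, 1], [0, 0, 1]])
print(manual_coverage(y_true, y_pred, labels=[0, 1, 2]))  # 0.666..., samples 0 and 2 are covered

Passing a sample_weight array to the same helper illustrates how the weighted variant down-weights or up-weights individual samples in the proportion.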