CoverForestClassifier
- class coverforest.CoverForestClassifier(n_estimators=5, *, method='cv', cv=5, k_init='auto', lambda_init='auto', repeat_params_search=True, allow_empty_sets=True, randomized=True, alpha_default=None, n_forests_per_fold=1, resample_n_estimators=True, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, max_samples=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, monotonic_cst=None)
A conformal random forest classifier.
This class provides an implementation of conformal random forests for prediction sets that contain the true labels with probability at least 1 - alpha, where alpha is a user-specified miscoverage rate. The prediction sets are constructed using the Adaptive Prediction Sets (APS) method.
The class supports three subsampling methods for out-of-sample calibration:
‘cv’: Uses K-fold cross-validation to split the training set. This method is referred to as CV+.
‘bootstrap’: Uses bootstrap subsampling on the training set. This method is referred to as Jackknife+-after-Bootstrap.
‘split’: Uses train-test split on the training set. This method is referred to as split conformal.
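For illustration, the three calibration methods might be selected as follows; a minimal sketch, with arbitrary n_estimators values:

from coverforest import CoverForestClassifier

clf_cv = CoverForestClassifier(n_estimators=10, method='cv', cv=5)      # CV+
clf_jab = CoverForestClassifier(n_estimators=10, method='bootstrap')    # Jackknife+-after-Bootstrap
clf_split = CoverForestClassifier(n_estimators=10, method='split')      # split conformal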
If the predict() method returns a lot of empty sets, try increasing the target coverage rate by decreasing the value of alpha. The option allow_empty_sets=False should be used sparingly.

The Jackknife+-after-Bootstrap implementation (method='bootstrap') follows [5]. Specifically, before fitting, the number of sub-estimators is resampled from the binomial distribution Binomial(n_estimators / p, p), where p = (1 - 1/n_samples)**max_samples.

To fit the model with exactly n_estimators sub-estimators, initialize the model with resample_n_estimators=False.
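A minimal sketch illustrating both notes; X_train, y_train and X_test are assumed to be predefined arrays, and the alpha values are illustrative:

from coverforest import CoverForestClassifier

# Fit exactly 100 sub-estimators with Jackknife+-after-Bootstrap
# by disabling the binomial resampling step.
clf = CoverForestClassifier(n_estimators=100, method='bootstrap',
                            resample_n_estimators=False)
clf.fit(X_train, y_train)

# If many of the returned sets are empty, decreasing alpha
# (i.e., raising the target coverage) typically reduces their number.
y_pred, sets_90 = clf.predict(X_test, alpha=0.10)  # 90% target coverage
y_pred, sets_95 = clf.predict(X_test, alpha=0.05)  # 95% target coverage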
- Parameters:
n_estimators (int, default=5) – The number of sklearn.tree.DecisionTreeClassifier trees in the forest.

method ({'cv', 'bootstrap', 'split'}, default='cv') – The conformal prediction method to use:

'cv': Uses CV+ for conformal prediction
'bootstrap': Uses Jackknife+-after-Bootstrap
'split': Uses split conformal prediction

cv (int or cross-validation generator, default=5) – Used when method='cv'. If an integer is provided, it is the number of folds used. See the sklearn.model_selection module for the list of possible cross-validation objects.

k_init (int or "auto", default="auto") – Initial value for the parameter k, which penalizes any prediction set that contains more than k classes. If "auto", the value is chosen automatically during fitting.
lambda_init (float or "auto", default="auto") – Initial value for lambda parameter (regularization strength). If “auto”, the value is chosen automatically during fitting.
repeat_params_search (bool, default=True) – Whether to repeat the search for optimal parameters when refitting.
allow_empty_sets (bool, default=True) – If True, allows empty prediction sets when no class meets the confidence threshold.
randomized (bool, default=True) – If True, adds randomization during the label selection which yields smaller prediction sets. If False, the predictions will have more conservative coverage.
alpha_default (float, default=None) – The default miscoverage rate alpha that will be passed to predict() whenever it is called indirectly, e.g. via scikit-learn's GridSearchCV (see the sketch after this parameter list).

n_forests_per_fold (int, default=1) – Used when method='cv'. The number of forests to be fitted on each combination of K-1 folds.

resample_n_estimators (bool, default=True) – Used when method='bootstrap'. If True, resample the value of n_estimators following the procedure in Kim, Xu & Barber (2020). Specifically, a new number of estimators is sampled from Binomial(n_estimators / p, p), where p = (1 - 1/n_samples)**max_samples.

bootstrap (bool, default=True) – Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree. When method='cv', the value is passed on to the FastRandomForestClassifier sub-estimators.

max_samples (int or float, default=None) –

If bootstrap is True, the number of samples to draw from X to train each base estimator.

If None (default), then draw X.shape[0] samples.
If int, then draw max_samples samples.
If float, then draw max(round(n_samples * max_samples), 1) samples. Thus, max_samples should be in the interval (0.0, 1.0].

When method='cv', the value is passed on to the FastRandomForestClassifier sub-estimators.

criterion ({"gini", "entropy", "log_loss"}, default="gini") – The function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "log_loss" and "entropy" both for the Shannon information gain; see the "Mathematical formulation" section of the scikit-learn decision tree documentation. Note: this parameter is tree-specific.

max_depth (int, default=None) – The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
min_samples_split (int or float, default=2) –
The minimum number of samples required to split an internal node:
If int, then consider min_samples_split as the minimum number.
If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) is the minimum number of samples for each split.
min_samples_leaf (int or float, default=1) –
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.

If int, then consider min_samples_leaf as the minimum number.
If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) is the minimum number of samples for each node.
min_weight_fraction_leaf (float, default=0.0) – The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
max_features ({"sqrt", "log2", None}, int or float, default="sqrt") –
The number of features to consider when looking for the best split:
If int, then consider max_features features at each split.
If float, then max_features is a fraction and max(1, int(max_features * n_features_in_)) features are considered at each split.
If "sqrt", then max_features=sqrt(n_features).
If "log2", then max_features=log2(n_features).
If None, then max_features=n_features.

Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.

max_leaf_nodes (int, default=None) – Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, then the number of leaf nodes is unlimited.

min_impurity_decrease (float, default=0.0) –
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child.

N, N_t, N_t_R and N_t_L all refer to the weighted sum if sample_weight is passed.

oob_score (bool or callable, default=False) – Whether to use out-of-bag samples to estimate the generalization score. By default, sklearn.metrics.accuracy_score is used. Provide a callable with signature metric(y_true, y_pred) to use a custom metric. Only available if bootstrap=True.

n_jobs (int, default=None) – The number of jobs to run in parallel. fit, predict, decision_path and apply are all parallelized over the trees. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors.

random_state (int, RandomState instance or None, default=None) – Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features).

verbose (int, default=0) – Controls the verbosity when fitting and predicting.
warm_start (bool, default=False) – When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest.

class_weight ({"balanced", "balanced_subsample"}, dict or list of dicts, default=None) – Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y.

Note that for multi-output (including multilabel) problems, weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification, weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1:1}, {2:5}, {3:1}, {4:1}].
The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
The “balanced_subsample” mode is the same as “balanced” except that weights are computed based on the bootstrap sample for every tree grown.
For multi-output, the weights of each column of y will be multiplied.
Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
ccp_alpha (non-negative float, default=0.0) – Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is performed.

monotonic_cst (array-like of int of shape (n_features,), default=None) –
Indicates the monotonicity constraint to enforce on each feature.
1: monotonic increase
0: no constraint
-1: monotonic decrease
If monotonic_cst is None, no constraints are applied.
Monotonicity constraints are not supported for classifications trained on data with missing values.
The constraints hold over the probability of the positive class.
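As a sketch of the alpha_default behavior described above: when the estimator is used inside scikit-learn's GridSearchCV, predict() and score() are called internally without an explicit alpha, so alpha_default supplies the miscoverage rate. The grid values and alpha_default=0.1 below are illustrative; X and y are assumed to be defined as in the Examples section:

from sklearn.model_selection import GridSearchCV
from coverforest import CoverForestClassifier

search = GridSearchCV(
    CoverForestClassifier(method='cv', alpha_default=0.1, random_state=0),
    param_grid={'n_estimators': [5, 10, 20]},  # illustrative grid
    cv=3,
)
search.fit(X, y)  # score() uses alpha_default when evaluating each candidate
best_clf = search.best_estimator_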
- estimator_
The child estimator template used to create the collection of fitted sub-estimators. It will be a FastRandomForestClassifier if method='cv' and a sklearn.tree.DecisionTreeClassifier otherwise.
- Type: FastRandomForestClassifier or sklearn.tree.DecisionTreeClassifier
- estimators_
The collection of fitted sub-estimators: a list of FastRandomForestClassifier if method='cv' and a list of sklearn.tree.DecisionTreeClassifier otherwise.
- Type: list of FastRandomForestClassifier or sklearn.tree.DecisionTreeClassifier
- oob_pred_
The out-of-bag probability predictions on the training set.
- Type:
ndarray of shape (n_samples, n_classes, 1)
- train_giqs_
The generalized inverse quantile scores of the training set.
- Type:
ndarray of shape (n_samples, n_classes)
- classes_
The class labels (single output problem), or a list of arrays of class labels (multi-output problem).
- Type:
ndarray of shape (n_classes,) or a list of such arrays
- n_classes_
The number of classes (single output problem), or a list containing the number of classes for each output (multi-output problem).
- Type: int or list
- feature_names_in_
Names of features seen during fit. Defined only when X has feature names that are all strings.
- Type: ndarray of shape (n_features_in_,)
- feature_importances_
The impurity-based feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance.
Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values).
- Type:
ndarray of shape (n_features,)
- oob_score_
Score of the training dataset obtained using an out-of-bag estimate. This attribute exists only when oob_score is True.
- Type: float
- oob_decision_function_
Decision function computed with out-of-bag estimates on the training set. If n_estimators is small, it is possible that a data point was never left out during the bootstrap; in this case, oob_decision_function_ might contain NaN. This attribute exists only when oob_score is True.
- Type:
ndarray of shape (n_samples, n_classes) or (n_samples, n_classes, n_outputs)
- estimators_samples_
The subset of drawn samples (i.e., the in-bag samples) for each base estimator. Each subset is defined by an array of the indices selected.
- Type:
list of arrays
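Once fitted, these attributes can be inspected directly. A short sketch (the attribute names follow the list above; shapes depend on the data):

clf = CoverForestClassifier(n_estimators=10, method='cv').fit(X, y)
clf.classes_              # class labels, shape (n_classes,)
clf.n_classes_            # number of classes
clf.feature_importances_  # impurity-based importances, shape (n_features,)
clf.train_giqs_           # generalized inverse quantile scores, shape (n_samples, n_classes)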
See also
CoverForestRegressor
A conformal random forest for regression tasks.
sklearn.ensemble.RandomForestClassifier
The standard random forest classifier from scikit-learn.
References
[5] Kim, B., Xu, C., & Barber, R. F. (2020). Predictive inference is free with the jackknife+-after-bootstrap. Advances in Neural Information Processing Systems, 33.
Examples
>>> from coverforest import CoverForestClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=200, n_features=4,
...                            n_informative=2, n_redundant=0,
...                            random_state=0, shuffle=False)
>>> clf = CoverForestClassifier(n_estimators=10, method='cv', random_state=0)
>>> clf.fit(X, y)
CoverForestClassifier(...)
>>> print(clf.predict(X[:1]))
(array([0]), [array([0, 1])])
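Building on the example above, a custom miscoverage rate can be passed at prediction time, and the empirical coverage of the returned sets can be checked (a sketch; alpha=0.05 is illustrative):

>>> y_pred, y_sets = clf.predict(X, alpha=0.05)
>>> coverage = sum(y[i] in s for i, s in enumerate(y_sets)) / len(y)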
Methods
apply(X) – Apply trees in the forest to X, return leaf indices.
decision_path(X) – Return the decision path in the forest.
fit(X, y[, alpha, calib_size, valid_size, ...]) – Fit the conformal forest classifier.
get_metadata_routing() – Get metadata routing of this object.
get_params([deep]) – Get parameters for this estimator.
predict(X[, alpha, binary_output, num_threads]) – Predict class labels and prediction sets for X.
predict_log_proba(X) – Predict class log-probabilities for X.
predict_proba(X) – Predict class probabilities for X.
score(X, y[, alpha, scoring, sample_weight]) – Evaluate the prediction sets on the given test data and labels.
search_k_and_lambda(X, y[, alpha, ...]) – Search for optimal values of the k and lambda parameters and store them as attributes k_star_ and lambda_star_, respectively.
set_fit_request(*[, alpha, calib_size, ...]) – Request metadata passed to the fit method.
set_params(**params) – Set the parameters of this estimator.
set_predict_request(*[, alpha, ...]) – Request metadata passed to the predict method.
set_score_request(*[, alpha, sample_weight, ...]) – Request metadata passed to the score method.