Kolmogorov-Smirnov — alibi-detect 0.12.0 documentation

Overview

The drift detector applies feature-wise two-sample Kolmogorov-Smirnov (K-S) tests. For multivariate data, the p-values obtained for each feature are aggregated via either the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls the probability of at least one false positive. The FDR correction, on the other hand, allows for an expected fraction of false positives to occur.
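To make the two corrections concrete, here is a minimal, illustrative sketch (not the library's internals) of how feature-wise p-values could be aggregated into a single drift decision; the p-values and significance level below are made up:

import numpy as np

# Illustrative p-values, one per feature, and a significance level.
p_vals = np.array([0.03, 0.20, 0.004, 0.08])
alpha = 0.05

# Bonferroni: flag drift if any p-value falls below alpha / n_features.
bonferroni_drift = bool((p_vals < alpha / len(p_vals)).any())

# FDR (Benjamini-Hochberg): compare sorted p-values against a growing threshold.
sorted_p = np.sort(p_vals)
ranks = np.arange(1, len(sorted_p) + 1)
fdr_drift = bool((sorted_p <= alpha * ranks / len(sorted_p)).any())

print(bonferroni_drift, fdr_drift)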

For high-dimensional data, we typically want to reduce the dimensionality before computing the feature-wise univariate K-S tests and aggregating those via the chosen correction method. Following suggestions in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier’s softmax outputs (BBSDs) as out-of-the-box preprocessing methods and note that PCA can also be easily implemented using scikit-learn. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift. The adversarial detector which is part of the library can also be turned into a drift detector that picks up drift reducing the performance of the classification model. We can therefore combine different preprocessing techniques to determine whether there is drift that hurts model performance, and whether this drift can be classified as input drift or label shift.
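As a sketch of the PCA route mentioned above, assuming x_ref is a 2-D numpy array (the component count is an arbitrary choice for illustration; preprocess_fn accepts a callable mapping a batch to its reduced representation):

from sklearn.decomposition import PCA
from alibi_detect.cd import KSDrift

# Fit the projection on the reference data only, so the test data
# cannot influence the dimensionality reduction.
pca = PCA(n_components=32)  # 32 components is an arbitrary choice
pca.fit(x_ref)

cd = KSDrift(x_ref, p_val=.05, preprocess_fn=pca.transform)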

Detecting input data drift (covariate shift) \(\Delta p(x)\) for text data requires a custom preprocessing step. We can pick up changes in the semantics of the input by extracting (contextual) embeddings and detecting drift on those. Strictly speaking we are not detecting \(\Delta p(x)\) anymore, since the whole training procedure (objective function, training data, etc.) for the (pre)trained embeddings has an impact on the embeddings we extract. The library contains functionality to leverage pre-trained embeddings from HuggingFace’s transformers package but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in the Text drift detection on IMDB movie reviews notebook.
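A rough sketch of the custom-embedding route (the model choice, the mean pooling and the x_ref_texts / x_texts variables are all illustrative, not the library's built-in embedding utilities):

import torch
from transformers import AutoModel, AutoTokenizer
from alibi_detect.cd import KSDrift

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')

def embed(texts):
    # Mean-pool the last hidden state into one vector per document.
    tokens = tokenizer(list(texts), padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        hidden = model(**tokens).last_hidden_state
    return hidden.mean(dim=1).numpy()

cd = KSDrift(embed(x_ref_texts), p_val=.05)
preds = cd.predict(embed(x_texts))

In practice the high-dimensional transformer embeddings would typically be reduced further (e.g. with the UAE approach above) before running the feature-wise tests.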

Usage

Initialize

Arguments:

- x_ref: Data used as reference distribution.

Keyword arguments:

- p_val: p-value threshold used for significance of the K-S test. If the FDR correction method is used, this corresponds to the acceptable q-value.
- preprocess_fn: Function to preprocess the data before computing the drift metrics.
- correction: Correction type for multivariate data. Either 'bonferroni' or 'fdr'.

Initialized drift detector example:

from alibi_detect.cd import KSDrift

cd = KSDrift(x_ref, p_val=0.05)
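The correction method discussed in the overview can be configured at initialization as well; a sketch using the FDR correction instead of the default Bonferroni:

cd = KSDrift(x_ref, p_val=.05, correction='fdr')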

Detect Drift

We detect data drift by simply calling predict on a batch of instances x. We can return the feature-wise p-values before the multivariate correction by setting return_p_val to True. The drift can also be detected at the feature level by setting drift_type to ‘feature’; no multivariate correction takes place in that case since the output of n_features univariate tests is returned. For drift detection on all the features combined with the correction, use ‘batch’. Setting return_p_val to True will also return the threshold used by the detector (either for the univariate case or after the multivariate correction).

The prediction takes the form of a dictionary with meta and data keys. meta contains the detector’s metadata while data is also a dictionary which contains the actual predictions stored in the following keys:

- is_drift: 1 if the sample tested has drifted from the reference data and 0 otherwise.
- p_val: the feature-level p-values if return_p_val equals True.
- threshold: for feature-level drift detection the threshold equals the p-value used for the significance of the K-S test; otherwise the threshold after the multivariate correction (e.g. Bonferroni) is returned.
- distance: the feature-wise K-S statistics between the reference data and the new batch if return_distance equals True.

preds = cd.predict(x, drift_type='batch', return_p_val=True, return_distance=True)
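The returned values can then be read from the data dictionary; a short sketch using the keys listed above:

print(preds['data']['is_drift'])   # 0/1 drift decision for the batch
print(preds['data']['p_val'])      # feature-wise p-values
print(preds['data']['threshold'])  # threshold used for the decision
print(preds['data']['distance'])   # feature-wise K-S statistics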

Examples

Graph

Drift detection on molecular graphs

Image

Drift detection on CIFAR10

Text

Text drift detection on IMDB movie reviews