incrementalLearner

Convert binary classification support vector machine (SVM) model to incremental learner

Since R2020b

Syntax
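IncrementalMdl = incrementalLearner(Mdl)
IncrementalMdl = incrementalLearner(Mdl,Name,Value)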

Description

IncrementalMdl = incrementalLearner(Mdl) returns a binary classification linear model for incremental learning, IncrementalMdl, using the traditionally trained linear SVM model object or SVM model template object in Mdl.

If you specify a traditionally trained model, then its property values reflect the knowledge gained from Mdl (parameters and hyperparameters of the model). Therefore, IncrementalMdl can predict labels given new observations, and it is warm, meaning that its predictive performance is tracked.
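For instance, a minimal sketch of this workflow (here Mdl stands in for your own traditionally trained ClassificationSVM model, and Xnew for a hypothetical batch of new predictor data):

IncrementalMdl = incrementalLearner(Mdl); % Convert; IncrementalMdl is warm
labels = predict(IncrementalMdl,Xnew);    % Predict labels with no further training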


IncrementalMdl = incrementalLearner(Mdl,Name,Value) uses additional options specified by one or more name-value arguments. Some options require you to train IncrementalMdl before its predictive performance is tracked. For example, 'MetricsWarmupPeriod',50,'MetricsWindowSize',100 specifies a preliminary incremental training period of 50 observations before performance metrics are tracked, and specifies processing 100 observations before updating the window performance metrics.


Examples


Train an SVM model by using fitcsvm, and then convert it to an incremental learner.

Load and Preprocess Data

Load the human activity data set.

For details on the data set, enter Description at the command line.

Responses can be one of five classes: Sitting, Standing, Walking, Running, or Dancing. Dichotomize the response by identifying whether the subject is moving (actid > 2).
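The code for these steps, assuming the humanactivity data set that ships with Statistics and Machine Learning Toolbox (it loads the feature matrix feat and the activity identifier actid):

load humanactivity
Y = actid > 2;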

Train SVM Model

Fit an SVM model to the entire data set. Discard the support vectors (Alpha) from the model so that the software uses the linear coefficients (Beta) for prediction.

TTMdl = fitcsvm(feat,Y);
TTMdl = discardSupportVectors(TTMdl)

TTMdl = 
  ClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: [0 1]
           ScoreTransform: 'none'
          NumObservations: 24075
                     Beta: [60×1 double]
                     Bias: -6.4280
         KernelParameters: [1×1 struct]
           BoxConstraints: [24075×1 double]
          ConvergenceInfo: [1×1 struct]
          IsSupportVector: [24075×1 logical]
                   Solver: 'SMO'


TTMdl is a ClassificationSVM model object representing a traditionally trained SVM model.

Convert Trained Model

Convert the traditionally trained SVM model to a binary classification linear model for incremental learning.

IncrementalMdl = incrementalLearner(TTMdl)

IncrementalMdl = incrementalClassificationLinear

        IsWarm: 1
       Metrics: [1×2 table]
    ClassNames: [0 1]
ScoreTransform: 'none'
          Beta: [60×1 double]
          Bias: -6.4280
       Learner: 'svm'


IncrementalMdl is an incrementalClassificationLinear model object prepared for incremental learning using SVM.

Predict Responses

An incremental learner created from converting a traditionally trained model can generate predictions without further processing.

Predict classification scores for all observations using both models.

[~,ttscores] = predict(TTMdl,feat);
[~,ilscores] = predict(IncrementalMdl,feat);
compareScores = norm(ttscores(:,1) - ilscores(:,1))

The difference between the scores generated by the models is 0.

The default solver is the adaptive scale-invariant solver. If you specify this solver, you do not need to tune any parameters for training. However, if you specify either the standard SGD or ASGD solver instead, you can also specify an estimation period, during which the incremental fitting functions tune the learning rate.

Load the human activity data set.

For details on the data set, enter Description at the command line.

Responses can be one of five classes: Sitting, Standing, Walking, Running, and Dancing. Dichotomize the response by identifying whether the subject is moving (actid > 2).
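As in the previous example, assuming the humanactivity data set:

load humanactivity
Y = actid > 2;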

Randomly split the data in half: the first half for training a model traditionally, and the second half for incremental learning.

n = numel(Y);

rng(1) % For reproducibility
cvp = cvpartition(n,'Holdout',0.5);
idxtt = training(cvp);
idxil = test(cvp);

% First half of data
Xtt = feat(idxtt,:);
Ytt = Y(idxtt);

% Second half of data
Xil = feat(idxil,:);
Yil = Y(idxil);

Fit an SVM model to the first half of the data. Standardize the predictor data by setting 'Standardize',true.

TTMdl = fitcsvm(Xtt,Ytt,'Standardize',true);

The Mu and Sigma properties of TTMdl contain the predictor data sample means and standard deviations, respectively.

Suppose that the distribution of the predictors is not expected to change in the future. Convert the traditionally trained SVM model to a binary classification linear model for incremental learning. Specify the standard SGD solver and an estimation period of 2000 observations (the default is 1000 when a learning rate is required).

IncrementalMdl = incrementalLearner(TTMdl,'Solver','sgd','EstimationPeriod',2000);

IncrementalMdl is an incrementalClassificationLinear model object. Because the predictor data of TTMdl is standardized (TTMdl.Mu and TTMdl.Sigma are nonempty), incrementalLearner prepares incremental learning functions to standardize supplied predictor data by using the previously learned moments (stored in IncrementalMdl.Mu and IncrementalMdl.Sigma).
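As a quick sanity check (a sketch; it assumes the conversion copies the moments verbatim), you can compare the stored moments of the two models:

% The converted model should carry the previously learned moments
tf = isequal(IncrementalMdl.Mu,TTMdl.Mu) && isequal(IncrementalMdl.Sigma,TTMdl.Sigma)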

Fit the incremental model to the second half of the data by using the fit function. Simulate a data stream by processing chunks of 10 observations at a time. At each iteration:

Process 10 observations.

Overwrite the previous incremental model with a new one fitted to the incoming observations.

Store the initial learning rate and β1, the first coefficient, to see how they evolve during training.

% Preallocation
nil = numel(Yil);
numObsPerChunk = 10;
nchunk = floor(nil/numObsPerChunk);
learnrate = [IncrementalMdl.LearnRate; zeros(nchunk,1)];
beta1 = [IncrementalMdl.Beta(1); zeros(nchunk,1)];

% Incremental fitting
for j = 1:nchunk
    ibegin = min(nil,numObsPerChunk*(j-1) + 1);
    iend = min(nil,numObsPerChunk*j);
    idx = ibegin:iend;
    IncrementalMdl = fit(IncrementalMdl,Xil(idx,:),Yil(idx));
    beta1(j + 1) = IncrementalMdl.Beta(1);
    learnrate(j + 1) = IncrementalMdl.LearnRate;
end

IncrementalMdl is an incrementalClassificationLinear model object trained on all the data in the stream.

To see how the initial learning rate and β1 evolve during training, plot them on separate tiles.

t = tiledlayout(2,1);
nexttile
plot(beta1)
ylabel('\beta_1')
xline(IncrementalMdl.EstimationPeriod/numObsPerChunk,'r-.')
nexttile
plot(learnrate)
ylabel('Initial Learning Rate')
xline(IncrementalMdl.EstimationPeriod/numObsPerChunk,'r-.')
xlabel(t,'Iteration')

Figure: β1 (top) and the initial learning rate (bottom) versus iteration, each with a vertical line marking the end of the estimation period.

The initial learning rate jumps from 0.7 to its autotuned value after the estimation period. During training, the software gradually decays the learning rate from this initial value, according to the schedule stored in the LearnRateSchedule property of IncrementalMdl.
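You can inspect the tuned rate and the schedule directly; the exact values depend on the data, so none are shown here:

% Display the autotuned initial learning rate and its decay schedule
IncrementalMdl.LearnRate
IncrementalMdl.LearnRateSchedule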

Because fit does not fit the model to the streaming data during the estimation period, β1 is constant for the first 200 iterations (2000 observations). Then, β1 changes during incremental fitting.

Use a trained SVM model to initialize an incremental learner. Prepare the incremental learner by specifying a metrics warm-up period, during which the updateMetricsAndFit function only fits the model. Specify a metrics window size of 500 observations.

Load the human activity data set.

For details on the data set, enter Description at the command line.

Responses can be one of five classes: Sitting, Standing, Walking, Running, and Dancing. Dichotomize the response by identifying whether the subject is moving (actid > 2).
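Again, assuming the humanactivity data set:

load humanactivity
Y = actid > 2;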

Because the data set is grouped by activity, shuffle it to reduce bias. Then, randomly split the data in half: the first half for training a model traditionally, and the second half for incremental learning.

n = numel(Y);

rng(1) % For reproducibility
cvp = cvpartition(n,'Holdout',0.5);
idxtt = training(cvp);
idxil = test(cvp);
shuffidx = randperm(n);
X = feat(shuffidx,:);
Y = Y(shuffidx);

% First half of data
Xtt = X(idxtt,:);
Ytt = Y(idxtt);

% Second half of data
Xil = X(idxil,:);
Yil = Y(idxil);

Fit an SVM model to the first half of the data.

TTMdl = fitcsvm(Xtt,Ytt);

Convert the traditionally trained SVM model to a binary classification linear model for incremental learning. Specify the following:

A metrics warm-up period of 2000 observations

A metrics window size of 500 observations

Tracking of the classification error and hinge loss metrics

IncrementalMdl = incrementalLearner(TTMdl,'MetricsWarmupPeriod',2000,'MetricsWindowSize',500,...
    'Metrics',["classiferror" "hinge"]);

Fit the incremental model to the second half of the data by using the updateMetricsAndFit function. Simulate a data stream by processing chunks of 20 observations at a time. At each iteration:

Process 20 observations.

Overwrite the previous incremental model with a new one fitted to the incoming observations.

Store the cumulative and window classification error and hinge loss, and β1, the first coefficient, to see how they evolve during training.

% Preallocation
nil = numel(Yil);
numObsPerChunk = 20;
nchunk = ceil(nil/numObsPerChunk);
ce = array2table(zeros(nchunk,2),'VariableNames',["Cumulative" "Window"]);
hinge = array2table(zeros(nchunk,2),'VariableNames',["Cumulative" "Window"]);
beta1 = [IncrementalMdl.Beta(1); zeros(nchunk,1)];

% Incremental fitting
for j = 1:nchunk
    ibegin = min(nil,numObsPerChunk*(j-1) + 1);
    iend = min(nil,numObsPerChunk*j);
    idx = ibegin:iend;
    IncrementalMdl = updateMetricsAndFit(IncrementalMdl,Xil(idx,:),Yil(idx));
    ce{j,:} = IncrementalMdl.Metrics{"ClassificationError",:};
    hinge{j,:} = IncrementalMdl.Metrics{"HingeLoss",:};
    beta1(j + 1) = IncrementalMdl.Beta(1);
end

IncrementalMdl is an incrementalClassificationLinear model object trained on all the data in the stream. During incremental learning and after the model is warmed up, updateMetricsAndFit checks the performance of the model on the incoming observations, and then fits the model to those observations.

To see how the performance metrics and β1 evolve during training, plot them on separate tiles.

t = tiledlayout(3,1);
nexttile
plot(beta1)
ylabel('\beta_1')
xlim([0 nchunk]);
xline(IncrementalMdl.MetricsWarmupPeriod/numObsPerChunk,'r-.');
nexttile
h = plot(ce.Variables);
xlim([0 nchunk]);
ylabel('Classification Error')
xline(IncrementalMdl.MetricsWarmupPeriod/numObsPerChunk,'r-.');
legend(h,ce.Properties.VariableNames,'Location','northwest')
nexttile
h = plot(hinge.Variables);
xlim([0 nchunk]);
ylabel('Hinge Loss')
xline(IncrementalMdl.MetricsWarmupPeriod/numObsPerChunk,'r-.');
legend(h,hinge.Properties.VariableNames,'Location','northwest')
xlabel(t,'Iteration')

Figure: β1 (top), cumulative and window classification error (middle), and cumulative and window hinge loss (bottom) versus iteration, each with a vertical line marking the end of the metrics warm-up period.

The plot suggests that updateMetricsAndFit does the following:

Fit the model during all incremental learning iterations.

Compute the performance metrics after the metrics warm-up period only (2000 observations, or 100 iterations).

Compute the cumulative metrics during each iteration.

Compute the window metrics after processing 500 observations (25 iterations).

Input Arguments

Name-Value Arguments


Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: 'Solver','scale-invariant','MetricsWindowSize',100 specifies the adaptive scale-invariant solver for objective optimization, and specifies processing 100 observations before updating the window performance metrics.

General Options



SGD and ASGD Solver Options

Adaptive Scale-Invariant Solver Options

Performance Metrics Options


Output Arguments

More About


Incremental learning, or online learning, is a branch of machine learning concerned with processing incoming data from a data stream, possibly given little to no knowledge of the distribution of the predictor variables, aspects of the prediction or objective function (including tuning parameter values), or whether the observations are labeled. Incremental learning differs from traditional machine learning, where enough labeled data is available to fit a model, perform cross-validation to tune hyperparameters, and infer the predictor distribution.

Given incoming observations, an incremental learning model processes data in any of the following ways, but usually in this order (as sketched below):

Predict labels.

Measure the predictive performance.

Fit the model.
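A minimal per-chunk sketch of this order for a binary classification linear model (IncrementalMdl is an incrementalClassificationLinear object; Xchunk and Ychunk are hypothetical chunks read from a data stream):

[labels,scores] = predict(IncrementalMdl,Xchunk);             % 1. Predict labels
IncrementalMdl = updateMetrics(IncrementalMdl,Xchunk,Ychunk); % 2. Measure performance
IncrementalMdl = fit(IncrementalMdl,Xchunk,Ychunk);           % 3. Fit the model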

For more details, see Incremental Learning Overview.

The adaptive scale-invariant solver for incremental learning, introduced in [1], is a gradient-descent-based objective solver for training linear predictive models. The solver is hyperparameter free, insensitive to differences in predictor variable scales, and does not require prior knowledge of the distribution of the predictor variables. These characteristics make it well suited to incremental learning.

The standard SGD and ASGD solvers are sensitive to differing scales among the predictor variables, resulting in models that can perform poorly. To achieve better accuracy using SGD and ASGD, you can standardize the predictor data, and tune the regularization and learning rate parameters. For traditional machine learning, enough data is available to enable hyperparameter tuning by cross-validation and predictor standardization. However, for incremental learning, enough data might not be available (for example, observations might be available only one at a time) and the distribution of the predictors might be unknown. These characteristics make parameter tuning and predictor standardization difficult or impossible to do during incremental learning.

The incremental fitting functions for classification, fit and updateMetricsAndFit, use the more aggressive ScInOL2 version of the algorithm.

Algorithms


During the estimation period, the incremental fitting functions fit and updateMetricsAndFit use the first incoming EstimationPeriod observations to estimate (tune) hyperparameters required for incremental training. Estimation occurs only when EstimationPeriod is positive. The following describes each hyperparameter, the model property that stores it, its usage, and when it is estimated.

Predictor means and standard deviations — stored in the Mu and Sigma properties, and used to standardize the predictor data. These hyperparameters are estimated when both of these conditions apply: the incremental fitting functions are configured to standardize the predictor data (see Standardize Data), and Mdl.Mu and Mdl.Sigma are empty arrays ([]).

Learning rate — stored in the LearnRate property, and used to adjust the solver step size. This hyperparameter is estimated when both of these conditions apply: you change the solver of Mdl to SGD or ASGD (see Solver), and you do not specify the 'LearnRate' name-value argument as a positive scalar.

During the estimation period, fit does not fit the model, and updateMetricsAndFit does not fit the model or update the performance metrics. At the end of the estimation period, the functions update the properties that store the hyperparameters.

If incremental learning functions are configured to standardize predictor variables, they do so using the means and standard deviations stored in the Mu and Sigma properties of the incremental learning model IncrementalMdl.
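In rough terms, this amounts to the following per-batch transformation (a simplified sketch of the arithmetic, not necessarily the exact weighted form the software uses):

% Simplified sketch: standardize a batch X with the stored moments
Xstd = (X - IncrementalMdl.Mu) ./ IncrementalMdl.Sigma;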

References

[1] Kempka, Michał, Wojciech Kotłowski, and Manfred K. Warmuth. "Adaptive Scale-Invariant Online Algorithms for Learning Linear Models." Preprint, submitted February 10, 2019. https://arxiv.org/abs/1902.07528.

[2] Langford, J., L. Li, and T. Zhang. “Sparse Online Learning Via Truncated Gradient.” J. Mach. Learn. Res., Vol. 10, 2009, pp. 777–801.

[3] Shalev-Shwartz, S., Y. Singer, and N. Srebro. “Pegasos: Primal Estimated Sub-Gradient Solver for SVM.” Proceedings of the 24th International Conference on Machine Learning, ICML ’07, 2007, pp. 807–814.

[4] Xu, Wei. “Towards Optimal One Pass Large Scale Learning with Averaged Stochastic Gradient Descent.” CoRR, abs/1107.2490, 2011.

Version History

Introduced in R2020b