predict - Classify observations using support vector machine (SVM) classifier - MATLAB

Classify observations using support vector machine (SVM) classifier

Syntax

label = predict(SVMModel,X)
[label,score] = predict(SVMModel,X)

Description

label = predict(SVMModel,X) returns a vector of predicted class labels for the predictor data in the table or matrix X, based on the trained support vector machine (SVM) classification model SVMModel. The trained SVM model can be either full or compact.


[label,score] = predict(SVMModel,X) also returns a matrix of scores (score) indicating the likelihood that a label comes from a particular class. For SVM, likelihood measures are either classification scores or class posterior probabilities. For each observation in X, the predicted class label corresponds to the maximum score among all classes.


Examples


Load the ionosphere data set.

load ionosphere
rng(1); % For reproducibility

Train an SVM classifier. Specify a 15% holdout sample for testing, standardize the data, and specify that 'g' is the positive class.

CVSVMModel = fitcsvm(X,Y,'Holdout',0.15,'ClassNames',{'b','g'},...
    'Standardize',true);
CompactSVMModel = CVSVMModel.Trained{1}; % Extract trained, compact classifier
testInds = test(CVSVMModel.Partition);   % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);

CVSVMModel is a ClassificationPartitionedModel classifier. It contains the property Trained, which is a 1-by-1 cell array holding a CompactClassificationSVM classifier that the software trained using the training set.

Label the test sample observations. Display the results for the first 10 observations in the test sample.

[label,score] = predict(CompactSVMModel,XTest);
table(YTest(1:10),label(1:10),score(1:10,2),'VariableNames',...
    {'TrueLabel','PredictedLabel','Score'})

ans=10×3 table
    TrueLabel    PredictedLabel     Score
    _________    ______________    ________

  {'b'}          {'b'}          -1.7175
  {'g'}          {'g'}           2.0003
  {'b'}          {'b'}          -9.6836
  {'g'}          {'g'}           2.5616
  {'b'}          {'b'}           -1.548
  {'g'}          {'g'}           2.0983
  {'b'}          {'b'}          -2.7016
  {'b'}          {'b'}         -0.66331
  {'g'}          {'g'}           1.6047
  {'g'}          {'g'}            1.773

Label new observations using an SVM classifier.

Load the ionosphere data set. Assume that the last 10 observations become available after you train the SVM classifier.

load ionosphere
rng(1);            % For reproducibility
n = size(X,1);     % Training sample size
isInds = 1:(n-10); % In-sample indices
oosInds = (n-9):n; % Out-of-sample indices

Train an SVM classifier. Standardize the data and specify that 'g' is the positive class. Conserve memory by reducing the size of the trained SVM classifier.

SVMModel = fitcsvm(X(isInds,:),Y(isInds),'Standardize',true,...
    'ClassNames',{'b','g'});
CompactSVMModel = compact(SVMModel);
whos('SVMModel','CompactSVMModel')

  Name                 Size     Bytes    Class                                                 Attributes

  CompactSVMModel      1x1       30173   classreg.learning.classif.CompactClassificationSVM
  SVMModel             1x1      136713   ClassificationSVM

The CompactClassificationSVM classifier (CompactSVMModel) uses less space than the ClassificationSVM classifier (SVMModel) because SVMModel stores the data.

Estimate the optimal score-to-posterior-probability transformation function.

CompactSVMModel = fitPosterior(CompactSVMModel,...
    X(isInds,:),Y(isInds))

CompactSVMModel = 
  CompactClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: '@(S)sigmoid(S,-1.968371e+00,3.122227e-01)'
                    Alpha: [88×1 double]
                     Bias: -0.2142
         KernelParameters: [1×1 struct]
                       Mu: [0.8886 0 0.6365 0.0457 0.5933 0.1200 0.5414 0.1217 0.5020 0.1872 0.4659 0.1596 0.3889 0.0970 0.3308 0.0723 0.3685 -0.0039 0.3453 -0.0256 0.3231 0.0097 0.3490 -0.0596 0.3839 -0.0731 0.5343 -0.0718 0.3659 … ] (1×34 double)
                    Sigma: [0.3151 0 0.5032 0.4476 0.5251 0.4668 0.4966 0.5275 0.5107 0.4896 0.5681 0.5011 0.6267 0.5009 0.6569 0.4639 0.6214 0.5035 0.6296 0.5265 0.6128 0.5250 0.6067 0.5349 0.5817 0.5157 0.5212 0.5577 0.5787 0.5151 … ] (1×34 double)
           SupportVectors: [88×34 double]
      SupportVectorLabels: [88×1 double]


The optimal score transformation function (CompactSVMModel.ScoreTransform) is the sigmoid function because the classes are inseparable.

Predict the out-of-sample labels and positive class posterior probabilities. Because true labels are available, compare them with the predicted labels.

[labels,PostProbs] = predict(CompactSVMModel,X(oosInds,:));
table(Y(oosInds),labels,PostProbs(:,2),'VariableNames',...
    {'TrueLabels','PredictedLabels','PosClassPosterior'})

ans=10×3 table
    TrueLabels    PredictedLabels    PosClassPosterior
    __________    _______________    _________________

  {'g'}            {'g'}              0.98418     
  {'g'}            {'g'}              0.95545     
  {'g'}            {'g'}              0.67793     
  {'g'}            {'g'}              0.94447     
  {'g'}            {'g'}              0.98744     
  {'g'}            {'g'}               0.9248     
  {'g'}            {'g'}               0.9711     
  {'g'}            {'g'}              0.96986     
  {'g'}            {'g'}              0.97802     
  {'g'}            {'g'}               0.9436     

PostProbs is a 10-by-2 matrix, where the first column is the negative class posterior probabilities, and the second column is the positive class posterior probabilities corresponding to the new observations.

Input Arguments


X — Predictor data to be classified, specified as a numeric matrix or table.

Each row of X corresponds to one observation, and each column corresponds to one variable.

If you set 'Standardize',true in fitcsvm to train SVMModel, then the software standardizes the columns of X using the corresponding means in SVMModel.Mu and the standard deviations in SVMModel.Sigma.

Data Types: table | double | single

Output Arguments


label — Predicted class labels, returned as a categorical or character array, logical or numeric vector, or cell array of character vectors.

label has the same data type as the observed class labels (Y) used to train SVMModel, and its length is equal to the number of rows in X. (The software treats string arrays as cell arrays of character vectors.)

The predict function classifies an observation into the class yielding the highest score. For an observation with NaN scores, the function classifies the observation into the majority class, which makes up the largest proportion of the training labels.

For one-class learning, each value in label is the same—the one class in the training data. Use score to identify anomalies.
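For instance, here is a minimal one-class sketch. The synthetic data, the 'Nu' value, and the threshold at zero are illustrative choices, not recommendations:

```matlab
% One-class learning: train on "normal" data only, then flag anomalies by score.
rng(0);                                   % for reproducibility
XTrain = randn(200,2);                    % assumed normal observations
OCModel = fitcsvm(XTrain,ones(200,1),...  % a single label triggers one-class learning
    'KernelScale','auto','Nu',0.05);
XNew = [randn(5,2); 8 8];                 % last row is a deliberate outlier
[label,score] = predict(OCModel,XNew);    % label is the same for every row
isAnomaly = score(:,1) < 0                % negative score: outside the learned class
```

A negative score marks an observation as lying outside the region the one-class SVM learned, which is why score, not label, identifies anomalies.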

score — Predicted class scores or posterior probabilities, returned as a numeric column vector or numeric matrix.

If SVMModel.KernelParameters.Function is 'linear', then the classification score for the observation x is

f(x) = (x/s)′β + b.

SVMModel stores β, b, and s in the properties Beta, Bias, and KernelParameters.Scale, respectively.

To estimate classification scores manually, you must first apply any transformations to the predictor data that were applied during training. Specifically, if you specify 'Standardize',true when using fitcsvm, then you must standardize the predictor data manually by using the mean SVMModel.Mu and standard deviation SVMModel.Sigma, and then divide the result by the kernel scale in SVMModel.KernelParameters.Scale.
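As a sketch of those steps for a linear-kernel model trained with 'Standardize',true (the variable name XNew is illustrative):

```matlab
% Reproduce linear-kernel classification scores by hand.
% Assumes: SVMModel = fitcsvm(X,Y,'Standardize',true);  % linear kernel (default)
Xs = (XNew - SVMModel.Mu) ./ SVMModel.Sigma;      % standardize as in training
Xs = Xs / SVMModel.KernelParameters.Scale;        % divide by the kernel scale
manualScore = Xs*SVMModel.Beta + SVMModel.Bias;   % positive-class score f(x)

[~,score] = predict(SVMModel,XNew);               % predict applies the same steps
% manualScore should match score(:,2), the positive-class column
```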

All SVM functions, such as resubPredict and predict, apply any required transformation before estimation.

If SVMModel.KernelParameters.Function is not 'linear', then Beta is empty ([]).

More About


The SVM classification score for classifying observation x is the signed distance from x to the decision boundary, ranging from -∞ to +∞. A positive score for a class indicates that x is predicted to be in that class. A negative score indicates otherwise.

The positive class classification score f(x) is the trained SVM classification function,

f(x) = ∑j αj yj G(xj,x) + b,

where (α1,...,αn,b) are the estimated SVM parameters, yj are the support vector labels, G(xj,x) is the dot product in the predictor space between x and the support vectors, and the sum includes the training set observations. f(x) is also the numerical predicted response for x, or the score for predicting x into the positive class. The negative class classification score for x, or the score for predicting x into the negative class, is –f(x).

If G(xj,x) = xj′x (the linear kernel), then the score function reduces to

f(x) = (x/s)′β + b,

where s is the kernel scale and β is the vector of fitted linear coefficients.

For more details, see Understanding Support Vector Machines.

The posterior probability is the probability that an observation belongs in a particular class, given the data.

For SVM, the posterior probability is a function of the score, P(s), that observation j is in class k ∈ {-1,1}. When the classes are inseparable, the software uses the sigmoid transformation P(sj) = 1/(1 + exp(A·sj + B)), where A and B are the slope and intercept parameters.
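To illustrate, the sigmoid transform stored in the earlier example's ScoreTransform property can be applied to a raw score directly; the slope and intercept below are the values fitPosterior reported there:

```matlab
% Sigmoid score-to-posterior map: P(s) = 1/(1 + exp(A*s + B))
A = -1.968371;                    % slope from ScoreTransform
B =  0.3122227;                   % intercept from ScoreTransform
sigmoidPosterior = @(s) 1 ./ (1 + exp(A.*s + B));

sigmoidPosterior([-2 0 2])        % posteriors for raw scores -2, 0, 2
```

Large positive raw scores map to posteriors near 1, and large negative scores map to posteriors near 0, consistent with the positive-class probabilities shown in the example above.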

The prior probability of a class is the assumed relative frequency with which observations from that class occur in a population.

Alternative Functionality

To integrate the prediction of an SVM classification model into Simulink®, you can use the ClassificationSVM Predict block in the Statistics and Machine Learning Toolbox™ library or a MATLAB Function block with the predict function. For examples, see Predict Class Labels Using ClassificationSVM Predict Block and Predict Class Labels Using MATLAB Function Block.


References

[1] Cristianini, N., and J. C. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, UK: Cambridge University Press, 2000.

[2] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, Second Edition. NY: Springer, 2008.

[3] Platt, J. “Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods.” Advances in Large Margin Classifiers. MIT Press, 1999, pp. 61–74.

Extended Capabilities

Tall Arrays

The predict function fully supports tall arrays. For more information, see Tall Arrays.

C/C++ Code Generation

For more information, see Introduction to Code Generation.

GPU Arrays

For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).

Version History

Introduced in R2014a