kfoldMargin - Classification margins for cross-validated kernel ECOC model - MATLAB
Classification margins for cross-validated kernel ECOC model
Syntax

margin = kfoldMargin(CVMdl)
margin = kfoldMargin(CVMdl,Name,Value)

Description

margin = kfoldMargin(CVMdl) returns the classification margins obtained by the cross-validated kernel ECOC model (ClassificationPartitionedKernelECOC) CVMdl. For every fold, kfoldMargin computes the classification margins for validation-fold observations using a model trained on training-fold observations.

margin = kfoldMargin(CVMdl,Name,Value) returns classification margins with additional options specified by one or more name-value pair arguments. For example, specify the binary learner loss function, decoding scheme, or verbosity level.
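For instance, this sketch (using the Fisher iris data from the examples below) passes both the binary loss and the decoding scheme as name-value arguments; the values shown are illustrative choices, not defaults:

load fisheriris
CVMdl = fitcecoc(meas,species,'Learners','kernel','CrossVal','on');
% Hinge binary loss with loss-based decoding
m = kfoldMargin(CVMdl,'BinaryLoss','hinge','Decoding','lossbased');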
Examples
Load Fisher's iris data set. X contains flower measurements, and Y contains the names of flower species.
load fisheriris
X = meas;
Y = species;
Cross-validate an ECOC model composed of kernel binary learners.
CVMdl = fitcecoc(X,Y,'Learners','kernel','CrossVal','on')
CVMdl = 
  ClassificationPartitionedKernelECOC
    CrossValidatedModel: 'KernelECOC'
           ResponseName: 'Y'
        NumObservations: 150
                  KFold: 10
              Partition: [1×1 cvpartition]
             ClassNames: {'setosa'  'versicolor'  'virginica'}
         ScoreTransform: 'none'
CVMdl is a ClassificationPartitionedKernelECOC model. By default, the software implements 10-fold cross-validation. To specify a different number of folds, use the 'KFold' name-value pair argument instead of 'CrossVal'.
Estimate the classification margins for validation-fold observations.
m = kfoldMargin(CVMdl);
size(m)

ans = 1×2

   150     1
m is a 150-by-1 vector. m(j) is the classification margin for observation j.
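Negative margins correspond to validation-fold observations that the cross-validated model misclassifies. As a quick check (an addition to the original example), you can count them:

% Number of validation-fold observations with nonpositive margins
sum(m <= 0)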
Plot the k-fold margins using a boxplot.
boxplot(m,'Labels','All Observations')
title('Distribution of Margins')
Perform feature selection by comparing _k_-fold margins from multiple models. Based solely on this criterion, the classifier with the greatest margins is the best classifier.
Load Fisher's iris data set. X contains flower measurements, and Y contains the names of flower species.
load fisheriris
X = meas;
Y = species;
Randomly choose half of the predictor variables.
rng(1); % For reproducibility
p = size(X,2); % Number of predictors
idxPart = randsample(p,ceil(0.5*p));
Cross-validate two ECOC models composed of kernel classification models: one that uses all of the predictors, and one that uses half of the predictors.
CVMdl = fitcecoc(X,Y,'Learners','kernel','CrossVal','on');
PCVMdl = fitcecoc(X(:,idxPart),Y,'Learners','kernel','CrossVal','on');
CVMdl and PCVMdl are ClassificationPartitionedKernelECOC models. By default, the software implements 10-fold cross-validation. To specify a different number of folds, use the 'KFold' name-value pair argument instead of 'CrossVal'.
Estimate the _k_-fold margins for each classifier.
fullMargins = kfoldMargin(CVMdl);
partMargins = kfoldMargin(PCVMdl);
Plot the distribution of the margin sets using box plots.
boxplot([fullMargins partMargins], ...
    'Labels',{'All Predictors','Half of the Predictors'});
title('Distribution of Margins')
The PCVMdl margin distribution is similar to the CVMdl margin distribution.
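To supplement the box plots with a numeric comparison, you can also compare summary statistics of the two margin sets, for example:

% Average k-fold margin for each model
mean(fullMargins)
mean(partMargins)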
Input Arguments

Cross-validated kernel ECOC model, specified as a ClassificationPartitionedKernelECOC model. You can create a ClassificationPartitionedKernelECOC model by training an ECOC model using fitcecoc and specifying any one of the cross-validation name-value pair arguments, for example, CrossVal.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: kfoldMargin(CVMdl,'Verbose',1) specifies to display diagnostic messages in the Command Window.
Binary learner loss function, specified as the comma-separated pair consisting of 'BinaryLoss' and a built-in loss function name or function handle. For the list of supported built-in loss functions, see Binary Loss.

Example: 'BinaryLoss','hinge'

Data Types: char | string | function_handle
Decoding scheme that aggregates the binary losses, specified as the comma-separated pair consisting of 'Decoding' and 'lossweighted' (default) or 'lossbased'. For more information, see Binary Loss.

Example: 'Decoding','lossbased'

Data Types: char | string
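For example, this sketch computes margins for the cross-validated model from the examples under both decoding schemes:

mLW = kfoldMargin(CVMdl,'Decoding','lossweighted'); % default scheme
mLB = kfoldMargin(CVMdl,'Decoding','lossbased');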
Verbosity level, specified as the comma-separated pair consisting of 'Verbose' and 0 or 1. Verbose controls the number of diagnostic messages that the software displays in the Command Window.

Example: 'Verbose',1

Data Types: single | double
Output Arguments
Classification margins, returned as a numeric vector. margin is an n-by-1 vector, where each row is the margin of the corresponding observation and n is the number of observations (size(CVMdl.Y,1)).
More About
Classification Margin

The classification margin is, for each observation, the difference between the negative loss for the true class and the maximal negative loss among the false classes. If the margins are on the same scale, then they serve as a classification confidence measure. Among multiple classifiers, those that yield greater margins are better.
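You can verify this definition in code. The following sketch recomputes the margins from the negated loss matrix that kfoldPredict returns, assuming the CVMdl and Y variables from the examples above:

% NegLoss is n-by-K: the negated average binary loss for each class.
[~,NegLoss] = kfoldPredict(CVMdl);
isTrue = string(Y) == string(CVMdl.ClassNames)'; % n-by-K true-class indicator
trueNegLoss = sum(NegLoss .* isTrue,2);          % negated loss for the true class
otherNegLoss = NegLoss;
otherNegLoss(isTrue) = -Inf;                     % exclude the true class
marginCheck = trueNegLoss - max(otherNegLoss,[],2);
% marginCheck matches kfoldMargin(CVMdl) up to floating-point error.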
Binary Loss

The binary loss is a function of the class and classification score that determines how well a binary learner classifies an observation into the class. The decoding scheme of an ECOC model specifies how the software aggregates the binary losses and determines the predicted class for each observation.
Assume the following:
- mkj is element (k,j) of the coding design matrix M, that is, the code corresponding to class k of binary learner j. M is a K-by-B matrix, where K is the number of classes and B is the number of binary learners.
- sj is the score of binary learner j for an observation.
- g is the binary loss function.
- k^ is the predicted class for the observation.
The software supports two decoding schemes:
- Loss-based decoding [2] (Decoding is "lossbased"): the predicted class of an observation corresponds to the class that produces the minimum average of the binary losses over all binary learners:

  k^ = argmin_k [ (1/B) ∑_{j=1}^{B} |mkj| g(mkj,sj) ]

- Loss-weighted decoding [3] (Decoding is "lossweighted"): the predicted class of an observation corresponds to the class that produces the minimum average of the binary losses over the binary learners for the corresponding class:

  k^ = argmin_k [ ∑_{j=1}^{B} |mkj| g(mkj,sj) / ∑_{j=1}^{B} |mkj| ]

  The denominator corresponds to the number of binary learners for class k. [1] suggests that loss-weighted decoding improves classification accuracy by keeping loss values for all classes in the same dynamic range.
The predict, resubPredict, and kfoldPredict functions return the negated value of the objective function of argmin as the second output argument (NegLoss) for each observation and class.
This table summarizes the supported binary loss functions, where yj is a class label for a particular binary learner (in the set {–1,1,0}), sj is the score for observation j, and g(yj,sj) is the binary loss function.
Value | Description | Score Domain | g(yj,sj)
--- | --- | --- | ---
"binodeviance" | Binomial deviance | (–∞,∞) | log[1 + exp(–2yjsj)]/[2log(2)]
"exponential" | Exponential | (–∞,∞) | exp(–yjsj)/2
"hamming" | Hamming | [0,1] or (–∞,∞) | [1 – sign(yjsj)]/2
"hinge" | Hinge | (–∞,∞) | max(0,1 – yjsj)/2
"linear" | Linear | (–∞,∞) | (1 – yjsj)/2
"logit" | Logistic | (–∞,∞) | log[1 + exp(–yjsj)]/[2log(2)]
"quadratic" | Quadratic | [0,1] | [1 – yj(2sj – 1)]²/2
The software normalizes the binary losses so that the loss is 0.5 when yj = 0, and aggregates using the average of the binary learners [1].
Do not confuse the binary loss with the overall classification loss (specified by the LossFun name-value argument of the kfoldLoss and kfoldPredict object functions), which measures how well an ECOC classifier performs as a whole.
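For example, kfoldLoss with its default loss returns the overall cross-validated classification error for the model from the examples:

err = kfoldLoss(CVMdl); % overall k-fold classification error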
References
[1] Allwein, E., R. Schapire, and Y. Singer. “Reducing multiclass to binary: A unifying approach for margin classifiers.” Journal of Machine Learning Research. Vol. 1, 2000, pp. 113–141.
[2] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recognition Letters. Vol. 30, Issue 3, 2009, pp. 285–297.
[3] Escalera, S., O. Pujol, and P. Radeva. “On the decoding process in ternary error-correcting output codes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Issue 7, 2010, pp. 120–134.
Extended Capabilities
To run in parallel, specify the Options name-value argument in the call to this function and set the UseParallel field of the options structure to true using statset:

Options=statset(UseParallel=true)
For more information about parallel computing, see Run MATLAB Functions with Automatic Parallel Support (Parallel Computing Toolbox).
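Putting this together, a minimal parallel invocation might look like the following (assuming Parallel Computing Toolbox is available):

Options = statset(UseParallel=true);
m = kfoldMargin(CVMdl,Options=Options);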
Version History
Introduced in R2018b
Starting in R2023b, classification model object functions, including kfoldMargin, use observations with missing predictor values as part of resubstitution ("resub") and cross-validation ("kfold") computations for classification edges, losses, margins, and predictions.
In previous releases, the software omitted observations with missing predictor values from the resubstitution and cross-validation computations.