RegressionLinear.predict - Predict response of linear regression model - MATLAB

Class: RegressionLinear

Predict response of linear regression model

Syntax

YHat = predict(Mdl,X)
YHat = predict(Mdl,X,Name,Value)

Description

YHat = predict(Mdl,X) returns predicted responses for each observation in the predictor data X based on the trained linear regression model Mdl. YHat contains responses for each regularization strength in Mdl.


YHat = predict(Mdl,X,Name,Value) specifies additional options using one or more name-value arguments. For example, you can specify that columns in the predictor data correspond to observations.
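As a minimal illustration of these syntaxes (the data and variable names below are purely for this sketch and do not come from the original page), you can train a model with fitrlinear and then pass it to predict:

```matlab
rng(0)                                  % for reproducibility (illustrative data)
Xtrain = randn(500,10);                 % 500 observations, 10 predictors
ytrain = Xtrain(:,1) - 2*Xtrain(:,3) + 0.1*randn(500,1);

Mdl = fitrlinear(Xtrain,ytrain);        % trained RegressionLinear model

Xnew = randn(5,10);                     % 5 new observations
YHat = predict(Mdl,Xnew)                % 5-by-1 vector (Mdl has one Lambda value)
```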


Input Arguments


Mdl — Linear regression model
RegressionLinear model object

Linear regression model, specified as a RegressionLinear model object. You can create a RegressionLinear model object using fitrlinear.

X — Predictor data
full numeric matrix | sparse numeric matrix | table

Predictor data used to generate responses, specified as a full or sparse numeric matrix or a table.

By default, each row of X corresponds to one observation, and each column corresponds to one variable.

Note

If you orient your predictor matrix so that observations correspond to columns and specify "ObservationsIn","columns", then you might experience a significant reduction in optimization execution time. You cannot specify "ObservationsIn","columns" for predictor data in a table.

Data Types: double | single | table

Name-Value Arguments


Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: predict(Mdl,X,"ObservationsIn","columns") indicates that columns in the predictor data correspond to observations.

ObservationsIn — Predictor data observation dimension
"rows" (default) | "columns"

Predictor data observation dimension, specified as "columns" or "rows".

Note

If you orient your predictor matrix so that observations correspond to columns and specify "ObservationsIn","columns", then you might experience a significant reduction in optimization execution time. You cannot specify "ObservationsIn","columns" for predictor data in a table.

Data Types: char | string
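For example, the following sketch (with simulated data; not from the original page) orients new predictor data with observations in columns and tells predict about that orientation:

```matlab
rng(0)                                   % illustrative data
X = randn(200,10);                       % rows are observations during training
y = X*ones(10,1) + 0.1*randn(200,1);
Mdl = fitrlinear(X,y);

Xcols = randn(10,5);                     % new data: 10 predictors-by-5 observations
YHat = predict(Mdl,Xcols,"ObservationsIn","columns")   % 5-by-1 predicted responses
```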

PredictionForMissingValue — Predicted response value for observations with missing predictor values
"median" (default) | "mean" | numeric scalar

Since R2023b

Predicted response value to use for observations with missing predictor values, specified as "median", "mean", or a numeric scalar.

| Value | Description |
| --- | --- |
| "median" | predict uses the median of the observed response values in the training data as the predicted response value for observations with missing predictor values. |
| "mean" | predict uses the mean of the observed response values in the training data as the predicted response value for observations with missing predictor values. |
| Numeric scalar | predict uses this value as the predicted response value for observations with missing predictor values. |

Example: PredictionForMissingValue="mean"

Example: PredictionForMissingValue=NaN

Data Types: single | double | char | string
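As a sketch of how this argument behaves (R2023b or later; the data below is simulated for illustration only):

```matlab
rng(0)                                   % illustrative data
X = randn(100,3);
y = X(:,1) + 0.5*randn(100,1);
Mdl = fitrlinear(X,y);

Xnew = randn(4,3);
Xnew(2,1) = NaN;                         % observation 2 has a missing predictor value

yMedian = predict(Mdl,Xnew);                                    % default: training-set median for observation 2
yMean   = predict(Mdl,Xnew,PredictionForMissingValue="mean");   % training-set mean for observation 2
```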

Output Arguments


YHat — Predicted responses
n-by-L numeric matrix

Predicted responses, returned as an n-by-L numeric matrix. n is the number of observations in X, and L is the number of regularization strengths in Mdl.Lambda. YHat(i,j) is the response for observation i using the linear regression model that has regularization strength Mdl.Lambda(j).

The predicted response using the model with regularization strength j is $\hat{y}_j = x\beta_j + b_j$, where x is an observation from X (a row vector), $\beta_j$ is the estimated column vector of coefficients stored in Mdl.Beta(:,j), and $b_j$ is the estimated scalar bias stored in Mdl.Bias(j).
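For instance, the following sketch (simulated data, two regularization strengths; not from the original page) reproduces the output of predict directly from the stored coefficients and biases:

```matlab
rng(0)                                   % illustrative data
X = randn(50,4);
y = X(:,2) + 0.2*randn(50,1);
Mdl = fitrlinear(X,y,"Lambda",[1e-4 1e-2]);   % L = 2 regularization strengths

x = randn(1,4);                          % one observation (row vector)
yHat = predict(Mdl,x)                    % 1-by-2: one prediction per Lambda value

% Same values, computed from the model properties (ResponseTransform is 'none')
yManual = x*Mdl.Beta + Mdl.Bias          % Beta is 4-by-2, Bias is 1-by-2
```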

Examples


Simulate 10,000 observations from this model:

$y = x_{100} + 2x_{200} + e$,

where $x_{100}$ and $x_{200}$ are the 100th and 200th of 1000 predictors (stored in a sparse matrix with 10% nonzero standard normal elements), and $e$ is random normal error with mean 0 and standard deviation 0.3.

rng(1) % For reproducibility
n = 1e4;
d = 1e3;
nz = 0.1;
X = sprandn(n,d,nz);
Y = X(:,100) + 2*X(:,200) + 0.3*randn(n,1);

Train a linear regression model. Reserve 30% of the observations as a holdout sample.

CVMdl = fitrlinear(X,Y,'Holdout',0.3);
Mdl = CVMdl.Trained{1}

Mdl = 
  RegressionLinear
         ResponseName: 'Y'
    ResponseTransform: 'none'
                 Beta: [1000×1 double]
                 Bias: -0.0066
               Lambda: 1.4286e-04
              Learner: 'svm'


CVMdl is a RegressionPartitionedLinear model. It contains the property Trained, which is a 1-by-1 cell array holding a RegressionLinear model that the software trained using the training set.

Extract the training and test data from the partition definition.

trainIdx = training(CVMdl.Partition);
testIdx = test(CVMdl.Partition);

Predict the training- and test-sample responses.

yHatTrain = predict(Mdl,X(trainIdx,:));
yHatTest = predict(Mdl,X(testIdx,:));

Because there is one regularization strength in Mdl, yHatTrain and yHatTest are numeric vectors.
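For example (continuing with the variables defined in this example), one way to gauge the fit is to compare the predictions with the observed responses on each partition:

```matlab
% Mean squared error on the training and test partitions
mseTrain = mean((Y(trainIdx) - yHatTrain).^2)
mseTest  = mean((Y(testIdx)  - yHatTest).^2)
```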

Predict responses from the best-performing linear regression model that uses a lasso penalty and least squares.

Simulate 10,000 observations, as in Predict Test-Sample Responses.

rng(1) % For reproducibility
n = 1e4;
d = 1e3;
nz = 0.1;
X = sprandn(n,d,nz);
Y = X(:,100) + 2*X(:,200) + 0.3*randn(n,1);

Create a set of 15 logarithmically spaced regularization strengths from $10^{-5}$ through $10^{-1}$.

Lambda = logspace(-5,-1,15);

Cross-validate the models. To increase execution speed, transpose the predictor data and specify that the observations are in columns. Optimize the objective function using SpaRSA.

X = X';
CVMdl = fitrlinear(X,Y,'ObservationsIn','columns','KFold',5,'Lambda',Lambda,...
    'Learner','leastsquares','Solver','sparsa','Regularization','lasso');

numCLModels = numel(CVMdl.Trained)

numCLModels = 5

CVMdl is a RegressionPartitionedLinear model. Because fitrlinear implements 5-fold cross-validation, CVMdl contains 5 RegressionLinear models that the software trains on each fold.

Display the first trained linear regression model.

Mdl1 = CVMdl.Trained{1}

Mdl1 = 
  RegressionLinear
         ResponseName: 'Y'
    ResponseTransform: 'none'
                 Beta: [1000×15 double]
                 Bias: [-0.0049 -0.0049 -0.0049 -0.0049 -0.0049 -0.0048 -0.0044 -0.0037 -0.0030 -0.0031 -0.0033 -0.0036 -0.0041 -0.0051 -0.0071]
               Lambda: [1.0000e-05 1.9307e-05 3.7276e-05 7.1969e-05 1.3895e-04 2.6827e-04 5.1795e-04 1.0000e-03 0.0019 0.0037 0.0072 0.0139 0.0268 0.0518 0.1000]
              Learner: 'leastsquares'


Mdl1 is a RegressionLinear model object. fitrlinear constructed Mdl1 by training on the first four folds. Because Lambda is a sequence of regularization strengths, you can think of Mdl1 as 15 models, one for each regularization strength in Lambda.
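For example (continuing with the transposed predictor data X from this example), predicting with Mdl1 returns one column of responses per regularization strength:

```matlab
yHat1 = predict(Mdl1,X,'ObservationsIn','columns');
size(yHat1)      % 10000-by-15: one column per value in Mdl1.Lambda
```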

Estimate the cross-validated MSE.

mse = kfoldLoss(CVMdl);

Higher values of Lambda lead to predictor variable sparsity, which is a good quality of a regression model. For each regularization strength, train a linear regression model using the entire data set and the same options as when you cross-validated the models. Determine the number of nonzero coefficients per model.

Mdl = fitrlinear(X,Y,'ObservationsIn','columns','Lambda',Lambda,...
    'Learner','leastsquares','Solver','sparsa','Regularization','lasso');
numNZCoeff = sum(Mdl.Beta~=0);

In the same figure, plot the cross-validated MSE and frequency of nonzero coefficients for each regularization strength. Plot all variables on the log scale.

figure;
[h,hL1,hL2] = plotyy(log10(Lambda),log10(mse),...
    log10(Lambda),log10(numNZCoeff));
hL1.Marker = 'o';
hL2.Marker = 'o';
ylabel(h(1),'log_{10} MSE')
ylabel(h(2),'log_{10} nonzero-coefficient frequency')
xlabel('log_{10} Lambda')
hold off

Figure: log_{10} MSE (left axis) and log_{10} nonzero-coefficient frequency (right axis) plotted against log_{10} Lambda.

Choose the index of the regularization strength that balances predictor variable sparsity and low MSE (for example, Lambda(10)).

idxFinal = 10;

Extract the model corresponding to this regularization strength from the model trained with multiple regularization strengths.

MdlFinal = selectModels(Mdl,idxFinal)

MdlFinal = 
  RegressionLinear
         ResponseName: 'Y'
    ResponseTransform: 'none'
                 Beta: [1000×1 double]
                 Bias: -0.0050
               Lambda: 0.0037
              Learner: 'leastsquares'


idxNZCoeff = find(MdlFinal.Beta~=0)

EstCoeff = Mdl.Beta(idxNZCoeff)

EstCoeff = 2×1

1.0051
1.9965

MdlFinal is a RegressionLinear model with one regularization strength. The nonzero coefficients EstCoeff are close to the coefficients that simulated the data.

Simulate 10 new observations, and predict corresponding responses using the best-performing model.

XNew = sprandn(d,10,nz);
YHat = predict(MdlFinal,XNew,'ObservationsIn','columns');

Alternative Functionality

To integrate the prediction of a linear regression model into Simulink®, you can use the RegressionLinear Predict block in the Statistics and Machine Learning Toolbox™ library or a MATLAB® Function block with the predict function. For examples, see Predict Responses Using RegressionLinear Predict Block and Predict Class Labels Using MATLAB Function Block.

When deciding which approach to use, consider the following:

Extended Capabilities


Tall Arrays

The predict function supports tall arrays with the following usage notes and limitations:

For more information, see Tall Arrays.
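As a rough sketch only (subject to the limitations noted above, which are not reproduced here; the data and the conversion to tall are purely illustrative), prediction on tall data follows the usual deferred-evaluation pattern:

```matlab
rng(0)                                   % illustrative data
X = randn(1000,10);
y = X(:,1) + 0.1*randn(1000,1);
Mdl = fitrlinear(X,y);                   % train on in-memory data

tX = tall(randn(100,10));                % tall predictor data (illustrative)
tYHat = predict(Mdl,tX);                 % deferred tall result
YHat = gather(tYHat);                    % evaluate and bring into memory
```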

C/C++ Code Generation

Usage notes and limitations:

For more information, see Introduction to Code Generation.

Version History

Introduced in R2016a


Starting in R2024a, predict accepts GPU array input arguments with some limitations.
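A minimal sketch of this usage (R2024a or later; requires Parallel Computing Toolbox and a supported GPU; the data is illustrative):

```matlab
rng(0)                                   % illustrative data
X = randn(500,10);
y = X(:,1) + 0.1*randn(500,1);
Mdl = fitrlinear(X,y);

Xg = gpuArray(randn(20,10));             % predictor data on the GPU
YHat = predict(Mdl,Xg);                  % GPU array input (some limitations apply)
```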

Starting in R2023b, when you predict or compute the loss, some regression models allow you to specify the predicted response value for observations with missing predictor values. Specify the PredictionForMissingValue name-value argument to use a numeric scalar, the training set median, or the training set mean as the predicted value. When computing the loss, you can also specify to omit observations with missing predictor values.

This table lists the object functions that support the PredictionForMissingValue name-value argument. By default, the functions use the training set median as the predicted response value for observations with missing predictor values.

| Model Type | Model Objects | Object Functions |
| --- | --- | --- |
| Gaussian process regression (GPR) model | RegressionGP, CompactRegressionGP | loss, predict, resubLoss, resubPredict |
| | RegressionPartitionedGP | kfoldLoss, kfoldPredict |
| Gaussian kernel regression model | RegressionKernel | loss, predict |
| | RegressionPartitionedKernel | kfoldLoss, kfoldPredict |
| Linear regression model | RegressionLinear | loss, predict |
| | RegressionPartitionedLinear | kfoldLoss, kfoldPredict |
| Neural network regression model | RegressionNeuralNetwork, CompactRegressionNeuralNetwork | loss, predict, resubLoss, resubPredict |
| | RegressionPartitionedNeuralNetwork | kfoldLoss, kfoldPredict |
| Support vector machine (SVM) regression model | RegressionSVM, CompactRegressionSVM | loss, predict, resubLoss, resubPredict |
| | RegressionPartitionedSVM | kfoldLoss, kfoldPredict |

In previous releases, the regression model loss and predict functions listed above used NaN predicted response values for observations with missing predictor values. The software omitted observations with missing predictor values from the resubstitution ("resub") and cross-validation ("kfold") computations for prediction and loss.
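If you want predictions that match the previous NaN behavior in R2023b or later, a sketch like the following (illustrative data) specifies NaN explicitly:

```matlab
rng(0)                                   % illustrative data
X = randn(100,3);
y = X(:,1) + 0.5*randn(100,1);
Mdl = fitrlinear(X,y);

Xnew = [1 -1 0.5; NaN 0 1];              % second observation has a missing value
yHat = predict(Mdl,Xnew,PredictionForMissingValue=NaN)   % returns NaN for observation 2
```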