lars_path_gram
sklearn.linear_model.lars_path_gram(Xy, Gram, *, n_samples, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=np.finfo(float).eps, copy_Gram=True, verbose=0, return_path=True, return_n_iter=False, positive=False)
lars_path in the sufficient statistics mode: the path is computed from Xy = X.T @ y and Gram = X.T @ X instead of the raw X and y.
The optimization objective for the case method='lasso' is:
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
In the case of method='lar', the objective function is only known in the form of an implicit equation (see discussion in [1]).
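Because the function only receives Xy, Gram and n_samples, the 'lasso' objective can be expanded so that X and y enter only through these statistics (plus the constant y.T @ y). The sketch below is illustrative only; the helper name is made up and is not part of scikit-learn.

import numpy as np

# Illustrative helper (not part of scikit-learn): the method='lasso' objective
# written in terms of the sufficient statistics.  Expanding the squared norm,
#   ||y - X w||^2 = y.T y - 2 * Xy.T w + w.T Gram w,
# so, up to the constant y.T y, only Xy, Gram and n_samples are needed.
def lasso_objective_from_stats(w, Xy, Gram, n_samples, alpha, y_sq_norm=0.0):
    quad = y_sq_norm - 2.0 * (Xy @ w) + w @ Gram @ w
    return quad / (2.0 * n_samples) + alpha * np.sum(np.abs(w))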
Read more in the User Guide.
Parameters:
Xy : ndarray of shape (n_features,)
Xy = X.T @ y.
Gram : ndarray of shape (n_features, n_features)
Gram = X.T @ X (see the usage sketch after this parameter list).
n_samples : int
Equivalent size of sample: the number of samples the statistics Xy and Gram were computed from.
max_iter : int, default=500
Maximum number of iterations to perform; set to infinity for no limit.
alpha_min : float, default=0
Minimum correlation along the path. It corresponds to the regularization parameter alpha in the Lasso.
method : {'lar', 'lasso'}, default='lar'
Specifies the returned model. Select 'lar' for Least Angle Regression, 'lasso' for the Lasso.
copy_X : bool, default=True
If False, X is overwritten.
eps : float, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Gram : bool, default=True
If False, Gram is overwritten.
verbose : int, default=0
Controls output verbosity.
return_path : bool, default=True
If return_path==True, returns the entire path, else returns only the last point of the path.
return_n_iter : bool, default=False
Whether to return the number of iterations.
positive : bool, default=False
Restrict coefficients to be >= 0. This option is only allowed with method 'lasso'. Note that the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent lasso_path function.
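A minimal usage sketch for the parameters above, assuming a plain design matrix X and target y (the random data here is only for illustration): precompute the sufficient statistics once and reuse them across calls.

import numpy as np
from sklearn.linear_model import lars_path_gram

rng = np.random.RandomState(0)
X = rng.randn(50, 8)
y = rng.randn(50)

# Sufficient statistics: computed once, reusable for several calls.
Xy, Gram, n = X.T @ y, X.T @ X, X.shape[0]

# Least Angle Regression path.
alphas_lar, active_lar, coefs_lar = lars_path_gram(Xy, Gram, n_samples=n, method="lar")

# Lasso path from the same statistics, with coefficients constrained to be >= 0.
alphas_pos, active_pos, coefs_pos = lars_path_gram(
    Xy, Gram, n_samples=n, method="lasso", positive=True
)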
Returns:
alphas : ndarray of shape (n_alphas + 1,)
Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features, or the number of nodes in the path with alpha >= alpha_min, whichever is smaller.
active : ndarray of shape (n_alphas,)
Indices of active variables at the end of the path.
coefs : ndarray of shape (n_features, n_alphas + 1)
Coefficients along the path.
n_iter : int
Number of iterations run. Returned only if return_n_iter is set to True.
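A short sketch of how the returned arrays line up (again with made-up data): coefs has one column of coefficients per entry of alphas, so the path can be read column by column.

import numpy as np
from sklearn.linear_model import lars_path_gram

rng = np.random.RandomState(0)
X = rng.randn(40, 6)
y = X[:, 0] - 2.0 * X[:, 3] + 0.01 * rng.randn(40)

alphas, active, coefs = lars_path_gram(X.T @ y, X.T @ X, n_samples=40, method="lasso")

# One column of coefficients per alpha along the path.
assert coefs.shape == (X.shape[1], alphas.shape[0])
for alpha, coef in zip(alphas, coefs.T):
    print(f"alpha={alpha:.4f}  nonzero coefficients={np.count_nonzero(coef)}")
print("active set at the end of the path:", active)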
References
[1] "Least Angle Regression", Efron et al., Annals of Statistics, 2004.
Examples
>>> from sklearn.linear_model import lars_path_gram
>>> from sklearn.datasets import make_regression
>>> X, y, true_coef = make_regression(
...     n_samples=100, n_features=5, n_informative=2, coef=True, random_state=0
... )
>>> true_coef
array([ 0. ,  0. ,  0. , 97.9, 45.7])
>>> alphas, _, estimated_coef = lars_path_gram(X.T @ y, X.T @ X, n_samples=100)
>>> alphas.shape
(3,)
>>> estimated_coef
array([[ 0.  ,  0.  ,  0.  ],
       [ 0.  ,  0.  ,  0.  ],
       [ 0.  ,  0.  ,  0.  ],
       [ 0.  , 46.96, 97.99],
       [ 0.  ,  0.  , 45.70]])
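As a hedged follow-up to the example (not part of the library documentation), the same path can be computed from the raw data with lars_path; with the data above the two entry points are expected to agree up to floating-point error.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path, lars_path_gram

# Rebuild the data from the example above and compare the two entry points.
X, y, _ = make_regression(
    n_samples=100, n_features=5, n_informative=2, coef=True, random_state=0
)
alphas_gram, _, coefs_gram = lars_path_gram(X.T @ y, X.T @ X, n_samples=100)
alphas_raw, _, coefs_raw = lars_path(X, y)
print(np.allclose(alphas_gram, alphas_raw), np.allclose(coefs_gram, coefs_raw))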