sklearn.covariance.GraphicalLasso — scikit-learn 0.20.4 documentation
class sklearn.covariance.GraphicalLasso(alpha=0.01, mode='cd', tol=0.0001, enet_tol=0.0001, max_iter=100, verbose=False, assume_centered=False)
Sparse inverse covariance estimation with an l1-penalized estimator.
Read more in the User Guide.
Parameters:

- `alpha` : positive float, default 0.01. The regularization parameter: the higher alpha, the more regularization, the sparser the inverse covariance.
- `mode` : {'cd', 'lars'}, default 'cd'. The Lasso solver to use: coordinate descent or LARS. Use LARS for very sparse underlying graphs, where p > n. Elsewhere prefer cd, which is more numerically stable.
- `tol` : positive float, default 1e-4. The tolerance to declare convergence: if the dual gap goes below this value, iterations are stopped.
- `enet_tol` : positive float, optional. The tolerance for the elastic net solver used to calculate the descent direction. This parameter controls the accuracy of the search direction for a given column update, not of the overall parameter estimate. Only used for mode='cd'.
- `max_iter` : integer, default 100. The maximum number of iterations.
- `verbose` : boolean, default False. If verbose is True, the objective function and dual gap are printed at each iteration.
- `assume_centered` : boolean, default False. If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False, data are centered before computation.
Attributes:

- `covariance_` : array-like, shape (n_features, n_features). Estimated covariance matrix.
- `precision_` : array-like, shape (n_features, n_features). Estimated pseudo inverse matrix.
- `n_iter_` : int. Number of iterations run.
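A minimal usage sketch (not part of the original page; the data-generation helper, alpha value, and sample sizes are illustrative choices): fit the estimator on samples drawn from a Gaussian whose precision matrix is sparse, then inspect the fitted attributes.

```python
# Illustrative sketch: sample data from a Gaussian with a sparse precision
# matrix, fit GraphicalLasso, and read the fitted attributes.
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.datasets import make_sparse_spd_matrix

rng = np.random.RandomState(0)
n_samples, n_features = 200, 10

# Sparse positive-definite precision matrix and the corresponding covariance.
prec = make_sparse_spd_matrix(n_features, alpha=0.9, random_state=rng)
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(n_features), cov, size=n_samples)

model = GraphicalLasso(alpha=0.05, max_iter=200).fit(X)

print(model.covariance_.shape)   # (10, 10)
print(model.precision_.shape)    # (10, 10); the L1 penalty drives some off-diagonal entries to 0
print(model.n_iter_)             # iterations until the dual gap fell below tol
```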
Methods
| Method | Description |
| --- | --- |
| error_norm(comp_cov[, norm, scaling, squared]) | Computes the Mean Squared Error between two covariance estimators. |
| fit(X[, y]) | Fits the GraphicalLasso model to X. |
| get_params([deep]) | Get parameters for this estimator. |
| get_precision() | Getter for the precision matrix. |
| mahalanobis(X) | Computes the squared Mahalanobis distances of given observations. |
| score(X_test[, y]) | Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. |
| set_params(**params) | Set the parameters of this estimator. |
__init__(alpha=0.01, mode='cd', tol=0.0001, enet_tol=0.0001, max_iter=100, verbose=False, assume_centered=False)
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True)
Computes the Mean Squared Error between two covariance estimators (in the sense of the Frobenius norm).
Parameters:

- `comp_cov` : array-like, shape = [n_features, n_features]. The covariance to compare with.
- `norm` : str. The type of norm used to compute the error. Available error types: 'frobenius' (default): sqrt(tr(A^t.A)); 'spectral': sqrt(max(eigenvalues(A^t.A))), where A is the error (comp_cov - self.covariance_).
- `scaling` : bool. If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
- `squared` : bool. Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.
Returns:

The Mean Squared Error (in the sense of the Frobenius norm) between `self` and `comp_cov` covariance estimators.
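A brief sketch (illustrative data and alpha, not from the original page) comparing the regularized estimate with the plain empirical covariance of the same data:

```python
# Compare the GraphicalLasso estimate with the empirical covariance using
# the default (scaled, squared Frobenius) norm and the spectral norm.
import numpy as np
from sklearn.covariance import EmpiricalCovariance, GraphicalLasso

X = np.random.RandomState(0).randn(100, 5)
gl = GraphicalLasso(alpha=0.1).fit(X)
emp_cov = EmpiricalCovariance().fit(X).covariance_

print(gl.error_norm(emp_cov))                                  # default: scaled, squared Frobenius error
print(gl.error_norm(emp_cov, norm='spectral', squared=False))  # spectral-norm error instead
```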
fit(X, y=None)
Fits the GraphicalLasso model to X.
Parameters:

- `X` : ndarray, shape (n_samples, n_features). Data from which to compute the covariance estimate.
- `y` : (ignored)
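A short sketch (random data and alpha chosen for illustration): y is ignored, and fit returns the fitted estimator, so attributes can be read immediately.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

X = np.random.RandomState(42).randn(60, 4)   # illustrative data
est = GraphicalLasso(alpha=0.1).fit(X)       # y is ignored

print(est.n_iter_)        # iterations run before the dual gap fell below tol
print(est.covariance_)    # (4, 4) regularized covariance estimate
```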
get_params(deep=True)
Get parameters for this estimator.
Parameters:

- `deep` : boolean, optional. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:

- `params` : mapping of string to any. Parameter names mapped to their values.
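For illustration (parameter values assumed), get_params returns the constructor arguments as a dict:

```python
from sklearn.covariance import GraphicalLasso

params = GraphicalLasso(alpha=0.2, mode='lars').get_params()
print(params['alpha'], params['mode'], params['max_iter'])   # 0.2 lars 100
```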
get_precision()
Getter for the precision matrix.
Returns:

- `precision_` : array-like. The precision matrix associated to the current covariance object.
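A small sketch (illustrative data): after fitting, get_precision() returns the estimated precision matrix, i.e. the same matrix stored in the precision_ attribute.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

X = np.random.RandomState(1).randn(80, 5)
gl = GraphicalLasso(alpha=0.1).fit(X)

prec = gl.get_precision()
print(prec.shape)                          # (5, 5)
print(np.allclose(prec, gl.precision_))    # True
```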
mahalanobis(X)
Computes the squared Mahalanobis distances of given observations.
Parameters:

- `X` : array-like, shape = [n_samples, n_features]. The observations, the Mahalanobis distances of which we compute. Observations are assumed to be drawn from the same distribution as the data used in fit.
Returns:

- `dist` : array, shape = [n_samples,]. Squared Mahalanobis distances of the observations.
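A brief sketch (illustrative train/query split): the distances are computed with the fitted precision matrix, one squared distance per row of the input.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.RandomState(0)
X_train = rng.randn(200, 3)
X_query = rng.randn(5, 3)        # observations to score

gl = GraphicalLasso(alpha=0.1).fit(X_train)
dist = gl.mahalanobis(X_query)

print(dist.shape)   # (5,) -- one squared Mahalanobis distance per observation
```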
score(X_test, y=None)
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
Parameters:

- `X_test` : array-like, shape = [n_samples, n_features]. Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
- `y` : not used, present for API consistency.
Returns:

- `res` : float. The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
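A short sketch (illustrative split): higher scores mean the held-out data are more plausible under the fitted Gaussian model, which makes score usable for selecting alpha by cross-validation.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.RandomState(0)
X = rng.randn(300, 4)
X_train, X_test = X[:200], X[200:]

gl = GraphicalLasso(alpha=0.1).fit(X_train)
print(gl.score(X_test))   # Gaussian log-likelihood of X_test (higher is better)
```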
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.
Returns:

self
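A small sketch: parameters can be changed after construction; inside a Pipeline the same call uses the `<component>__<parameter>` form (the step name 'cov' below is a hypothetical example).

```python
from sklearn.covariance import GraphicalLasso
from sklearn.pipeline import Pipeline

gl = GraphicalLasso()
gl.set_params(alpha=0.2, max_iter=300)
print(gl.alpha, gl.max_iter)          # 0.2 300

# Nested form: the step name 'cov' is hypothetical.
pipe = Pipeline([('cov', GraphicalLasso())])
pipe.set_params(cov__alpha=0.2)
print(pipe.named_steps['cov'].alpha)  # 0.2
```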