sklearn.decomposition.DictionaryLearning — scikit-learn 0.20.4 documentation
Parameters:
n_components : int,
number of dictionary elements to extract
alpha : float,
sparsity controlling parameter
max_iter : int,
maximum number of iterations to perform
tol : float,
tolerance for numerical error
fit_algorithm : {‘lars’, ‘cd’}
lars: uses the least angle regression method to solve the lasso problem (linear_model.lars_path)
cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
New in version 0.17: cd coordinate descent method to improve speed.
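A minimal usage sketch of choosing the fit algorithm (the data sizes and parameter values below are illustrative assumptions, not part of the API reference):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(100, 20)  # 100 samples, 20 features (synthetic data)

# 'cd' (coordinate descent) was added in 0.17; 'lars' tends to be faster
# when the estimated components are sparse.
dico = DictionaryLearning(n_components=10, alpha=1.0, max_iter=50,
                          fit_algorithm='cd', random_state=0)
dico.fit(X)
print(dico.components_.shape)  # (10, 20): n_components x n_features
```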
transform_algorithm : {‘lasso_lars’, ‘lasso_cd’, ‘lars’, ‘omp’, ‘threshold’}
Algorithm used to transform the data
lars: uses the least angle regression method (linear_model.lars_path)
lasso_lars: uses Lars to compute the Lasso solution
lasso_cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). lasso_lars will be faster if the estimated components are sparse.
omp: uses orthogonal matching pursuit to estimate the sparse solution
threshold: squashes to zero all coefficients less than alpha from the projection dictionary * X'
New in version 0.17: lasso_cd coordinate descent method to improve speed.
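A minimal sketch of switching the transform algorithm on an already fitted estimator (the data and parameter values are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(100, 20)  # synthetic data

dico = DictionaryLearning(n_components=10, alpha=1.0, max_iter=30,
                          transform_algorithm='omp', random_state=0)
code_omp = dico.fit(X).transform(X)

# Switching to 'threshold' reuses the learned dictionary; only the
# sparse coding step changes.
dico.set_params(transform_algorithm='threshold', transform_alpha=0.5)
code_thr = dico.transform(X)
print(code_omp.shape, code_thr.shape)  # both (100, 10)
```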
transform_n_nonzero_coefs : int, 0.1 * n_features by default
Number of nonzero coefficients to target in each column of the solution. This is only used by algorithm=’lars’ and algorithm=’omp’ and is overridden by alpha in the omp case.
transform_alpha : float, 1. by default
If algorithm=’lasso_lars’ or algorithm=’lasso_cd’, alpha is the penalty applied to the L1 norm. If algorithm=’threshold’, alpha is the absolute value of the threshold below which coefficients will be squashed to zero. If algorithm=’omp’, alpha is the tolerance parameter: the value of the reconstruction error targeted. In this case, it overrides n_nonzero_coefs.
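A minimal sketch (illustrative numbers) of how these two transform hyper-parameters are used: with ’omp’, transform_n_nonzero_coefs targets a nonzero count per sample, while with ’threshold’, transform_alpha is the absolute cutoff:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(50, 30)  # synthetic data

dico = DictionaryLearning(n_components=15, max_iter=30,
                          transform_algorithm='omp',
                          transform_n_nonzero_coefs=3,
                          random_state=0).fit(X)
code = dico.transform(X)
# Each row of the code should contain at most 3 nonzero coefficients.
print(np.count_nonzero(code, axis=1).max())  # <= 3
```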
n_jobs : int or None, optional (default=None)
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
code_init : array of shape (n_samples, n_components),
initial value for the code, for warm restart
dict_init : array of shape (n_components, n_features),
initial values for the dictionary, for warm restart
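A minimal sketch of a warm restart (the sizes are illustrative, and using the first estimator's transform output as code_init is an assumption for illustration, not prescribed by the API):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(80, 25)  # synthetic data

first = DictionaryLearning(n_components=12, max_iter=5, random_state=0).fit(X)

# Seed a longer second fit with the dictionary and code from the first one.
warm = DictionaryLearning(n_components=12, max_iter=100,
                          code_init=first.transform(X),  # (n_samples, n_components)
                          dict_init=first.components_,   # (n_components, n_features)
                          random_state=0).fit(X)
```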
verbose : bool, optional (default: False)
To control the verbosity of the procedure.
split_sign : bool, False by default
Whether to split the sparse feature vector into the concatenation of its negative part and its positive part. This can improve the performance of downstream classifiers.
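A minimal sketch (illustrative sizes) showing that split_sign doubles the width of the transformed code, with the positive parts followed by the negated negative parts:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(60, 20)  # synthetic data

dico = DictionaryLearning(n_components=8, max_iter=20, split_sign=True,
                          random_state=0).fit(X)
print(dico.transform(X).shape)  # (60, 16) instead of (60, 8) without split_sign
```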
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
positive_code : bool
Whether to enforce positivity when finding the code.
New in version 0.20.
positive_dict : bool
Whether to enforce positivity when finding the dictionary.
New in version 0.20.
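A minimal sketch (illustrative, non-negative synthetic data) of the positivity constraints added in 0.20; coordinate-descent solvers are used here because they support the positivity constraint:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = np.abs(rng.randn(60, 20))  # non-negative data, e.g. counts or intensities

dico = DictionaryLearning(n_components=8, max_iter=30,
                          fit_algorithm='cd', transform_algorithm='lasso_cd',
                          positive_dict=True, positive_code=True,
                          random_state=0).fit(X)

print((dico.components_ >= 0).all())   # True: dictionary atoms are non-negative
print((dico.transform(X) >= 0).all())  # True: codes are non-negative
```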