LatentDirichletAllocation

class sklearn.decomposition.LatentDirichletAllocation(n_components=10, *, doc_topic_prior=None, topic_word_prior=None, learning_method='batch', learning_decay=0.7, learning_offset=10.0, max_iter=10, batch_size=128, evaluate_every=-1, total_samples=1000000.0, perp_tol=0.1, mean_change_tol=0.001, max_doc_update_iter=100, n_jobs=None, verbose=0, random_state=None)[source]#

Latent Dirichlet Allocation with online variational Bayes algorithm.

The implementation is based on [1] and [2].

Added in version 0.17.

Read more in the User Guide.

Parameters:

n_componentsint, default=10

Number of topics.

Changed in version 0.19: n_topics was renamed to n_components

doc_topic_priorfloat, default=None

Prior of document topic distribution theta. If the value is None, defaults to 1 / n_components. In [1], this is called alpha.

topic_word_priorfloat, default=None

Prior of topic word distribution beta. If the value is None, defaults to 1 / n_components. In [1], this is called eta.

learning_method{‘batch’, ‘online’}, default=’batch’

Method used to update components_. Only used in the fit method. In general, if the data size is large, the online update will be much faster than the batch update.

Valid options:

'batch': Batch variational Bayes method. Use all training data in each EM update. Old components_ are overwritten in each iteration.

'online': Online variational Bayes method. In each EM update, use a mini-batch of training data to update the components_ variable incrementally. The learning rate is controlled by the learning_decay and learning_offset parameters; see the sketch below.

Changed in version 0.20: The default learning method is now "batch".
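
A minimal sketch contrasting the two update methods, using a random count matrix as a stand-in for real document-word data:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus: 500 documents over a 100-word vocabulary.
X = np.random.RandomState(0).randint(0, 5, size=(500, 100))

# Batch: every EM iteration uses all 500 documents.
lda_batch = LatentDirichletAllocation(
    n_components=10, learning_method='batch', random_state=0).fit(X)

# Online: each EM step uses a mini-batch of batch_size documents,
# typically much faster on large corpora.
lda_online = LatentDirichletAllocation(
    n_components=10, learning_method='online', batch_size=128,
    random_state=0).fit(X)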

learning_decayfloat, default=0.7

A parameter that controls the learning rate in the online learning method. The value should be set between (0.5, 1.0] to guarantee asymptotic convergence. When the value is 0.0 and batch_size is n_samples, the update method is the same as batch learning. In the literature, this is called kappa.

learning_offsetfloat, default=10.0

A (positive) parameter that downweights early iterations in online learning. It should be greater than 1.0. In the literature, this is called tau_0.
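
Following [1], the weight given to mini-batch t in the online update is rho_t = (learning_offset + t) ** (-learning_decay), so a larger offset damps the influence of early mini-batches. A short sketch of the schedule:

# Online step-size schedule from [1]: rho_t = (tau_0 + t) ** (-kappa)
learning_offset, learning_decay = 10.0, 0.7
for t in (0, 10, 100, 1000):
    rho = (learning_offset + t) ** (-learning_decay)
    print(f"mini-batch {t}: step size {rho:.4f}")  # decays toward 0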

max_iterint, default=10

The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method.

batch_sizeint, default=128

Number of documents to use in each EM iteration. Only used in online learning.

evaluate_everyint, default=-1

How often to evaluate perplexity. Only used in the fit method. Set it to 0 or a negative number to skip perplexity evaluation during training entirely. Evaluating perplexity can help you check convergence during training, but it also increases total training time; evaluating it every iteration might increase training time up to two-fold.
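
A sketch of convergence monitoring on a hypothetical count matrix:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(200, 50))
lda = LatentDirichletAllocation(n_components=5, max_iter=20,
                                evaluate_every=2, verbose=1,
                                random_state=0)
lda.fit(X)  # prints perplexity every 2 passes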

total_samplesint, default=1e6

Total number of documents. Only used in the partial_fit method.

perp_tolfloat, default=1e-1

Perplexity tolerance. Only used when evaluate_every is greater than 0.

mean_change_tolfloat, default=1e-3

Stopping tolerance for updating document topic distribution in E-step.

max_doc_update_iterint, default=100

Max number of iterations for updating document topic distribution in the E-step.

n_jobsint, default=None

The number of jobs to use in the E-step. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

verboseint, default=0

Verbosity level.

random_stateint, RandomState instance or None, default=None

Pass an int for reproducible results across multiple function calls. See Glossary.

Attributes:

**components_**ndarray of shape (n_components, n_features)

Variational parameters for topic word distribution. Since the complete conditional for topic word distribution is a Dirichlet, components_[i, j] can be viewed as a pseudocount that represents the number of times word j was assigned to topic i. It can also be viewed as a distribution over the words for each topic after normalization: model.components_ / model.components_.sum(axis=1)[:, np.newaxis].
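
For example, to turn the pseudocounts into per-topic word distributions and list each topic's most probable words (the tiny corpus below is made up for illustration):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["apple banana apple", "banana fruit salad", "dog cat pet", "cat dog bark"]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Normalize pseudocounts into per-topic word distributions.
topic_word = lda.components_ / lda.components_.sum(axis=1)[:, np.newaxis]
feature_names = vec.get_feature_names_out()
for k, dist in enumerate(topic_word):
    top = np.argsort(dist)[::-1][:3]
    print(f"topic {k}:", [feature_names[i] for i in top])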

**exp_dirichlet_component_**ndarray of shape (n_components, n_features)

Exponential value of expectation of log topic word distribution. In the literature, this is exp(E[log(beta)]).

**n_batch_iter_**int

Number of iterations of the EM step.

**n_features_in_**int

Number of features seen during fit.

Added in version 0.24.

**feature_names_in_**ndarray of shape (n_features_in_,)

Names of features seen during fit. Defined only when X has feature names that are all strings.

Added in version 1.0.

**n_iter_**int

Number of passes over the dataset.

**bound_**float

Final perplexity score on training set.

**doc_topic_prior_**float

Prior of document topic distribution theta. If the value is None, it is 1 / n_components.

**random_state_**RandomState instance

RandomState instance that is generated either from a seed, the random number generator or by np.random.

**topic_word_prior_**float

Prior of topic word distribution beta. If the value is None, it is 1 / n_components.

References

[1] (1,2,3)

“Online Learning for Latent Dirichlet Allocation”, Matthew D. Hoffman, David M. Blei, Francis Bach, 2010. blei-lab/onlineldavb

[2]

“Stochastic Variational Inference”, Matthew D. Hoffman, David M. Blei, Chong Wang, John Paisley, 2013

Examples

>>> from sklearn.decomposition import LatentDirichletAllocation
>>> from sklearn.datasets import make_multilabel_classification
>>> # This produces a feature matrix of token counts, similar to what
>>> # CountVectorizer would produce on text.
>>> X, _ = make_multilabel_classification(random_state=0)
>>> lda = LatentDirichletAllocation(n_components=5,
...     random_state=0)
>>> lda.fit(X)
LatentDirichletAllocation(...)
>>> # get topics for some given samples:
>>> lda.transform(X[-2:])
array([[0.00360392, 0.25499205, 0.0036211 , 0.64236448, 0.09541846],
       [0.15297572, 0.00362644, 0.44412786, 0.39568399, 0.003586  ]])

fit(X, y=None)[source]#

Learn model for the data X with variational Bayes method.

When learning_method is ‘online’, use mini-batch update. Otherwise, use batch update.

Parameters:

X{array-like, sparse matrix} of shape (n_samples, n_features)

Document word matrix.

yIgnored

Not used, present here for API consistency by convention.

Returns:

self

Fitted estimator.

fit_transform(X, y=None, *, normalize=True)[source]#

Fit to data, then transform it.

Fits transformer to X and y and returns a transformed version of X.

Parameters:

Xarray-like of shape (n_samples, n_features)

Input samples.

yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

normalizebool, default=True

Whether to normalize the document topic distribution in transform.

Returns:

X_newndarray of shape (n_samples, n_features_new)

Transformed array.
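
A brief sketch of the normalize keyword (random counts stand in for real data):

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(100, 40))
lda = LatentDirichletAllocation(n_components=4, random_state=0)

doc_topic = lda.fit_transform(X)             # normalize=True: rows sum to 1
print(doc_topic.sum(axis=1)[:3])             # approximately [1. 1. 1.]
raw = lda.fit_transform(X, normalize=False)  # unnormalized variational parameters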

get_feature_names_out(input_features=None)[source]#

Get output feature names for transformation.

The feature names out will be prefixed by the lowercased class name. For example, if the transformer outputs 3 features, then the feature names out are: ["class_name0", "class_name1", "class_name2"].

Parameters:

input_featuresarray-like of str or None, default=None

Only used to validate feature names with the names seen in fit.

Returns:

feature_names_outndarray of str objects

Transformed feature names.
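
For instance (the random X is illustrative only):

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(50, 20))
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
print(lda.get_feature_names_out())
# ['latentdirichletallocation0' 'latentdirichletallocation1'
#  'latentdirichletallocation2']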

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routingMetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:

deepbool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

paramsdict

Parameter names mapped to their values.

partial_fit(X, y=None)[source]#

Online VB with Mini-Batch update.

Parameters:

X{array-like, sparse matrix} of shape (n_samples, n_features)

Document word matrix.

yIgnored

Not used, present here for API consistency by convention.

Returns:

self

Partially fitted estimator.
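
A sketch of incremental training over mini-batches; total_samples should be set to the full corpus size so the online step size is scaled correctly (the corpus below is hypothetical):

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

corpus = np.random.RandomState(0).randint(0, 5, size=(1000, 60))
lda = LatentDirichletAllocation(n_components=8, total_samples=1000,
                                random_state=0)
for start in range(0, corpus.shape[0], 128):
    lda.partial_fit(corpus[start:start + 128])  # one mini-batch EM update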

perplexity(X, sub_sampling=False)[source]#

Calculate approximate perplexity for data X.

Perplexity is defined as exp(-1. * log-likelihood per word)

Changed in version 0.19: The doc_topic_distr argument has been deprecated and is ignored because the user no longer has access to the unnormalized document topic distribution.

Parameters:

X{array-like, sparse matrix} of shape (n_samples, n_features)

Document word matrix.

sub_samplingbool

Do sub-sampling or not.

Returns:

scorefloat

Perplexity score.
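
For example, to compare held-out perplexity (lower is better) on a made-up train/test split:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(100, 30))
train, test = X[:80], X[80:]
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(train)
print(lda.perplexity(test))  # held-out perplexity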

score(X, y=None)[source]#

Calculate approximate log-likelihood as score.

Parameters:

X{array-like, sparse matrix} of shape (n_samples, n_features)

Document word matrix.

yIgnored

Not used, present here for API consistency by convention.

Returns:

scorefloat

Use approximate bound as score.
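
One common use is a rough model selection over n_components (higher score is better); a sketch with synthetic counts:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(100, 30))
for k in (2, 5, 10):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    print(k, lda.score(X))  # approximate log-likelihood bound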

set_output(*, transform=None)[source]#

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:

transform{“default”, “pandas”, “polars”}, default=None

Configure output of transform and fit_transform.

Added in version 1.4: "polars" option was added.

Returns:

selfestimator instance

Estimator instance.
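
A sketch of requesting pandas output (requires pandas to be installed; column names come from get_feature_names_out):

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(50, 20))
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.set_output(transform="pandas")
df = lda.fit_transform(X)  # a pandas DataFrame
print(df.columns.tolist())
# ['latentdirichletallocation0', 'latentdirichletallocation1',
#  'latentdirichletallocation2']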

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:

**paramsdict

Estimator parameters.

Returns:

selfestimator instance

Estimator instance.

set_transform_request(*, normalize: bool | None | str = '$UNCHANGED$') → LatentDirichletAllocation[source]#

Request metadata passed to the transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

False: metadata is not requested and the meta-estimator will not pass it to transform.

None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

normalizestr, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for normalize parameter in transform.

Returns:

selfobject

The updated object.
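
A hedged sketch of routing normalize through a Pipeline; it assumes Pipeline.transform routes metadata when enable_metadata_routing=True:

import numpy as np
import sklearn
from sklearn.pipeline import Pipeline
from sklearn.decomposition import LatentDirichletAllocation

sklearn.set_config(enable_metadata_routing=True)

X = np.random.RandomState(0).randint(0, 5, size=(50, 20))
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.set_transform_request(normalize=True)  # request `normalize` in transform

pipe = Pipeline([("lda", lda)]).fit(X)
doc_topic = pipe.transform(X, normalize=False)  # routed to lda.transform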

transform(X, *, normalize=True)[source]#

Transform data X according to the fitted model.

Changed in version 0.18: doc_topic_distr is now normalized.

Parameters:

X{array-like, sparse matrix} of shape (n_samples, n_features)

Document word matrix.

normalizebool, default=True

Whether to normalize the document topic distribution.

Returns:

doc_topic_distrndarray of shape (n_samples, n_components)

Document topic distribution for X.
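
For example (continuing with synthetic counts), normalize=True returns probability rows while normalize=False returns the unnormalized variational parameters:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(100, 30))
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

probs = lda.transform(X[:2])                 # rows sum to 1
raw = lda.transform(X[:2], normalize=False)  # unnormalized doc-topic values
print(probs.sum(axis=1))  # ~[1. 1.]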