lightgbm.cv — LightGBM 4.6.0.99 documentation

lightgbm.cv(params, train_set, num_boost_round=100, folds=None, nfold=5, stratified=True, shuffle=True, metrics=None, feval=None, init_model=None, fpreproc=None, seed=0, callbacks=None, eval_train_metric=False, return_cvbooster=False)

Perform cross-validation with the given parameters.

Parameters:

Note

A custom objective function can be provided for the objective parameter. It should accept two arguments, preds and train_data, and return (grad, hess).

preds : numpy 1-D array or numpy 2-D array (for multi-class task)

The predicted values. Predicted values are returned before any transformation, e.g. they are raw margin instead of probability of positive class for binary task.

train_data : Dataset

The training dataset.

grad : numpy 1-D array or numpy 2-D array (for multi-class task)

The value of the first order derivative (gradient) of the loss with respect to the elements of preds for each sample point.

hess : numpy 1-D array or numpy 2-D array (for multi-class task)

The value of the second order derivative (Hessian) of the loss with respect to the elements of preds for each sample point.

For multi-class task, preds is a numpy 2-D array of shape = [n_samples, n_classes], and grad and hess should be returned in the same format; a minimal sketch follows below.
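For illustration, here is a minimal sketch of a custom objective that satisfies this contract. The function name, the toy data, and the parameter choices are assumptions for demonstration only, not part of the API:

```python
import numpy as np
import lightgbm as lgb

# Hypothetical squared-error objective following the (preds, train_data) -> (grad, hess) contract.
def l2_objective(preds, train_data):
    labels = train_data.get_label()
    grad = preds - labels           # first derivative of 0.5 * (preds - labels) ** 2 w.r.t. preds
    hess = np.ones_like(preds)      # second derivative is constant 1
    return grad, hess

# Toy regression data, purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=500)
train_set = lgb.Dataset(X, label=y)

eval_results = lgb.cv(
    params={"objective": l2_objective, "metric": "l2", "verbosity": -1},
    train_set=train_set,
    num_boost_round=20,
    nfold=3,
    stratified=False,  # stratified folds are only meaningful for classification labels
)
print(eval_results["valid l2-mean"][-1])
```

Because predicted values are passed to the objective before any transformation, the gradient and Hessian must be computed with respect to the raw scores.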

Returns:

eval_results – History of evaluation results of each metric. The dictionary has the following format: {'valid metric1-mean': [values], 'valid metric1-stdv': [values], 'valid metric2-mean': [values], 'valid metric2-stdv': [values], ...}. If return_cvbooster=True, the trained boosters are also returned, wrapped in a CVBooster object under the 'cvbooster' key. If eval_train_metric=True, the train metric history is also returned; in this case, the dictionary has the following format: {'train metric1-mean': [values], 'valid metric1-mean': [values], 'train metric2-mean': [values], 'valid metric2-mean': [values], ...}.

Return type:

dict
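As a sketch of how the returned dictionary might be consumed, the example below runs a 5-fold cross-validation and inspects the per-round mean and standard deviation of the metric. The dataset, metric, and parameter values are illustrative assumptions, not requirements of this function:

```python
import numpy as np
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer

# Illustrative binary-classification dataset; any Dataset works the same way.
X, y = load_breast_cancer(return_X_y=True)
train_set = lgb.Dataset(X, label=y)

eval_results = lgb.cv(
    params={"objective": "binary", "metric": "auc", "verbosity": -1},
    train_set=train_set,
    num_boost_round=50,
    nfold=5,
    stratified=True,
    seed=0,
    return_cvbooster=True,
)

# Keys follow the "valid <metric>-mean" / "valid <metric>-stdv" pattern described above.
print(sorted(eval_results))  # e.g. ['cvbooster', 'valid auc-mean', 'valid auc-stdv']

best_round = int(np.argmax(eval_results["valid auc-mean"])) + 1
print(f"best mean AUC {eval_results['valid auc-mean'][best_round - 1]:.4f} at round {best_round}")

# With return_cvbooster=True, the per-fold boosters come back wrapped in a CVBooster.
cvbooster = eval_results["cvbooster"]
fold_preds = cvbooster.predict(X, num_iteration=best_round)  # list with one prediction array per fold
```

The lists stored under each key have one entry per boosting round, so selecting the index with the best mean validation score is a simple way to choose num_boost_round for a final model.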