check_scoring
sklearn.metrics.check_scoring(estimator=None, scoring=None, *, allow_none=False, raise_exc=True)
Determine scorer from user options.
A TypeError will be thrown if the estimator cannot be scored.
Parameters:
estimator : estimator object implementing ‘fit’ or None, default=None
The object to use to fit the data. If None, then this function may error depending on allow_none.
scoring : str, callable, list, tuple, set, or dict, default=None
Scorer to use. If scoring represents a single score, one can use:
- a single string (see The scoring parameter: defining model evaluation rules);
- a callable (see Callable scorers) that returns a single value.
If scoring represents multiple scores, one can use:
- a list, tuple or set of unique strings;
- a callable returning a dictionary where the keys are the metric names and the values are the metric scorers;
- a dictionary with metric names as keys and callables as values. The callables need to have the signature callable(estimator, X, y).
If None, the provided estimator object’s score method is used.
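For illustration, a minimal sketch of the multi-metric list form, assuming the built-in 'accuracy' and 'f1_macro' scorer names (which do not appear elsewhere on this page); the returned callable evaluates every metric and returns a dict keyed by metric name:

>>> from sklearn.datasets import load_iris
>>> from sklearn.metrics import check_scoring
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = load_iris(return_X_y=True)
>>> clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
>>> multi_scorer = check_scoring(clf, scoring=["accuracy", "f1_macro"])
>>> sorted(multi_scorer(clf, X, y))  # keys of the returned metric-name -> score dict
['accuracy', 'f1_macro']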
allow_none : bool, default=False
Whether to return None or raise an error if no scoring is specified and the estimator has no score method.
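A minimal sketch of allow_none, assuming StandardScaler (chosen here purely as an illustration of an estimator without a score method): with scoring=None and allow_none=True, the function returns None instead of raising.

>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.metrics import check_scoring
>>> # StandardScaler defines no score method, so no scorer can be derived
>>> print(check_scoring(StandardScaler(), allow_none=True))
None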
raise_exc : bool, default=True
Whether to raise an exception (if a subset of the scorers in multimetric scoring fails) or to return an error code.
- If set to True, raises the failing scorer’s exception.
- If set to False, a formatted string of the exception details is passed as result of the failing scorer(s).
This applies if scoring is list, tuple, set, or dict. Ignored if scoring is a str or a callable.
Added in version 1.6.
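For contrast with the second example under Examples below (which sets raise_exc=False), a minimal sketch of the default raise_exc=True behaviour under the same negative-target setup; here the failing scorer’s exception propagates to the caller:

>>> from sklearn.datasets import load_iris
>>> from sklearn.metrics import check_scoring, make_scorer, mean_squared_log_error
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = load_iris(return_X_y=True)
>>> clf = DecisionTreeClassifier().fit(X, -y)  # negative targets make MSLE invalid
>>> scoring_call = check_scoring(
...     clf, scoring={"msle": make_scorer(mean_squared_log_error)}, raise_exc=True
... )
>>> try:
...     scoring_call(clf, X, -y)
... except ValueError:
...     print("mean_squared_log_error raised on negative targets")
mean_squared_log_error raised on negative targets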
Returns:
scoring : callable
A scorer callable object / function with signature scorer(estimator, X, y).
Examples
>>> from sklearn.datasets import load_iris
>>> from sklearn.metrics import check_scoring
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = load_iris(return_X_y=True)
>>> classifier = DecisionTreeClassifier(max_depth=2).fit(X, y)
>>> scorer = check_scoring(classifier, scoring='accuracy')
>>> scorer(classifier, X, y)
0.96...
>>> from sklearn.metrics import make_scorer, accuracy_score, mean_squared_log_error
>>> X, y = load_iris(return_X_y=True)
>>> y *= -1
>>> clf = DecisionTreeClassifier().fit(X, y)
>>> scoring = {
...     "accuracy": make_scorer(accuracy_score),
...     "mean_squared_log_error": make_scorer(mean_squared_log_error),
... }
>>> scoring_call = check_scoring(estimator=clf, scoring=scoring, raise_exc=False)
>>> scores = scoring_call(clf, X, y)
>>> scores
{'accuracy': 1.0, 'mean_squared_log_error': 'Traceback ...'}