ELI5 top-level API — ELI5 0.15.0 documentation
The following functions are exposed at the top level, e.g. eli5.explain_weights.
explain_weights(estimator, **kwargs)[source]
explain_weights(estimator: BaseEstimator, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(ovr: OneVsRestClassifier, **kwargs)
explain_weights(clf: RidgeClassifierCV, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(clf: RidgeClassifier, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(clf: LinearSVC, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(clf: Perceptron, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(clf: PassiveAggressiveClassifier, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(clf: SGDClassifier, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(clf: LogisticRegressionCV, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(clf: LogisticRegression, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(clf: OneClassSVM, *args, **kwargs)
explain_weights(clf: NuSVC, *args, **kwargs)
explain_weights(clf: SVC, *args, **kwargs)
explain_weights(estimator: AdaBoostRegressor, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None)
explain_weights(estimator: AdaBoostClassifier, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None)
explain_weights(estimator: GradientBoostingRegressor, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None)
explain_weights(estimator: GradientBoostingClassifier, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None)
explain_weights(estimator: ExtraTreesRegressor, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None)
explain_weights(estimator: ExtraTreesClassifier, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None)
explain_weights(estimator: RandomForestRegressor, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None)
explain_weights(estimator: RandomForestClassifier, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None)
explain_weights(estimator: DecisionTreeRegressor, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, **export_graphviz_kwargs)
explain_weights(estimator: DecisionTreeClassifier, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, **export_graphviz_kwargs)
explain_weights(reg: NuSVR, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: SVR, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: TheilSenRegressor, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: SGDRegressor, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: RidgeCV, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: Ridge, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: PassiveAggressiveRegressor, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: OrthogonalMatchingPursuitCV, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: OrthogonalMatchingPursuit, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: LinearSVR, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: LinearRegression, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: LassoCV, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: Lars, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: HuberRegressor, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: ElasticNetCV, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(reg: ElasticNet, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None, feature_re=None, feature_filter=None)
explain_weights(estimator: Pipeline, feature_names=None, **kwargs)
explain_weights(estimator: PermutationImportance, vec=None, top=20, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None)
Return an explanation of estimator parameters (weights).
explain_weights() does no work itself; it dispatches to a concrete implementation based on the estimator type.
Parameters:
- estimator (object) – Estimator instance. This argument must be positional.
- top (int or (int, int) tuple, optional) – Number of features to show. When top is an int, the top features with the highest absolute values are shown. When it is a (pos, neg) tuple, no more than pos positive features and no more than neg negative features are shown. None means no limit.
This argument may or may not be supported, depending on the estimator type.
- target_names (list[str] or {'old_name': 'new_name'} dict, optional) – Names of targets or classes. This argument can be used to provide human-readable class/target names for estimators which don't expose class names themselves. It can also be used to rename estimator-provided classes before displaying them.
This argument may or may not be supported, depending on the estimator type.
- targets (list, optional) – Order of class/target names to show. This argument can also be used to show information only for a subset of classes. It should be a list of class/target names which match either names provided by the estimator or names defined in the target_names parameter.
This argument may or may not be supported, depending on the estimator type.
- feature_names (list, optional) – A list of feature names. It allows you to specify feature names when they are not provided by the estimator object.
This argument may or may not be supported, depending on the estimator type.
- feature_re (str, optional) – Only feature names which match the feature_re regex are returned (more precisely, re.search(feature_re, x) is checked).
- feature_filter (Callable[[str], bool], optional) – Only feature names for which the feature_filter function returns True are returned.
- **kwargs (dict) – Keyword arguments. All keyword arguments are passed to the concrete explain_weights… implementations.
Returns:
Explanation – Explanation result. Use one of the formatting functions from eli5.formatters to print it in a human-readable form.
Explanation instances have a repr which works well in the IPython notebook, but it can be a better idea to use eli5.show_weights() instead of eli5.explain_weights() if you work with IPython: eli5.show_weights() allows you to customize formatting without needing to import eli5.formatters functions.
explain_prediction(estimator, doc, **kwargs)[source]
explain_prediction(estimator: BaseEstimator, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: OneVsRestClassifier, doc, **kwargs)
explain_prediction(clf: RidgeClassifierCV, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: RidgeClassifier, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: LinearSVC, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: Perceptron, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: PassiveAggressiveClassifier, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: SGDClassifier, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: LogisticRegressionCV, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: LogisticRegression, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: OneClassSVM, doc, *args, **kwargs)
explain_prediction(clf: SVC, doc, *args, **kwargs)
explain_prediction(clf: NuSVC, doc, *args, **kwargs)
explain_prediction(reg: NuSVR, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: SVR, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: TheilSenRegressor, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: SGDRegressor, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: RidgeCV, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: Ridge, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: PassiveAggressiveRegressor, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: OrthogonalMatchingPursuitCV, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: OrthogonalMatchingPursuit, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: LinearSVR, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: LinearRegression, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: LassoCV, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: Lars, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: HuberRegressor, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: ElasticNetCV, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: ElasticNet, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: RandomForestClassifier, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: GradientBoostingClassifier, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: ExtraTreesClassifier, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(clf: DecisionTreeClassifier, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: RandomForestRegressor, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: GradientBoostingRegressor, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: ExtraTreesRegressor, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(reg: DecisionTreeRegressor, doc, vec=None, top=None, top_targets=None, target_names=None, targets=None, feature_names=None, feature_re=None, feature_filter=None, vectorized=False)
explain_prediction(logprobs: ChoiceLogprobs, doc=None)
explain_prediction(completion: ChatCompletion, doc=None)
explain_prediction(client: OpenAI, doc: str | list[dict], *, model: str, **kwargs)
Return an explanation of an estimator prediction.
explain_prediction() does no work itself; it dispatches to a concrete implementation based on the estimator type.
Parameters:
- estimator (object) – Estimator instance. This argument must be positional.
- doc (object) – Example to run the estimator on. The estimator makes a prediction for this example, and explain_prediction() tries to show information about this prediction. Pass a single element, not a one-element array: if you fitted your estimator on X, that would be X[i] for most containers, and X.iloc[i] for pandas.DataFrame.
- top (int or (int, int) tuple, optional) – Number of features to show. When top is an int, the top features with the highest absolute values are shown. When it is a (pos, neg) tuple, no more than pos positive features and no more than neg negative features are shown. None means no limit (default).
This argument may or may not be supported, depending on the estimator type.
- top_targets (int, optional) – Number of targets to show. When top_targets is provided, only the specified number of targets with the highest scores are shown. A negative value means that the targets with the lowest scores are shown. Must not be given together with the targets argument. None means no limit: all targets are shown (default).
This argument may or may not be supported, depending on the estimator type.
- target_names (list[str] or {'old_name': 'new_name'} dict, optional) – Names of targets or classes. This argument can be used to provide human-readable class/target names for estimators which don't expose class names themselves. It can also be used to rename estimator-provided classes before displaying them.
This argument may or may not be supported, depending on the estimator type.
- targets (list, optional) – Order of class/target names to show. This argument can also be used to show information only for a subset of classes. It should be a list of class/target names which match either names provided by the estimator or names defined in the target_names parameter. Must not be given together with the top_targets argument.
In the case of binary classification you can use this argument to set the class whose probability or score should be displayed, with an appropriate explanation. By default the result for the predicted class is shown. For example, you can use targets=[True] to always show the result for the positive class, even if the predicted label is False.
This argument may or may not be supported, depending on the estimator type.
- feature_names (list, optional) – A list of feature names. It allows you to specify feature names when they are not provided by the estimator object.
This argument may or may not be supported, depending on the estimator type.
- feature_re (str, optional) – Only feature names which match the feature_re regex are returned (more precisely, re.search(feature_re, x) is checked).
- feature_filter (Callable[[str, float], bool], optional) – Only feature names for which the feature_filter function returns True are returned. It must accept a feature name and a feature value. Missing features always have a NaN value.
- **kwargs (dict) – Keyword arguments. All keyword arguments are passed to the concrete explain_prediction… implementations.
Returns:
Explanation – Explanation result. Use one of the formatting functions from eli5.formatters to print it in a human-readable form.
Explanation instances have a repr which works well in the IPython notebook, but it can be a better idea to use eli5.show_prediction() instead of eli5.explain_prediction() if you work with IPython: eli5.show_prediction() allows you to customize formatting without needing to import eli5.formatters functions.
show_weights(estimator, **kwargs)[source]
Return an explanation of estimator parameters (weights) as an IPython.display.HTML object. Use this function to show classifier weights in IPython.
show_weights() accepts all eli5.explain_weights() arguments and all eli5.formatters.html.format_as_html() keyword arguments, so it is possible to get an explanation and customize formatting in a single call.
Parameters:
- estimator (object) – Estimator instance. This argument must be positional.
- top (int or (int, int) tuple, optional) – Number of features to show. When top is an int, the top features with the highest absolute values are shown. When it is a (pos, neg) tuple, no more than pos positive features and no more than neg negative features are shown. None means no limit.
This argument may or may not be supported, depending on the estimator type.
- target_names (list[str] or {'old_name': 'new_name'} dict, optional) – Names of targets or classes. This argument can be used to provide human-readable class/target names for estimators which don't expose class names themselves. It can also be used to rename estimator-provided classes before displaying them.
This argument may or may not be supported, depending on the estimator type.
- targets (list, optional) – Order of class/target names to show. This argument can also be used to show information only for a subset of classes. It should be a list of class/target names which match either names provided by the estimator or names defined in the target_names parameter.
This argument may or may not be supported, depending on the estimator type.
- feature_names (list, optional) – A list of feature names. It allows you to specify feature names when they are not provided by the estimator object.
This argument may or may not be supported, depending on the estimator type.
- feature_re (str, optional) – Only feature names which match the feature_re regex are shown (more precisely, re.search(feature_re, x) is checked).
- feature_filter (Callable[[str], bool], optional) – Only feature names for which the feature_filter function returns True are shown.
- show (List[str], optional) – List of sections to show. Allowed values:
- ‘targets’ - per-target feature weights;
- ‘transition_features’ - transition features of a CRF model;
- ‘feature_importances’ - feature importances of a decision tree or an ensemble-based estimator;
- ‘decision_tree’ - decision tree in a graphical form;
- ‘method’ - a string with explanation method;
- ‘description’ - description of explanation method and its caveats.
eli5.formatters.fields provides constants that cover common cases: INFO (method and description), WEIGHTS (all the rest), and ALL (all).
- horizontal_layout (bool) – When True, feature weight tables are printed horizontally (left to right); when False, feature weight tables are printed vertically (top to bottom). Default is True.
- highlight_spaces (bool or None, optional) – Whether to highlight spaces in feature names. This is useful if you work with text and have ngram features which may include spaces at the left or right. Default is None, meaning the value is set automatically based on the vectorizer and feature values.
- include_styles (bool) – Most styles are inline, but some are included separately in