Use GPU to speedup SHAP value computation — xgboost 3.1.0-dev documentation


Note

Go to the end to download the full example code.

Demonstrates using GPU acceleration to compute SHAP values for feature importance.

import shap
from sklearn.datasets import fetch_california_housing

import xgboost as xgb

Fetch dataset using sklearn

data = fetch_california_housing()
print(data.DESCR)
X = data.data
y = data.target

num_round = 500

param = {
    "eta": 0.05,
    "max_depth": 10,
    "tree_method": "hist",
    "device": "cuda",
}
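
On a multi-GPU machine, a specific device ordinal can be selected instead of the plain "cuda" alias, for example:

param["device"] = "cuda:0"  # pick the first GPU explicitly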

GPU-accelerated training

dtrain = xgb.DMatrix(X, label=y, feature_names=data.feature_names)
model = xgb.train(param, dtrain, num_round)
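
As an aside (not part of the original example), xgb.QuantileDMatrix can stand in for DMatrix when training with tree_method="hist"; it pre-bins the data and typically lowers GPU memory usage on large datasets. A minimal sketch:

# QuantileDMatrix pre-bins features into histogram cuts, avoiding a full
# copy of the raw data on the device during hist training.
dtrain_q = xgb.QuantileDMatrix(X, label=y, feature_names=data.feature_names)
model_q = xgb.train(param, dtrain_q, num_round)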

Compute shap values using GPU with xgboost

model.set_param({"device": "cuda"})
shap_values = model.predict(dtrain, pred_contribs=True)
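
The result has shape (n_samples, n_features + 1), with the bias (expected value) in the last column, and each row sums to the model's raw margin prediction. A quick sanity check, not part of the original example:

import numpy as np

# Per-row contributions plus the bias column reproduce the margin output.
margin = model.predict(dtrain, output_margin=True)
np.testing.assert_allclose(shap_values.sum(axis=1), margin, rtol=1e-3)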

Compute shap interaction values using GPU

shap_interaction_values = model.predict(dtrain, pred_interactions=True)
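
The interaction values form a (n_samples, n_features + 1, n_features + 1) tensor; summing over one of the feature axes collapses it back to the per-feature contributions. A consistency check in the same spirit as above:

# Interactions (including the bias row/column) aggregate back to the
# contribution matrix computed with pred_contribs above.
np.testing.assert_allclose(
    shap_interaction_values.sum(axis=2), shap_values, rtol=1e-3, atol=1e-3
)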

shap will call the GPU-accelerated version as long as the device parameter is set to "cuda".

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
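
Since the booster's device is still "cuda", this goes through the GPU path. The values should agree with XGBoost's native pred_contribs output computed earlier, apart from the bias term that the native result keeps in its last column. A sanity-check sketch, assuming the regression setup above:

# Recompute native contributions; shap's TreeExplainer drops the bias
# column and reports it separately as explainer.expected_value.
native = model.predict(dtrain, pred_contribs=True)
np.testing.assert_allclose(native[:, :-1], shap_values, rtol=1e-3, atol=1e-3)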

Visualize the first prediction's explanation

shap.force_plot(
    explainer.expected_value,
    shap_values[0, :],
    X[0, :],
    feature_names=data.feature_names,
    matplotlib=True,
)

Show a summary of feature importance

shap.summary_plot(shap_values, X, plot_type="bar", feature_names=data.feature_names)
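
The bar chart ranks features by mean absolute SHAP value; the same ranking can be reproduced directly, which is handy when a plot is not needed:

# Global importance as the mean |SHAP| per feature, highest first.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance), key=lambda t: t[1], reverse=True):
    print(f"{name}: {score:.3f}")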
