Intercept — xgboost 3.1.0-dev documentation

Added in version 2.0.0.

Since 2.0.0, XGBoost supports estimating the model intercept (named base_score) automatically from the targets during training. The behavior can be controlled by setting base_score to a constant value. The following snippet disables the automatic estimation:

    import xgboost as xgb

    reg = xgb.XGBRegressor()
    reg.set_params(base_score=0.5)

In addition, 0.5 here represents the value after applying the inverse link function; see the end of the document for a description.

Other than the base_score, users can also provide a global bias via the data field base_margin, which is a vector or a matrix depending on the task. With multi-output and multi-class, the base_margin is a matrix with size (n_samples, n_targets) or (n_samples, n_classes).
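For instance, here is a minimal sketch (the make_classification setup and values below are purely illustrative) of the expected base_margin shape for a multi-class classifier:

    import numpy as np
    import xgboost as xgb
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=128, n_classes=3, n_informative=8)

    # One column per class: shape (n_samples, n_classes).
    margin = np.zeros((X.shape[0], 3))

    clf = xgb.XGBClassifier(n_estimators=8)
    clf.fit(X, y, base_margin=margin)
    clf.predict(X, base_margin=margin)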

    import xgboost as xgb
    from sklearn.datasets import make_regression

    X, y = make_regression()

    reg = xgb.XGBRegressor()
    reg.fit(X, y)
    # Request for raw prediction
    m = reg.predict(X, output_margin=True)

    reg_1 = xgb.XGBRegressor()
    # Feed the prediction into the next model
    reg_1.fit(X, y, base_margin=m)
    reg_1.predict(X, base_margin=m)

It specifies the bias for each sample and can be used for stacking an XGBoost model on top of other models; see Demo for boosting from prediction for a worked example. When base_margin is specified, it automatically overrides the base_score parameter. If you are stacking XGBoost models, the usage is relatively straightforward: the previous model provides the raw prediction and the new model uses that prediction as its bias. For more customized inputs, users need to take extra care of the link function. Let \(F\) be the model and \(g\) be the link function. Since base_score is overridden when a sample-specific base_margin is available, we will omit it here:

\[g(E[y_i]) = F(x_i)\]

When base margin \(b\) is provided, it’s added to the raw model output \(F\):

\[g(E[y_i]) = F(x_i) + b_i\]

and the output of the final model is:

\[g^{-1}(F(x_i) + b_i)\]

Take the gamma deviance objective reg:gamma as an example, which has a log link function; hence:

\[\begin{split}\ln{(E[y_i])} = F(x_i) + b_i \\ E[y_i] = \exp{(F(x_i) + b_i)}\end{split}\]
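As a hedged numerical check of this identity (assuming that predict with output_margin=True returns the margin including the supplied offset, i.e. \(F(x_i) + b_i\); the synthetic data below is illustrative):

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 4))
    y = rng.gamma(shape=2.0, scale=1.0, size=256)  # positive targets for the gamma deviance
    b = rng.normal(scale=0.1, size=256)            # arbitrary per-sample offset

    reg = xgb.XGBRegressor(objective="reg:gamma", n_estimators=8)
    reg.fit(X, y, base_margin=b)

    raw = reg.predict(X, output_margin=True, base_margin=b)  # F(x_i) + b_i
    out = reg.predict(X, base_margin=b)                      # E[y_i]
    np.testing.assert_allclose(out, np.exp(raw), rtol=1e-4)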

As a result, if you are feeding outputs from a model like a GLM with a corresponding objective function, make sure the outputs have not yet been transformed by the inverse link (activation).
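For example, here is a minimal sketch of stacking on top of a gamma GLM (assuming scikit-learn's GammaRegressor, which also uses a log link; the target transformation below is only there to produce positive values):

    import numpy as np
    import xgboost as xgb
    from sklearn.datasets import make_regression
    from sklearn.linear_model import GammaRegressor

    X, y = make_regression(n_samples=256, n_features=8, random_state=0)
    y = np.exp((y - y.mean()) / y.std())  # gamma regression needs positive targets

    glm = GammaRegressor().fit(X, y)
    # GammaRegressor.predict returns E[y], i.e. after the inverse link (exp).
    # Apply the log link to recover the raw, untransformed score for base_margin.
    margin = np.log(glm.predict(X))

    reg = xgb.XGBRegressor(objective="reg:gamma")
    reg.fit(X, y, base_margin=margin)
    reg.predict(X, base_margin=margin)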

In the case of base_score (intercept), it can be accessed through save_config() after estimation. Unlike base_margin, the returned value represents a value after applying the inverse link. Taking logistic regression and the logit link function as an example, given a base_score of 0.5, \(g(intercept) = logit(0.5) = 0\) is added to the raw model output:

\[E[y_i] = g^{-1}{(F(x_i) + g(intercept))}\]

and 0.5 is the same as \(base\_score = g^{-1}(0) = 0.5\). This is more intuitive if you remove the model and consider only the intercept, which is estimated before the model is fitted:

\[\begin{split}E[y] = g^{-1}{(g(intercept))} \\ E[y] = intercept\end{split}\]

For some objectives like MAE, there are closed-form solutions, while for others the intercept is estimated with a one-step Newton method.
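A minimal sketch of retrieving the estimated intercept from the saved configuration (the JSON path used below is an assumption and may differ between XGBoost versions):

    import json

    import xgboost as xgb
    from sklearn.datasets import make_regression

    X, y = make_regression()
    reg = xgb.XGBRegressor(n_estimators=8).fit(X, y)

    config = json.loads(reg.get_booster().save_config())
    # The estimated intercept, already on the response scale (after the inverse link).
    intercept = float(config["learner"]["learner_model_param"]["base_score"])
    print(intercept)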

Offset

The base_margin is a form of offset in a GLM. Using the Poisson objective as an example, we might want to model the rate instead of the count:

\[rate = \frac{count}{exposure}\]

And the offset is defined as the log link applied to the exposure variable: \(\ln{(exposure)}\). Let \(c\) be the count and \(\gamma\) be the exposure. Substituting the response \(y\) in our previous formulation of the base margin:

\[g(\frac{E[c_i]}{\gamma_i}) = F(x_i)\]

Substitute \(g\) with \(\ln\) for Poisson regression:

\[\ln{\frac{E[c_i]}{\gamma_i}} = F(x_i)\]

We have:

\[\begin{split}E[c_i] &= \exp{(F(x_i) + \ln{\gamma_i})} \\ E[c_i] &= g^{-1}(F(x_i) + g(\gamma_i))\end{split}\]

As you can see, we can use base_margin for modeling with an offset, similar to GLMs.
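As a minimal sketch (the synthetic data below is purely illustrative), modeling a Poisson rate with an exposure offset:

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    n = 512
    X = rng.normal(size=(n, 4))
    exposure = rng.uniform(0.5, 2.0, size=n)             # e.g. observation time per sample
    rate = np.exp(X @ np.array([0.3, -0.2, 0.1, 0.0]))   # true underlying rate
    count = rng.poisson(rate * exposure)                 # observed counts

    reg = xgb.XGBRegressor(objective="count:poisson")
    # The offset is the log link applied to the exposure.
    reg.fit(X, count, base_margin=np.log(exposure))

    # Predictions are counts for the given exposures; divide by exposure to get rates.
    pred_count = reg.predict(X, base_margin=np.log(exposure))
    pred_rate = pred_count / exposure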