Working with InferenceData

Here we present a collection of common manipulations you can use while working with InferenceData.

import arviz as az
import numpy as np
import xarray as xr

xr.set_options(display_expand_data=False, display_expand_attrs=False);

display_expand_data=False makes the default view for xarray.DataArray fold the data values to a single line. To explore the values, click on the icon on the left of the view, right under the xarray.DataArray text. It has no effect on Dataset objects, which already default to folded views.

display_expand_attrs=False folds the attributes in both DataArray and Dataset objects to keep the views shorter. On this page we print DataArrays and Datasets several times, and they always have the same attributes.

Get the dataset corresponding to a single group

We will use the centered_eight example dataset that ships with ArviZ:

idata = az.load_arviz_data("centered_eight")
post = idata.posterior
post

<xarray.Dataset> Size: 165kB Dimensions: (chain: 4, draw: 500, school: 8) Coordinates:

Tip

You’ll have noticed we stored the posterior group in a new variable: post. Since .copy() was not called, post and idata.posterior still point to the same object, so using either is equivalent.

Use this to keep your code short yet easy to read. Store the groups you’ll need very often as separate variables to use explicitly, but don’t delete the InferenceData parent: you’ll need it for many ArviZ functions to work properly. For example, plot_pair() needs data from the sample_stats group to show divergences, compare() needs data from both the log_likelihood and posterior groups, and plot_loo_pit() needs three groups: log_likelihood, posterior_predictive and posterior.

Add a new variable

post["log_tau"] = np.log(post["tau"]) idata.posterior

<xarray.Dataset> Size: 181kB Dimensions: (chain: 4, draw: 500, school: 8) Coordinates:

Combine chains and draws
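A minimal sketch, storing the result in a new stacked variable using arviz.extract():

stacked = az.extract(idata)
stacked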

<xarray.Dataset> Size: 225kB Dimensions: (sample: 2000, school: 8) Coordinates:

You can also use xarray.Dataset.stack() if you only want to combine the chain and draw dimensions. arviz.extract() is a convenience function aimed at taking care of the most common subsetting operations with MCMC samples. It can combine chains and draws, return a subset of the variables, access any group, and take a random subset of the samples (as shown in the next section).

Get a random subset of the samples
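For example, using the num_samples argument of arviz.extract():

az.extract(idata, num_samples=100)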

<xarray.Dataset> Size: 12kB Dimensions: (sample: 100, school: 8) Coordinates:

Tip

Use a random seed to get the same subset from multiple groups: az.extract(idata, num_samples=100, rng=3) and az.extract(idata, group="log_likelihood", num_samples=100, rng=3) will continue to have matching samples.
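A minimal sketch of that pattern, checking that the chain and draw coordinates of both subsets match:

post_sub = az.extract(idata, num_samples=100, rng=3)
loglik_sub = az.extract(idata, group="log_likelihood", num_samples=100, rng=3)
assert (post_sub.chain.values == loglik_sub.chain.values).all()
assert (post_sub.draw.values == loglik_sub.draw.values).all()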

Obtain a NumPy array for a given parameter

Let’s say we want to get the values for mu as a NumPy array.
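Assuming the stacked object created earlier with az.extract(), one option is the values attribute of the DataArray:

stacked.mu.values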

array([7.87179637, 3.38455431, 9.10047569, ..., 1.76673325, 3.48611194, 3.40446391])

Get the dimension lengths

Let’s check how many groups (schools, in this example) are in our hierarchical model.

len(idata.observed_data.school)
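To see all the dimension lengths of a group at once you can use its sizes attribute (plain xarray functionality):

idata.posterior.sizes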

Get coordinate values

What are the names of the groups in our hierarchical model? You can access them through the school coordinate in this case:

idata.observed_data.school

<xarray.DataArray 'school' (school: 8)> Size: 512B 'Choate' 'Deerfield' 'Phillips Andover' ... "St. Paul's" 'Mt. Hermon' Coordinates:

Get a subset of chains

Let’s keep only chains 0 and 2 here. For the subset to take effect on all relevant InferenceData groups (posterior, sample_stats, log_likelihood, posterior_predictive) we will use arviz.InferenceData.sel(), the method of InferenceData, instead of xarray.Dataset.sel().
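A minimal sketch of the selection:

idata.sel(chain=[0, 2])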

Remove the first n draws (burn-in)

Let’s say we want to remove the first 100 samples from all the chains and from all InferenceData groups with draws.

burnin = idata.sel(draw=slice(100, None))

If you check the burnin object you will see that the groups posterior, posterior_predictive, prior and sample_stats have 400 draws, compared to idata, which has 500. The group observed_data has not been affected because it does not have the draw dimension. Alternatively, you can specify which group or groups you want to change.

idata.sel(draw=slice(100, None), groups="posterior")

Compute posterior mean values along draw and chain dimensions

To compute the mean value of the posterior samples, do the following:
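post.mean()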

<xarray.Dataset> Size: 32B Dimensions: () Data variables: mu float64 8B 4.486 theta float64 8B 4.912 tau float64 8B 4.124 log_tau float64 8B 1.173

This computes the mean along all dimensions. This is probably what you want for mu and tau, which have two dimensions (chain and draw), but maybe not what you expected for theta, which has one more dimension: school.

You can specify along which dimension you want to compute the mean (or other functions).

post.mean(dim=["chain", "draw"])

<xarray.Dataset> Size: 600B Dimensions: (school: 8) Coordinates:

Compute and store posterior pushforward quantities

We use “posterior pushforward quantities” to refer to quantities that are not variables in the posterior but deterministic computations using posterior variables.

You can use xarray for these pushforward operations and store them as new variables in the posterior group. You’ll then be able to plot them with ArviZ functions, calculate stats and diagnostics on them (like mcse()) or save and share the InferenceData object with the pushforward quantities included.

Compute the rolling mean of \(\log(\tau)\) with xarray.DataArray.rolling(), storing the result in the posterior:

post["mlogtau"] = post["log_tau"].rolling({"draw": 50}).mean()

Using xarray for pushforward calculations has all the advantages of working with xarray. It also inherits the disadvantages, but we believe those are outweighed by the advantages, and we have already shown how to extract the data as NumPy arrays. Working with InferenceData means working mainly with xarray objects, and that is what is shown in this guide.

Some examples of these advantages are specifying operations with named dimensions instead of positional ones (as seen in some previous sections), automatic alignment and broadcasting of arrays (as we’ll see now), or integration with Dask (as shown in the Dask for ArviZ guide).

In this cell you will compute pairwise differences between schools in their mean effects (the variable theta). To do so, subtract from theta a copy of itself with the school dimension renamed. Xarray then aligns and broadcasts the two variables because they have different dimensions, and the result is a 4d variable with all the pairwise differences.

Finally, store the result in the theta_school_diff variable:

post["theta_school_diff"] = post.theta - post.theta.rename(school="school_bis")

Note

This same operation using NumPy would require manual alignment of the two arrays to make sure they broadcast correctly. The code would be something like:

theta_school_diff = theta[:, :, :, None] - theta[:, :, None, :]

The theta_school_diff variable in the posterior has kept the named dimensions and coordinates:

<xarray.Dataset> Size: 1MB Dimensions: (chain: 4, draw: 500, school: 8, school_bis: 8) Coordinates:

Advanced subsetting

To select the value corresponding to the difference between the Choate and Deerfield schools do:

post["theta_school_diff"].sel(school="Choate", school_bis="Deerfield")

<xarray.DataArray 'theta_school_diff' (chain: 4, draw: 500)> Size: 16kB 2.415 2.156 -0.04943 1.228 3.384 9.662 ... -1.656 -0.4021 1.524 -3.372 -6.305 Coordinates:

For more advanced subsetting (the equivalent to what is sometimes called “fancy indexing” in NumPy), you need to provide the indices as DataArray objects:
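For example, to select three specific pairs of schools at once (the shared dimension name pairwise_school_diff is what pairs the two lists up, matching the output below):

post["theta_school_diff"].sel(
    school=xr.DataArray(["Choate", "Hotchkiss", "Mt. Hermon"], dims=["pairwise_school_diff"]),
    school_bis=xr.DataArray(["Deerfield", "Choate", "Lawrenceville"], dims=["pairwise_school_diff"]),
)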

<xarray.DataArray 'theta_school_diff' (chain: 4, draw: 500, pairwise_school_diff: 3)> Size: 48kB 2.415 -6.741 -1.84 2.156 -3.474 3.784 ... -2.619 6.923 -6.305 1.667 -6.641 Coordinates:

Using lists or NumPy arrays instead of DataArrays does column/row based indexing. As you can see, the result has 9 values of theta_school_diff instead of the 3 pairs of differences we selected in the previous cell:

post["theta_school_diff"].sel( school=["Choate", "Hotchkiss", "Mt. Hermon"], school_bis=["Deerfield", "Choate", "Lawrenceville"], )

<xarray.DataArray 'theta_school_diff' (chain: 4, draw: 500, school: 3, school_bis: 3)> Size: 144kB 2.415 0.0 -4.581 -4.326 -6.741 -11.32 ... 1.667 -6.077 -5.203 1.102 -6.641 Coordinates:


Add new chains using concat

After checking the mcse() and realizing you need more samples, you rerun the model with two chains and obtain an idata_rerun object. Here we simulate that rerun by relabeling a copy of two of the existing chains:

idata_rerun = (
    idata.sel(chain=[0, 1])
    .copy()
    .assign_coords(coords={"chain": [4, 5]}, groups="posterior_groups")
)

You can combine the two into a single InferenceData object using arviz.concat():

idata_complete = az.concat(idata, idata_rerun, dim="chain")
idata_complete.posterior.sizes["chain"]
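The combined posterior now has 6 chains: the original 4 plus the 2 relabeled chains from idata_rerun.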

Add groups to InferenceData objects

You can also add new groups to InferenceData objects with extend() (if the new groups are already in another InferenceData object) or with add_groups() (if the new groups are dictionaries or xarray.Dataset objects).

rng = np.random.default_rng(3)
idata.add_groups(
    {"predictions": {"obs": rng.normal(size=(4, 500, 2))}},
    dims={"obs": ["new_school"]},
    coords={"new_school": ["Essex College", "Moordale"]},
)
idata

Add transformations to multiple groups

You can also apply transformations to multiple InferenceData groups using arviz.InferenceData.map(). It takes a function as input, applies it groupwise to the selected InferenceData groups, and overwrites each group with the result.

selected_groups = ["posterior", "prior"]

def calc_mean(dataset, *args, **kwargs):
    # Reduce along the chain dimension; any extra arguments are forwarded to mean()
    result = dataset.mean(*args, dim="chain", **kwargs)
    return result

means = idata.map(calc_mean, groups=selected_groups, inplace=False)
means

You can also pass a lambda function to map().

idata_shifted_obs = idata.map(lambda x: x + 3, groups="observed_vars")
idata_shifted_obs

You can also add extra coordinates using map().

# uppercase copies of the school names, used as an extra coordinate along the school dimension
_upper = np.array([x.upper() for x in idata.observed_data.school.values])
idata_with_upper = idata.map(
    lambda ds, **kwargs: ds.assign_coords(**kwargs),
    groups="observed_vars",
    upper=("school", _upper),
)
idata_with_upper