
spaces

Routines to work with spaces

A space is defined by coordinate axes.

A voxel space can be expressed by a shape implying an array, where the axes are the axes of the array.

A mapped voxel space (mapped_voxels) is either:

* an object with attributes shape (the input voxel shape) and affine (the mapping of input voxels to output space), or
* a length 2 sequence (shape, affine) giving the same information.
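For example, a mapped voxel space can be given directly as a (shape, affine) pair, or as any object carrying those two attributes, such as an image. A minimal sketch (the shape and 2 mm voxel sizes below are arbitrary):

    import numpy as np
    import nibabel as nib

    # A mapped voxel space as a (shape, affine) pair: a voxel grid plus the
    # (4, 4) affine taking voxel indices to output (e.g. scanner mm) coordinates.
    shape = (64, 64, 30)
    affine = np.diag([2.0, 2.0, 2.0, 1.0])  # arbitrary 2 mm isotropic voxels
    mapped_voxels = (shape, affine)

    # Equivalently, any object with shape and affine attributes, e.g. an image.
    img = nib.Nifti1Image(np.zeros(shape), affine)
    mapped_voxels = img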

slice2volume(index, axis[, shape])
    Affine expressing selection of a single slice from 3D volume

vox2out_vox(mapped_voxels[, voxel_sizes])
    output-aligned shape, affine for input implied by mapped_voxels

slice2volume

nibabel.spaces.slice2volume(index, axis, shape=None)

Affine expressing selection of a single slice from 3D volume

Imagine we have taken a slice from an image data array, s = data[:, :, index]. This function returns the affine to map the array coordinates of s to the array coordinates of data.

This can be useful for resampling a single slice from a volume. For example, to resample slice k in the space of img1 from the matching spatial voxel values in img2, you might do something like:

    slice_shape = img1.shape[:2]
    slice_aff = slice2volume(k, 2)
    whole_aff = np.linalg.inv(img2.affine).dot(img1.affine.dot(slice_aff))

and then use whole_aff in scipy.ndimage.affine_transform:

    rzs, trans = to_matvec(whole_aff)
    data = img2.get_fdata()
    new_slice = scipy.ndimage.affine_transform(data, rzs, trans, slice_shape)
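As a self-contained sketch of the steps above (the file names img1.nii and img2.nii and the index k = 10 are placeholders; to_matvec comes from nibabel.affines):

    import numpy as np
    import scipy.ndimage
    import nibabel as nib
    from nibabel.affines import to_matvec
    from nibabel.spaces import slice2volume

    # Hypothetical inputs: two spatially overlapping 3D images.
    img1 = nib.load('img1.nii')
    img2 = nib.load('img2.nii')
    k = 10  # slice of img1 to resample, along its third axis

    slice_shape = img1.shape[:2]
    slice_aff = slice2volume(k, 2)  # slice (i, j) -> img1 voxel (i, j, k)
    # Compose: slice voxels -> img1 voxels -> output space -> img2 voxels.
    whole_aff = np.linalg.inv(img2.affine).dot(img1.affine.dot(slice_aff))

    rzs, trans = to_matvec(whole_aff)
    data = img2.get_fdata()
    # Pull img2 values onto the grid of slice k of img1.
    new_slice = scipy.ndimage.affine_transform(data, rzs, trans, slice_shape)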

Parameters:

index : int

index of selected slice

axis : {0, 1, 2}

axis to which index applies

Returns:

slice_aff : shape (4, 3) affine

Affine relating input coordinates in a slice to output coordinates in the embedded volume
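To make the shape concrete: for the s = data[:, :, k] case above, the affine should map slice coordinates (i, j, 1) to volume coordinates (i, j, k, 1). An illustrative check for k = 3 (the expected matrix follows from that mapping):

    import numpy as np
    from nibabel.spaces import slice2volume

    # Selecting index 3 along the third axis: slice (i, j) maps to volume (i, j, 3).
    expected = np.array([[1, 0, 0],
                         [0, 1, 0],
                         [0, 0, 3],
                         [0, 0, 1]])
    assert np.allclose(slice2volume(3, 2), expected)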

vox2out_vox

nibabel.spaces.vox2out_vox(mapped_voxels, voxel_sizes=None)

output-aligned shape, affine for input implied by mapped_voxels

The input (voxel) space, and the affine mapping to output space, are given in mapped_voxels.

The output space is implied by the affine; we don't need to know what that space is, we just return something with the same (implied) output space.

Our job is to work out another voxel space where the voxel array axes and the output axes are aligned (top left 3 x 3 of affine is diagonal with all positive entries) and which contains all the voxels of the implied input image at their correct output space positions, once resampled into the output voxel space.
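For example, an input grid whose affine flips an axis still gets an axis-aligned output affine; a small illustrative check (the shape, affine and voxel sizes here are arbitrary):

    import numpy as np
    from nibabel.spaces import vox2out_vox

    # Input voxel grid whose affine flips the first axis, with 3 mm voxels.
    in_shape = (4, 5, 6)
    in_affine = np.diag([-3.0, 3.0, 3.0, 1.0])

    out_shape, out_affine = vox2out_vox((in_shape, in_affine), voxel_sizes=(3, 3, 3))
    # Output voxel axes are aligned with the output space axes: the top-left
    # 3 x 3 of the affine is diagonal with positive entries (the requested
    # 3 mm voxel sizes), and out_shape encloses all the input voxels.
    assert np.allclose(out_affine[:3, :3], np.diag([3.0, 3.0, 3.0]))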

Parameters:

mapped_voxels : object or length 2 sequence

If object, has attributes shape giving input voxel shape, and affine giving mapping of input voxels to output space. If length 2 sequence, elements are (shape, affine) with same meaning as above. The affine is a (4, 4) array-like.

voxel_sizes : None or sequence

Gives the diagonal entries of output_affine (except the trailing 1 for the homogeneous coordinates) (output_affine == np.diag(voxel_sizes + [1])). If None, return identity output_affine.

Returns:

output_shape : sequence

Shape of output image that has voxel axes aligned to original image output space axes, and encloses all the voxel data from the original image implied by input shape.

output_affine : (4, 4) array

Affine of output image that has voxel axes aligned to the output axes implied by input affine. Top-left 3 x 3 part of affine is diagonal with all positive entries. The entries come from voxel_sizes if specified, or are all 1. If the image is < 3D, then the missing dimensions will have a 1 in the matching diagonal.
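As a usage sketch, the output grid returned by vox2out_vox can be used to resample an image onto axis-aligned 1 mm voxels with scipy.ndimage.affine_transform. The file name some_image.nii is a placeholder, and this only illustrates the kind of pipeline found in nibabel.processing.resample_to_output:

    import numpy as np
    import scipy.ndimage
    import nibabel as nib
    from nibabel.affines import to_matvec
    from nibabel.spaces import vox2out_vox

    img = nib.load('some_image.nii')  # placeholder file name, assumed 3D
    out_shape, out_aff = vox2out_vox((img.shape, img.affine), voxel_sizes=(1, 1, 1))

    # Map output voxel coordinates back to input voxel coordinates:
    # output voxels -> output space (out_aff) -> input voxels (inv(img.affine)).
    out2in = np.linalg.inv(img.affine).dot(out_aff)
    rzs, trans = to_matvec(out2in)
    resampled = scipy.ndimage.affine_transform(
        img.get_fdata(), rzs, trans, output_shape=tuple(out_shape))

    # Wrap as a new image whose affine is the axis-aligned output affine.
    out_img = nib.Nifti1Image(resampled, out_aff)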