Frequently Asked Questions#

Your documentation keeps mentioning pandas. What is pandas?#

pandas is a very popular data analysis package in Python with wide usage in many fields. Our API is heavily inspired by pandas — this is why there are so many references to pandas.

Do I need to know pandas to use xarray?#

No! Our API is heavily inspired by pandas so while knowing pandas will let you become productive more quickly, knowledge of pandas is not necessary to use xarray.

Should I use xarray instead of pandas?#

It’s not an either/or choice! xarray provides robust support for converting back and forth between the tabular data-structures of pandas and its own multi-dimensional data-structures.
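As a minimal sketch of this round-tripping (the array contents and names here are made up for illustration), a labeled xarray array can be flattened into a pandas object with to_dataframe() and rebuilt with to_xarray():

```python
import numpy as np
import pandas as pd
import xarray as xr

# A small 2-D labeled array
da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    coords={"x": ["a", "b"], "y": [10, 20, 30]},
    dims=("x", "y"),
    name="values",
)

# xarray -> pandas: each dimension becomes a level of the row index
df = da.to_dataframe()

# pandas -> xarray: rebuild the multi-dimensional array from the index
da2 = df["values"].to_xarray()
```

The pandas side is "tidy" (one row per element), while the xarray side keeps the multi-dimensional layout; converting back and forth is lossless as long as the index fully describes the dimensions.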

That said, you should only bother with xarray if some aspect of data is fundamentally multi-dimensional. If your data is unstructured or one-dimensional, pandas is usually the right choice: it has better performance for common operations such as groupby and you’ll find far more usage examples online.

Why is pandas not enough?#

pandas is a fantastic library for analysis of low-dimensional labelled data - if it can be sensibly described as “rows and columns”, pandas is probably the right choice. However, sometimes we want to use higher dimensional arrays (ndim > 2), or arrays for which the order of dimensions (e.g., columns vs rows) shouldn’t really matter. For example, the images of a movie can be natively represented as an array with four dimensions: time, row, column and color.

pandas has historically supported N-dimensional panels, but deprecated them in version 0.20 in favor of xarray data structures. There are now built-in methods on both sides to convert between pandas and xarray, allowing for more focused development effort. Xarray objects have a much richer model of dimensionality; if you were using Panels, you can read about switching from Panels to xarray here.

pandas gets a lot of things right, but many science, engineering and complex analytics use cases need fully multi-dimensional data structures.

How do xarray data structures differ from those found in pandas?#

The main distinguishing feature of xarray’s DataArray over labeled arrays in pandas is that dimensions can have names (e.g., “time”, “latitude”, “longitude”). Names are much easier to keep track of than axis numbers, and xarray uses dimension names for indexing, aggregation and broadcasting. Not only can you write x.sel(time='2000-01-01') and x.mean(dim='time'), but operations like x - x.mean(dim='time') always work, no matter the order of the “time” dimension. You never need to reshape arrays (e.g., with np.newaxis) to align them for arithmetic operations in xarray.
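A short sketch of name-based broadcasting (the dimension names "time" and "space" are illustrative): arithmetic between two arrays aligns by dimension name, even when the axes are in a different order:

```python
import numpy as np
import xarray as xr

# Two arrays whose shared dimensions appear in a different order
a = xr.DataArray(np.ones((2, 3)), dims=("time", "space"))
b = xr.DataArray(np.arange(6).reshape(3, 2), dims=("space", "time"))

# Arithmetic aligns by dimension *name*, not axis position,
# so no transposing or np.newaxis is needed
c = a + b

# Subtracting an aggregate over a named dimension also broadcasts correctly
anom = c - c.mean(dim="time")
```

With plain NumPy arrays, `a + b` here would either fail or silently broadcast the wrong axes together; with named dimensions, the intent is unambiguous.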

Why don’t aggregations return Python scalars?#

Xarray tries hard to be self-consistent: operations on a DataArray (resp. Dataset) return another DataArray (resp. Dataset) object. In particular, operations returning scalar values (e.g. indexing or aggregations like mean or sum applied to all axes) will also return xarray objects.

Unfortunately, this means we sometimes have to explicitly cast our results from xarray when using them in other libraries. As an illustration, the following code fragment

In [1]: arr = xr.DataArray([1, 2, 3])

In [2]: pd.Series({"x": arr[0], "mean": arr.mean(), "std": arr.std()})
Out[2]:
x       <xarray.DataArray ()> Size: 8B array(1)
mean    <xarray.DataArray ()> Size: 8B array(2.)
std     <xarray.DataArray ()> Size: 8B array(0.81649658)
dtype: object

does not yield the pandas DataFrame we expected. We need to specify the type conversion ourselves:

In [3]: pd.Series({"x": arr[0], "mean": arr.mean(), "std": arr.std()}, dtype=float)
Out[3]:
x       1.000000
mean    2.000000
std     0.816497
dtype: float64

Alternatively, we could use the item method or the float constructor to convert values one at a time:

In [4]: pd.Series({"x": arr[0].item(), "mean": float(arr.mean())})
Out[4]:
x       1.0
mean    2.0
dtype: float64

What is your approach to metadata?#

We are firm believers in the power of labeled data! In addition to dimensions and coordinates, xarray supports arbitrary metadata in the form of global (Dataset) and variable specific (DataArray) attributes (attrs).

Automatic interpretation of labels is powerful but also reduces flexibility. With xarray, we draw a firm line between labels that the library understands (dims and coords) and labels for users and user code (attrs). For example, we do not automatically interpret and enforce units or CF conventions. (An exception is serialization to and from netCDF files.)

An implication of this choice is that we do not propagate attrs through most operations unless explicitly flagged (some methods have a keep_attrs option, and there is a global flag, accessible with xarray.set_options(), for setting this to be always True or False). Similarly, xarray does not check for conflicts between attrs when combining arrays and datasets, unless explicitly requested with the option compat='identical'. The guiding principle is that metadata should not be allowed to get in the way.
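A minimal sketch of attrs propagation (the "units" attribute is illustrative): aggregations drop attrs by default, and keep_attrs, per call or via the global option, preserves them:

```python
import xarray as xr

da = xr.DataArray([1.0, 2.0, 3.0], dims="x", attrs={"units": "m"})

# By default, aggregations drop the attrs
dropped = da.mean()

# Keep attrs explicitly for a single call ...
kept = da.mean(keep_attrs=True)

# ... or globally, for everything inside the context
with xr.set_options(keep_attrs=True):
    also_kept = da.mean()
```

Note that propagating `units` through `mean` happens to be semantically safe, but xarray does not check this: keep_attrs copies metadata verbatim regardless of whether the operation preserves its meaning.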

In general, xarray relies on the capabilities of the backends for reading and writing attributes, which has some implications for roundtripping. One example of such an inconsistency is that, with the netcdf4 backends, a size-1 list attribute roundtrips as a single element rather than a list.

What other projects leverage xarray?#

See section Xarray related projects.

How do I open format X file as an xarray dataset?#

To open a file of format X in xarray, you need to know the format of the data you want to read. If the format is supported, you can use the appropriate function provided by xarray. The sections below describe the functions used for different file formats in xarray, as well as other packages that can be used:

If you are unable to open a file in xarray:

Xarray provides a default engine to read files, which is usually determined by the file extension or type. If you don’t specify the engine, xarray will try to guess it based on the file extension or type, and may fall back to a different engine if it cannot determine the correct one.

Therefore, it’s good practice to always specify the engine explicitly to ensure that the correct backend is used, especially when working with complex data formats or non-standard file extensions.

xarray.backends.list_engines() is a function in xarray that returns a dictionary of available engines and their BackendEntrypoint objects.
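To see which engines are actually available in your environment, you can inspect this dictionary directly (the set of engines listed depends on which optional backend packages you have installed):

```python
import xarray as xr

# Mapping of engine name -> BackendEntrypoint object for all
# backends that xarray can find in the current environment
engines = xr.backends.list_engines()

print(sorted(engines))
```

If an engine you expect (e.g. "netcdf4" or "zarr") is missing from this listing, the corresponding backend package is not installed.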

You can use the engine argument to specify the backend when calling open_dataset() or other reading functions in xarray, as shown below:

NetCDF#

If you are reading a netCDF file with a “.nc” extension, the default engine is netcdf4. However, if you have files with non-standard extensions or an ambiguous file format, specify the engine explicitly to ensure that the correct backend is used.

Use open_dataset() to open a NetCDF file and return an xarray Dataset object.

import xarray as xr

# use xarray to open the file and return an xarray.Dataset object using the netcdf4 engine
ds = xr.open_dataset("/path/to/my/file.nc", engine="netcdf4")

# print the Dataset object
print(ds)

# alternatively, open the file and return an xarray.Dataset object using the scipy engine
ds = xr.open_dataset("/path/to/my/file.nc", engine="scipy")

We recommend installing scipy via conda:

conda install -c conda-forge scipy

HDF5#

Use open_dataset() to open an HDF5 file and return an xarray Dataset object.

You should specify the engine keyword argument when reading HDF5 files with xarray, as there are multiple backends that can be used to read HDF5 files, and xarray may not always be able to automatically detect the correct one based on the file extension or file format.

To read HDF5 files with xarray, you can use the open_dataset() function with the h5netcdf engine, as follows:

import xarray as xr

# open the HDF5 file as an xarray Dataset
ds = xr.open_dataset("path/to/hdf5/file.hdf5", engine="h5netcdf")

# print the Dataset object
print(ds)

We recommend installing the h5netcdf library via conda:

conda install -c conda-forge h5netcdf

If you want to use the netCDF4 backend to read a file with a “.h5” extension (which is typically associated with HDF5 file format), you can specify the engine argument as follows:

ds = xr.open_dataset("path/to/file.h5", engine="netcdf4")

GRIB#

You should specify the engine keyword argument when reading GRIB files with xarray, as there are multiple backends that can be used to read GRIB files, and xarray may not always be able to automatically detect the correct one based on the file extension or file format.

Use the open_dataset() function with the cfgrib engine (provided by the cfgrib package) to open a GRIB file as an xarray Dataset.

import xarray as xr

# use open_dataset() with the cfgrib engine to open the GRIB file
# and return an xarray.Dataset object
ds = xr.open_dataset("path/to/your/file.grib", engine="cfgrib")

# print the Dataset object
print(ds)

We recommend installing cfgrib via conda:

conda install -c conda-forge cfgrib

CSV#

Xarray does not have its own CSV backend; CSV files are read with the pandas library and then converted. In general, you don’t need to choose a parser engine when reading CSV files, as pandas’ default engine is sufficient for most use cases. If you are working with very large CSV files, or need parsing behaviour the default engine does not support, you can pass a different engine argument to pandas when reading the file.

To read CSV files into xarray, load them with pandas and convert the resulting DataFrame, as follows:

import xarray as xr
import pandas as pd

# load the CSV file into a pandas DataFrame using the "c" parser engine
df = pd.read_csv("your_file.csv", engine="c")

# convert the pandas DataFrame to an xarray.Dataset
ds = xr.Dataset.from_dataframe(df)

# print the resulting Dataset object
print(ds)

Zarr#

When opening a Zarr dataset with xarray, the engine is automatically detected based on the file extension or the type of input provided. If the dataset is stored in a directory with a “.zarr” extension, xarray will automatically use the “zarr” engine.

To read zarr files with xarray, use the open_dataset() function and specify the path to the zarr file as follows:

import xarray as xr

# use xarray to open the file and return an xarray.Dataset object using the zarr engine
ds = xr.open_dataset("path/to/your/file.zarr", engine="zarr")

# print the Dataset object
print(ds)

We recommend installing zarr via conda:

conda install -c conda-forge zarr

There may be situations where you need to specify the engine manually using the engine keyword argument. For example, if you have a Zarr dataset stored in a file with a different extension (e.g., “.npy”), you will need to specify the engine as “zarr” explicitly when opening the dataset.

Some packages may have additional functionality beyond what is shown here. You can refer to the documentation for each package for more information.

How does xarray handle missing values?#

xarray can handle missing values using np.nan.
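A brief sketch of the common missing-value operations, following the pandas conventions that xarray adopts:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1.0, np.nan, 3.0], dims="x")

# boolean mask of missing values
mask = da.isnull()

# replace NaN with a fill value
filled = da.fillna(0.0)

# drop entries that are NaN along the "x" dimension
dropped = da.dropna(dim="x")
```

As in pandas, skipping NaN is the default for aggregations (e.g. `da.mean()` here averages only the two non-missing values), and `skipna=False` disables that behaviour.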

How should I cite xarray?#

If you are using xarray and would like to cite it in an academic publication, we would certainly appreciate it. We recommend two citations.

  1. At a minimum, we recommend citing the xarray overview journal article, published in the Journal of Open Research Software.
  2. You may also want to cite a specific version of the xarray package. We provide a Zenodo citation and DOI for this purpose:

     https://doi.org/10.5281/zenodo.598201

    An example BibTeX entry:

    @misc{xarray_v0_8_0,
      author = {Stephan Hoyer and Clark Fitzgerald and Joe Hamman and others},
      title  = {xarray: v0.8.0},
      month  = aug,
      year   = 2016,
      doi    = {10.5281/zenodo.59499},
      url    = {https://doi.org/10.5281/zenodo.59499}
    }

How stable is Xarray’s API?#

Xarray tries very hard to maintain backwards compatibility in our API reference between released versions. Whilst we do occasionally make breaking changes in order to improve the library, we signpost changes with DeprecationWarnings for many releases in advance. (An exception is bugs, whose behaviour we try to fix as soon as we notice them.) Our test-driven development practices help to ensure any accidental regressions are caught. This philosophy applies to everything in the public API.

What parts of xarray are considered public API?#

As a rule, only functions/methods documented in our API reference are considered part of xarray’s public API. Everything else (in particular, everything in xarray.core that is not also exposed in the top level xarray namespace) is considered a private implementation detail that may change at any time.

Objects that exist to facilitate xarray’s fluent interface on DataArray and Dataset objects are a special case. For convenience, we document them in the API docs, but only their methods and the DataArray/Dataset methods/properties to construct them (e.g., .plot(), .groupby(), .str) are considered public API. Constructors and other details of the internal classes used to implement them (i.e., xarray.plot.plotting._PlotMethods, xarray.core.groupby.DataArrayGroupBy, xarray.core.accessor_str.StringAccessor) are not.