Sparse data structures — pandas 0.24.0rc1 documentation

Note

The SparsePanel class has been removed in 0.19.0

We have implemented “sparse” versions of Series and DataFrame. These are not sparse in the typical “mostly 0” sense. Rather, you can view these objects as being “compressed”: any data matching a specific value (NaN / missing value by default, though any value can be chosen) is omitted. A special SparseIndex object tracks where data has been “sparsified”. This will make much more sense with an example. All of the standard pandas data structures have a to_sparse method:

In [1]: ts = pd.Series(np.random.randn(10))

In [2]: ts[2:-2] = np.nan

In [3]: sts = ts.to_sparse()

In [4]: sts
Out[4]:
0    0.469112
1   -0.282863
2         NaN
3         NaN
4         NaN
5         NaN
6         NaN
7         NaN
8   -0.861849
9   -2.104569
dtype: Sparse[float64, nan]
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)

The to_sparse method takes a kind argument (for the sparse index, see below) and a fill_value. So if we had a mostly zero Series, we could convert it to sparse with fill_value=0:

In [5]: ts.fillna(0).to_sparse(fill_value=0)
Out[5]:
0    0.469112
1   -0.282863
2    0.000000
3    0.000000
4    0.000000
5    0.000000
6    0.000000
7    0.000000
8   -0.861849
9   -2.104569
dtype: Sparse[float64, 0]
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)

The sparse objects exist for memory efficiency reasons. Suppose you had a large, mostly NA DataFrame:

In [6]: df = pd.DataFrame(np.random.randn(10000, 4))

In [7]: df.iloc[:9998] = np.nan

In [8]: sdf = df.to_sparse()

In [9]: sdf
Out[9]:
             0         1         2         3
0          NaN       NaN       NaN       NaN
1          NaN       NaN       NaN       NaN
2          NaN       NaN       NaN       NaN
3          NaN       NaN       NaN       NaN
4          NaN       NaN       NaN       NaN
5          NaN       NaN       NaN       NaN
6          NaN       NaN       NaN       NaN
...        ...       ...       ...       ...
9993       NaN       NaN       NaN       NaN
9994       NaN       NaN       NaN       NaN
9995       NaN       NaN       NaN       NaN
9996       NaN       NaN       NaN       NaN
9997       NaN       NaN       NaN       NaN
9998  0.509184 -0.774928 -1.369894 -0.382141
9999  0.280249 -1.648493  1.490865 -0.890819

[10000 rows x 4 columns]

In [10]: sdf.density
Out[10]: 0.0002

As you can see, the density (% of values that have not been “compressed”) is extremely low. This sparse object takes up much less memory on disk (pickled) and in the Python interpreter. Functionally, sparse objects behave nearly identically to their dense counterparts.
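To get a rough sense of the savings (a sketch, not part of the original example; the exact numbers depend on the data), you can compare the pickled sizes of the dense and sparse frames:

import pickle

len(pickle.dumps(df))    # size of the dense frame in bytes
len(pickle.dumps(sdf))   # typically far smaller for the sparse frame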

Any sparse object can be converted back to the standard dense form by calling to_dense:

In [11]: sts.to_dense()
Out[11]:
0    0.469112
1   -0.282863
2         NaN
3         NaN
4         NaN
5         NaN
6         NaN
7         NaN
8   -0.861849
9   -2.104569
dtype: float64

Sparse Accessor

New in version 0.24.0.

Pandas provides a .sparse accessor, similar to .str for string data, .cat for categorical data, and .dt for datetime-like data. This namespace provides attributes and methods that are specific to sparse data.

In [12]: s = pd.Series([0, 0, 1, 2], dtype="Sparse[int]")

In [13]: s.sparse.density
Out[13]: 0.5

In [14]: s.sparse.fill_value
Out[14]: 0
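The accessor also exposes the underlying sparse values and their count; continuing with the Series above (a brief sketch):

s.sparse.sp_values   # the stored values, i.e. those not equal to fill_value
s.sparse.npoints     # how many values are actually stored (here, 2)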

This accessor is available only on data with SparseDtype, and on the Series class itself for creating a Series with sparse data from a scipy COO matrix with from_coo().
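For example (a sketch; the small COO matrix here is made up purely for illustration):

from scipy import sparse

A = sparse.coo_matrix(([1.0, 2.0], ([0, 1], [1, 0])), shape=(2, 2))
pd.Series.sparse.from_coo(A)   # Series with a (row, column) MultiIndex and sparse values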

SparseArray

SparseArray is the base layer for all of the sparse indexed data structures. It is a 1-dimensional ndarray-like object storing only values distinct from the fill_value:

In [15]: arr = np.random.randn(10)

In [16]: arr[2:5] = np.nan

In [17]: arr[7:8] = np.nan

In [18]: sparr = pd.SparseArray(arr)

In [19]: sparr
Out[19]:
[-1.95566352972, -1.6588664276, nan, nan, nan, 1.15893288864, 0.145297113733, nan, 0.606027190513, 1.33421134013]
Fill: nan
IntIndex
Indices: array([0, 1, 5, 6, 8, 9], dtype=int32)

Like the indexed objects (SparseSeries, SparseDataFrame), a SparseArray can be converted back to a regular ndarray by calling to_dense:

In [20]: sparr.to_dense() Out[20]: array([-1.9557, -1.6589, nan, nan, nan, 1.1589, 0.1453, nan, 0.606 , 1.3342])

SparseIndex objects

Two kinds of SparseIndex are implemented, block and integer. We recommend using block as it’s more memory efficient. The integer format keeps an array of all of the locations where the data are not equal to the fill value. The block format tracks only the locations and sizes of blocks of data.
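As a rough illustration (a sketch; the kind argument below is accepted by both to_sparse and the SparseArray constructor), the two index types store the same data differently:

ts2 = pd.Series([1.0, np.nan, np.nan, 2.0, 3.0])

ts2.to_sparse(kind='block')     # BlockIndex: block locations [0, 3], block lengths [1, 2]
ts2.to_sparse(kind='integer')   # IntIndex: indices [0, 3, 4]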

Sparse Dtypes

Sparse data should have the same dtype as its dense representation. Currently, float64, int64 and bool dtypes are supported. Depending on the original dtype, the default fill_value changes:

In [21]: s = pd.Series([1, np.nan, np.nan])

In [22]: s
Out[22]:
0    1.0
1    NaN
2    NaN
dtype: float64

In [23]: s.to_sparse()
Out[23]:
0    1.0
1    NaN
2    NaN
dtype: Sparse[float64, nan]
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)

In [24]: s = pd.Series([1, 0, 0])

In [25]: s
Out[25]:
0    1
1    0
2    0
dtype: int64

In [26]: s.to_sparse()
Out[26]:
0    1
1    0
2    0
dtype: Sparse[int64, 0]
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)

In [27]: s = pd.Series([True, False, True])

In [28]: s
Out[28]:
0     True
1    False
2     True
dtype: bool

In [29]: s.to_sparse()
Out[29]:
0     True
1    False
2     True
dtype: Sparse[bool, False]
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 1], dtype=int32)
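The dtype and fill_value pair can also be spelled out explicitly with a SparseDtype (a sketch; the constructor and attributes below are part of the 0.24 sparse API, but the exact reprs may differ):

dtype = pd.SparseDtype(np.float64, fill_value=0.0)
dtype.subtype       # the underlying dense dtype, float64
dtype.fill_value    # 0.0

pd.Series([1.0, 0.0, 0.0], dtype=dtype)   # sparse Series using 0.0 as the fill value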

You can change the dtype using .astype(); the result is also sparse. Note that .astype() is also applied to the fill_value, to keep the dense representation consistent.

In [30]: s = pd.Series([1, 0, 0, 0, 0])

In [31]: s
Out[31]:
0    1
1    0
2    0
3    0
4    0
dtype: int64

In [32]: ss = s.to_sparse()

In [33]: ss
Out[33]:
0    1
1    0
2    0
3    0
4    0
dtype: Sparse[int64, 0]
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)

In [34]: ss.astype(np.float64)
Out[34]:
0    1.0
1    0.0
2    0.0
3    0.0
4    0.0
dtype: Sparse[float64, 0.0]
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)

Conversion raises if any value cannot be coerced to the specified dtype:

In [1]: ss = pd.Series([1, np.nan, np.nan]).to_sparse()
Out[1]:
0    1.0
1    NaN
2    NaN
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)

In [2]: ss.astype(np.int64)
Out[2]:
ValueError: unable to coerce current fill_value nan to int64 dtype

Sparse Calculation

You can apply NumPy ufuncs to SparseArray and get a SparseArray as a result.

In [35]: arr = pd.SparseArray([1., np.nan, np.nan, -2., np.nan])

In [36]: np.abs(arr)
Out[36]:
[1.0, nan, nan, 2.0, nan]
Fill: nan
IntIndex
Indices: array([0, 3], dtype=int32)

The ufunc is also applied to fill_value. This is needed to get the correct dense result.

In [37]: arr = pd.SparseArray([1., -1, -1, -2., -1], fill_value=-1)

In [38]: np.abs(arr)
Out[38]:
[1.0, 1, 1, 2.0, 1]
Fill: 1
IntIndex
Indices: array([0, 3], dtype=int32)

In [39]: np.abs(arr).to_dense()
Out[39]: array([ 1.,  1.,  1.,  2.,  1.])

Interaction with scipy.sparse

SparseDataFrame

New in version 0.20.0.

Pandas supports creating sparse dataframes directly from scipy.sparse matrices.

In [40]: from scipy.sparse import csr_matrix

In [41]: arr = np.random.random(size=(1000, 5))

In [42]: arr[arr < .9] = 0

In [43]: sp_arr = csr_matrix(arr)

In [44]: sp_arr
Out[44]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
	with 517 stored elements in Compressed Sparse Row format>

In [45]: sdf = pd.SparseDataFrame(sp_arr)

In [46]: sdf
Out[46]:
            0   1        2         3   4
0    0.956380 NaN      NaN       NaN NaN
1         NaN NaN      NaN       NaN NaN
2         NaN NaN      NaN       NaN NaN
3         NaN NaN      NaN       NaN NaN
4    0.999552 NaN      NaN  0.956153 NaN
5         NaN NaN      NaN       NaN NaN
6    0.913638 NaN      NaN       NaN NaN
..        ...  ..      ...       ...  ..
993       NaN NaN      NaN       NaN NaN
994       NaN NaN      NaN       NaN NaN
995       NaN NaN      NaN  0.998834 NaN
996       NaN NaN      NaN       NaN NaN
997       NaN NaN      NaN       NaN NaN
998       NaN NaN  0.95659       NaN NaN
999       NaN NaN      NaN       NaN NaN

[1000 rows x 5 columns]

All sparse formats are supported, but matrices that are not in COOrdinate format will be converted, copying data as needed. To convert a SparseDataFrame back to a sparse SciPy matrix in COO format, you can use the SparseDataFrame.to_coo() method:

In [47]: sdf.to_coo()
Out[47]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
	with 517 stored elements in COOrdinate format>
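Any other scipy.sparse format works the same way on input; for instance (a sketch reusing the arr array from above):

from scipy.sparse import csc_matrix

sdf2 = pd.SparseDataFrame(csc_matrix(arr))   # non-COO input is converted, copying as needed
np.allclose(sdf2.to_coo().toarray(), arr)    # True: the round trip preserves the values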

SparseSeries

A SparseSeries.to_coo() method is implemented for transforming a SparseSeries indexed by a MultiIndex to a scipy.sparse.coo_matrix.

The method requires a MultiIndex with two or more levels.

In [48]: s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])

In [49]: s.index = pd.MultiIndex.from_tuples([(1, 2, 'a', 0),
   ....:                                      (1, 2, 'a', 1),
   ....:                                      (1, 1, 'b', 0),
   ....:                                      (1, 1, 'b', 1),
   ....:                                      (2, 1, 'b', 0),
   ....:                                      (2, 1, 'b', 1)],
   ....:                                     names=['A', 'B', 'C', 'D'])
   ....:

In [50]: s
Out[50]:
A  B  C  D
1  2  a  0    3.0
         1    NaN
   1  b  0    1.0
         1    3.0
2  1  b  0    NaN
         1    NaN
dtype: float64

Convert it to a SparseSeries:

In [51]: ss = s.to_sparse()

In [52]: ss
Out[52]:
A  B  C  D
1  2  a  0    3.0
         1    NaN
   1  b  0    1.0
         1    3.0
2  1  b  0    NaN
         1    NaN
dtype: Sparse[float64, nan]
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 2], dtype=int32)

In the example below, we transform the SparseSeries to a sparse representation of a 2-d array by specifying that the first and second MultiIndex levels define labels for the rows and the third and fourth levels define labels for the columns. We also specify that the column and row labels should be sorted in the final sparse representation.

In [53]: A, rows, columns = ss.to_coo(row_levels=['A', 'B'],
   ....:                              column_levels=['C', 'D'],
   ....:                              sort_labels=True)
   ....:

In [54]: A
Out[54]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
	with 3 stored elements in COOrdinate format>

In [55]: A.todense()
Out[55]:
matrix([[ 0.,  0.,  1.,  3.],
        [ 3.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.]])

In [56]: rows
Out[56]: [(1, 1), (1, 2), (2, 1)]

In [57]: columns
Out[57]: [('a', 0), ('a', 1), ('b', 0), ('b', 1)]

Specifying different row and column labels (and not sorting them) yields a different sparse matrix:

In [58]: A, rows, columns = ss.to_coo(row_levels=['A', 'B', 'C'],
   ....:                              column_levels=['D'],
   ....:                              sort_labels=False)
   ....:

In [59]: A
Out[59]:
<3x2 sparse matrix of type '<class 'numpy.float64'>'
	with 3 stored elements in COOrdinate format>

In [60]: A.todense()
Out[60]:
matrix([[ 3.,  0.],
        [ 1.,  3.],
        [ 0.,  0.]])

In [61]: rows
Out[61]: [(1, 2, 'a'), (1, 1, 'b'), (2, 1, 'b')]

In [62]: columns
Out[62]: [0, 1]

A convenience method SparseSeries.from_coo() is implemented for creating a SparseSeries from a scipy.sparse.coo_matrix.

In [63]: from scipy import sparse

In [64]: A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])),
   ....:                       shape=(3, 4))
   ....:

In [65]: A
Out[65]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
	with 3 stored elements in COOrdinate format>

In [66]: A.todense()
Out[66]:
matrix([[ 0.,  0.,  1.,  2.],
        [ 3.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.]])

The default behaviour (with dense_index=False) simply returns a SparseSeries containing only the non-null entries.

In [67]: ss = pd.SparseSeries.from_coo(A)

In [68]: ss
Out[68]:
0  2    1.0
   3    2.0
1  0    3.0
dtype: Sparse[float64, nan]
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([3], dtype=int32)

Specifying dense_index=True will result in an index that is the Cartesian product of the row and columns coordinates of the matrix. Note that this will consume a significant amount of memory (relative to dense_index=False) if the sparse matrix is large (and sparse) enough.

In [69]: ss_dense = pd.SparseSeries.from_coo(A, dense_index=True)

In [70]: ss_dense
Out[70]:
0  0    NaN
   1    NaN
   2    1.0
   3    2.0
1  0    3.0
   1    NaN
   2    NaN
   3    NaN
2  0    NaN
   1    NaN
   2    NaN
   3    NaN
dtype: Sparse[float64, nan]
BlockIndex
Block locations: array([2], dtype=int32)
Block lengths: array([3], dtype=int32)