What’s New in 0.24.0 (January XX, 2019)

These are the changes in pandas 0.24.0. See Release Notes for a full changelog including other versions of pandas.

New features

Accessing the values in a Series or Index

Series.array and Index.array have been added for extracting the array backing a Series or Index. (GH19954, GH23623)

In [1]: idx = pd.period_range('2000', periods=4)

In [2]: idx.array
Out[2]: 
&lt;PeriodArray&gt;
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]

In [3]: pd.Series(idx).array
Out[3]: 
&lt;PeriodArray&gt;
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]

Historically, this would have been done with series.values, but with .values it was unclear whether the returned value would be the actual array, some transformation of it, or one of pandas' custom arrays (like Categorical). For example, with PeriodIndex, .values generates a new ndarray of period objects each time.

In [4]: id(idx.values)
Out[4]: 140390592249488

In [5]: id(idx.values)
Out[5]: 140390592249328

If you need an actual NumPy array, use Series.to_numpy() or Index.to_numpy().

In [6]: idx.to_numpy()
Out[6]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)

In [7]: pd.Series(idx).to_numpy()
Out[7]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)

For Series and Indexes backed by normal NumPy arrays, Series.array will return a new arrays.PandasArray, which is a thin (no-copy) wrapper around a numpy.ndarray. arrays.PandasArray isn’t especially useful on its own, but it does provide the same interface as any extension array defined in pandas or by a third-party library.

In [8]: ser = pd.Series([1, 2, 3])

In [9]: ser.array
Out[9]: 
&lt;PandasArray&gt;
[1, 2, 3]
Length: 3, dtype: int64

In [10]: ser.to_numpy()
Out[10]: array([1, 2, 3])

We haven’t removed or deprecated Series.values or DataFrame.values, but we highly recommend using .array or .to_numpy() instead.

See Dtypes and Attributes and Underlying Data for more.

ExtensionArray operator support

A Series based on an ExtensionArray now supports arithmetic and comparison operators (GH19577). There are two approaches for providing operator support for an ExtensionArray:

  1. Define each of the operators on your ExtensionArray subclass.
  2. Use an operator implementation from pandas that depends on operators that are already defined on the underlying elements (scalars) of the ExtensionArray.

See the ExtensionArray Operator Support documentation section for details on both ways of adding operator support.
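
As a rough sketch of the second approach (the class below is a hypothetical placeholder, and the concrete ExtensionArray methods such as _from_sequence and __getitem__ are elided), pandas provides ExtensionScalarOpsMixin, whose class methods attach operators built from the operators of the underlying scalars:

from pandas.api.extensions import ExtensionArray, ExtensionScalarOpsMixin

class MyArray(ExtensionArray, ExtensionScalarOpsMixin):
    # Hypothetical array type; the required ExtensionArray methods
    # (_from_sequence, __getitem__, __len__, dtype, ...) are omitted.
    pass

# Attach +, -, *, ... and ==, <, ... implemented element-wise in
# terms of the operators already defined on the array's scalars.
MyArray._add_arithmetic_ops()
MyArray._add_comparison_ops()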

Optional Integer NA Support

Pandas has gained the ability to hold integer dtypes with missing values. This long-requested feature is enabled through the use of extension types. Here is an example of the usage.

We can construct a Series with the specified dtype. The dtype string Int64 is a pandas ExtensionDtype. Specifying a list or array using the traditional missing value marker of np.nan will infer to integer dtype. The display of the Series will also use NaN to indicate missing values in string outputs. (GH20700, GH20747, GH22441, GH21789, GH22346)

In [11]: s = pd.Series([1, 2, np.nan], dtype='Int64')

In [12]: s
Out[12]: 
0      1
1      2
2    NaN
Length: 3, dtype: Int64

Operations on these dtypes will propagate NaN, as with other pandas operations.

arithmetic

In [13]: s + 1
Out[13]: 
0      2
1      3
2    NaN
Length: 3, dtype: Int64

comparison

In [14]: s == 1
Out[14]: 
0     True
1    False
2    False
Length: 3, dtype: bool

indexing

In [15]: s.iloc[1:3]
Out[15]: 
1      2
2    NaN
Length: 2, dtype: Int64

operate with other dtypes

In [16]: s + s.iloc[1:3].astype('Int8')
Out[16]: 
0    NaN
1      4
2    NaN
Length: 3, dtype: Int64

coerce when needed

In [17]: s + 0.01
Out[17]: 
0    1.01
1    2.01
2     NaN
Length: 3, dtype: float64

These dtypes can operate as part of a DataFrame.

In [18]: df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})

In [19]: df
Out[19]: 
     A  B  C
0    1  1  a
1    2  1  a
2  NaN  3  b

[3 rows x 3 columns]

In [20]: df.dtypes
Out[20]: 
A     Int64
B     int64
C    object
Length: 3, dtype: object

These dtypes can be merged, reshaped, and cast.

In [21]: pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes
Out[21]: 
A     Int64
B     int64
C    object
Length: 3, dtype: object

In [22]: df['A'].astype(float)
Out[22]: 
0    1.0
1    2.0
2    NaN
Name: A, Length: 3, dtype: float64

Reduction and groupby operations such as ‘sum’ work.

In [23]: df.sum()
Out[23]: 
A      3
B      5
C    aab
Length: 3, dtype: object

In [24]: df.groupby('B').A.sum()
Out[24]: 
B
1    3
3    0
Name: A, Length: 2, dtype: Int64

Warning

The Integer NA support currently uses the capitalized dtype version, e.g. Int8 as compared to the traditional int8. This may be changed at a future date.
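
As a small illustration of the distinction (the variable names here are for exposition only), the capitalized string selects the nullable extension dtype while the lowercase string selects the plain NumPy dtype:

import numpy as np
import pandas as pd

# "Int8" is the nullable pandas extension dtype and can hold missing values;
# "int8" is the plain NumPy dtype and cannot.
nullable = pd.Series([1, np.nan], dtype="Int8")
plain = pd.Series([1, 2], dtype="int8")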

See Nullable Integer Data Type for more.

Array

A new top-level method array() has been added for creating 1-dimensional arrays (GH22860). This can be used to create any extension array, including extension arrays registered by 3rd party libraries.

See Dtypes for more on extension arrays.

In [25]: pd.array([1, 2, np.nan], dtype='Int64')
Out[25]: 
&lt;IntegerArray&gt;
[1, 2, NaN]
Length: 3, dtype: Int64

In [26]: pd.array(['a', 'b', 'c'], dtype='category')
Out[26]: 
[a, b, c]
Categories (3, object): [a, b, c]

Passing data for which there isn’t a dedicated extension type (e.g. float, integer, etc.) will return a new arrays.PandasArray, which is just a thin (no-copy) wrapper around a numpy.ndarray that satisfies the extension array interface.

In [27]: pd.array([1, 2, 3])
Out[27]: 
&lt;PandasArray&gt;
[1, 2, 3]
Length: 3, dtype: int64

On its own, an arrays.PandasArray isn’t a very useful object. But if you need to write low-level code that works generically for any ExtensionArray, arrays.PandasArray satisfies that need.

Notice that by default, if no dtype is specified, the dtype of the returned array is inferred from the data. In particular, note that the first example of [1, 2, np.nan] would have returned a floating-point array, since NaN is a float.

In [28]: pd.array([1, 2, np.nan])
Out[28]: 
&lt;PandasArray&gt;
[1.0, 2.0, nan]
Length: 3, dtype: float64

read_html Enhancements

read_html() previously ignored colspan and rowspan attributes. Now it understands them, treating them as sequences of cells with the same value. (GH17054)

In [29]: result = pd.read_html("""
   ....:     <table>
   ....:         <thead>
   ....:             <tr>
   ....:                 <th>A</th><th>B</th><th>C</th>
   ....:             </tr>
   ....:         </thead>
   ....:         <tbody>
   ....:             <tr>
   ....:                 <td colspan="2">1</td><td>2</td>
   ....:             </tr>
   ....:         </tbody>
   ....:     </table>""")
   ....: 

Previous Behavior:

In [13]: result
Out [13]: 
[   A  B    C
 0  1  2  NaN]

New Behavior:

In [30]: result
Out[30]: 
[   A  B  C
 0  1  1  2

 [1 rows x 3 columns]]

Storing Interval and Period Data in Series and DataFrame

Interval and Period data may now be stored in a Series or DataFrame, in addition to an IntervalIndex and PeriodIndex as before (GH19453, GH22862).

In [31]: ser = pd.Series(pd.interval_range(0, 5))

In [32]: ser
Out[32]: 
0    (0, 1]
1    (1, 2]
2    (2, 3]
3    (3, 4]
4    (4, 5]
Length: 5, dtype: interval

In [33]: ser.dtype Out[33]: interval[int64]

For periods:

In [34]: pser = pd.Series(pd.period_range("2000", freq="D", periods=5))

In [35]: pser
Out[35]: 
0    2000-01-01
1    2000-01-02
2    2000-01-03
3    2000-01-04
4    2000-01-05
Length: 5, dtype: period[D]

In [36]: pser.dtype
Out[36]: period[D]

Previously, these would be cast to a NumPy array with object dtype. In general, this should result in better performance when storing an array of intervals or periods in a Series or column of a DataFrame.

Use Series.array to extract the underlying array of intervals or periods from the Series:

In [37]: ser.array
Out[37]: 
IntervalArray([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]],
              closed='right',
              dtype='interval[int64]')

In [38]: pser.array
Out[38]: 
&lt;PeriodArray&gt;
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04', '2000-01-05']
Length: 5, dtype: period[D]

New Styler.pipe() method

The Styler class has gained a pipe() method. This provides a convenient way to apply users’ predefined styling functions, and can help reduce “boilerplate” when using DataFrame styling functionality repeatedly within a notebook. (GH23229)

In [39]: df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})

In [40]: def format_and_align(styler):
   ....:     return (styler.format({'N': '{:,}', 'X': '{:.1%}'})
   ....:                   .set_properties(**{'text-align': 'right'}))
   ....: 

In [41]: df.style.pipe(format_and_align).set_caption('Summary of results.')
Out[41]: <pandas.io.formats.style.Styler at 0x7faf31f49588>

Similar methods already exist for other classes in pandas, including DataFrame.pipe(), pandas.core.groupby.GroupBy.pipe(), and pandas.core.resample.Resampler.pipe().

Joining with two multi-indexes

DataFrame.merge() and DataFrame.join() can now be used to join multi-indexed DataFrame instances on the overlapping index levels (GH6360)

See the Merge, join, and concatenate documentation section.

In [42]: index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
   ....:                                         ('K1', 'X2')],
   ....:                                        names=['key', 'X'])
   ....: 

In [43]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
   ....:                      'B': ['B0', 'B1', 'B2']}, index=index_left)
   ....: 

In [44]: index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
   ....:                                          ('K2', 'Y2'), ('K2', 'Y3')],
   ....:                                         names=['key', 'Y'])
   ....: 

In [45]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
   ....:                       'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right)
   ....: 

In [46]: left.join(right)
Out[46]: 
            A   B   C   D
key X  Y                 
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1

[3 rows x 4 columns]

For earlier versions this can be done using the following.

In [47]: pd.merge(left.reset_index(), right.reset_index(),
   ....:          on=['key'], how='inner').set_index(['key', 'X', 'Y'])
   ....: 
Out[47]: 
            A   B   C   D
key X  Y                 
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1

[3 rows x 4 columns]

Renaming names in a MultiIndex

DataFrame.rename_axis() now supports index and columns arguments and Series.rename_axis() supports the index argument (GH19978)

This change allows a dictionary to be passed so that some of the names of a MultiIndex can be changed.

Example:

In [48]: mi = pd.MultiIndex.from_product([list('AB'), list('CD'), list('EF')],
   ....:                                 names=['AB', 'CD', 'EF'])
   ....: 

In [49]: df = pd.DataFrame([i for i in range(len(mi))], index=mi, columns=['N'])

In [50]: df
Out[50]: 
          N
AB CD EF   
A  C  E   0
      F   1
   D  E   2
      F   3
B  C  E   4
      F   5
   D  E   6
      F   7

[8 rows x 1 columns]

In [51]: df.rename_axis(index={'CD': 'New'})
Out[51]: 
           N
AB New EF   
A  C   E   0
       F   1
   D   E   2
       F   3
B  C   E   4
       F   5
   D   E   6
       F   7

[8 rows x 1 columns]

See the Advanced documentation on renaming for more details.

Other Enhancements

Backwards incompatible API changes

Percentage change on groupby

Fixed a bug where calling pandas.core.groupby.SeriesGroupBy.pct_change() or pandas.core.groupby.DataFrameGroupBy.pct_change() would previously work across groups when calculating the percent change; it now correctly works per group (GH21200, GH21235).

In [52]: df = pd.DataFrame({'grp': ['a', 'a', 'b'], 'foo': [1.0, 1.1, 2.2]})

In [53]: df
Out[53]: 
  grp  foo
0   a  1.0
1   a  1.1
2   b  2.2

[3 rows x 2 columns]

Previous behavior:

In [1]: df.groupby('grp').pct_change()
Out[1]: 
   foo
0  NaN
1  0.1
2  1.0

New behavior:

In [54]: df.groupby('grp').pct_change()
Out[54]: 
   foo
0  NaN
1  0.1
2  NaN

[3 rows x 1 columns]

Dependencies have increased minimum versions

We have updated our minimum supported versions of dependencies (GH21242, GH18742, GH23774). If installed, we now require:

Package Minimum Version Required
numpy 1.12.0 X
bottleneck 1.2.0
fastparquet 0.2.1
matplotlib 2.0.0
numexpr 2.6.1
pandas-gbq 0.8.0
pyarrow 0.7.0
pytables 3.4.2
scipy 0.18.1
xlrd 1.0.0
pytest (dev) 3.6

Additionally we no longer depend on feather-format for feather based storage and replaced it with references to pyarrow (GH21639 and GH23053).

os.linesep is used for line_terminator of DataFrame.to_csv

DataFrame.to_csv() now uses os.linesep rather than '\n' for the default line terminator (GH20353). This change only affects Windows, where '\r\n' was used as the line terminator even when '\n' was passed in line_terminator.

Previous Behavior on Windows:

In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: # When passing file PATH to to_csv,
   ...: # line_terminator does not work, and csv is saved with '\r\n'.
   ...: # Also, this converts all '\n's in the data to '\r\n'.
   ...: data.to_csv("test.csv", index=False, line_terminator='\n')

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'

In [4]: # When passing file OBJECT with newline option to
   ...: # to_csv, line_terminator works.
   ...: with open("test2.csv", mode='w', newline='\n') as f:
   ...:     data.to_csv(f, index=False, line_terminator='\n')

In [5]: with open("test2.csv", mode='rb') as f:
   ...:     print(f.read())
Out[5]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'

New Behavior on Windows:

Passing line_terminator explicitly sets the line terminator to that character.

In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: data.to_csv("test.csv", index=False, line_terminator='\n')

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'

On Windows, the value of os.linesep is '\r\n', so if line_terminator is not set, '\r\n' is used as the line terminator.

In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: data.to_csv("test.csv", index=False)

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'

For file objects, specifying newline is not sufficient to set the line terminator. You must pass line_terminator explicitly, even in this case.

In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: with open("test2.csv", mode='w', newline='\n') as f:
   ...:     data.to_csv(f, index=False)

In [3]: with open("test2.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'

Proper handling of np.NaN in a string data-typed column with the Python engine

There was a bug in read_excel() and read_csv() with the Python engine, where missing values turned into 'nan' with dtype=str and na_filter=True. Now, these missing values are converted to the string missing indicator, np.nan. (GH20377)

Previous Behavior:

In [5]: data = 'a,b,c\n1,,3\n4,5,6'

In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)

In [7]: df.loc[0, 'b']
Out[7]: 'nan'

New Behavior:

In [55]: data = 'a,b,c\n1,,3\n4,5,6'

In [56]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)

In [57]: df.loc[0, 'b']
Out[57]: nan

Notice how we now output np.nan itself rather than a stringified form of it.

Parsing Datetime Strings with Timezone Offsets

Previously, parsing datetime strings with UTC offsets with to_datetime() or DatetimeIndex would automatically convert the datetime to UTC without timezone localization. This is inconsistent with parsing the same datetime string with Timestamp, which preserves the UTC offset in the tz attribute. Now, to_datetime() preserves the UTC offset in the tz attribute when all the datetime strings have the same UTC offset (GH17697, GH11736, GH22457)

Previous Behavior:

In [2]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[2]: Timestamp('2015-11-18 10:00:00')

In [3]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[3]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

Different UTC offsets would automatically convert the datetimes to UTC (without a UTC timezone)

In [4]: pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"])
Out[4]: DatetimeIndex(['2015-11-18 10:00:00', '2015-11-18 10:00:00'], dtype='datetime64[ns]', freq=None)

New Behavior:

In [58]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[58]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

In [59]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[59]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

Parsing datetime strings with the same UTC offset will preserve the UTC offset in the tz attribute

In [60]: pd.to_datetime(["2015-11-18 15:30:00+05:30"] * 2)
Out[60]: 
DatetimeIndex(['2015-11-18 15:30:00+05:30', '2015-11-18 15:30:00+05:30'],
              dtype='datetime64[ns, pytz.FixedOffset(330)]', freq=None)

Parsing datetime strings with different UTC offsets will now create an Index of datetime.datetime objects with different UTC offsets

In [61]: idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
   ....:                       "2015-11-18 16:30:00+06:30"])
   ....: 

In [62]: idx
Out[62]: Index([2015-11-18 15:30:00+05:30, 2015-11-18 16:30:00+06:30], dtype='object')

In [63]: idx[0]
Out[63]: datetime.datetime(2015, 11, 18, 15, 30, tzinfo=tzoffset(None, 19800))

In [64]: idx[1]
Out[64]: datetime.datetime(2015, 11, 18, 16, 30, tzinfo=tzoffset(None, 23400))

Passing utc=True will mimic the previous behavior but will correctly indicate that the dates have been converted to UTC

In [65]: pd.to_datetime(["2015-11-18 15:30:00+05:30",
   ....:                 "2015-11-18 16:30:00+06:30"], utc=True)
   ....: 
Out[65]: DatetimeIndex(['2015-11-18 10:00:00+00:00', '2015-11-18 10:00:00+00:00'], dtype='datetime64[ns, UTC]', freq=None)

Time values in dt.end_time and to_timestamp(how='end')

The time values in Period and PeriodIndex objects are now set to ‘23:59:59.999999999’ when calling Series.dt.end_time, Period.end_time, PeriodIndex.end_time, Period.to_timestamp() with how='end', or PeriodIndex.to_timestamp() with how='end' (GH17157)

Previous Behavior:

In [2]: p = pd.Period('2017-01-01', 'D')

In [3]: pi = pd.PeriodIndex([p])

In [4]: pd.Series(pi).dt.end_time[0]
Out[4]: Timestamp('2017-01-01 00:00:00')

In [5]: p.end_time
Out[5]: Timestamp('2017-01-01 23:59:59.999999999')

New Behavior:

Calling Series.dt.end_time will now result in a time of ‘23:59:59.999999999’ as is the case with Period.end_time, for example

In [66]: p = pd.Period('2017-01-01', 'D')

In [67]: pi = pd.PeriodIndex([p])

In [68]: pd.Series(pi).dt.end_time[0]
Out[68]: Timestamp('2017-01-01 23:59:59.999999999')

In [69]: p.end_time
Out[69]: Timestamp('2017-01-01 23:59:59.999999999')

Datetime w/tz and unique

The return type of Series.unique() for datetime with timezone values has changed from a numpy.ndarray of Timestamp objects to an arrays.DatetimeArray (GH24024).

In [70]: ser = pd.Series([pd.Timestamp('2000', tz='UTC'),
   ....:                  pd.Timestamp('2000', tz='UTC')])
   ....: 

Previous Behavior:

In [3]: ser.unique()
Out[3]: array([Timestamp('2000-01-01 00:00:00+0000', tz='UTC')], dtype=object)

New Behavior:

In [71]: ser.unique()
Out[71]: 
&lt;DatetimeArray&gt;
['2000-01-01 00:00:00+00:00']
Length: 1, dtype: datetime64[ns, UTC]

Sparse Data Structure Refactor

SparseArray, the array backing SparseSeries and the columns in a SparseDataFrame, is now an extension array (GH21978, GH19056, GH22835). To conform to this interface and for consistency with the rest of pandas, some API breaking changes were made:

Some new warnings are issued for operations that require or are likely to materialize a large dense array.

In addition to these API breaking changes, many Performance Improvements and Bug Fixes have been made.

Finally, a Series.sparse accessor was added to provide sparse-specific methods like Series.sparse.from_coo().

In [72]: s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')

In [73]: s.sparse.density
Out[73]: 0.6
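
As an illustrative sketch of the accessor (assuming scipy is installed; the matrix below is made up), Series.sparse.from_coo builds a sparse, MultiIndexed Series from a scipy.sparse COO matrix:

import numpy as np
import pandas as pd
from scipy import sparse

# A 2x3 COO matrix with two stored values.
mat = sparse.coo_matrix((np.array([1.0, 2.0]),
                         (np.array([0, 1]), np.array([0, 2]))),
                        shape=(2, 3))

# The (row, column) coordinates become a MultiIndex on the result.
sparse_ser = pd.Series.sparse.from_coo(mat)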

get_dummies() always returns a DataFrame

Previously, when sparse=True was passed to get_dummies(), the return value could be either a DataFrame or a SparseDataFrame, depending on whether all or just a subset of the columns were dummy-encoded. Now, a DataFrame is always returned (GH24284).

Previous Behavior

The first get_dummies() returns a DataFrame because the column A is not dummy-encoded. When just ["B", "C"] are passed to get_dummies, then all the columns are dummy-encoded, and a SparseDataFrame was returned.

In [2]: df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})

In [3]: type(pd.get_dummies(df, sparse=True))
Out[3]: pandas.core.frame.DataFrame

In [4]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[4]: pandas.core.sparse.frame.SparseDataFrame

New Behavior

Now, the return type is consistently a DataFrame.

In [74]: type(pd.get_dummies(df, sparse=True))
Out[74]: pandas.core.frame.DataFrame

In [75]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[75]: pandas.core.frame.DataFrame

Note

There’s no difference in memory usage between a SparseDataFrame and a DataFrame with sparse values. The memory usage will be the same as in the previous version of pandas.
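
As a rough way to check this note yourself (the frame below is illustrative), DataFrame.memory_usage reports the per-column footprint of the sparse result:

import pandas as pd

df = pd.DataFrame({"B": ["a", "b"], "C": ["a", "a"]})

# The dummy-encoded columns are backed by SparseArray either way, so the
# reported memory usage matches what a SparseDataFrame would have used.
print(pd.get_dummies(df, sparse=True).memory_usage())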

Raise ValueError in DataFrame.to_dict(orient='index')

Bug in DataFrame.to_dict() raises ValueError when used with orient='index' and a non-unique index instead of losing data (GH22801)

In [76]: df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])

In [77]: df
Out[77]: 
   a     b
A  1  0.50
A  2  0.75

[2 rows x 2 columns]

In [78]: df.to_dict(orient='index')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 df.to_dict(orient='index')

/pandas/pandas/core/frame.py in to_dict(self, orient, into)
   1303             if not self.index.is_unique:
   1304                 raise ValueError(
-> 1305                     "DataFrame index must be unique for orient='index'."
   1306                 )
   1307             return into_c((t[0], dict(zip(self.columns, t[1:])))

ValueError: DataFrame index must be unique for orient='index'.

Tick DateOffset Normalize Restrictions

Creating a Tick object (Day, Hour, Minute, Second, Milli, Micro, Nano) with normalize=True is no longer supported. This prevents unexpected behavior where addition could fail to be monotone or associative. (GH21427)

Previous Behavior:

In [2]: ts = pd.Timestamp('2018-06-11 18:01:14')

In [3]: ts
Out[3]: Timestamp('2018-06-11 18:01:14')

In [4]: tic = pd.offsets.Hour(n=2, normalize=True)

In [5]: tic
Out[5]: <2 * Hours>

In [6]: ts + tic
Out[6]: Timestamp('2018-06-11 00:00:00')

In [7]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[7]: False

New Behavior:

In [79]: ts = pd.Timestamp('2018-06-11 18:01:14')

In [80]: tic = pd.offsets.Hour(n=2)

In [81]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[81]: True

Period Subtraction

Subtraction of a Period from another Period will give a DateOffset instead of an integer (GH21314)

Previous Behavior:

In [2]: june = pd.Period('June 2018')

In [3]: april = pd.Period('April 2018')

In [4]: june - april
Out [4]: 2

New Behavior:

In [82]: june = pd.Period('June 2018')

In [83]: april = pd.Period('April 2018')

In [84]: june - april
Out[84]: <2 * MonthEnds>

Similarly, subtraction of a Period from a PeriodIndex will now return an Index of DateOffset objects instead of an Int64Index

Previous Behavior:

In [2]: pi = pd.period_range('June 2018', freq='M', periods=3)

In [3]: pi - pi[0]
Out[3]: Int64Index([0, 1, 2], dtype='int64')

New Behavior:

In [85]: pi = pd.period_range('June 2018', freq='M', periods=3)

In [86]: pi - pi[0]
Out[86]: Index([<0 * MonthEnds>, &lt;MonthEnd&gt;, <2 * MonthEnds>], dtype='object')

Addition/Subtraction of NaN from DataFrame

Adding or subtracting NaN from a DataFrame column with timedelta64[ns] dtype will now raise a TypeError instead of returning all-NaT. This is for compatibility with TimedeltaIndex and Series behavior (GH22163)

In [87]: df = pd.DataFrame([pd.Timedelta(days=1)])

In [88]: df
Out[88]: 
       0
0 1 days

[1 rows x 1 columns]

Previous Behavior:

In [4]: df = pd.DataFrame([pd.Timedelta(days=1)])

In [5]: df - np.nan
Out[5]: 
    0
0 NaT

New Behavior:

In [2]: df - np.nan
...
TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'

DataFrame Comparison Operations Broadcasting Changes

Previously, the broadcasting behavior of DataFrame comparison operations (==, !=, …) was inconsistent with the behavior of arithmetic operations (+, -, …). The behavior of the comparison operations has been changed to match the arithmetic operations in these cases. (GH22880)

The affected cases are shown below.

In [89]: arr = np.arange(6).reshape(3, 2)

In [90]: df = pd.DataFrame(arr)

In [91]: df
Out[91]: 
   0  1
0  0  1
1  2  3
2  4  5

[3 rows x 2 columns]

Previous Behavior:

In [5]: df == arr[[0], :]
   ...: # comparison previously broadcast where arithmetic would raise
Out[5]: 
       0      1
0   True   True
1  False  False
2  False  False

In [6]: df + arr[[0], :]
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)

In [7]: df == (1, 2)
   ...: # length matches number of columns;
   ...: # comparison previously raised where arithmetic would broadcast
...
ValueError: Invalid broadcasting comparison [(1, 2)] with block values

In [8]: df + (1, 2)
Out[8]: 
   0  1
0  1  3
1  3  5
2  5  7

In [9]: df == (1, 2, 3)
   ...: # length matches number of rows
   ...: # comparison previously broadcast where arithmetic would raise
Out[9]: 
       0      1
0  False   True
1   True  False
2  False  False

In [10]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3

New Behavior:

Comparison operations and arithmetic operations both broadcast.

In [92]: df == arr[[0], :]
Out[92]: 
       0      1
0   True   True
1  False  False
2  False  False

[3 rows x 2 columns]

In [93]: df + arr[[0], :]
Out[93]: 
   0  1
0  0  2
1  2  4
2  4  6

[3 rows x 2 columns]

Comparison operations and arithmetic operations both broadcast.

In [94]: df == (1, 2)
Out[94]: 
       0      1
0  False  False
1  False  False
2  False  False

[3 rows x 2 columns]

In [95]: df + (1, 2)
Out[95]: 
   0  1
0  1  3
1  3  5
2  5  7

[3 rows x 2 columns]

Comparison operations and arithmetic operations both raise ValueError.

In [6]: df == (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3

In [7]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3

DataFrame Arithmetic Operations Broadcasting Changes

DataFrame arithmetic operations, when operating with 2-dimensional np.ndarray objects, now broadcast in the same way as np.ndarray objects broadcast. (GH23000)

In [96]: arr = np.arange(6).reshape(3, 2)

In [97]: df = pd.DataFrame(arr)

In [98]: df
Out[98]: 
   0  1
0  0  1
1  2  3
2  4  5

[3 rows x 2 columns]

Previous Behavior:

In [5]: df + arr[[0], :]   # 1 row, 2 columns
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)

In [6]: df + arr[:, [1]]   # 1 column, 3 rows
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (3, 1)

New Behavior:

In [99]: df + arr[[0], :]   # 1 row, 2 columns
Out[99]: 
   0  1
0  0  2
1  2  4
2  4  6

[3 rows x 2 columns]

In [100]: df + arr[:, [1]]   # 1 column, 3 rows
Out[100]: 
   0   1
0  1   2
1  5   6
2  9  10

[3 rows x 2 columns]

ExtensionType Changes

Equality and Hashability

Pandas now requires that extension dtypes be hashable. The base class implements a default __eq__ and __hash__. If you have a parametrized dtype, you should update the ExtensionDtype._metadata tuple to match the signature of your __init__ method. See pandas.api.extensions.ExtensionDtype for more (GH22476).
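
As a minimal sketch of a parametrized dtype (the class and its freq parameter are hypothetical, and the remaining abstract methods are elided), _metadata names the attributes that the inherited __eq__ and __hash__ compare:

from pandas.api.extensions import ExtensionDtype

class MyPeriodishDtype(ExtensionDtype):
    # The attributes listed in _metadata should match the signature of
    # __init__; the default __eq__/__hash__ compare and hash exactly these.
    _metadata = ('freq',)

    def __init__(self, freq):
        self.freq = freq

    @property
    def name(self):
        return 'myperiod[{}]'.format(self.freq)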

Reshaping changes

Dtype changes

Other changes

Bug Fixes

Series and Index Data-Dtype Incompatibilities

Series and Index constructors now raise when the data is incompatible with a passed dtype= (GH15832)

Previous Behavior:

In [4]: pd.Series([-1], dtype="uint64")
Out [4]: 
0    18446744073709551615
dtype: uint64

New Behavior:

In [4]: pd.Series([-1], dtype="uint64")
Out [4]: 
...
OverflowError: Trying to coerce negative values to unsigned integers

Crosstab Preserves Dtypes

crosstab() will now preserve dtypes in some cases that previously would cast from integer dtype to floating dtype (GH22019)

Previous Behavior:

In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2], 'b': [3, 3, 4, 4, 4],
   ...:                    'c': [1, 1, np.nan, 1, 1]})

In [4]: pd.crosstab(df.a, df.b, normalize='columns')
Out[4]: 
b    3    4
a          
1  0.5  0.0
2  0.5  1.0

New Behavior:

In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2],
   ...:                    'b': [3, 3, 4, 4, 4],
   ...:                    'c': [1, 1, np.nan, 1, 1]})

In [4]: pd.crosstab(df.a, df.b, normalize='columns')

Concatenation Changes

Calling pandas.concat() on a Categorical of ints with NA values now causes them to be processed as objects when concatenating with anything other than another Categorical of ints (GH19214)

In [101]: s = pd.Series([0, 1, np.nan])

In [102]: c = pd.Series([0, 1, np.nan], dtype="category")

Previous Behavior

In [3]: pd.concat([s, c])
Out[3]: 
0    0.0
1    1.0
2    NaN
0    0.0
1    1.0
2    NaN
dtype: float64

New Behavior

In [103]: pd.concat([s, c])
Out[103]: 
0      0
1      1
2    NaN
0      0
1      1
2    NaN
Length: 6, dtype: object

Datetimelike API Changes

Other API Changes

Deprecations

Integer Addition/Subtraction with Datetimes and Timedeltas is Deprecated

In the past, users could—in some cases—add or subtract integers or integer-dtype arrays from Timestamp, DatetimeIndex and TimedeltaIndex.

This usage is now deprecated. Instead add or subtract integer multiples of the object’s freq attribute (GH21939, GH23878).

Previous Behavior:

In [5]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())

In [6]: ts + 2
Out[6]: Timestamp('1994-05-06 14:15:16', freq='H')

In [7]: tdi = pd.timedelta_range('1D', periods=2)

In [8]: tdi - np.array([2, 1])
Out[8]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)

In [9]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')

In [10]: dti + pd.Index([1, 2])
Out[10]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)

New Behavior:

In [104]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())

In [105]: ts + 2 * ts.freq
Out[105]: Timestamp('1994-05-06 14:15:16', freq='H')

In [106]: tdi = pd.timedelta_range('1D', periods=2)

In [107]: tdi - np.array([2 * tdi.freq, 1 * tdi.freq])
Out[107]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)

In [108]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')

In [109]: dti + pd.Index([1 * dti.freq, 2 * dti.freq])
Out[109]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)

Passing Integer data and a timezone to DatetimeIndex

The behavior of DatetimeIndex when passed integer data and a timezone is changing in a future version of pandas. Previously, these were interpreted as wall times in the desired timezone. In the future, these will be interpreted as wall times in UTC, which are then converted to the desired timezone (GH24559).

The default behavior remains the same, but issues a warning:

In [3]: pd.DatetimeIndex([946684800000000000], tz="US/Central")
/bin/ipython:1: FutureWarning: 
    Passing integer-dtype data and a timezone to DatetimeIndex. Integer values
    will be interpreted differently in a future version of pandas. Previously,
    these were viewed as datetime64[ns] values representing the wall time in
    the specified timezone. In the future, these will be viewed as
    datetime64[ns] values representing the wall time in UTC. This is similar
    to a nanosecond-precision UNIX epoch. To accept the future behavior, use

        pd.to_datetime(integer_data, utc=True).tz_convert(tz)

    To keep the previous behavior, use

        pd.to_datetime(integer_data).tz_localize(tz)

Out[3]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)

As the warning message explains, opt in to the future behavior by specifying that the integer values are UTC, and then converting to the final timezone:

In [110]: pd.to_datetime([946684800000000000], utc=True).tz_convert('US/Central')
Out[110]: DatetimeIndex(['1999-12-31 18:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)

The old behavior can be retained by localizing directly to the final timezone:

In [111]: pd.to_datetime([946684800000000000]).tz_localize('US/Central')
Out[111]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)

Converting Timezone-Aware Series and Index to NumPy Arrays

The conversion from a Series or Index with timezone-aware datetime data will change to preserve timezones by default (GH23569).

NumPy doesn’t have a dedicated dtype for timezone-aware datetimes. In the past, converting a Series or DatetimeIndex with timezone-aware datetimes would convert to a NumPy array by

  1. converting the tz-aware data to UTC
  2. dropping the timezone-info
  3. returning a numpy.ndarray with datetime64[ns] dtype

Future versions of pandas will preserve the timezone information by returning an object-dtype NumPy array where each value is a Timestamp with the correct timezone attached

In [112]: ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))

In [113]: ser
Out[113]: 
0   2000-01-01 00:00:00+01:00
1   2000-01-02 00:00:00+01:00
Length: 2, dtype: datetime64[ns, CET]

The default behavior remains the same, but issues a warning

In [8]: np.asarray(ser)
/bin/ipython:1: FutureWarning: Converting timezone-aware DatetimeArray to
timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will
return an ndarray with 'object' dtype where each element is a
'pandas.Timestamp' with the correct 'tz'.

    To accept the future behavior, pass 'dtype=object'.
    To keep the old behavior, pass 'dtype="datetime64[ns]"'.

Out[8]: 
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')

The previous or future behavior can be obtained, without any warnings, by specifying the dtype

Previous Behavior

In [114]: np.asarray(ser, dtype='datetime64[ns]')
Out[114]: 
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')

Future Behavior

In [115]: np.asarray(ser, dtype=object)
Out[115]: 
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
       Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')], dtype=object)

Or by using Series.to_numpy()

In [116]: ser.to_numpy()
Out[116]: 
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
       Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')], dtype=object)

In [117]: ser.to_numpy(dtype="datetime64[ns]")
Out[117]: 
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')

All the above applies to a DatetimeIndex with tz-aware values as well.
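
For instance (a small sketch using the same opt-ins as above), the two dtypes select the future and the previous behavior for an index as well:

import numpy as np
import pandas as pd

idx = pd.date_range('2000', periods=2, tz='CET')

# object dtype keeps each value as a tz-aware Timestamp (future behavior);
# 'datetime64[ns]' reproduces the old timezone-naive UTC values.
tz_aware = np.asarray(idx, dtype=object)
tz_naive = idx.to_numpy(dtype='datetime64[ns]')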

Removal of prior version deprecations/changes

Performance Improvements

Bug Fixes

Categorical

Datetimelike

Timedelta

Timezones

Offsets

Numeric

Conversion

Strings

Interval

Indexing

Missing

MultiIndex

I/O

Plotting

Groupby/Resample/Rolling

Reshaping

Sparse

Style

Build Changes

Other

Contributors

A total of 334 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.