What’s new in 0.24.0 (January 25, 2019)
Warning
The 0.24.x series of releases will be the last to support Python 2. Future feature releases will support Python 3 only. See Dropping Python 2.7 for more details.
This is a major release from 0.23.4 and includes a number of API changes, new features, enhancements, and performance improvements along with a large number of bug fixes.
Highlights include:
- Optional Integer NA Support
- New APIs for accessing the array backing a Series or Index
- A new top-level method for creating arrays
- Store Interval and Period data in a Series or DataFrame
- Support for joining on two MultiIndexes
Check the API Changes and deprecations before updating.
These are the changes in pandas 0.24.0. See Release notes for a full changelog including other versions of pandas.
Enhancements#
Optional integer NA support#
pandas has gained the ability to hold integer dtypes with missing values. This long requested feature is enabled through the use of extension types.
Note
IntegerArray is currently experimental. Its API or implementation may change without warning.
We can construct a Series with the specified dtype. The dtype string Int64 is a pandas ExtensionDtype. Specifying a list or array using the traditional missing value marker of np.nan will infer to integer dtype. The display of the Series will also use the NaN to indicate missing values in string outputs. (GH 20700, GH 20747, GH 22441, GH 21789, GH 22346)
In [1]: s = pd.Series([1, 2, pd.NA], dtype='Int64')
In [2]: s
Out[2]:
0       1
1       2
2    <NA>
dtype: Int64
Operations on these dtypes will propagate NaN as in other pandas operations.
arithmetic
In [3]: s + 1
Out[3]:
0       2
1       3
2    <NA>
dtype: Int64
comparison
In [4]: s == 1
Out[4]:
0     True
1    False
2     <NA>
dtype: boolean
indexing
In [5]: s.iloc[1:3]
Out[5]:
1       2
2    <NA>
dtype: Int64
operate with other dtypes
In [6]: s + s.iloc[1:3].astype('Int8')
Out[6]:
0    <NA>
1       4
2    <NA>
dtype: Int64
coerce when needed
In [7]: s + 0.01
Out[7]:
0    1.01
1    2.01
2    <NA>
dtype: Float64
These dtypes can operate as part of a DataFrame.
In [8]: df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})
In [9]: df
Out[9]:
      A  B  C
0     1  1  a
1     2  1  a
2  <NA>  3  b

In [10]: df.dtypes
Out[10]:
A    Int64
B    int64
C      str
dtype: object
These dtypes can be merged, reshaped, and cast.
In [11]: pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes
Out[11]:
A    Int64
B    int64
C      str
dtype: object

In [12]: df['A'].astype(float)
Out[12]:
0    1.0
1    2.0
2    NaN
Name: A, dtype: float64
Reduction and groupby operations such as sum work.
In [13]: df.sum()
Out[13]:
A      3
B      5
C    aab
dtype: object

In [14]: df.groupby('B').A.sum()
Out[14]:
B
1    3
3    0
Name: A, dtype: Int64
Warning
The Integer NA support currently uses the capitalized dtype version, e.g. Int8 as compared to the traditional int8. This may be changed at a future date.
See Nullable integer data type for more.
Accessing the values in a Series or Index#
Series.array and Index.array have been added for extracting the array backing a Series or Index. (GH 19954, GH 23623)
In [15]: idx = pd.period_range('2000', periods=4)
In [16]: idx.array
Out[16]:
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]

In [17]: pd.Series(idx).array
Out[17]:
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]
Historically, this would have been done with series.values, but with .values it was unclear whether the returned value would be the actual array, some transformation of it, or one of pandas' custom arrays (like Categorical). For example, with PeriodIndex, .values generates a new ndarray of period objects each time.
In [18]: idx.values Out[18]: array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'), Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
In [19]: id(idx.values) Out[19]: 140536856877392
In [20]: id(idx.values) Out[20]: 140536856882960
If you need an actual NumPy array, use Series.to_numpy() or Index.to_numpy().
In [21]: idx.to_numpy() Out[21]: array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'), Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
In [22]: pd.Series(idx).to_numpy() Out[22]: array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'), Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
For Series and Indexes backed by normal NumPy arrays, Series.array will return a new arrays.PandasArray, which is a thin (no-copy) wrapper around a numpy.ndarray. PandasArray isn’t especially useful on its own, but it does provide the same interface as any extension array defined in pandas or by a third-party library.
In [23]: ser = pd.Series([1, 2, 3])
In [24]: ser.array Out[24]: [1, 2, 3] Length: 3, dtype: int64
In [25]: ser.to_numpy() Out[25]: array([1, 2, 3])
We haven’t removed or deprecated Series.values or DataFrame.values, but we highly recommend using .array or .to_numpy() instead.
See Dtypes and Attributes and Underlying Data for more.
pandas.array: a new top-level method for creating arrays#
A new top-level method array() has been added for creating 1-dimensional arrays (GH 22860). This can be used to create any extension array, including extension arrays registered by 3rd party libraries. See the dtypes docs for more on extension arrays.
In [26]: pd.array([1, 2, pd.NA], dtype='Int64')
Out[26]:
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64

In [27]: pd.array(['a', 'b', 'c'], dtype='category')
Out[27]:
['a', 'b', 'c']
Categories (3, str): ['a', 'b', 'c']
Passing data for which there isn’t a dedicated extension type (e.g. float, integer, etc.) will return a new arrays.PandasArray, which is just a thin (no-copy) wrapper around a numpy.ndarray that satisfies the pandas extension array interface.
In [28]: pd.array([1, 2, 3]) Out[28]: [1, 2, 3] Length: 3, dtype: Int64
On their own, a PandasArray isn’t a very useful object. But if you need to write low-level code that works generically for any ExtensionArray, PandasArray satisfies that need.
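As a rough sketch (not from the original release notes; the helper name count_missing is made up here), code written against the common ExtensionArray interface works the same whether it receives a PandasArray, an IntegerArray, or a Categorical:

import pandas as pd

def count_missing(arr):
    # Works for any ExtensionArray (PandasArray, IntegerArray, Categorical, ...)
    # because they all implement isna().
    return int(arr.isna().sum())

count_missing(pd.array([1, 2, 3]))                      # NumPy-backed array
count_missing(pd.array([1, None, 3], dtype='Int64'))    # nullable integer array
count_missing(pd.array(['a', None], dtype='category'))  # categorical array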
Notice that by default, if no dtype is specified, the dtype of the returned array is inferred from the data. In particular, note that the first example of [1, 2, np.nan] would have returned a floating-point array, since NaN is a float.
In [29]: pd.array([1, 2, np.nan])
Out[29]:
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
Storing Interval and Period data in Series and DataFrame#
Interval and Period data may now be stored in a Series or DataFrame, in addition to an IntervalIndex and PeriodIndex as previously (GH 19453, GH 22862).
In [30]: ser = pd.Series(pd.interval_range(0, 5))
In [31]: ser Out[31]: 0 (0, 1] 1 (1, 2] 2 (2, 3] 3 (3, 4] 4 (4, 5] dtype: interval
In [32]: ser.dtype Out[32]: interval[int64, right]
For periods:
In [33]: pser = pd.Series(pd.period_range("2000", freq="D", periods=5))
In [34]: pser Out[34]: 0 2000-01-01 1 2000-01-02 2 2000-01-03 3 2000-01-04 4 2000-01-05 dtype: period[D]
In [35]: pser.dtype Out[35]: period[D]
Previously, these would be cast to a NumPy array with object dtype. In general, this should result in better performance when storing an array of intervals or periods in a Series or column of a DataFrame.
Use Series.array to extract the underlying array of intervals or periods from the Series:
In [36]: ser.array
Out[36]:
<IntervalArray>
[(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]]
Length: 5, dtype: interval[int64, right]

In [37]: pser.array
Out[37]:
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04', '2000-01-05']
Length: 5, dtype: period[D]
These return an instance of arrays.IntervalArray or arrays.PeriodArray, the new extension arrays that back interval and period data.
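For illustration only (a hedged sketch, assuming the pandas.arrays namespace added in this release), such an array can also be built directly and wrapped in a Series:

import pandas as pd

# Construct an IntervalArray directly; from_breaks is one of its constructors.
arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
ser = pd.Series(arr)   # Series with interval dtype, no object coercion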
Joining with two multi-indexes#
DataFrame.merge() and DataFrame.join() can now be used to join multi-indexed DataFrame instances on the overlapping index levels (GH 6360).
See the Merge, join, and concatenate documentation section.
In [38]: index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'), ....: ('K1', 'X2')], ....: names=['key', 'X']) ....:
In [39]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'], ....: 'B': ['B0', 'B1', 'B2']}, index=index_left) ....:
In [40]: index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'), ....: ('K2', 'Y2'), ('K2', 'Y3')], ....: names=['key', 'Y']) ....:
In [41]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'], ....: 'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right) ....:
In [42]: left.join(right)
Out[42]:
A B C D
key X Y
K0 X0 Y0 A0 B0 C0 D0
X1 Y0 A1 B1 C0 D0
K1 X2 Y1 A2 B2 C1 D1
For earlier versions this can be done using the following.
In [43]: pd.merge(left.reset_index(), right.reset_index(),
....: on=['key'], how='inner').set_index(['key', 'X', 'Y'])
....:
Out[43]:
A B C D
key X Y
K0 X0 Y0 A0 B0 C0 D0
X1 Y0 A1 B1 C0 D0
K1 X2 Y1 A2 B2 C1 D1
Function read_html enhancements#
read_html() previously ignored colspan and rowspan attributes. Now it understands them, treating them as sequences of cells with the same value. (GH 17054)
In [44]: from io import StringIO
In [45]: result = pd.read_html(StringIO("""
   ....:     <table>
   ....:       <tr>
   ....:         <th>A</th><th>B</th><th>C</th>
   ....:       </tr>
   ....:       <tr>
   ....:         <td colspan="2">1</td><td>2</td>
   ....:       </tr>
   ....:     </table>"""))
   ....:
Previous behavior:
In [13]: result
Out[13]:
[   A  B    C
 0  1  2  NaN]
New behavior:
In [46]: result
Out[46]:
[   A  B  C
 0  1  1  2]
New Styler.pipe() method#
The Styler class has gained a pipe() method. This provides a convenient way to apply users’ predefined styling functions, and can help reduce “boilerplate” when using DataFrame styling functionality repeatedly within a notebook. (GH 23229)
In [47]: df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})
In [48]: def format_and_align(styler):
   ....:     return (styler.format({'N': '{:,}', 'X': '{:.1%}'})
   ....:                   .set_properties(**{'text-align': 'right'}))
   ....:
In [49]: df.style.pipe(format_and_align).set_caption('Summary of results.') Out[49]: <pandas.io.formats.style.Styler at 0x7fd142947250>
Similar methods already exist for other classes in pandas, including DataFrame.pipe(), GroupBy.pipe(), and Resampler.pipe().
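For comparison, here is a minimal hedged sketch of the analogous DataFrame.pipe() pattern (the add_total helper is invented for this example):

import pandas as pd

def add_total(frame):
    # Toy helper: append a per-row total column.
    return frame.assign(total=frame.sum(axis=1))

df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})
df.pipe(add_total)   # equivalent to add_total(df), but chainable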
Renaming names in a MultiIndex#
DataFrame.rename_axis() now supports index and columns arguments and Series.rename_axis() supports index argument (GH 19978).
This change allows a dictionary to be passed so that some of the names of a MultiIndex can be changed.
Example:
In [50]: mi = pd.MultiIndex.from_product([list('AB'), list('CD'), list('EF')], ....: names=['AB', 'CD', 'EF']) ....:
In [51]: df = pd.DataFrame(list(range(len(mi))), index=mi, columns=['N'])
In [52]: df
Out[52]:
N
AB CD EF
A C E 0
F 1
D E 2
F 3
B C E 4
F 5
D E 6
F 7
In [53]: df.rename_axis(index={'CD': 'New'})
Out[53]:
N
AB New EF
A C E 0
F 1
D E 2
F 3
B C E 4
F 5
D E 6
F 7
See the Advanced documentation on renaming for more details.
Other enhancements#
- merge() now directly allows merge between objects of type DataFrame and named Series, without the need to convert the Series object into a DataFrame beforehand (GH 21220)
- ExcelWriter now accepts mode as a keyword argument, enabling append to existing workbooks when using the openpyxl engine (GH 3441)
- FrozenList has gained the .union() and .difference() methods. This functionality greatly simplifies groupby’s that rely on explicitly excluding certain columns. See Splitting an object into groups for more information (GH 15475, GH 15506).
- DataFrame.to_parquet() now accepts index as an argument, allowing the user to override the engine’s default behavior to include or omit the dataframe’s indexes from the resulting Parquet file. (GH 20768)
- read_feather() now accepts columns as an argument, allowing the user to specify which columns should be read. (GH 24025)
- DataFrame.corr() and Series.corr() now accept a callable for generic calculation methods of correlation, e.g. histogram intersection (GH 22684)
- DataFrame.to_string() now accepts decimal as an argument, allowing the user to specify which decimal separator should be used in the output. (GH 23614)
- DataFrame.to_html() now accepts render_links as an argument, allowing the user to generate HTML with links to any URLs that appear in the DataFrame. See the section on writing HTML in the IO docs for example usage. (GH 2679)
- pandas.read_csv() now supports pandas extension types as an argument to dtype, allowing the user to use pandas extension types when reading CSVs. (GH 23228)
- The shift() method now accepts fill_value as an argument, allowing the user to specify a value which will be used instead of NA/NaT in the empty periods (see the sketch after this list). (GH 15486)
- to_datetime() now supports the %Z and %z directive when passed into format (GH 13486)
- Series.mode() and DataFrame.mode() now support the dropna parameter which can be used to specify whether NaN/NaT values should be considered (GH 17534)
- DataFrame.to_csv() and Series.to_csv() now support the compression keyword when a file handle is passed. (GH 21227)
- Index.droplevel() is now implemented also for flat indexes, for compatibility with MultiIndex (GH 21115)
- Series.droplevel() and DataFrame.droplevel() are now implemented (GH 20342)
- Added support for reading from/writing to Google Cloud Storage via the gcsfs library (GH 19454, GH 23094)
- DataFrame.to_gbq() and read_gbq() signature and documentation updated to reflect changes from the pandas-gbq library version 0.8.0. Adds a credentials argument, which enables the use of any kind of google-auth credentials. (GH 21627, GH 22557, GH 23662)
- New method HDFStore.walk() will recursively walk the group hierarchy of an HDF5 file (GH 10932)
- read_html() copies cell data across colspan and rowspan, and it treats all-th table rows as headers if the header kwarg is not given and there is no thead (GH 17054)
- Series.nlargest(), Series.nsmallest(), DataFrame.nlargest(), and DataFrame.nsmallest() now accept the value "all" for the keep argument. This keeps all ties for the nth largest/smallest value (GH 16818)
- IntervalIndex has gained the set_closed() method to change the existing closed value (GH 21670)
- to_csv(), to_csv(), to_json(), and to_json() now support compression='infer' to infer compression based on filename extension (GH 15008). The default compression for to_csv, to_json, and to_pickle methods has been updated to 'infer' (GH 22004).
- DataFrame.to_sql() now supports writing TIMESTAMP WITH TIME ZONE types for supported databases. For databases that don’t support timezones, datetime data will be stored as timezone unaware local timestamps. See the Datetime data types for implications (GH 9086).
- to_timedelta() now supports iso-formatted timedelta strings (GH 21877)
- Series and DataFrame now support Iterable objects in the constructor (GH 2193)
- DatetimeIndex has gained the DatetimeIndex.timetz attribute. This returns the local time with timezone information. (GH 21358)
- round(), ceil(), and floor() for DatetimeIndex and Timestamp now support an ambiguous argument for handling datetimes that are rounded to ambiguous times (GH 18946) and a nonexistent argument for handling datetimes that are rounded to nonexistent times. See Nonexistent times when localizing (GH 22647)
- The result of resample() is now iterable similar to groupby() (GH 15314).
- Series.resample() and DataFrame.resample() have gained the Resampler.quantile() (GH 15023).
- DataFrame.resample() and Series.resample() with a PeriodIndex will now respect the base argument in the same fashion as with a DatetimeIndex. (GH 23882)
- pandas.api.types.is_list_like() has gained a keyword allow_sets which is True by default; if False, all instances of set will not be considered “list-like” anymore (GH 23061)
- Index.to_frame() now supports overriding column name(s) (GH 22580).
- Categorical.from_codes() now can take a dtype parameter as an alternative to passing categories and ordered (GH 24398).
- New attribute __git_version__ will return git commit sha of current build (GH 21295).
- Compatibility with Matplotlib 3.0 (GH 22790).
- Added Interval.overlaps(), arrays.IntervalArray.overlaps(), and IntervalIndex.overlaps() for determining overlaps between interval-like objects (GH 21998)
- read_fwf() now accepts keyword infer_nrows (GH 15138).
- to_parquet() now supports writing a DataFrame as a directory of parquet files partitioned by a subset of the columns when engine = 'pyarrow' (GH 23283)
- Timestamp.tz_localize(), DatetimeIndex.tz_localize(), and Series.tz_localize() have gained the nonexistent argument for alternative handling of nonexistent times. See Nonexistent times when localizing (GH 8917, GH 24466)
- Index.difference(), Index.intersection(), Index.union(), and Index.symmetric_difference() now have an optional sort parameter to control whether the results should be sorted if possible (GH 17839, GH 24471)
- read_excel() now accepts usecols as a list of column names or callable (GH 18273)
- MultiIndex.to_flat_index() has been added to flatten multiple levels into a single-level Index object.
- DataFrame.to_stata() and pandas.io.stata.StataWriter117 can write mixed string columns to Stata strl format (GH 23633)
- DataFrame.between_time() and DataFrame.at_time() have gained the axis parameter (GH 8839)
- DataFrame.to_records() now accepts index_dtypes and column_dtypes parameters to allow different data types in stored column and index records (GH 18146)
- IntervalIndex has gained the is_overlapping attribute to indicate if the IntervalIndex contains any overlapping intervals (GH 23309)
- pandas.DataFrame.to_sql() has gained the method argument to control SQL insertion clause. See the insertion method section in the documentation. (GH 8953)
- DataFrame.corrwith() now supports Spearman’s rank correlation, Kendall’s tau as well as callable correlation methods. (GH 21925)
- DataFrame.to_json(), DataFrame.to_csv(), DataFrame.to_pickle(), and other export methods now support tilde (~) in path argument. (GH 23473)
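As a quick illustration of two of the keyword arguments listed above (a hedged sketch with made-up data, not taken from the release notes):

import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3])
# New fill_value argument: fill the vacated positions with 0 instead of NaN.
s.shift(1, fill_value=0)            # -> 0, 1, 2

m = pd.Series([1.0, 1.0, np.nan, np.nan, np.nan])
# New dropna parameter: control whether missing values are counted.
m.mode(dropna=True)                 # -> 1.0
m.mode(dropna=False)                # -> NaN (three occurrences)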
Backwards incompatible API changes#
pandas 0.24.0 includes a number of API breaking changes.
Increased minimum versions for dependencies#
We have updated our minimum supported versions of dependencies (GH 21242, GH 18742, GH 23774, GH 24767). If installed, we now require:
Additionally we no longer depend on feather-format for feather based storage and replaced it with references to pyarrow (GH 21639 and GH 23053).
os.linesep is used for line_terminator of DataFrame.to_csv#
DataFrame.to_csv() now uses os.linesep rather than '\n' for the default line terminator (GH 20353). This change only affects behavior when running on Windows, where '\r\n' was used as the line terminator even when '\n' was passed in line_terminator.
Previous behavior on Windows:
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"], ...: "string_with_crlf": ["a\r\nbc"]})
In [2]: # When passing file PATH to to_csv, ...: # line_terminator does not work, and csv is saved with '\r\n'. ...: # Also, this converts all '\n's in the data to '\r\n'. ...: data.to_csv("test.csv", index=False, line_terminator='\n')
In [3]: with open("test.csv", mode='rb') as f: ...: print(f.read()) Out[3]: b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'
In [4]: # When passing file OBJECT with newline option to ...: # to_csv, line_terminator works. ...: with open("test2.csv", mode='w', newline='\n') as f: ...: data.to_csv(f, index=False, line_terminator='\n')
In [5]: with open("test2.csv", mode='rb') as f: ...: print(f.read()) Out[5]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
New behavior on Windows:
Passing line_terminator explicitly sets the line terminator to that character.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"], ...: "string_with_crlf": ["a\r\nbc"]})
In [2]: data.to_csv("test.csv", index=False, line_terminator='\n')
In [3]: with open("test.csv", mode='rb') as f: ...: print(f.read()) Out[3]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
On Windows, the value of os.linesep is '\r\n', so if line_terminator is not set, '\r\n' is used for line terminator.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"], ...: "string_with_crlf": ["a\r\nbc"]})
In [2]: data.to_csv("test.csv", index=False)
In [3]: with open("test.csv", mode='rb') as f: ...: print(f.read()) Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
For file objects, specifying newline is not sufficient to set the line terminator. You must pass in the line_terminator explicitly, even in this case.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"], ...: "string_with_crlf": ["a\r\nbc"]})
In [2]: with open("test2.csv", mode='w', newline='\n') as f: ...: data.to_csv(f, index=False)
In [3]: with open("test2.csv", mode='rb') as f: ...: print(f.read()) Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
Proper handling of np.nan in a string data-typed column with the Python engine#
There was a bug in read_excel() and read_csv() with the Python engine, where missing values turned to 'nan' with dtype=str and na_filter=True. Now, these missing values are converted to the missing indicator, np.nan. (GH 20377)
Previous behavior:
In [5]: data = 'a,b,c\n1,,3\n4,5,6' In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True) In [7]: df.loc[0, 'b'] Out[7]: 'nan'
New behavior:
In [54]: data = 'a,b,c\n1,,3\n4,5,6'
In [55]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
In [56]: df.loc[0, 'b'] Out[56]: nan
Notice that we now output np.nan itself instead of a stringified form of it.
Parsing datetime strings with timezone offsets#
Previously, parsing datetime strings with UTC offsets with to_datetime() or DatetimeIndex would automatically convert the datetime to UTC without timezone localization. This is inconsistent with parsing the same datetime string with Timestamp, which would preserve the UTC offset in the tz attribute. Now, to_datetime() preserves the UTC offset in the tz attribute when all the datetime strings have the same UTC offset (GH 17697, GH 11736, GH 22457)
Previous behavior:
In [2]: pd.to_datetime("2015-11-18 15:30:00+05:30") Out[2]: Timestamp('2015-11-18 10:00:00')
In [3]: pd.Timestamp("2015-11-18 15:30:00+05:30") Out[3]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')
Different UTC offsets would automatically convert the datetimes to UTC (without a UTC timezone)
In [4]: pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"]) Out[4]: DatetimeIndex(['2015-11-18 10:00:00', '2015-11-18 10:00:00'], dtype='datetime64[ns]', freq=None)
New behavior:
In [57]: pd.to_datetime("2015-11-18 15:30:00+05:30") Out[57]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
In [58]: pd.Timestamp("2015-11-18 15:30:00+05:30") Out[58]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
Parsing datetime strings with the same UTC offset will preserve the UTC offset in the tz
In [59]: pd.to_datetime(["2015-11-18 15:30:00+05:30"] * 2) Out[59]: DatetimeIndex(['2015-11-18 15:30:00+05:30', '2015-11-18 15:30:00+05:30'], dtype='datetime64[us, UTC+05:30]', freq=None)
Parsing datetime strings with different UTC offsets will now create an Index of datetime.datetime objects with different UTC offsets
In [59]: idx = pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"])
In[60]: idx Out[60]: Index([2015-11-18 15:30:00+05:30, 2015-11-18 16:30:00+06:30], dtype='object')
In[61]: idx[0] Out[61]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
In[62]: idx[1] Out[62]: Timestamp('2015-11-18 16:30:00+0630', tz='UTC+06:30')
Passing utc=True will mimic the previous behavior but will correctly indicate that the dates have been converted to UTC
In [60]: pd.to_datetime(["2015-11-18 15:30:00+05:30", ....: "2015-11-18 16:30:00+06:30"], utc=True) ....: Out[60]: DatetimeIndex(['2015-11-18 10:00:00+00:00', '2015-11-18 10:00:00+00:00'], dtype='datetime64[us, UTC]', freq=None)
Parsing mixed-timezones with read_csv()#
read_csv() no longer silently converts mixed-timezone columns to UTC (GH 24987).
Previous behavior
import io

content = """
... a
... 2000-01-01T00:00:00+05:00
... 2000-01-01T00:00:00+06:00"""

df = pd.read_csv(io.StringIO(content), parse_dates=['a'])

df.a
0   1999-12-31 19:00:00
1   1999-12-31 18:00:00
Name: a, dtype: datetime64[ns]
New behavior
In[64]: import io
In[65]: content = """
...: a
...: 2000-01-01T00:00:00+05:00
...: 2000-01-01T00:00:00+06:00"""
In[66]: df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
In[67]: df.a Out[67]: 0 2000-01-01 00:00:00+05:00 1 2000-01-01 00:00:00+06:00 Name: a, Length: 2, dtype: object
As can be seen, the dtype is object; each value in the column is a string. To convert the strings to an array of datetimes, use the date_parser argument:
In [3]: df = pd.read_csv( ...: io.StringIO(content), ...: parse_dates=['a'], ...: date_parser=lambda col: pd.to_datetime(col, utc=True), ...: )
In [4]: df.a Out[4]: 0 1999-12-31 19:00:00+00:00 1 1999-12-31 18:00:00+00:00 Name: a, dtype: datetime64[ns, UTC]
See Parsing datetime strings with timezone offsets for more.
Time values in dt.end_time and to_timestamp(how='end')#
The time values in Period and PeriodIndex objects are now set to ‘23:59:59.999999999’ when calling Series.dt.end_time, Period.end_time, PeriodIndex.end_time, Period.to_timestamp() with how='end', or PeriodIndex.to_timestamp() with how='end' (GH 17157)
Previous behavior:
In [2]: p = pd.Period('2017-01-01', 'D') In [3]: pi = pd.PeriodIndex([p])
In [4]: pd.Series(pi).dt.end_time[0] Out[4]: Timestamp(2017-01-01 00:00:00)
In [5]: p.end_time Out[5]: Timestamp(2017-01-01 23:59:59.999999999)
New behavior:
Calling Series.dt.end_time will now result in a time of ‘23:59:59.999999999’ as is the case with Period.end_time, for example
In [61]: p = pd.Period('2017-01-01', 'D')
In [62]: pi = pd.PeriodIndex([p])
In [63]: pd.Series(pi).dt.end_time[0] Out[63]: Timestamp('2017-01-01 23:59:59.999999999')
In [64]: p.end_time Out[64]: Timestamp('2017-01-01 23:59:59.999999999')
Series.unique for timezone-aware data#
The return type of Series.unique() for datetime with timezone values has changed from a numpy.ndarray of Timestamp objects to an arrays.DatetimeArray (GH 24024).
In [65]: ser = pd.Series([pd.Timestamp('2000', tz='UTC'), ....: pd.Timestamp('2000', tz='UTC')]) ....:
Previous behavior:
In [3]: ser.unique() Out[3]: array([Timestamp('2000-01-01 00:00:00+0000', tz='UTC')], dtype=object)
New behavior:
In [66]: ser.unique()
Out[66]:
<DatetimeArray>
['2000-01-01 00:00:00+00:00']
Length: 1, dtype: datetime64[us, UTC]
Sparse data structure refactor#
SparseArray, the array backing SparseSeries and the columns in a SparseDataFrame, is now an extension array (GH 21978, GH 19056, GH 22835). To conform to this interface and for consistency with the rest of pandas, some API breaking changes were made:
- SparseArray is no longer a subclass of numpy.ndarray. To convert a SparseArray to a NumPy array, use numpy.asarray().
- SparseArray.dtype and SparseSeries.dtype are now instances of SparseDtype, rather than np.dtype. Access the underlying dtype with SparseDtype.subtype.
- numpy.asarray(sparse_array) now returns a dense array with all the values, not just the non-fill-value values (GH 14167)
- SparseArray.take now matches the API of pandas.api.extensions.ExtensionArray.take() (GH 19506):
  - The default value of allow_fill has changed from False to True.
  - The out and mode parameters are no longer accepted (previously, this raised if they were specified).
  - Passing a scalar for indices is no longer allowed.
- The result of concat() with a mix of sparse and dense Series is a Series with sparse values, rather than a SparseSeries.
- SparseDataFrame.combine and DataFrame.combine_first no longer support combining a sparse column with a dense column while preserving the sparse subtype. The result will be an object-dtype SparseArray.
- Setting SparseArray.fill_value to a fill value with a different dtype is now allowed.
- DataFrame[column] is now a Series with sparse values, rather than a SparseSeries, when slicing a single column with sparse values (GH 23559).
- The result of Series.where() is now a Series with sparse values, like with other extension arrays (GH 24077)
Some new warnings are issued for operations that require or are likely to materialize a large dense array:
- A errors.PerformanceWarning is issued when using fillna with a method, as a dense array is constructed to create the filled array. Filling with a value is the efficient way to fill a sparse array.
- A errors.PerformanceWarning is now issued when concatenating sparse Series with differing fill values. The fill value from the first sparse array continues to be used.
In addition to these API breaking changes, many Performance Improvements and Bug Fixes have been made.
Finally, a Series.sparse accessor was added to provide sparse-specific methods like Series.sparse.from_coo().
In [67]: s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')
In [68]: s.sparse.density Out[68]: 0.6
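A few more of the accessor’s attributes, shown as a hedged sketch (they mirror the underlying SparseArray attributes):

import pandas as pd

s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')

s.sparse.fill_value   # 0, the value that is not physically stored
s.sparse.sp_values    # array([1, 1, 1]), the stored (non-fill) values
s.sparse.npoints      # 3, the number of stored values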
get_dummies() always returns a DataFrame#
Previously, when sparse=True was passed to get_dummies(), the return value could be either a DataFrame or a SparseDataFrame, depending on whether all or just a subset of the columns were dummy-encoded. Now, a DataFrame is always returned (GH 24284).
Previous behavior
The first get_dummies() returns a DataFrame because the column A is not dummy-encoded. When just ["B", "C"] are passed to get_dummies, then all the columns are dummy-encoded, and a SparseDataFrame was returned.
In [2]: df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})
In [3]: type(pd.get_dummies(df, sparse=True)) Out[3]: pandas.DataFrame
In [4]: type(pd.get_dummies(df[['B', 'C']], sparse=True)) Out[4]: pandas.core.sparse.frame.SparseDataFrame
New behavior
Now, the return type is consistently a DataFrame.
In [69]: type(pd.get_dummies(df, sparse=True)) Out[69]: pandas.DataFrame
In [70]: type(pd.get_dummies(df[['B', 'C']], sparse=True)) Out[70]: pandas.DataFrame
Note
There’s no difference in memory usage between a SparseDataFrame and a DataFrame with sparse values. The memory usage will be the same as in the previous version of pandas.
Raise ValueError in DataFrame.to_dict(orient='index')#
DataFrame.to_dict() now raises a ValueError when used with orient='index' and a non-unique index, instead of losing data (GH 22801)
In [71]: df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])
In [72]: df Out[72]: a b A 1 0.50 A 2 0.75
In [73]: df.to_dict(orient='index')
ValueError Traceback (most recent call last) Cell In[73], line 1 ----> 1 df.to_dict(orient='index')
File ~/work/pandas/pandas/pandas/core/frame.py:2229, in DataFrame.to_dict(self, orient, into, index) 2127 """ 2128 Convert the DataFrame to a dictionary. 2129 (...) 2225 defaultdict(<class 'list'>, {'col1': 2, 'col2': 0.75})] 2226 """ 2227 from pandas.core.methods.to_dict import to_dict -> 2229 return to_dict(self, orient, into=into, index=index)
File ~/work/pandas/pandas/pandas/core/methods/to_dict.py:259, in to_dict(df, orient, into, index) 257 elif orient == "index": 258 if not df.index.is_unique: --> 259 raise ValueError("DataFrame index must be unique for orient='index'.") 260 columns = df.columns.tolist() 261 if are_all_object_dtype_cols:
ValueError: DataFrame index must be unique for orient='index'.
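A hedged workaround sketch (not part of the release notes): either switch to an orientation that tolerates duplicate labels, or make the index unique before converting.

import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])

# Option 1: a list-of-rows orientation keeps every row.
records = df.to_dict(orient='records')

# Option 2: deduplicate the index first, then orient='index' works again.
by_position = df.reset_index(drop=True).to_dict(orient='index')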
Tick DateOffset normalize restrictions#
Creating a Tick object (Day, Hour, Minute, Second, Milli, Micro, Nano) with normalize=True is no longer supported. This prevents unexpected behavior where addition could fail to be monotone or associative. (GH 21427)
Previous behavior:
In [2]: ts = pd.Timestamp('2018-06-11 18:01:14')
In [3]: ts Out[3]: Timestamp('2018-06-11 18:01:14')
In [4]: tic = pd.offsets.Hour(n=2, normalize=True) ...:
In [5]: tic Out[5]: <2 * Hours>
In [6]: ts + tic Out[6]: Timestamp('2018-06-11 00:00:00')
In [7]: ts + tic + tic + tic == ts + (tic + tic + tic) Out[7]: False
New behavior:
In [74]: ts = pd.Timestamp('2018-06-11 18:01:14')
In [75]: tic = pd.offsets.Hour(n=2)
In [76]: ts + tic + tic + tic == ts + (tic + tic + tic) Out[76]: True
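If the old normalize-on-addition behavior is still wanted, one hedged alternative (not from the release notes) is to normalize explicitly after adding the offset:

import pandas as pd

ts = pd.Timestamp('2018-06-11 18:01:14')

# Add the offset first, then truncate to midnight explicitly.
(ts + pd.offsets.Hour(2)).normalize()   # Timestamp('2018-06-11 00:00:00')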
Period subtraction#
Subtraction of a Period from another Period will give a DateOffset instead of an integer (GH 21314)
Previous behavior:
In [2]: june = pd.Period('June 2018')
In [3]: april = pd.Period('April 2018')
In [4]: june - april Out [4]: 2
New behavior:
In [77]: june = pd.Period('June 2018')
In [78]: april = pd.Period('April 2018')
In [79]: june - april Out[79]: <2 * MonthEnds>
Similarly, subtraction of a Period from a PeriodIndex will now return an Index of DateOffset objects instead of an Int64Index
Previous behavior:
In [2]: pi = pd.period_range('June 2018', freq='M', periods=3)
In [3]: pi - pi[0] Out[3]: Int64Index([0, 1, 2], dtype='int64')
New behavior:
In [80]: pi = pd.period_range('June 2018', freq='M', periods=3)
In [81]: pi - pi[0]
Out[81]: Index([<0 * MonthEnds>, <MonthEnd>, <2 * MonthEnds>], dtype='object')
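If the old integer result is needed, a hedged sketch (not from the release notes) is to read the n attribute of the returned offset:

import pandas as pd

june = pd.Period('June 2018')
april = pd.Period('April 2018')

# The subtraction now yields a DateOffset; its n attribute holds the
# integer number of periods, matching the pre-0.24 result.
(june - april).n   # 2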
Addition/subtraction of NaN from DataFrame#
Adding or subtracting NaN from a DataFrame column with timedelta64[ns] dtype will now raise a TypeError instead of returning all-NaT. This is for compatibility with TimedeltaIndex and Series behavior (GH 22163)
In [82]: df = pd.DataFrame([pd.Timedelta(days=1)])
In [83]: df Out[83]: 0 0 1 days
Previous behavior:
In [4]: df = pd.DataFrame([pd.Timedelta(days=1)])
In [5]: df - np.nan Out[5]: 0 0 NaT
New behavior:
In [2]: df - np.nan ... TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'
DataFrame comparison operations broadcasting changes#
Previously, the broadcasting behavior of DataFrame comparison operations (==, !=, …) was inconsistent with the behavior of arithmetic operations (+, -, …). The behavior of the comparison operations has been changed to match the arithmetic operations in these cases. (GH 22880)
The affected cases are:
- operating against a 2-dimensional np.ndarray with either 1 row or 1 column will now broadcast the same way a np.ndarray would (GH 23000).
- a list or tuple with length matching the number of rows in the DataFrame will now raise ValueError instead of operating column-by-column (GH 22880).
- a list or tuple with length matching the number of columns in the DataFrame will now operate row-by-row instead of raising ValueError (GH 22880).
In [84]: arr = np.arange(6).reshape(3, 2)
In [85]: df = pd.DataFrame(arr)
In [86]: df Out[86]: 0 1 0 0 1 1 2 3 2 4 5
Previous behavior:
In [5]: df == arr[[0], :] ...: # comparison previously broadcast where arithmetic would raise Out[5]: 0 1 0 True True 1 False False 2 False False In [6]: df + arr[[0], :] ... ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
In [7]: df == (1, 2) ...: # length matches number of columns; ...: # comparison previously raised where arithmetic would broadcast ... ValueError: Invalid broadcasting comparison [(1, 2)] with block values In [8]: df + (1, 2) Out[8]: 0 1 0 1 3 1 3 5 2 5 7
In [9]: df == (1, 2, 3) ...: # length matches number of rows ...: # comparison previously broadcast where arithmetic would raise Out[9]: 0 1 0 False True 1 True False 2 False False In [10]: df + (1, 2, 3) ... ValueError: Unable to coerce to Series, length must be 2: given 3
New behavior:
Comparison operations and arithmetic operations both broadcast.
In [87]: df == arr[[0], :] Out[87]: 0 1 0 True True 1 False False 2 False False
In [88]: df + arr[[0], :] Out[88]: 0 1 0 0 2 1 2 4 2 4 6
Comparison operations and arithmetic operations both broadcast.
In [89]: df == (1, 2) Out[89]: 0 1 0 False False 1 False False 2 False False
In [90]: df + (1, 2) Out[90]: 0 1 0 1 3 1 3 5 2 5 7
Comparison operations and arithmetic operations both raise ValueError.
In [6]: df == (1, 2, 3) ... ValueError: Unable to coerce to Series, length must be 2: given 3
In [7]: df + (1, 2, 3) ... ValueError: Unable to coerce to Series, length must be 2: given 3
DataFrame arithmetic operations broadcasting changes#
DataFrame arithmetic operations, when operating with 2-dimensional np.ndarray objects, now broadcast in the same way as np.ndarray broadcasting. (GH 23000)
In [91]: arr = np.arange(6).reshape(3, 2)
In [92]: df = pd.DataFrame(arr)
In [93]: df Out[93]: 0 1 0 0 1 1 2 3 2 4 5
Previous behavior:
In [5]: df + arr[[0], :] # 1 row, 2 columns ... ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2) In [6]: df + arr[:, [1]] # 1 column, 3 rows ... ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (3, 1)
New behavior:
In [94]: df + arr[[0], :] # 1 row, 2 columns Out[94]: 0 1 0 0 2 1 2 4 2 4 6
In [95]: df + arr[:, [1]] # 1 column, 3 rows Out[95]: 0 1 0 1 2 1 5 6 2 9 10
Series and Index data-dtype incompatibilities#
Series and Index constructors now raise when the data is incompatible with a passed dtype= (GH 15832)
Previous behavior:
In [4]: pd.Series([-1], dtype="uint64") Out [4]: 0 18446744073709551615 dtype: uint64
New behavior:
In [4]: pd.Series([-1], dtype="uint64") Out [4]: ... OverflowError: Trying to coerce negative values to unsigned integers
Concatenation changes#
Calling pandas.concat() on a Categorical of ints with NA values now causes them to be processed as objects when concatenating with anything other than another Categorical of ints (GH 19214)
In [96]: s = pd.Series([0, 1, np.nan])
In [97]: c = pd.Series([0, 1, np.nan], dtype="category")
Previous behavior
In [3]: pd.concat([s, c]) Out[3]: 0 0.0 1 1.0 2 NaN 0 0.0 1 1.0 2 NaN dtype: float64
New behavior
In [98]: pd.concat([s, c]) Out[98]: 0 0.0 1 1.0 2 NaN 0 0.0 1 1.0 2 NaN dtype: float64
Datetimelike API changes#
- For DatetimeIndex and TimedeltaIndex with a non-None freq attribute, addition or subtraction of an integer-dtyped array or Index will return an object of the same class (GH 19959)
- DateOffset objects are now immutable. Attempting to alter one of these will now raise AttributeError (GH 21341)
- PeriodIndex subtraction of another PeriodIndex will now return an object-dtype Index of DateOffset objects instead of raising a TypeError (GH 20049)
- cut() and qcut() now return DatetimeIndex or TimedeltaIndex bins when the input is datetime or timedelta dtype respectively and retbins=True (GH 19891)
- DatetimeIndex.to_period() and Timestamp.to_period() will issue a warning when timezone information will be lost (GH 21333)
- PeriodIndex.tz_convert() and PeriodIndex.tz_localize() have been removed (GH 21781)
Other API changes#
- A newly constructed empty DataFrame with integer as the dtype will now only be cast to float64 if index is specified (GH 22858)
- Series.str.cat() will now raise if others is a set (GH 23009)
- Passing scalar values to DatetimeIndex or TimedeltaIndex will now raise TypeError instead of ValueError (GH 23539)
- max_rows and max_cols parameters removed from HTMLFormatter since truncation is handled by DataFrameFormatter (GH 23818)
- read_csv() will now raise a ValueError if a column with missing values is declared as having dtype bool (GH 20591)
- The column order of the resultant DataFrame from MultiIndex.to_frame() is now guaranteed to match the MultiIndex.names order. (GH 22420)
- Incorrectly passing a DatetimeIndex to MultiIndex.from_tuples(), rather than a sequence of tuples, now raises a TypeError rather than a ValueError (GH 24024)
- pd.offsets.generate_range() argument time_rule has been removed; use offset instead (GH 24157)
- In 0.23.x, pandas would raise a ValueError on a merge of a numeric column (e.g. int dtyped column) and an object dtyped column (GH 9780). We have re-enabled the ability to merge object and other dtypes; pandas will still raise on a merge between a numeric and an object dtyped column that is composed only of strings (GH 21681)
- Accessing a level of a MultiIndex with a duplicate name (e.g. in get_level_values()) now raises a ValueError instead of a KeyError (GH 21678).
- Invalid construction of IntervalDtype will now always raise a TypeError rather than a ValueError if the subdtype is invalid (GH 21185)
- Trying to reindex a DataFrame with a non-unique MultiIndex now raises a ValueError instead of an Exception (GH 21770)
- Index subtraction will attempt to operate element-wise instead of raising TypeError (GH 19369)
- pandas.io.formats.style.Styler supports a number-format property when using to_excel() (GH 22015)
- DataFrame.corr() and Series.corr() now raise a ValueError along with a helpful error message instead of a KeyError when supplied with an invalid method (GH 22298)
- shift() will now always return a copy, instead of the previous behaviour of returning self when shifting by 0 (GH 22397)
- DataFrame.set_index() now gives a better (and less frequent) KeyError, raises a ValueError for incorrect types, and will not fail on duplicate column names with drop=True. (GH 22484)
- Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (GH 22784)
- DateOffset attribute _cacheable and method _should_cache have been removed (GH 23118)
- Series.searchsorted(), when supplied a scalar value to search for, now returns a scalar instead of an array (GH 23801).
- Categorical.searchsorted(), when supplied a scalar value to search for, now returns a scalar instead of an array (GH 23466).
- Categorical.searchsorted() now raises a KeyError rather than a ValueError, if a searched for key is not found in its categories (GH 23466).
- Index.hasnans() and Series.hasnans() now always return a python boolean. Previously, a python or a numpy boolean could be returned, depending on circumstances (GH 23294).
- The order of the arguments of DataFrame.to_html() and DataFrame.to_string() is rearranged to be consistent with each other. (GH 23614)
- CategoricalIndex.reindex() now raises a ValueError if the target index is non-unique and not equal to the current index. It previously only raised if the target index was not of a categorical dtype (GH 23963).
- Series.to_list() and Index.to_list() are now aliases of Series.tolist and Index.tolist, respectively (GH 8826)
- The result of SparseSeries.unstack is now a DataFrame with sparse values, rather than a SparseDataFrame (GH 24372).
- DatetimeIndex and TimedeltaIndex no longer ignore the dtype precision. Passing a non-nanosecond resolution dtype will raise a ValueError (GH 24753)
Extension type changes#
Equality and hashability
pandas now requires that extension dtypes be hashable (i.e. the respective ExtensionDtype objects; hashability is not a requirement for the values of the corresponding ExtensionArray). The base class implements a default __eq__ and __hash__. If you have a parametrized dtype, you should update the ExtensionDtype._metadata tuple to match the signature of your __init__ method. See pandas.api.extensions.ExtensionDtype for more (GH 22476).
New and changed methods
- dropna() has been added (GH 21185)
- repeat() has been added (GH 24349)
- The ExtensionArray constructor, _from_sequence now take the keyword arg copy=False (GH 21185)
- pandas.api.extensions.ExtensionArray.shift() added as part of the basic ExtensionArray interface (GH 22387).
- searchsorted() has been added (GH 24350)
- Support for reduction operations such as sum, mean via opt-in base class method override (GH 22762)
- ExtensionArray.isna() is allowed to return an ExtensionArray (GH 22325).
Dtype changes
- ExtensionDtype has gained the ability to instantiate from string dtypes, e.g. decimal would instantiate a registered DecimalDtype; furthermore the ExtensionDtype has gained the method construct_array_type (GH 21185)
- Added ExtensionDtype._is_numeric for controlling whether an extension dtype is considered numeric (GH 22290).
- Added pandas.api.types.register_extension_dtype() to register an extension type with pandas (GH 22664)
- Updated the .type attribute for PeriodDtype, DatetimeTZDtype, and IntervalDtype to be instances of the dtype (Period, Timestamp, and Interval respectively) (GH 22938)
Operator support
A Series based on an ExtensionArray now supports arithmetic and comparison operators (GH 19577). There are two approaches for providing operator support for an ExtensionArray:
- Define each of the operators on your ExtensionArray subclass.
- Use an operator implementation from pandas that depends on operators that are already defined on the underlying elements (scalars) of the ExtensionArray.
See the ExtensionArray Operator Support documentation section for details on both ways of adding operator support.
Other changes
- A default repr for pandas.api.extensions.ExtensionArray is now provided (GH 23601).
- ExtensionArray._formatting_values() is deprecated. Use ExtensionArray._formatter instead. (GH 23601)
- An ExtensionArray with a boolean dtype now works correctly as a boolean indexer. pandas.api.types.is_bool_dtype() now properly considers them boolean (GH 22326)
Bug fixes
- Bug in Series.get() for Series using ExtensionArray and integer index (GH 21257)
- shift() now dispatches to ExtensionArray.shift() (GH 22386)
- Series.combine() works correctly with ExtensionArray inside of Series (GH 20825)
- Series.combine() with scalar argument now works for any function type (GH 21248)
- Series.astype() and DataFrame.astype() now dispatch to ExtensionArray.astype() (GH 21185).
- Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (GH 22784)
- Bug when concatenating multiple Series with different extension dtypes not casting to object dtype (GH 22994)
- Series backed by an ExtensionArray now work with util.hash_pandas_object() (GH 23066)
- DataFrame.stack() no longer converts to object dtype for DataFrames where each column has the same extension dtype. The output Series will have the same dtype as the columns (GH 23077).
- Series.unstack() and DataFrame.unstack() no longer convert extension arrays to object-dtype ndarrays. Each column in the output DataFrame will now have the same dtype as the input (GH 23077).
- Bug when grouping Dataframe.groupby() and aggregating on ExtensionArray it was not returning the actual ExtensionArray dtype (GH 23227).
- Bug in pandas.merge() when merging on an extension array-backed column (GH 23020).
Deprecations#
- MultiIndex.labels has been deprecated and replaced by MultiIndex.codes. The functionality is unchanged. The new name better reflects the natures of these codes and makes the MultiIndex API more similar to the API for CategoricalIndex (GH 13443). As a consequence, other uses of the name labels in MultiIndex have also been deprecated and replaced with codes:
  - You should initialize a MultiIndex instance using a parameter named codes rather than labels.
  - MultiIndex.set_labels has been deprecated in favor of MultiIndex.set_codes().
  - For method MultiIndex.copy(), the labels parameter has been deprecated and replaced by a codes parameter.
- DataFrame.to_stata(), read_stata(), StataReader and StataWriter have deprecated the encoding argument. The encoding of a Stata dta file is determined by the file type and cannot be changed (GH 21244)
- MultiIndex.to_hierarchical() is deprecated and will be removed in a future version (GH 21613)
- Series.ptp() is deprecated. Use numpy.ptp instead (GH 21614)
- Series.compress() is deprecated. Use Series[condition] instead (GH 18262)
- The signature of Series.to_csv() has been uniformed to that of DataFrame.to_csv(): the name of the first argument is now path_or_buf, the order of subsequent arguments has changed, and the header argument now defaults to True. (GH 19715)
- Categorical.from_codes() has deprecated providing float values for the codes argument. (GH 21767)
- pandas.read_table() is deprecated. Instead, use read_csv() passing sep='\t' if necessary (see the sketch after this list). This deprecation has been removed in 0.25.0. (GH 21948)
- Series.str.cat() has deprecated using arbitrary list-likes within list-likes. A list-like container may still contain many Series, Index or 1-dimensional np.ndarray, or alternatively, only scalar values. (GH 21950)
- FrozenNDArray.searchsorted() has deprecated the v parameter in favor of value (GH 14645)
- DatetimeIndex.shift() and PeriodIndex.shift() now accept the periods argument instead of n for consistency with Index.shift() and Series.shift(). Using n throws a deprecation warning (GH 22458, GH 22912)
- The fastpath keyword of the different Index constructors is deprecated (GH 23110).
- Timestamp.tz_localize(), DatetimeIndex.tz_localize(), and Series.tz_localize() have deprecated the errors argument in favor of the nonexistent argument (GH 8917)
- The class FrozenNDArray has been deprecated. When unpickling, FrozenNDArray will be unpickled to np.ndarray once this class is removed (GH 9031)
- The methods DataFrame.update() and Panel.update() have deprecated the raise_conflict=False|True keyword in favor of errors='ignore'|'raise' (GH 23585)
- The methods Series.str.partition() and Series.str.rpartition() have deprecated the pat keyword in favor of sep (GH 22676)
- Deprecated the nthreads keyword of pandas.read_feather() in favor of use_threads to reflect the changes in pyarrow>=0.11.0. (GH 23053)
- pandas.read_excel() has deprecated accepting usecols as an integer. Please pass in a list of ints from 0 to usecols inclusive instead (GH 23527)
- Constructing a TimedeltaIndex from data with datetime64-dtyped data is deprecated, will raise TypeError in a future version (GH 23539)
- Constructing a DatetimeIndex from data with timedelta64-dtyped data is deprecated, will raise TypeError in a future version (GH 23675)
- The keep_tz=False option (the default) of the keep_tz keyword of DatetimeIndex.to_series() is deprecated (GH 17832).
- Timezone converting a tz-aware datetime.datetime or Timestamp with Timestamp and the tz argument is now deprecated. Instead, use Timestamp.tz_convert() (GH 23579)
- pandas.api.types.is_period() is deprecated in favor of pandas.api.types.is_period_dtype (GH 23917)
- pandas.api.types.is_datetimetz() is deprecated in favor of pandas.api.types.is_datetime64tz (GH 23917)
- Creating a TimedeltaIndex, DatetimeIndex, or PeriodIndex by passing range arguments start, end, and periods is deprecated in favor of timedelta_range(), date_range(), or period_range() (GH 23919)
- Passing a string alias like 'datetime64[ns, UTC]' as the unit parameter to DatetimeTZDtype is deprecated. Use DatetimeTZDtype.construct_from_string instead (GH 23990).
- The skipna parameter of infer_dtype() will switch to True by default in a future version of pandas (GH 17066, GH 24050)
- In Series.where() with Categorical data, providing an other that is not present in the categories is deprecated. Convert the categorical to a different dtype or add the other to the categories first (GH 24077).
- Series.clip_lower(), Series.clip_upper(), DataFrame.clip_lower() and DataFrame.clip_upper() are deprecated and will be removed in a future version. Use Series.clip(lower=threshold), Series.clip(upper=threshold) and the equivalent DataFrame methods (GH 24203)
- Series.nonzero() is deprecated and will be removed in a future version (GH 18262)
- Passing an integer to Series.fillna() and DataFrame.fillna() with timedelta64[ns] dtypes is deprecated, will raise TypeError in a future version. Use obj.fillna(pd.Timedelta(...)) instead (GH 24694)
- Series.cat.categorical, Series.cat.name and Series.cat.index have been deprecated. Use the attributes on Series.cat or Series directly. (GH 24751).
- Passing a dtype without a precision like np.dtype('datetime64') or timedelta64 to Index, DatetimeIndex and TimedeltaIndex is now deprecated. Use the nanosecond-precision dtype instead (GH 24753).
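For the read_table() deprecation above, a hedged migration sketch (with made-up inline data):

import pandas as pd
from io import StringIO

data = "a\tb\n1\t2\n3\t4"

# read_table(...) becomes read_csv(..., sep='\t')
df = pd.read_csv(StringIO(data), sep='\t')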
Integer addition/subtraction with datetimes and timedeltas is deprecated#
In the past, users could, in some cases, add or subtract integers or integer-dtype arrays from Timestamp, DatetimeIndex and TimedeltaIndex.
This usage is now deprecated. Instead add or subtract integer multiples of the object’s freq attribute (GH 21939, GH 23878).
Previous behavior:
In [5]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour()) In [6]: ts + 2 Out[6]: Timestamp('1994-05-06 14:15:16', freq='H')
In [7]: tdi = pd.timedelta_range('1D', periods=2) In [8]: tdi - np.array([2, 1]) Out[8]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)
In [9]: dti = pd.date_range('2001-01-01', periods=2, freq='7D') In [10]: dti + pd.Index([1, 2]) Out[10]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)
New behavior:
In [108]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
In[109]: ts + 2 * ts.freq Out[109]: Timestamp('1994-05-06 14:15:16', freq='H')
In [110]: tdi = pd.timedelta_range('1D', periods=2)
In [111]: tdi - np.array([2 * tdi.freq, 1 * tdi.freq]) Out[111]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)
In [112]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')
In [113]: dti + pd.Index([1 * dti.freq, 2 * dti.freq]) Out[113]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)
Passing integer data and a timezone to DatetimeIndex#
The behavior of DatetimeIndex when passed integer data and a timezone is changing in a future version of pandas. Previously, these were interpreted as wall times in the desired timezone. In the future, these will be interpreted as wall times in UTC, which are then converted to the desired timezone (GH 24559).
The default behavior remains the same, but issues a warning:
In [3]: pd.DatetimeIndex([946684800000000000], tz="US/Central")
/bin/ipython:1: FutureWarning:
    Passing integer-dtype data and a timezone to DatetimeIndex. Integer values
    will be interpreted differently in a future version of pandas. Previously, these were
    viewed as datetime64[ns] values representing the wall time in the specified timezone.
    In the future, these will be viewed as datetime64[ns] values representing the wall time
    in UTC. This is similar to a nanosecond-precision UNIX epoch.
    To accept the future behavior, use

        pd.to_datetime(integer_data, utc=True).tz_convert(tz)

    To keep the previous behavior, use

        pd.to_datetime(integer_data).tz_localize(tz)

Out[3]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
As the warning message explains, opt in to the future behavior by specifying that the integer values are UTC, and then converting to the final timezone:
In [99]: pd.to_datetime([946684800000000000], utc=True).tz_convert('US/Central') Out[99]: DatetimeIndex(['1999-12-31 18:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
The old behavior can be retained by localizing directly to the final timezone:
In [100]: pd.to_datetime([946684800000000000]).tz_localize('US/Central') Out[100]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
Converting timezone-aware Series and Index to NumPy arrays#
The conversion from a Series or Index with timezone-aware datetime data will change to preserve timezones by default (GH 23569).
NumPy doesn’t have a dedicated dtype for timezone-aware datetimes. In the past, converting a Series or DatetimeIndex with timezone-aware datetimes would convert to a NumPy array by
- converting the tz-aware data to UTC
- dropping the timezone-info
- returning a numpy.ndarray with datetime64[ns] dtype
Future versions of pandas will preserve the timezone information by returning an object-dtype NumPy array where each value is a Timestamp with the correct timezone attached
In [101]: ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
In [102]: ser Out[102]: 0 2000-01-01 00:00:00+01:00 1 2000-01-02 00:00:00+01:00 dtype: datetime64[us, CET]
The default behavior remains the same, but issues a warning
In [8]: np.asarray(ser)
/bin/ipython:1: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive
      ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray
      with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'.

        To accept the future behavior, pass 'dtype=object'.
        To keep the old behavior, pass 'dtype="datetime64[ns]"'.

Out[8]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')
The previous or future behavior can be obtained, without any warnings, by specifying the dtype
Previous behavior
In [103]: np.asarray(ser, dtype='datetime64[ns]') Out[103]: array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'], dtype='datetime64[ns]')
Future behavior
In [104]: np.asarray(ser, dtype=object) Out[104]: array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'), Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object)
Or by using Series.to_numpy()
In [105]: ser.to_numpy() Out[105]: array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'), Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object)
In [106]: ser.to_numpy(dtype="datetime64[ns]") Out[106]: array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'], dtype='datetime64[ns]')
All the above applies to a DatetimeIndex with tz-aware values as well.
Removal of prior version deprecations/changes#
- The LongPanel and WidePanel classes have been removed (GH 10892)
- Series.repeat() has renamed the reps argument to repeats (GH 14645); a short sketch of the renamed keywords follows this list
- Several private functions were removed from the (non-public) module pandas.core.common (GH 22001)
- Removal of the previously deprecated module pandas.core.datetools (GH 14105, GH 14094)
- Strings passed into DataFrame.groupby() that refer to both column and index levels will raise a ValueError (GH 14432)
- Index.repeat() and MultiIndex.repeat() have renamed the n argument to repeats (GH 14645)
- The Series constructor and .astype method will now raise a ValueError if timestamp dtypes are passed in without a unit (e.g. np.datetime64) for the dtype parameter (GH 15987)
- Removal of the previously deprecated as_indexer keyword completely from str.match() (GH 22356, GH 6581)
- The modules pandas.types, pandas.computation, and pandas.util.decorators have been removed (GH 16157, GH 16250)
- Removed the pandas.formats.style shim for pandas.io.formats.style.Styler (GH 16059)
- pandas.pnow, pandas.match, pandas.groupby, pd.get_store, pd.Expr, and pd.Term have been removed (GH 15538, GH 15940)
- Categorical.searchsorted() and Series.searchsorted() have renamed the v argument to value (GH 14645)
- pandas.parser, pandas.lib, and pandas.tslib have been removed (GH 15537)
- Index.searchsorted() has renamed the key argument to value (GH 14645)
- DataFrame.consolidate and Series.consolidate have been removed (GH 15501)
- Removal of the previously deprecated module pandas.json (GH 19944)
- The module pandas.tools has been removed (GH 15358, GH 16005)
- SparseArray.get_values() and SparseArray.to_dense() have dropped the fill parameter (GH 14686)
- DataFrame.sortlevel and Series.sortlevel have been removed (GH 15099)
- SparseSeries.to_dense() has dropped the sparse_only parameter (GH 14686)
- DataFrame.astype() and Series.astype() have renamed the raise_on_error argument to errors (GH 14967)
- is_sequence, is_any_int_dtype, and is_floating_dtype have been removed from pandas.api.types (GH 16163, GH 16189)
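Several of the removals above are keyword renames. A minimal sketch of the new keyword names (the data here is invented purely for illustration):

import pandas as pd

s = pd.Series([1, 2, 3])

# Series.repeat / Index.repeat: the keyword is now `repeats` (was `reps` / `n`).
repeated = s.repeat(repeats=2)

# Series.searchsorted: the keyword is now `value` (was `v`).
position = s.searchsorted(value=2)

# Series.astype: the error-handling keyword is now `errors` (was `raise_on_error`).
as_float = s.astype('float64', errors='raise')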
Performance improvements#
- Slicing Series and DataFrames with a monotonically increasing CategoricalIndex is now very fast and has speed comparable to slicing with an Int64Index. The speed increase applies both when indexing by label (using .loc) and by position (.iloc) (GH 20395)
- Slicing a monotonically increasing CategoricalIndex itself (i.e. ci[1000:2000]) shows similar speed improvements as above (GH 21659)
- Improved performance of CategoricalIndex.equals() when comparing to another CategoricalIndex (GH 24023)
- Improved performance of Series.describe() in case of numeric dtypes (GH 21274)
- Improved performance of GroupBy.rank() when dealing with tied rankings (GH 21237)
- Improved performance of DataFrame.set_index() with columns consisting of Period objects (GH 21582, GH 21606)
- Improved performance of Series.at() and Index.get_value() for ExtensionArray values (e.g. Categorical) (GH 24204)
- Improved performance of membership checks in Categorical and CategoricalIndex (i.e. x in cat-style checks are much faster). CategoricalIndex.contains() is likewise much faster (GH 21369, GH 21508)
- Improved performance of HDFStore.groups() and dependent functions like HDFStore.keys() (i.e. x in store checks are much faster) (GH 21372)
- Improved the performance of pandas.get_dummies() with sparse=True (GH 21997)
- Improved performance of IndexEngine.get_indexer_non_unique() for sorted, non-unique indexes (GH 9466)
- Improved performance of PeriodIndex.unique() (GH 23083)
- Improved performance of concat() for Series objects (GH 23404)
- Improved performance of DatetimeIndex.normalize() and Timestamp.normalize() for timezone naive or UTC datetimes (GH 23634)
- Improved performance of DatetimeIndex.tz_localize() and various DatetimeIndex attributes with dateutil UTC timezone (GH 23772)
- Fixed a performance regression of read_csv() on Windows with Python 3.7 (GH 23516)
- Improved performance of Categorical constructor for Series objects (GH 23814)
- Improved performance of where() for Categorical data (GH 24077)
- Improved performance of iterating over a Series. Using DataFrame.itertuples() now creates iterators without internally allocating lists of all elements (GH 20783); see the sketch after this list
- Improved performance of Period constructor, additionally benefitting PeriodArray and PeriodIndex creation (GH 24084, GH 24118)
- Improved performance of tz-aware DatetimeArray binary operations (GH 24491)
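For the itertuples() item above, a small sketch (with made-up data) of the now lazier iteration:

import pandas as pd

df = pd.DataFrame({'a': range(3), 'b': list('xyz')})

# itertuples() yields one namedtuple per row without materialising
# intermediate lists of all elements.
for row in df.itertuples(index=True, name='Row'):
    print(row.Index, row.a, row.b)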
Bug fixes#
Categorical#
- Bug in Categorical.from_codes() where NaN values in codes were silently converted to 0 (GH 21767). In the future this will raise a ValueError. Also changes the behavior of .from_codes([1.1, 2.0]).
- Bug in Categorical.sort_values() where NaN values were always positioned in front regardless of the na_position value (GH 22556).
- Bug when indexing with a boolean-valued Categorical. Now a boolean-valued Categorical is treated as a boolean mask (GH 22665)
- Constructing a CategoricalIndex with empty values and boolean categories was raising a ValueError after a change to dtype coercion (GH 22702).
- Bug in Categorical.take() with a user-provided fill_value not encoding the fill_value, which could result in a ValueError, incorrect results, or a segmentation fault (GH 23296).
- In Series.unstack(), specifying a fill_value not present in the categories now raises a TypeError rather than ignoring the fill_value (GH 23284)
- Bug when resampling with DataFrame.resample() and aggregating on categorical data, the categorical dtype was getting lost (GH 23227)
- Bug in many methods of the .str accessor, which always failed on calling the CategoricalIndex.str constructor (GH 23555, GH 23556)
- Bug in Series.where() losing the categorical dtype for categorical data (GH 24077)
- Bug in Categorical.apply() where NaN values could be handled unpredictably. They now remain unchanged (GH 24241)
- Bug in Categorical comparison methods incorrectly raising ValueError when operating against a DataFrame (GH 24630)
- Bug in Categorical.set_categories() where setting fewer new categories with rename=True caused a segmentation fault (GH 24675)
Datetimelike#
- Fixed bug where two DateOffset objects with different normalize attributes could evaluate as equal (GH 21404)
- Fixed bug where Timestamp.resolution() incorrectly returned a 1-microsecond timedelta instead of a 1-nanosecond Timedelta (GH 21336, GH 21365)
- Bug in to_datetime() that did not consistently return an Index when box=True was specified (GH 21864)
- Bug in DatetimeIndex comparisons where string comparisons incorrectly raised TypeError (GH 22074)
- Bug in DatetimeIndex comparisons when comparing against timedelta64[ns] dtyped arrays; in some cases TypeError was incorrectly raised, in others it incorrectly failed to raise (GH 22074)
- Bug in DatetimeIndex comparisons when comparing against object-dtyped arrays (GH 22074)
- Bug in DataFrame with datetime64[ns] dtype addition and subtraction with Timedelta-like objects (GH 22005, GH 22163)
- Bug in DataFrame with datetime64[ns] dtype addition and subtraction with DateOffset objects returning an object dtype instead of datetime64[ns] dtype (GH 21610, GH 22163)
- Bug in DataFrame with datetime64[ns] dtype comparing against NaT incorrectly (GH 22242, GH 22163)
- Bug in DataFrame with datetime64[ns] dtype subtracting Timestamp-like object incorrectly returned datetime64[ns] dtype instead of timedelta64[ns] dtype (GH 8554, GH 22163)
- Bug in DataFrame with datetime64[ns] dtype subtracting np.datetime64 object with non-nanosecond unit failing to convert to nanoseconds (GH 18874, GH 22163)
- Bug in DataFrame comparisons against Timestamp-like objects failing to raise TypeError for inequality checks with mismatched types (GH 8932, GH 22163)
- Bug in DataFrame with mixed dtypes including datetime64[ns] incorrectly raising TypeError on equality comparisons (GH 13128, GH 22163)
- Bug in DataFrame.values returning a DatetimeIndex for a single-column DataFrame with tz-aware datetime values. Now a 2-D numpy.ndarray of Timestamp objects is returned (GH 24024)
- Bug in DataFrame.eq() comparison against NaT incorrectly returning True or NaN (GH 15697, GH 22163)
- Bug in DatetimeIndex subtraction that incorrectly failed to raise OverflowError (GH 22492, GH 22508)
- Bug in DatetimeIndex incorrectly allowing indexing with Timedelta object (GH 20464)
- Bug in DatetimeIndex where frequency was being set if original frequency was None (GH 22150)
- Bug in the rounding methods of DatetimeIndex (round(), ceil(), floor()) and Timestamp (round(), ceil(), floor()) that could give rise to loss of precision (GH 22591)
- Bug in to_datetime() with an Index argument that would drop the name from the result (GH 21697)
- Bug in PeriodIndex where adding or subtracting a timedelta or Tick object produced incorrect results (GH 22988)
- Bug in the Series repr with period-dtype data missing a space before the data (GH 23601)
- Bug in date_range() when decrementing a start date to a past end date by a negative frequency (GH 23270)
- Bug in Series.min() which would return NaN instead of NaT when called on a series of NaT (GH 23282)
- Bug in Series.combine_first() not properly aligning categoricals, so that missing values in self were not filled by valid values from other (GH 24147)
- Bug in DataFrame.combine() with datetimelike values raising a TypeError (GH 23079)
- Bug in date_range() with frequency of Day or higher where dates sufficiently far in the future could wrap around to the past instead of raising OutOfBoundsDatetime (GH 14187)
- Bug in period_range() ignoring the frequency of start and end when those are provided as Period objects (GH 20535).
- Bug in PeriodIndex with attribute freq.n greater than 1 where adding a DateOffset object would return incorrect results (GH 23215)
- Bug in Series that interpreted string indices as lists of characters when setting datetimelike values (GH 23451)
- Bug in DataFrame when creating a new column from an ndarray of Timestamp objects with timezones creating an object-dtype column, rather than datetime with timezone (GH 23932)
- Bug in Timestamp constructor which would drop the frequency of an input Timestamp (GH 22311)
- Bug in DatetimeIndex where calling np.array(dtindex, dtype=object) would incorrectly return an array of long objects (GH 23524)
- Bug in Index where passing a timezone-aware DatetimeIndex and dtype=object would incorrectly raise a ValueError (GH 23524)
- Bug in Index where calling np.array(dtindex, dtype=object) on a timezone-naive DatetimeIndex would return an array of datetime objects instead of Timestamp objects, potentially losing nanosecond portions of the timestamps (GH 23524)
- Bug in Categorical.__setitem__ not allowing setting with another Categorical when both are unordered and have the same categories, but in a different order (GH 24142)
- Bug in date_range() where using dates with millisecond resolution or higher could return incorrect values or the wrong number of values in the index (GH 24110)
- Bug in DatetimeIndex where constructing a DatetimeIndex from a Categorical or CategoricalIndex would incorrectly drop timezone information (GH 18664)
- Bug in DatetimeIndex and TimedeltaIndex where indexing with Ellipsis would incorrectly lose the index’s freq attribute (GH 21282)
- Clarified error message produced when passing an incorrect freq argument to DatetimeIndex with NaT as the first entry in the passed data (GH 11587)
- Bug in to_datetime() where box and utc arguments were ignored when passing a DataFrame or dict of unit mappings (GH 23760)
- Bug in Series.dt where the cache would not update properly after an in-place operation (GH 24408)
- Bug in PeriodIndex where comparisons against an array-like object with length 1 failed to raise ValueError (GH 23078)
- Bug in DatetimeIndex.astype(), PeriodIndex.astype() and TimedeltaIndex.astype() ignoring the sign of the dtype for unsigned integer dtypes (GH 24405).
- Fixed bug in Series.max() with datetime64[ns]-dtype failing to return NaT when nulls are present and skipna=False is passed (GH 24265)
- Bug in to_datetime() where arrays of datetime objects containing both timezone-aware and timezone-naive datetimes would fail to raise ValueError (GH 24569)
- Bug in to_datetime() with an invalid datetime format not coercing input to NaT even if errors='coerce' (GH 24763)
Timedelta#
- Bug in DataFrame with timedelta64[ns] dtype division by Timedelta-like scalar incorrectly returning timedelta64[ns] dtype instead of float64 dtype (GH 20088, GH 22163)
- Bug in adding an Index with object dtype to a Series with timedelta64[ns] dtype incorrectly raising (GH 22390)
- Bug in multiplying a Series with numeric dtype against a timedelta object (GH 22390)
- Bug in Series with numeric dtype when adding or subtracting an array or Series with timedelta64 dtype (GH 22390)
- Bug in Index with numeric dtype when multiplying or dividing an array with dtype timedelta64 (GH 22390)
- Bug in TimedeltaIndex incorrectly allowing indexing with Timestamp object (GH 20464)
- Fixed bug where subtracting Timedelta from an object-dtyped array would raise TypeError (GH 21980)
- Fixed bug in adding a DataFrame with all-timedelta64[ns] dtypes to a DataFrame with all-integer dtypes returning incorrect results instead of raising TypeError (GH 22696)
- Bug in TimedeltaIndex where adding a timezone-aware datetime scalar incorrectly returned a timezone-naive DatetimeIndex (GH 23215)
- Bug in TimedeltaIndex where adding np.timedelta64('NaT') incorrectly returned an all-NaT DatetimeIndex instead of an all-NaT TimedeltaIndex (GH 23215)
- Bug where Timedelta and to_timedelta() had inconsistencies in their supported unit strings (GH 21762)
- Bug in TimedeltaIndex division where dividing by another TimedeltaIndex raised TypeError instead of returning a Float64Index (GH 23829, GH 22631)
- Bug in TimedeltaIndex comparison operations where comparing against non-Timedelta-like objects would raise TypeError instead of returning all-False for __eq__ and all-True for __ne__ (GH 24056)
- Bug in Timedelta comparisons when comparing with a Tick object incorrectly raising TypeError (GH 24710)
Timezones#
- Bug in Index.shift() where an AssertionError would raise when shifting across DST (GH 8616)
- Bug in Timestamp constructor where passing an invalid timezone offset designator (Z) would not raise a ValueError (GH 8910)
- Bug in Timestamp.replace() where replacing at a DST boundary would retain an incorrect offset (GH 7825)
- Bug in Series.replace() with datetime64[ns, tz] data when replacing NaT (GH 11792)
- Bug in Timestamp when passing different string date formats with a timezone offset would produce different timezone offsets (GH 12064)
- Bug when comparing a tz-naive Timestamp to a tz-aware DatetimeIndex which would coerce the DatetimeIndex to tz-naive (GH 12601)
- Bug in Series.truncate() with a tz-aware DatetimeIndex which would cause a core dump (GH 9243)
- Bug in Series constructor which would coerce tz-aware and tz-naive Timestamp to tz-aware (GH 13051)
- Bug in Index with datetime64[ns, tz] dtype that did not localize integer data correctly (GH 20964)
- Bug in DatetimeIndex where constructing with an integer and tz would not localize correctly (GH 12619)
- Fixed bug where DataFrame.describe() and Series.describe() on tz-aware datetimes did not show first and last result (GH 21328)
- Bug in DatetimeIndex comparisons failing to raise TypeError when comparing timezone-aware DatetimeIndex against np.datetime64 (GH 22074)
- Bug in DataFrame assignment with a timezone-aware scalar (GH 19843)
- Bug in DataFrame.asof() that raised a TypeError when attempting to compare tz-naive and tz-aware timestamps (GH 21194)
- Bug when constructing a DatetimeIndex with Timestamp constructed with the replace method across DST (GH 18785)
- Bug when setting a new value with DataFrame.loc() with a DatetimeIndex with a DST transition (GH 18308, GH 20724)
- Bug in Index.unique() that did not re-localize tz-aware dates correctly (GH 21737)
- Bug when indexing a Series with a DST transition (GH 21846)
- Bug in DataFrame.resample() and Series.resample() where an AmbiguousTimeError or NonExistentTimeError would raise if a timezone aware timeseries ended on a DST transition (GH 19375, GH 10117)
- Bug in DataFrame.drop() and Series.drop() when specifying a tz-aware Timestamp key to drop from a DatetimeIndex with a DST transition (GH 21761)
- Bug in DatetimeIndex constructor where NaT and dateutil.tz.tzlocal would raise an OutOfBoundsDatetime error (GH 23807)
- Bug in DatetimeIndex.tz_localize() and Timestamp.tz_localize() with dateutil.tz.tzlocal near a DST transition that would return an incorrectly localized datetime (GH 23807)
- Bug in Timestamp constructor where a dateutil.tz.tzutc timezone passed with a datetime.datetime argument would be converted to a pytz.UTC timezone (GH 23807)
- Bug in to_datetime() where utc=True was not respected when specifying a unit and errors='ignore' (GH 23758)
- Bug in to_datetime() where utc=True was not respected when passing a Timestamp (GH 24415)
- Bug in DataFrame.any() returning the wrong value when axis=1 and the data is of datetimelike type (GH 23070)
- Bug in DatetimeIndex.to_period() where a timezone aware index was converted to UTC first before creating PeriodIndex (GH 22905)
- Bug in DataFrame.tz_localize(), DataFrame.tz_convert(), Series.tz_localize(), and Series.tz_convert() where copy=False would mutate the original argument inplace (GH 6326)
- Bug in DataFrame.max() and DataFrame.min() with axis=1 where a Series with NaN would be returned when all columns contained the same timezone (GH 10390)
Offsets#
- Bug in FY5253 where date offsets could incorrectly raise an AssertionError in arithmetic operations (GH 14774)
- Bug in DateOffset where keyword arguments week and milliseconds were accepted and ignored. Passing these will now raise ValueError (GH 19398)
- Bug in adding DateOffset with DataFrame or PeriodIndex incorrectly raising TypeError (GH 23215)
- Bug in comparing DateOffset objects with non-DateOffset objects, particularly strings, raising ValueError instead of returning False for equality checks and True for not-equal checks (GH 23524)
Numeric#
- Bug where Series.__rmatmul__ did not support matrix vector multiplication (GH 21530)
- Bug in factorize() failing with a read-only array (GH 12813)
- Fixed bug in unique() handling signed zeros inconsistently: for some inputs 0.0 and -0.0 were treated as equal and for some inputs as different. Now they are treated as equal for all inputs (GH 21866)
- Bug in DataFrame.agg(), DataFrame.transform() and DataFrame.apply() where, when supplied with a list of functions and axis=1 (e.g. df.apply(['sum', 'mean'], axis=1)), a TypeError was wrongly raised. For all three methods such calculations are now done correctly (GH 16679).
- Bug in Series comparison against datetime-like scalars and arrays (GH 22074)
- Bug in DataFrame multiplication between boolean dtype and integer returning object dtype instead of integer dtype (GH 22047, GH 22163)
- Bug in DataFrame.apply() where, when supplied with a string argument and additional positional or keyword arguments (e.g. df.apply('sum', min_count=1)), a TypeError was wrongly raised (GH 22376)
- Bug in DataFrame.astype() to an extension dtype that could raise AttributeError (GH 22578)
- Bug in DataFrame with timedelta64[ns] dtype arithmetic operations with ndarray with integer dtype incorrectly treating the ndarray as timedelta64[ns] dtype (GH 23114)
- Bug in Series.rpow() with object dtype returning NaN for 1 ** NA instead of 1 (GH 22922).
- Series.agg() can now handle numpy NaN-aware methods like numpy.nansum() (GH 19629)
- Bug in Series.rank() and DataFrame.rank() when pct=True and more than 2**24 rows are present resulted in percentages greater than 1.0 (GH 18271)
- Calls such as DataFrame.round() with a non-unique CategoricalIndex() now return expected data. Previously, data would be improperly duplicated (GH 21809).
- Added log10, floor and ceil to the list of supported functions in DataFrame.eval() (GH 24139, GH 24353); see the sketch after this list
- Logical operations &, |, ^ between Series and Index will no longer raise ValueError (GH 22092)
- Checking PEP 3141 numbers in the is_scalar() function now returns True (GH 22903)
- Reduction methods like Series.sum() now accept the default value of keepdims=False when called from a NumPy ufunc, rather than raising a TypeError. Full support for keepdims has not been implemented (GH 24356).
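For the DataFrame.eval() item above, a brief sketch (the column name is invented) of the newly supported functions:

import pandas as pd

df = pd.DataFrame({'x': [1.2, 10.7, 99.1]})

# floor, ceil and log10 can now be used inside eval expressions.
rounded_down = df.eval('floor(x)')
rounded_up = df.eval('ceil(x)')
magnitudes = df.eval('log10(x)')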
Conversion#
- Bug in DataFrame.combine_first() in which column types were unexpectedly converted to float (GH 20699)
- Bug in DataFrame.clip() in which column types are not preserved and are cast to float (GH 24162)
- Bug in DataFrame.clip() where the numeric result is wrong when the order of the columns of the DataFrames does not match (GH 20911)
- Bug in DataFrame.astype() where converting to an extension dtype when duplicate column names are present causes a RecursionError (GH 24704)
Strings#
- Bug where Index.str.partition() was not nan-safe (GH 23558).
- Bug where Index.str.split() was not nan-safe (GH 23677).
- Bug in Series.str.contains() not respecting the na argument for a Categorical dtype Series (GH 22158)
- Bug in Index.str.cat() when the result contained only NaN (GH 24044)
Interval#
- Bug in the IntervalIndex constructor where the closed parameter did not always override the inferred closed (GH 19370)
- Bug in the IntervalIndex repr where a trailing comma was missing after the list of intervals (GH 20611)
- Bug in Interval where scalar arithmetic operations did not retain the closed value (GH 22313)
- Bug in IntervalIndex where indexing with datetime-like values raised a KeyError (GH 20636)
- Bug in IntervalTree where data containing NaN triggered a warning and resulted in incorrect indexing queries with IntervalIndex (GH 23352)
Indexing#
- Bug in DataFrame.ne() failing if columns contain the column name "dtype" (GH 22383)
- The traceback from a KeyError when asking .loc for a single missing label is now shorter and more clear (GH 21557)
- PeriodIndex now emits a KeyError when a malformed string is looked up, which is consistent with the behavior of DatetimeIndex (GH 22803)
- When .ix is asked for a missing integer label in a MultiIndex with a first level of integer type, it now raises a KeyError, consistently with the case of a flat Int64Index, rather than falling back to positional indexing (GH 21593)
- Bug in Index.reindex() when reindexing a tz-naive and tz-aware DatetimeIndex (GH 8306)
- Bug in Series.reindex() when reindexing an empty series with a datetime64[ns, tz] dtype (GH 20869)
- Bug in DataFrame when setting values with .loc and a timezone aware DatetimeIndex (GH 11365)
- DataFrame.__getitem__ now accepts dictionaries and dictionary keys as list-likes of labels, consistently with Series.__getitem__ (GH 21294)
- Fixed DataFrame[np.nan] when columns are non-unique (GH 21428)
- Bug when indexing DatetimeIndex with nanosecond resolution dates and timezones (GH 11679)
- Bug where indexing with a Numpy array containing negative values would mutate the indexer (GH 21867)
- Bug where mixed indexes wouldn’t allow integers for .at (GH 19860)
- Float64Index.get_loc now raises KeyError when a boolean key is passed (GH 19087)
- Bug in DataFrame.loc() when indexing with an IntervalIndex (GH 19977)
- Index no longer mangles None, NaN and NaT, i.e. they are treated as three different keys. However, for numeric Index all three are still coerced to a NaN (GH 22332)
- Bug in scalar in Index if the scalar is a float while the Index is of integer dtype (GH 22085)
- Bug in MultiIndex.set_levels() when the levels value is not subscriptable (GH 23273)
- Bug where setting a timedelta column by Index causes it to be cast to double, and therefore lose precision (GH 23511)
- Bug in Index.union() and Index.intersection() where the name of the Index of the result was not computed correctly for certain cases (GH 9943, GH 9862)
- Bug in Index slicing with a boolean Index that may raise TypeError (GH 22533)
- Bug in PeriodArray.__setitem__ when accepting slice and list-like value (GH 23978)
- Bug in DatetimeIndex, TimedeltaIndex where indexing with Ellipsis would lose their freq attribute (GH 21282)
- Bug in iat where using it to assign an incompatible value would create a new column (GH 23236)
Missing#
- Bug in DataFrame.fillna() where a ValueError would raise when one column contained a datetime64[ns, tz] dtype (GH 15522)
- Bug in Series.hasnans() that could be incorrectly cached and return incorrect answers if null elements are introduced after an initial call (GH 19700)
- Series.isin() now treats all NaN-floats as equal also for np.object_-dtype. This behavior is consistent with the behavior for float64 (GH 22119)
- unique() no longer mangles NaN-floats and the NaT-object for np.object_-dtype, i.e. NaT is no longer coerced to a NaN-value and is treated as a different entity. (GH 22295)
- DataFrame and Series now properly handle numpy masked arrays with hardened masks. Previously, constructing a DataFrame or Series from a masked array with a hard mask would create a pandas object containing the underlying value, rather than the expected NaN. (GH 24574)
- Bug in DataFrame constructor where the dtype argument was not honored when handling numpy masked record arrays. (GH 24874)
MultiIndex#
- Bug in io.formats.style.Styler.applymap() where subset= with MultiIndex slice would reduce to Series (GH 19861)
- Removed compatibility for MultiIndex pickles prior to version 0.8.0; compatibility with MultiIndex pickles from version 0.13 forward is maintained (GH 21654)
- MultiIndex.get_loc_level() (and as a consequence, .loc on a Series or DataFrame with a MultiIndex index) will now raise a KeyError, rather than returning an empty slice, if asked a label which is present in the levels but is unused (GH 22221)
- MultiIndex has gained MultiIndex.from_frame(), which allows constructing a MultiIndex object from a DataFrame (GH 22420); see the sketch after this list
- Fix TypeError in Python 3 when creating MultiIndex in which some levels have mixed types, e.g. when some labels are tuples (GH 15457)
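A short sketch of the new MultiIndex.from_frame() constructor mentioned above (the column names are illustrative only):

import pandas as pd

df = pd.DataFrame({'letter': ['a', 'a', 'b'], 'number': [1, 2, 1]})

# Each row of the frame becomes one entry of the MultiIndex;
# the column names become the level names.
mi = pd.MultiIndex.from_frame(df)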
IO#
- Bug in read_csv() in which a column specified with CategoricalDtype of boolean categories was not being correctly coerced from string values to booleans (GH 20498)
- Bug in read_csv() in which unicode column names were not being properly recognized with Python 2.x (GH 13253)
- Bug in DataFrame.to_sql() when writing timezone aware data (datetime64[ns, tz] dtype) would raise a TypeError (GH 9086)
- Bug in DataFrame.to_sql() where a naive DatetimeIndex would be written as TIMESTAMP WITH TIMEZONE type in supported databases, e.g. PostgreSQL (GH 23510)
- Bug in read_excel() when parse_cols is specified with an empty dataset (GH 9208)
- read_html() no longer ignores all-whitespace <tr> within <thead> when considering the skiprows and header arguments. Previously, users had to decrease their header and skiprows values on such tables to work around the issue. (GH 21641)
- read_excel() will correctly show the deprecation warning for the previously deprecated sheetname (GH 17994)
- read_csv() and read_table() will throw UnicodeError and not coredump on badly encoded strings (GH 22748)
- read_csv() will correctly parse timezone-aware datetimes (GH 22256)
- Bug in read_csv() in which memory management was prematurely optimized for the C engine when the data was being read in chunks (GH 23509)
- Bug in read_csv() in which unnamed columns were being improperly identified when extracting a multi-index (GH 23687)
- read_sas() will parse numbers in sas7bdat-files that have width less than 8 bytes correctly. (GH 21616)
- read_sas() will correctly parse sas7bdat files with many columns (GH 22628)
- read_sas() will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (GH 16615)
- Bug in read_sas() in which an incorrect error was raised on an invalid file format. (GH 24548)
- Bug in detect_client_encoding() where a potential IOError goes unhandled when importing in a mod_wsgi process due to restricted access to stdout. (GH 21552)
- Bug in DataFrame.to_html() with index=False misses truncation indicators (…) on truncated DataFrame (GH 15019, GH 22783)
- Bug in DataFrame.to_html() with index=False when both columns and row index are MultiIndex (GH 22579)
- Bug in DataFrame.to_html() with index_names=False displaying index name (GH 22747)
- Bug in DataFrame.to_html() with header=False not displaying row index names (GH 23788)
- Bug in DataFrame.to_html() with sparsify=False that caused it to raise TypeError (GH 22887)
- Bug in DataFrame.to_string() that broke column alignment when index=False and the width of the first column's values is greater than the width of the first column's header (GH 16839, GH 13032)
- Bug in DataFrame.to_string() that caused representations of DataFrame to not take up the whole window (GH 22984)
- Bug in DataFrame.to_csv() where a single level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (GH 19589).
- HDFStore will raise ValueError when the format kwarg is passed to the constructor (GH 13291)
- Bug in HDFStore.append() when appending a DataFrame with an empty string column and min_itemsize < 8 (GH 12242)
- Bug in read_csv() in which memory leaks occurred in the C engine when parsing NaN values due to insufficient cleanup on completion or error (GH 21353)
- Bug in read_csv() in which incorrect error messages were being raised when skipfooter was passed in along with nrows, iterator, or chunksize (GH 23711)
- Bug in read_csv() in which MultiIndex index names were being improperly handled in the cases when they were not provided (GH 23484)
- Bug in read_csv() in which unnecessary warnings were being raised when the dialect's values conflicted with the default arguments (GH 23761)
- Bug in read_html() in which the error message was not displaying the valid flavors when an invalid one was provided (GH 23549)
- Bug in read_excel() in which extraneous header names were extracted, even though none were specified (GH 11733)
- Bug in read_excel() in which column names were not being properly converted to string sometimes in Python 2.x (GH 23874)
- Bug in read_excel() in which index_col=None was not being respected and index columns were being parsed anyway (GH 18792, GH 20480)
- Bug in read_excel() in which usecols was not being validated for proper column names when passed in as a string (GH 20480)
- Bug in DataFrame.to_dict() when the resulting dict contains non-Python scalars in the case of numeric data (GH 23753)
- DataFrame.to_string(), DataFrame.to_html(), DataFrame.to_latex() will correctly format output when a string is passed as the float_format argument (GH 21625, GH 22270)
- Bug in read_csv() that caused it to raise OverflowError when trying to use 'inf' as na_value with an integer index column (GH 17128)
- Bug in read_csv() that caused the C engine on Python 3.6+ on Windows to improperly read CSV filenames with accented or special characters (GH 15086)
- Bug in read_fwf() in which the compression type of a file was not being properly inferred (GH 22199)
- Bug in pandas.io.json.json_normalize() that caused it to raise TypeError when two consecutive elements of record_path are dicts (GH 22706)
- Bug in DataFrame.to_stata(), pandas.io.stata.StataWriter and pandas.io.stata.StataWriter117 where an exception would leave a partially written and invalid dta file (GH 23573)
- Bug in DataFrame.to_stata() and pandas.io.stata.StataWriter117 that produced invalid files when using strLs with non-ASCII characters (GH 23573)
- Bug in HDFStore that caused it to raise ValueError when reading a Dataframe in Python 3 from fixed format written in Python 2 (GH 24510)
- Bug in DataFrame.to_string() and more generally in the floating repr formatter. Zeros were not trimmed if inf was present in a column while it was the case with NA values. Zeros are now trimmed as in the presence of NA (GH 24861).
- Bug in the repr when truncating the number of columns and having a wide last column (GH 24849).
Plotting#
- Bug in DataFrame.plot.scatter() and DataFrame.plot.hexbin() caused x-axis label and ticklabels to disappear when colorbar was on in IPython inline backend (GH 10611, GH 10678, and GH 20455)
- Bug in plotting a Series with datetimes using matplotlib.axes.Axes.scatter() (GH 22039)
- Bug in DataFrame.plot.bar() caused bars to use multiple colors instead of a single one (GH 20585)
- Bug in validating color parameter caused extra color to be appended to the given color array. This happened to multiple plotting functions using matplotlib. (GH 20726)
GroupBy/resample/rolling#
- Bug in Rolling.min() and Rolling.max() with closed='left', a datetime-like index and only one entry in the series leading to segfault (GH 24718)
- Bug in GroupBy.first() and GroupBy.last() with as_index=False leading to the loss of timezone information (GH 15884)
- Bug in DataFrame.resample() when downsampling across a DST boundary (GH 8531)
- Bug in date anchoring for DataFrame.resample() with offset Day when n > 1 (GH 24127)
- Bug where ValueError is wrongly raised when calling the SeriesGroupBy.count() method of a SeriesGroupBy when the grouping variable only contains NaNs and numpy version < 1.13 (GH 21956).
- Multiple bugs in Rolling.min() with closed='left' and a datetime-like index leading to incorrect results and also segfault. (GH 21704)
- Bug in Resampler.apply() when passing positional arguments to the applied func (GH 14615).
- Bug in Series.resample() when passing numpy.timedelta64 to the loffset kwarg (GH 7687).
- Bug in Resampler.asfreq() when the frequency of TimedeltaIndex is a subperiod of a new frequency (GH 13022).
- Bug in SeriesGroupBy.mean() when values were integral but could not fit inside of int64, overflowing instead. (GH 22487)
- RollingGroupby.agg() and ExpandingGroupby.agg() now support multiple aggregation functions as parameters (GH 15072)
- Bug in DataFrame.resample() and Series.resample() when resampling by a weekly offset ('W') across a DST transition (GH 9119, GH 21459)
- Bug in DataFrame.expanding() in which the axis argument was not being respected during aggregations (GH 23372)
- Bug in GroupBy.transform() which caused missing values when the input function can accept a DataFrame but renames it (GH 23455).
- Bug in GroupBy.nth() where column order was not always preserved (GH 20760)
- Bug in GroupBy.rank() with method='dense' and pct=True when a group has only one member would raise a ZeroDivisionError (GH 23666).
- Calling GroupBy.rank() with empty groups and pct=True was raising a ZeroDivisionError (GH 22519)
- Bug in DataFrame.resample() when resampling NaT in TimeDeltaIndex (GH 13223).
- Bug in DataFrame.groupby() that did not respect the observed argument when selecting a column and instead always used observed=False (GH 23970); see the sketch of the observed keyword after this list
- Bug where SeriesGroupBy.pct_change() or DataFrameGroupBy.pct_change() would previously work across groups when calculating the percent change, where it now correctly works per group (GH 21200, GH 21235).
- Bug preventing hash table creation with very large number (2^32) of rows (GH 22805)
- Bug in groupby when grouping on categorical causes ValueError and incorrect grouping if observed=True and nan is present in the categorical column (GH 24740, GH 21151).
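For the observed-related items above, a hedged sketch (the data is invented for illustration) of the keyword's effect:

import pandas as pd

cat = pd.Categorical(['a', 'a', 'b'], categories=['a', 'b', 'c'])
df = pd.DataFrame({'key': cat, 'value': [1, 2, 3]})

# observed=False keeps the unused category 'c' as an (empty) group,
# observed=True drops it.
with_unused = df.groupby('key', observed=False)['value'].sum()
observed_only = df.groupby('key', observed=True)['value'].sum()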
Reshaping#
- Bug in pandas.concat() when joining resampled DataFrames with timezone aware index (GH 13783)
- Bug in pandas.concat() when joining only Series, the names argument of concat is no longer ignored (GH 23490)
- Bug in Series.combine_first() with datetime64[ns, tz] dtype which would return a tz-naive result (GH 21469)
- Bug in Series.where() and DataFrame.where() with datetime64[ns, tz] dtype (GH 21546)
- Bug in DataFrame.where() with an empty DataFrame and empty cond having non-bool dtype (GH 21947)
- Bug in Series.mask() and DataFrame.mask() with list conditionals (GH 21891)
- Bug in DataFrame.replace() raising RecursionError when converting OutOfBounds datetime64[ns, tz] (GH 20380)
- GroupBy.rank() now raises a ValueError when an invalid value is passed for the argument na_option (GH 22124)
- Bug in get_dummies() with Unicode attributes in Python 2 (GH 22084)
- Bug in DataFrame.replace() raising RecursionError when replacing empty lists (GH 22083)
- Bug in Series.replace() and DataFrame.replace() when a dict is used as the to_replace value and one key in the dict is another key's value; the results were inconsistent between using an integer key and using a string key (GH 20656)
- Bug in DataFrame.drop_duplicates() for an empty DataFrame which incorrectly raises an error (GH 20516)
- Bug in pandas.wide_to_long() when a string is passed to the stubnames argument and a column name is a substring of that stubname (GH 22468)
- Bug in merge() when merging datetime64[ns, tz] data that contained a DST transition (GH 18885)
- Bug in merge_asof() when merging on float values within defined tolerance (GH 22981)
- Bug in pandas.concat() when concatenating a multicolumn DataFrame with tz-aware data against a DataFrame with a different number of columns (GH 22796)
- Bug in merge_asof() where a confusing error message was raised when attempting to merge with missing values (GH 23189)
- Bug in DataFrame.nsmallest() and DataFrame.nlargest() for dataframes that have a MultiIndex for columns (GH 23033).
- Bug in pandas.melt() when passing column names that are not present in the DataFrame (GH 23575)
- Bug in DataFrame.append() with a Series with a dateutil timezone would raise a TypeError (GH 23682)
- Bug in Series construction when passing no data and dtype=str (GH 22477)
- Bug in cut() with bins as an overlapping IntervalIndex where multiple bins were returned per item instead of raising a ValueError (GH 23980)
- Bug in pandas.concat() when joining Series datetimetz with Series category would lose timezone (GH 23816)
- Bug in DataFrame.join() when joining on a partial MultiIndex would drop names (GH 20452).
- DataFrame.nlargest() and DataFrame.nsmallest() now return the correct n values when keep != 'all' also when tied on the first columns (GH 22752)
- Constructing a DataFrame with an index argument that wasn't already an instance of Index was broken (GH 22227).
- Bug in DataFrame that prevented list subclasses from being used in construction (GH 21226)
- Bug in DataFrame.unstack() and DataFrame.pivot_table() returning a misleading error message when the resulting DataFrame has more elements than int32 can handle. Now, the error message is improved, pointing towards the actual problem (GH 20601)
- Bug in DataFrame.unstack() where a ValueError was raised when unstacking timezone aware values (GH 18338)
- Bug in DataFrame.stack() where timezone aware values were converted to timezone naive values (GH 19420)
- Bug in merge_asof() where a TypeError was raised when by_col were timezone aware values (GH 21184)
- Bug showing an incorrect shape when throwing an error during DataFrame construction. (GH 20742)
Sparse#
- Updating a boolean, datetime, or timedelta column to be Sparse now works (GH 22367)
- Bug in Series.to_sparse() with a Series already holding sparse data not constructing properly (GH 22389)
- Providing a sparse_index to the SparseArray constructor no longer defaults the na-value to np.nan for all dtypes. The correct na_value for data.dtype is now used.
- Bug in SparseArray.nbytes under-reporting its memory usage by not including the size of its sparse index.
- Improved performance of Series.shift() for non-NA fill_value, as values are no longer converted to a dense array.
- Bug in DataFrame.groupby not including fill_value in the groups for non-NA fill_value when grouping by a sparse column (GH 5078)
- Bug in unary inversion operator (~) on a SparseSeries with boolean values. The performance of this has also been improved (GH 22835)
- Bug in SparseArray.unique() not returning the unique values (GH 19595)
- Bug in SparseArray.nonzero() and SparseDataFrame.dropna() returning shifted/incorrect results (GH 21172)
- Bug in DataFrame.apply() where dtypes would lose sparseness (GH 23744)
- Bug in concat() when concatenating a list of Series with all-sparse values changing the fill_value and converting to a dense Series (GH 24371)
Style#
- background_gradient() now takes a text_color_threshold parameter to automatically lighten the text color based on the luminance of the background color. This improves readability with dark background colors without the need to limit the background colormap range. (GH 21258) See the sketch after this list.
- background_gradient() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None (GH 15204)
- bar() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None and setting the clipping range with vmin and vmax (GH 21548 and GH 21526). NaN values are also handled properly.
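A small illustrative sketch of the Styler additions above (this assumes matplotlib is installed, which the gradient methods require; the data is random):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(4, 3), columns=list('abc'))

# axis=None applies the gradient over the whole table at once;
# text_color_threshold controls when the text switches to a lighter color.
styled = df.style.background_gradient(axis=None, text_color_threshold=0.5)

# bar() similarly accepts axis=None plus vmin/vmax clipping.
bars = df.style.bar(axis=None, vmin=-2, vmax=2)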
Build changes#
- Building pandas for development now requires cython >= 0.28.2 (GH 21688)
- Testing pandas now requires hypothesis>=3.58. You can find the Hypothesis docs here, and a pandas-specific introduction in the contributing guide. (GH 22280)
- Building pandas on macOS now targets minimum macOS 10.9 if run on macOS 10.9 or above (GH 23424)
Other#
- Bug where C variables were declared with external linkage causing import errors if certain other C libraries were imported before pandas. (GH 24113)
Contributors#
A total of 337 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.
- AJ Dyka +
- AJ Pryor, Ph.D +
- Aaron Critchley
- Adam Hooper
- Adam J. Stewart
- Adam Kim
- Adam Klimont +
- Addison Lynch +
- Alan Hogue +
- Alex Radu +
- Alex Rychyk
- Alex Strick van Linschoten +
- Alex Volkov +
- Alexander Buchkovsky
- Alexander Hess +
- Alexander Ponomaroff +
- Allison Browne +
- Aly Sivji
- Andrew
- Andrew Gross +
- Andrew Spott +
- Andy +
- Aniket uttam +
- Anjali2019 +
- Anjana S +
- Antti Kaihola +
- Anudeep Tubati +
- Arjun Sharma +
- Armin Varshokar
- Artem Bogachev
- ArtinSarraf +
- Barry Fitzgerald +
- Bart Aelterman +
- Ben James +
- Ben Nelson +
- Benjamin Grove +
- Benjamin Rowell +
- Benoit Paquet +
- Boris Lau +
- Brett Naul
- Brian Choi +
- C.A.M. Gerlach +
- Carl Johan +
- Chalmer Lowe
- Chang She
- Charles David +
- Cheuk Ting Ho
- Chris
- Chris Roberts +
- Christopher Whelan
- Chu Qing Hao +
- Da Cheezy Mobsta +
- Damini Satya
- Daniel Himmelstein
- Daniel Saxton +
- Darcy Meyer +
- DataOmbudsman
- David Arcos
- David Krych
- Dean Langsam +
- Diego Argueta +
- Diego Torres +
- Dobatymo +
- Doug Latornell +
- Dr. Irv
- Dylan Dmitri Gray +
- Eric Boxer +
- Eric Chea
- Erik +
- Erik Nilsson +
- Fabian Haase +
- Fabian Retkowski
- Fabien Aulaire +
- Fakabbir Amin +
- Fei Phoon +
- Fernando Margueirat +
- Florian Müller +
- Fábio Rosado +
- Gabe Fernando
- Gabriel Reid +
- Giftlin Rajaiah
- Gioia Ballin +
- Gjelt
- Gosuke Shibahara +
- Graham Inggs
- Guillaume Gay
- Guillaume Lemaitre +
- Hannah Ferchland
- Haochen Wu
- Hubert +
- HubertKl +
- HyunTruth +
- Iain Barr
- Ignacio Vergara Kausel +
- Irv Lustig +
- IsvenC +
- Jacopo Rota
- Jakob Jarmar +
- James Bourbeau +
- James Myatt +
- James Winegar +
- Jan Rudolph
- Jared Groves +
- Jason Kiley +
- Javad Noorbakhsh +
- Jay Offerdahl +
- Jeff Reback
- Jeongmin Yu +
- Jeremy Schendel
- Jerod Estapa +
- Jesper Dramsch +
- Jim Jeon +
- Joe Jevnik
- Joel Nothman
- Joel Ostblom +
- Jordi Contestí
- Jorge López Fueyo +
- Joris Van den Bossche
- Jose Quinones +
- Jose Rivera-Rubio +
- Josh
- Jun +
- Justin Zheng +
- Kaiqi Dong +
- Kalyan Gokhale
- Kang Yoosam +
- Karl Dunkle Werner +
- Karmanya Aggarwal +
- Kevin Markham +
- Kevin Sheppard
- Kimi Li +
- Koustav Samaddar +
- Krishna +
- Kristian Holsheimer +
- Ksenia Gueletina +
- Kyle Prestel +
- LJ +
- LeakedMemory +
- Li Jin +
- Licht Takeuchi
- Luca Donini +
- Luciano Viola +
- Mak Sze Chun +
- Marc Garcia
- Marius Potgieter +
- Mark Sikora +
- Markus Meier +
- Marlene Silva Marchena +
- Martin Babka +
- MatanCohe +
- Mateusz Woś +
- Mathew Topper +
- Matt Boggess +
- Matt Cooper +
- Matt Williams +
- Matthew Gilbert
- Matthew Roeschke
- Max Kanter
- Michael Odintsov
- Michael Silverstein +
- Michael-J-Ward +
- Mickaël Schoentgen +
- Miguel Sánchez de León Peque +
- Ming Li
- Mitar
- Mitch Negus
- Monson Shao +
- Moonsoo Kim +
- Mortada Mehyar
- Myles Braithwaite
- Nehil Jain +
- Nicholas Musolino +
- Nicolas Dickreuter +
- Nikhil Kumar Mengani +
- Nikoleta Glynatsi +
- Ondrej Kokes
- Pablo Ambrosio +
- Pamela Wu +
- Parfait G +
- Patrick Park +
- Paul
- Paul Ganssle
- Paul Reidy
- Paul van Mulbregt +
- Phillip Cloud
- Pietro Battiston
- Piyush Aggarwal +
- Prabakaran Kumaresshan +
- Pulkit Maloo
- Pyry Kovanen
- Rajib Mitra +
- Redonnet Louis +
- Rhys Parry +
- Rick +
- Robin
- Roei.r +
- RomainSa +
- Roman Imankulov +
- Roman Yurchak +
- Ruijing Li +
- Ryan +
- Ryan Nazareth +
- Rüdiger Busche +
- SEUNG HOON, SHIN +
- Sandrine Pataut +
- Sangwoong Yoon
- Santosh Kumar +
- Saurav Chakravorty +
- Scott McAllister +
- Sean Chan +
- Shadi Akiki +
- Shengpu Tang +
- Shirish Kadam +
- Simon Hawkins +
- Simon Riddell +
- Simone Basso
- Sinhrks
- Soyoun(Rose) Kim +
- Srinivas Reddy Thatiparthy (శ్రీనివాస్ రెడ్డి తాటిపర్తి) +
- Stefaan Lippens +
- Stefano Cianciulli
- Stefano Miccoli +
- Stephen Childs
- Stephen Pascoe
- Steve Baker +
- Steve Cook +
- Steve Dower +
- Stéphan Taljaard +
- Sumin Byeon +
- Sören +
- Tamas Nagy +
- Tanya Jain +
- Tarbo Fukazawa
- Thein Oo +
- Thiago Cordeiro da Fonseca +
- Thierry Moisan
- Thiviyan Thanapalasingam +
- Thomas Lentali +
- Tim D. Smith +
- Tim Swast
- Tom Augspurger
- Tomasz Kluczkowski +
- Tony Tao +
- Triple0 +
- Troels Nielsen +
- Tuhin Mahmud +
- Tyler Reddy +
- Uddeshya Singh
- Uwe L. Korn +
- Vadym Barda +
- Varad Gunjal +
- Victor Maryama +
- Victor Villas
- Vincent La
- Vitória Helena +
- Vu Le
- Vyom Jain +
- Weiwen Gu +
- Wenhuan
- Wes Turner
- Wil Tan +
- William Ayd
- Yeojin Kim +
- Yitzhak Andrade +
- Yuecheng Wu +
- Yuliya Dovzhenko +
- Yury Bayda +
- Zac Hatfield-Dodds +
- aberres +
- aeltanawy +
- ailchau +
- alimcmaster1
- alphaCTzo7G +
- amphy +
- araraonline +
- azure-pipelines[bot] +
- benarthur91 +
- bk521234 +
- cgangwar11 +
- chris-b1
- cxl923cc +
- dahlbaek +
- dannyhyunkim +
- darke-spirits +
- david-liu-brattle-1
- davidmvalente +
- deflatSOCO
- doosik_bae +
- dylanchase +
- eduardo naufel schettino +
- euri10 +
- evangelineliu +
- fengyqf +
- fjdiod
- fl4p +
- fleimgruber +
- gfyoung
- h-vetinari
- harisbal +
- henriqueribeiro +
- himanshu awasthi
- hongshaoyang +
- igorfassen +
- jalazbe +
- jbrockmendel
- jh-wu +
- justinchan23 +
- louispotok
- marcosrullan +
- miker985
- nicolab100 +
- nprad
- nsuresh +
- ottiP
- pajachiet +
- raguiar2 +
- ratijas +
- realead +
- robbuckley +
- saurav2608 +
- sideeye +
- ssikdar1
- svenharris +
- syutbai +
- testvinder +
- thatneat
- tmnhat2001
- tomascassidy +
- tomneep
- topper-123
- vkk800 +
- winlu +
- ym-pett +
- yrhooke +
- ywpark1 +
- zertrin
- zhezherun +