Version 0.21.0 (October 27, 2017)

This is a major release from 0.20.3 and includes a number of API changes, deprecations, new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version.

Highlights include the new features and API changes described in the sections below.

Check the API Changes and deprecations before updating.

New features#

Integration with Apache Parquet file format#

Integration with Apache Parquet, including a new top-level read_parquet() function and a DataFrame.to_parquet() method (GH 15838, GH 17438).

Apache Parquet provides a cross-language, binary file format for reading and writing data frames efficiently. Parquet is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas dtypes, including extension dtypes such as datetime with timezones.

This functionality depends on either the pyarrow or fastparquet library. For more details, see the IO docs on Parquet.
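A minimal sketch of a round trip, assuming pyarrow or fastparquet is installed and using a hypothetical file name example.parquet:

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})

# Write the frame to a Parquet file and read it back; the engine is chosen
# automatically from the installed libraries (pyarrow or fastparquet).
df.to_parquet('example.parquet')
roundtripped = pd.read_parquet('example.parquet')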

Method infer_objects type conversion#

The DataFrame.infer_objects() and Series.infer_objects() methods have been added to perform dtype inference on object columns, replacing some of the functionality of the deprecated convert_objects method. See the documentation here for more details. (GH 11221)

These methods only perform soft conversions on object columns, converting Python objects to native types, but not any coercive conversions. For example:

In [1]: df = pd.DataFrame({'A': [1, 2, 3],
   ...:                    'B': np.array([1, 2, 3], dtype='object'),
   ...:                    'C': ['1', '2', '3']})

In [2]: df.dtypes
Out[2]:
A     int64
B    object
C    object
Length: 3, dtype: object

In [3]: df.infer_objects().dtypes
Out[3]:
A     int64
B     int64
C    object
Length: 3, dtype: object

Note that column 'C' was not converted - only scalar numeric types will be converted to a new type. Other types of conversion should be accomplished using the to_numeric() function (or to_datetime(), to_timedelta()).

In [4]: df = df.infer_objects()

In [5]: df['C'] = pd.to_numeric(df['C'], errors='coerce')

In [6]: df.dtypes
Out[6]:
A    int64
B    int64
C    int64
Length: 3, dtype: object

Improved warnings when attempting to create columns#

New users are often puzzled by the relationship between column operations and attribute access on DataFrame instances (GH 7175). One specific instance of this confusion is attempting to create a new column by setting an attribute on the DataFrame:

In [1]: df = pd.DataFrame({'one': [1., 2., 3.]})

In [2]: df.two = [4, 5, 6]

This does not raise any obvious exceptions, but also does not create a new column:

In [3]: df
Out[3]:
   one
0  1.0
1  2.0
2  3.0

Setting a list-like data structure into a new attribute now raises a UserWarning about the potential for unexpected behavior. See Attribute Access.
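A minimal sketch of the difference (the warning text is paraphrased, not the exact message):

df = pd.DataFrame({'one': [1., 2., 3.]})

df.two = [4, 5, 6]     # sets an attribute only; emits a UserWarning, no column is created
df['two'] = [4, 5, 6]  # creates the column as intended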

Method drop now also accepts index/columns keywords#

The drop() method has gained index/columns keywords as an alternative to specifying the axis. This is similar to the behavior of reindex (GH 12392).

For example:

In [7]: df = pd.DataFrame(np.arange(8).reshape(2, 4),
   ...:                   columns=['A', 'B', 'C', 'D'])

In [8]: df
Out[8]:
   A  B  C  D
0  0  1  2  3
1  4  5  6  7

[2 rows x 4 columns]

In [9]: df.drop(['B', 'C'], axis=1)
Out[9]:
   A  D
0  0  3
1  4  7

[2 rows x 2 columns]

The following is now equivalent:

In [10]: df.drop(columns=['B', 'C'])
Out[10]:
   A  D
0  0  3
1  4  7

[2 rows x 2 columns]

Methods rename, reindex now also accept axis keyword#

The DataFrame.rename() and DataFrame.reindex() methods have gained the axis keyword to specify the axis to target with the operation (GH 12392).

Here’s rename:

In [11]: df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})

In [12]: df.rename(str.lower, axis='columns')
Out[12]:
   a  b
0  1  4
1  2  5
2  3  6

[3 rows x 2 columns]

In [13]: df.rename(id, axis='index')
Out[13]:
                 A  B
140639502074064  1  4
140639502074096  2  5
140639502074128  3  6

[3 rows x 2 columns]

And reindex:

In [14]: df.reindex(['A', 'B', 'C'], axis='columns')
Out[14]:
   A  B   C
0  1  4 NaN
1  2  5 NaN
2  3  6 NaN

[3 rows x 3 columns]

In [15]: df.reindex([0, 1, 3], axis='index')
Out[15]:
     A    B
0  1.0  4.0
1  2.0  5.0
3  NaN  NaN

[3 rows x 2 columns]

The “index, columns” style continues to work as before.

In [16]: df.rename(index=id, columns=str.lower)
Out[16]:
                 a  b
140639502074064  1  4
140639502074096  2  5
140639502074128  3  6

[3 rows x 2 columns]

In [17]: df.reindex(index=[0, 1, 3], columns=['A', 'B', 'C'])
Out[17]:
     A    B   C
0  1.0  4.0 NaN
1  2.0  5.0 NaN
3  NaN  NaN NaN

[3 rows x 3 columns]

We highly encourage using named arguments to avoid confusion when using either style.

CategoricalDtype for specifying categoricals#

pandas.api.types.CategoricalDtype has been added to the public API and expanded to include the categories and ordered attributes. A CategoricalDtype can be used to specify the set of categories and orderedness of an array, independent of the data. This can be useful, for example, when converting string data to a Categorical (GH 14711, GH 15078, GH 16015, GH 17643):

In [18]: from pandas.api.types import CategoricalDtype

In [19]: s = pd.Series(['a', 'b', 'c', 'a']) # strings

In [20]: dtype = CategoricalDtype(categories=['a', 'b', 'c', 'd'], ordered=True)

In [21]: s.astype(dtype)
Out[21]:
0    a
1    b
2    c
3    a
Length: 4, dtype: category
Categories (4, object): ['a' < 'b' < 'c' < 'd']

One place that deserves special mention is in read_csv(). Previously, with dtype={'col': 'category'}, the returned values and categories would always be strings.

In [22]: data = 'A,B\na,1\nb,2\nc,3'

In [23]: pd.read_csv(StringIO(data), dtype={'B': 'category'}).B.cat.categories
Out[23]: Index(['1', '2', '3'], dtype='object')

Notice the “object” dtype.

With a CategoricalDtype of all numerics, datetimes, or timedeltas, we can automatically convert to the correct type:

In [24]: dtype = {'B': CategoricalDtype([1, 2, 3])}

In [25]: pd.read_csv(StringIO(data), dtype=dtype).B.cat.categories
Out[25]: Index([1, 2, 3], dtype='int64')

The values have been correctly interpreted as integers.

The .dtype property of a Categorical, CategoricalIndex or a Series with categorical type will now return an instance of CategoricalDtype. While the repr has changed, str(CategoricalDtype()) is still the string 'category'. We'll take this moment to remind users that the preferred way to detect categorical data is to use pandas.api.types.is_categorical_dtype(), and not str(dtype) == 'category'.

See the CategoricalDtype docs for more.

GroupBy objects now have a pipe method#

GroupBy objects now have a pipe method, similar to the one on DataFrame and Series, that allows functions that take a GroupBy to be composed in a clean, readable syntax. (GH 17871)

For a concrete example of combining .groupby and .pipe, imagine having a DataFrame with columns for stores, products, revenue and quantity sold. We'd like to do a groupwise calculation of prices (i.e. revenue/quantity) per store and per product. We could do this in a multi-step operation, but expressing it in terms of piping can make the code more readable.

First we set the data:

In [26]: import numpy as np

In [27]: n = 1000

In [28]: df = pd.DataFrame({'Store': np.random.choice(['Store_1', 'Store_2'], n),
   ....:                    'Product': np.random.choice(['Product_1',
   ....:                                                 'Product_2',
   ....:                                                 'Product_3'], n),
   ....:                    'Revenue': (np.random.random(n) * 50 + 10).round(2),
   ....:                    'Quantity': np.random.randint(1, 10, size=n)})

In [29]: df.head(2)
Out[29]:
     Store    Product  Revenue  Quantity
0  Store_2  Product_2    32.09         7
1  Store_1  Product_3    14.20         1

[2 rows x 4 columns]

Now, to find prices per store/product, we can simply do:

In [30]: (df.groupby(['Store', 'Product'])
   ....:    .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
   ....:    .unstack().round(2))
Out[30]:
Product  Product_1  Product_2  Product_3
Store
Store_1       6.73       6.72       7.14
Store_2       7.59       6.98       7.23

[2 rows x 3 columns]

See the documentation for more.

Categorical.rename_categories accepts a dict-like#

rename_categories() now accepts a dict-like argument for new_categories. The previous categories are looked up in the dictionary's keys and replaced if found. The behavior of missing and extra keys is the same as in DataFrame.rename().

In [31]: c = pd.Categorical(['a', 'a', 'b'])

In [32]: c.rename_categories({"a": "eh", "b": "bee"})
Out[32]:
['eh', 'eh', 'bee']
Categories (2, object): ['eh', 'bee']

Warning

To assist with upgrading pandas, rename_categories treats Series as list-like. Typically, Series are considered to be dict-like (e.g. in .rename, .map). In a future version of pandas rename_categories will change to treat them as dict-like. Follow the warning message's recommendations for writing future-proof code.

In [33]: c.rename_categories(pd.Series([0, 1], index=['a', 'c']))
FutureWarning: Treating Series 'new_categories' as a list-like and using the values.
In a future version, 'rename_categories' will treat Series like a dictionary.
For dict-like, use 'new_categories.to_dict()'
For list-like, use 'new_categories.values'.
Out[33]:
[0, 0, 1]
Categories (2, int64): [0, 1]
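A minimal sketch of the two future-proof spellings the warning recommends, assuming the same Categorical c as above:

# Explicit dict-like behavior: look up each existing category in the mapping
c.rename_categories(pd.Series([0, 1], index=['a', 'c']).to_dict())

# Explicit list-like behavior: rename positionally using the underlying values
c.rename_categories(pd.Series([0, 1], index=['a', 'c']).values)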

Other enhancements#

New functions or methods#

New keywords#

Various enhancements#

Backwards incompatible API changes#

Dependencies have increased minimum versions#

We have updated our minimum supported versions of dependencies (GH 15206, GH 15543, GH 15214). If installed, we now require:

Package      Minimum Version   Required
Numpy        1.9.0             X
Matplotlib   1.4.3
Scipy        0.14.0
Bottleneck   1.0.0

Additionally, support has been dropped for Python 3.4 (GH 15251).

Sum/prod of all-NaN or empty Series/DataFrames is now consistently NaN#

Note

The changes described here have been partially reverted. See the v0.22.0 Whatsnew for more.

The behavior of sum and prod on all-NaN Series/DataFrames no longer depends on whether bottleneck is installed, and the return value of sum and prod on an empty Series has changed (GH 9422, GH 15507).

Calling sum or prod on an empty or all-NaN Series, or columns of a DataFrame, will result in NaN. See the docs.

In [33]: s = pd.Series([np.nan])

Previously WITHOUT bottleneck installed:

In [2]: s.sum() Out[2]: np.nan

Previously WITH bottleneck:

In [2]: s.sum() Out[2]: 0.0

New behavior, without regard to the bottleneck installation (the output shown reflects the partially reverted behavior noted above):

In [34]: s.sum() Out[34]: 0.0

Note that this also changes the sum of an empty Series. Previously this always returned 0 regardless of a bottleneck installation:

In [1]: pd.Series([]).sum() Out[1]: 0

but for consistency with the all-NaN case, this was changed in 0.21.0 to return NaN as well (again, the output shown reflects the later partial revert):

In [2]: pd.Series([]).sum() Out[2]: 0

Indexing with a list with missing labels is deprecated#

Previously, selecting with a list of labels where one or more labels were missing would always succeed, returning NaN for missing labels. This now shows a FutureWarning; in the future this will raise a KeyError (GH 15747). This warning will trigger on a DataFrame or a Series when using .loc[] or [[]] with a list of labels containing at least one missing label.

In [35]: s = pd.Series([1, 2, 3])

In [36]: s
Out[36]:
0    1
1    2
2    3
Length: 3, dtype: int64

Previous behavior

In [4]: s.loc[[1, 2, 3]]
Out[4]:
1    2.0
2    3.0
3    NaN
dtype: float64

Current behavior

In [4]: s.loc[[1, 2, 3]]
Passing list-likes to .loc or [] with any missing label will raise
KeyError in the future, you can use .reindex() as an alternative.

See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike

Out[4]:
1    2.0
2    3.0
3    NaN
dtype: float64

The idiomatic way to achieve selecting potentially not-found elements is via .reindex():

In [37]: s.reindex([1, 2, 3])
Out[37]:
1    2.0
2    3.0
3    NaN
Length: 3, dtype: float64

Selection with all keys found is unchanged.

In [38]: s.loc[[1, 2]]
Out[38]:
1    2
2    3
Length: 2, dtype: int64

NA naming changes#

In order to promote more consistency among the pandas API, we have added additional top-level functions isna() and notna() that are aliases for isnull() and notnull(). The naming scheme is now more consistent with methods like .dropna() and .fillna(). Furthermore, in all cases where the .isnull() and .notnull() methods are defined, they also have additional methods named .isna() and .notna(); these are included for the classes Categorical, Index, Series, and DataFrame. (GH 15001)

The configuration option pd.options.mode.use_inf_as_null is deprecated, and pd.options.mode.use_inf_as_na is added as a replacement.
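A minimal sketch of the new aliases:

s = pd.Series([1.0, np.nan, 3.0])

s.isna()         # equivalent to s.isnull()
s.notna()        # equivalent to s.notnull()
pd.isna(np.nan)  # top-level alias for pd.isnull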

Iteration of Series/Index will now return Python scalars#

Previously, when using certain iteration methods for a Series with dtype int or float, you would receive a numpy scalar, e.g. a np.int64, rather than a Python int. Issue (GH 10904) corrected this for Series.tolist() and list(Series). This change makes all iteration methods consistent, in particular, for __iter__() and .map(); note that this only affects int/float dtypes. (GH 13236, GH 13258, GH 14216).

In [39]: s = pd.Series([1, 2, 3])

In [40]: s
Out[40]:
0    1
1    2
2    3
Length: 3, dtype: int64

Previously:

In [2]: type(list(s)[0]) Out[2]: numpy.int64

New behavior:

In [41]: type(list(s)[0]) Out[41]: int

Furthermore this will now correctly box the results of iteration for DataFrame.to_dict() as well.

In [42]: d = {'a': [1], 'b': ['b']}

In [43]: df = pd.DataFrame(d)

Previously:

In [8]: type(df.to_dict()['a'][0]) Out[8]: numpy.int64

New behavior:

In [44]: type(df.to_dict()['a'][0]) Out[44]: int

Indexing with a Boolean Index#

Previously, when passing a boolean Index to .loc, if the index of the Series/DataFrame had boolean labels, you would get a label-based selection, potentially duplicating result labels, rather than a boolean indexing selection (where True selects elements). This was inconsistent with how a boolean numpy array would be indexed. The new behavior is to act like a boolean numpy array indexer. (GH 17738)

Previous behavior:

In [45]: s = pd.Series([1, 2, 3], index=[False, True, False])

In [46]: s
Out[46]:
False    1
True     2
False    3
Length: 3, dtype: int64

In [59]: s.loc[pd.Index([True, False, True])]
Out[59]:
True     2
False    1
False    3
True     2
dtype: int64

Current behavior

In [47]: s.loc[pd.Index([True, False, True])]
Out[47]:
False    1
False    3
Length: 2, dtype: int64

Furthermore, previously if you had an index that was non-numeric (e.g. strings), then a boolean Index would raise a KeyError. This will now be treated as a boolean indexer.

Previous behavior:

In [48]: s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])

In [49]: s
Out[49]:
a    1
b    2
c    3
Length: 3, dtype: int64

In [39]: s.loc[pd.Index([True, False, True])]
KeyError: "None of [Index([True, False, True], dtype='object')] are in the [index]"

Current behavior

In [50]: s.loc[pd.Index([True, False, True])]
Out[50]:
a    1
c    3
Length: 2, dtype: int64

PeriodIndex resampling#

In previous versions of pandas, resampling a Series/DataFrame indexed by a PeriodIndex returned a DatetimeIndex in some cases (GH 12884). Resampling to a multiplied frequency now returns a PeriodIndex (GH 15944). As a minor enhancement, resampling a PeriodIndex can now handle NaT values (GH 13224)

Previous behavior:

In [1]: pi = pd.period_range('2017-01', periods=12, freq='M')

In [2]: s = pd.Series(np.arange(12), index=pi)

In [3]: resampled = s.resample('2Q').mean()

In [4]: resampled
Out[4]:
2017-03-31     1.0
2017-09-30     5.5
2018-03-31    10.0
Freq: 2Q-DEC, dtype: float64

In [5]: resampled.index
Out[5]: DatetimeIndex(['2017-03-31', '2017-09-30', '2018-03-31'], dtype='datetime64[ns]', freq='2Q-DEC')

New behavior:

In [1]: pi = pd.period_range('2017-01', periods=12, freq='M')

In [2]: s = pd.Series(np.arange(12), index=pi)

In [3]: resampled = s.resample('2Q').mean()

In [4]: resampled
Out[4]:
2017Q1    2.5
2017Q3    8.5
Freq: 2Q-DEC, dtype: float64

In [5]: resampled.index
Out[5]: PeriodIndex(['2017Q1', '2017Q3'], dtype='period[2Q-DEC]')

Upsampling and calling .ohlc() previously returned a Series, basically identical to calling .asfreq(). OHLC upsampling now returns a DataFrame with columns open, high, low and close (GH 13083). This is consistent with downsampling and DatetimeIndex behavior.

Previous behavior:

In [1]: pi = pd.period_range(start='2000-01-01', freq='D', periods=10)

In [2]: s = pd.Series(np.arange(10), index=pi)

In [3]: s.resample('H').ohlc()
Out[3]:
2000-01-01 00:00    0.0
                   ...
2000-01-10 23:00    NaN
Freq: H, Length: 240, dtype: float64

In [4]: s.resample('M').ohlc()
Out[4]:
         open  high  low  close
2000-01     0     9    0      9

New behavior:

In [56]: pi = pd.period_range(start='2000-01-01', freq='D', periods=10)

In [57]: s = pd.Series(np.arange(10), index=pi)

In [58]: s.resample('H').ohlc()
Out[58]:
                  open  high  low  close
2000-01-01 00:00   0.0   0.0  0.0    0.0
2000-01-01 01:00   NaN   NaN  NaN    NaN
2000-01-01 02:00   NaN   NaN  NaN    NaN
2000-01-01 03:00   NaN   NaN  NaN    NaN
2000-01-01 04:00   NaN   NaN  NaN    NaN
...                ...   ...  ...    ...
2000-01-10 19:00   NaN   NaN  NaN    NaN
2000-01-10 20:00   NaN   NaN  NaN    NaN
2000-01-10 21:00   NaN   NaN  NaN    NaN
2000-01-10 22:00   NaN   NaN  NaN    NaN
2000-01-10 23:00   NaN   NaN  NaN    NaN

[240 rows x 4 columns]

In [59]: s.resample('M').ohlc()
Out[59]:
         open  high  low  close
2000-01     0     9    0      9

[1 rows x 4 columns]

Improved error handling during item assignment in pd.eval#

eval() will now raise a ValueError when item assignment malfunctions, or inplace operations are specified, but there is no item assignment in the expression (GH 16732)

In [51]: arr = np.array([1, 2, 3])

Previously, if you attempted the following expression, you would get a not very helpful error message:

In [3]: pd.eval("a = 1 + 2", target=arr, inplace=True)
...
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None)
and integer or boolean arrays are valid indices

This is a very long way of saying numpy arrays don’t support string-item indexing. With this change, the error message is now this:

In [3]: pd.eval("a = 1 + 2", target=arr, inplace=True)
...
ValueError: Cannot assign expression output to target

It also used to be possible to evaluate expressions inplace, even if there was no item assignment:

In [4]: pd.eval("1 + 2", target=arr, inplace=True)
Out[4]: 3

However, this input does not make much sense because the output is not being assigned to the target. Now, a ValueError will be raised when such an input is passed in:

In [4]: pd.eval("1 + 2", target=arr, inplace=True)
...
ValueError: Cannot operate inplace if there is no assignment

Dtype conversions#

Previously, assignments, .where() and .fillna() with a bool assignment would coerce to the same type (e.g. int / float), or raise for datetimelikes. These will now preserve the bools with object dtype. (GH 16821)

In [52]: s = pd.Series([1, 2, 3])

Previous behavior:

In [5]: s[1] = True

In [6]: s
Out[6]:
0    1
1    1
2    3
dtype: int64

New behavior

In [7]: s[1] = True

In [8]: s
Out[8]:
0       1
1    True
2       3
Length: 3, dtype: object

Previously, an assignment to a datetimelike Series with a non-datetimelike value would coerce the non-datetimelike item being assigned (GH 14145).

In [53]: s = pd.Series([pd.Timestamp('2011-01-01'), pd.Timestamp('2012-01-01')])

In [1]: s[1] = 1

In [2]: s
Out[2]:
0   2011-01-01 00:00:00.000000000
1   1970-01-01 00:00:00.000000001
dtype: datetime64[ns]

These now coerce to object dtype.

In [1]: s[1] = 1

In [2]: s
Out[2]:
0    2011-01-01 00:00:00
1                      1
dtype: object

MultiIndex constructor with a single level#

The MultiIndex constructors no longer squeeze a MultiIndex with all length-one levels down to a regular Index. This affects all the MultiIndex constructors. (GH 17178)

Previous behavior:

In [2]: pd.MultiIndex.from_tuples([('a',), ('b',)])
Out[2]: Index(['a', 'b'], dtype='object')

Length 1 levels are no longer special-cased. They behave exactly as if you had length 2+ levels, so a MultiIndex is always returned from all of the MultiIndex constructors:

In [54]: pd.MultiIndex.from_tuples([('a',), ('b',)])
Out[54]:
MultiIndex([('a',),
            ('b',)],
           )

UTC localization with Series#

Previously, to_datetime() did not localize datetime Series data when utc=True was passed. Now, to_datetime() will correctly localize Series with a datetime64[ns, UTC] dtype to be consistent with how list-like and Index data are handled. (GH 6415).

Previous behavior

In [55]: s = pd.Series(['20130101 00:00:00'] * 3)

In [12]: pd.to_datetime(s, utc=True)
Out[12]:
0   2013-01-01
1   2013-01-01
2   2013-01-01
dtype: datetime64[ns]

New behavior

In [56]: pd.to_datetime(s, utc=True)
Out[56]:
0   2013-01-01 00:00:00+00:00
1   2013-01-01 00:00:00+00:00
2   2013-01-01 00:00:00+00:00
Length: 3, dtype: datetime64[ns, UTC]

Additionally, DataFrames with datetime columns that were parsed by read_sql_table() and read_sql_query() will also be localized to UTC only if the original SQL columns were timezone aware datetime columns.

Consistency of range functions#

In previous versions, there were some inconsistencies between the various range functions: date_range(), bdate_range(), period_range(), timedelta_range(), and interval_range(). (GH 17471).

One of the inconsistent behaviors occurred when the start, end and periods parameters were all specified, potentially leading to ambiguous ranges. When all three parameters were passed, interval_range ignored the periods parameter, period_range ignored the end parameter, and the other range functions raised. To promote consistency among the range functions, and avoid potentially ambiguous ranges, interval_range and period_range will now raise when all three parameters are passed.

Previous behavior:

In [2]: pd.interval_range(start=0, end=4, periods=6)
Out[2]:
IntervalIndex([(0, 1], (1, 2], (2, 3]]
              closed='right',
              dtype='interval[int64]')

In [3]: pd.period_range(start='2017Q1', end='2017Q4', periods=6, freq='Q')
Out[3]:
PeriodIndex(['2017Q1', '2017Q2', '2017Q3', '2017Q4', '2018Q1', '2018Q2'],
            dtype='period[Q-DEC]', freq='Q-DEC')

New behavior:

In [2]: pd.interval_range(start=0, end=4, periods=6)

ValueError: Of the three parameters: start, end, and periods, exactly two must be specified

In [3]: pd.period_range(start='2017Q1', end='2017Q4', periods=6, freq='Q')

ValueError: Of the three parameters: start, end, and periods, exactly two must be specified

Additionally, the endpoint parameter end was not included in the intervals produced by interval_range. However, all other range functions include end in their output. To promote consistency among the range functions, interval_range will now include end as the right endpoint of the final interval, except if freq is specified in a way which skips end.

Previous behavior:

In [4]: pd.interval_range(start=0, end=4)
Out[4]:
IntervalIndex([(0, 1], (1, 2], (2, 3]]
              closed='right',
              dtype='interval[int64]')

New behavior:

In [57]: pd.interval_range(start=0, end=4)
Out[57]: IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4]], dtype='interval[int64, right]')
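A minimal sketch of the exception mentioned above, where freq is specified in a way that skips end (under these inputs the range stops before reaching end, so 4 is not included):

pd.interval_range(start=0, end=4, freq=3)
# expected: IntervalIndex([(0, 3]], ...) -- end=4 is skipped because the next step, 3 + freq, would exceed it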

No automatic Matplotlib converters#

pandas no longer registers our date, time, datetime, datetime64, and Period converters with matplotlib when pandas is imported. Matplotlib plot methods (plt.plot, ax.plot, …) will not nicely format the x-axis for DatetimeIndex or PeriodIndex values. You must explicitly register these converters:
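A minimal sketch of explicit registration (assuming the pandas.tseries.converter module shipped with 0.21.x; later versions expose pandas.plotting.register_matplotlib_converters() instead):

from pandas.tseries import converter

# Register pandas' date/time converters with matplotlib so that
# DatetimeIndex / PeriodIndex values are formatted nicely on the x-axis.
converter.register()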

pandas built-in Series.plot and DataFrame.plot will register these converters on first-use (GH 17710).

Note

This change has been temporarily reverted in pandas 0.21.1, for more details see here.

Other API changes#

Deprecations#

Series.select and DataFrame.select#

The Series.select() and DataFrame.select() methods are deprecated in favor of using df.loc[labels.map(crit)] (GH 12401).

In [58]: df = pd.DataFrame({'A': [1, 2, 3]}, index=['foo', 'bar', 'baz'])

In [3]: df.select(lambda x: x in ['bar', 'baz'])
FutureWarning: select is deprecated and will be removed in a future release. You can use .loc[crit] as a replacement
Out[3]:
     A
bar  2
baz  3

In [59]: df.loc[df.index.map(lambda x: x in ['bar', 'baz'])]
Out[59]:
     A
bar  2
baz  3

[2 rows x 1 columns]

Series.argmax and Series.argmin#

The behavior of Series.argmax() and Series.argmin() has been deprecated in favor of Series.idxmax() and Series.idxmin(), respectively (GH 16830).

For compatibility with NumPy arrays, pd.Series implements argmax and argmin. Since pandas 0.13.0, argmax has been an alias for pandas.Series.idxmax(), and argmin has been an alias for pandas.Series.idxmin(). They return the label of the maximum or minimum, rather than the position.

We've deprecated the current behavior of Series.argmax and Series.argmin. Using either of these will emit a FutureWarning. Use Series.idxmax() if you want the label of the maximum. Use Series.values.argmax() if you want the position of the maximum. Likewise for the minimum. In a future release Series.argmax and Series.argmin will return the position of the maximum or minimum.
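A minimal sketch of the distinction, assuming a small example Series:

s = pd.Series([10, 30, 20], index=['a', 'b', 'c'])

s.idxmax()         # 'b' -- label of the maximum
s.values.argmax()  # 1   -- position of the maximum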

Removal of prior version deprecations/changes#

Performance improvements#

Documentation changes#

Bug fixes#

Conversion#

Indexing#

IO#

Plotting#

GroupBy/resample/rolling#

Sparse#

Reshaping#

Numeric#

Categorical#

PyPy#

Other#

Contributors#

A total of 206 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.