ENH: Add argument to GroupBy.apply to let user pre-select "slow path" & not run function twice on 1st group · Issue #20084 · pandas-dev/pandas

I have a large data set that includes a "category" column. I'm using the data to build several models, one for each of the 5 categories in the data. I'd like to use groupby for this, so I wrote a custom function to pass to GroupBy.apply. It works fine, but the function is slow even when building a single model, let alone 5. Unfortunately, because of the way GroupBy.apply works, it actually builds 6 models instead of 5. As a result, my code using GroupBy.apply is about 20% slower than it would be if I'd just written a for loop.

As the documentation for GroupBy.apply states:

In the current implementation apply calls func twice on the first group to decide whether it can take a fast or slow code path. This can lead to unexpected behavior if func has side-effects, as they will take effect twice for the first group.

Since I know my function is going to be slow anyway, would it be possible to add an argument to apply that lets me opt into the "slow code path" from the start and skip the test run of the function on the first group? When my custom function takes 20 minutes per run, being able to cut out the extra iteration would be quite nice, and I can't imagine that any time lost by taking the slow path instead of the fast path would make a difference. (Or am I completely wrong about that?)

Sure, I could just write a for loop and everything would work and be fine, but I really like the "neatness" of using GroupBy, if that makes sense. Additionally, it would be nice if I could use GroupBy.apply in a similar way to how the do function in dplyr works after grouping.

Code Samples

Current functionality

In [1]: import pandas as pd

In [2]: df = pd.DataFrame({'A': list('aaabbbcccc'), 'B': [3,4,3,6,5,2,1,9,5,4], 'C': [4,0,2,2,2,7,8,6,2,8]})

In [3]: def print_name_and_describe(g):
   ...:     print(g.name)
   ...:     return g.describe()
   ...:

In [4]: df.groupby('A').apply(print_name_and_describe)
a
a
b
c
Out[4]:
                 B         C
A
a count   3.000000  3.000000
  mean    3.333333  2.000000
  std     0.577350  2.000000
  min     3.000000  0.000000
  25%     3.000000  1.000000
  50%     3.000000  2.000000
  75%     3.500000  3.000000
  max     4.000000  4.000000
b count   3.000000  3.000000
  mean    4.333333  3.666667
  std     2.081666  2.886751
  min     2.000000  2.000000
  25%     3.500000  2.000000
  50%     5.000000  2.000000
  75%     5.500000  4.500000
  max     6.000000  7.000000
c count   4.000000  4.000000
  mean    4.750000  6.000000
  std     3.304038  2.828427
  min     1.000000  2.000000
  25%     3.250000  5.000000
  50%     4.500000  7.000000
  75%     6.000000  8.000000
  max     9.000000  8.000000
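Note that "a" is printed twice in the output above: apply evaluates the function an extra time on the first group to decide which code path to take, exactly as the documentation warns.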

Suggested functionality

In [4]: df.groupby('A').apply(print_name_and_describe, use_slow_path=True)
a
b
c
Out[4]:
                 B         C
A
a count   3.000000  3.000000
  mean    3.333333  2.000000
  std     0.577350  2.000000
  min     3.000000  0.000000
  25%     3.000000  1.000000
  50%     3.000000  2.000000
  75%     3.500000  3.000000
  max     4.000000  4.000000
b count   3.000000  3.000000
  mean    4.333333  3.666667
  std     2.081666  2.886751
  min     2.000000  2.000000
  25%     3.500000  2.000000
  50%     5.000000  2.000000
  75%     5.500000  4.500000
  max     6.000000  7.000000
c count   4.000000  4.000000
  mean    4.750000  6.000000
  std     3.304038  2.828427
  min     1.000000  2.000000
  25%     3.250000  5.000000
  50%     4.500000  7.000000
  75%     6.000000  8.000000
  max     9.000000  8.000000
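For comparison, the for-loop workaround mentioned above looks roughly like this. It's just a minimal sketch that reuses the df from In [2]; the group key comes from the loop variable rather than from g.name (which apply sets on each group automatically), and each group's work is done exactly once.

In [5]: pieces = {}

In [6]: for name, group in df.groupby('A'):
   ...:     print(name)                      # group key comes from the iterator
   ...:     pieces[name] = group.describe()  # runs exactly once per group
   ...:
a
b
c

In [7]: result = pd.concat(pieces, names=['A'])

The dict keys become the outer index level of the concatenated result, so it has the same shape as the apply output above.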

INSTALLED VERSIONS

commit: None
python: 2.7.13.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None

pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 36.5.0
Cython: None
numpy: 1.14.1
scipy: None
pyarrow: None
xarray: None
IPython: 5.5.0
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 1.0b10
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None