ENH: Support mask in groupby cumprod by phofl · Pull Request #48138 · pandas-dev/pandas


Conversation 21 · Commits 13 · Checks 0 · Files changed

Conversation


phofl

This is a general issue here: if we overflow int64, we get garbage. Previously we were working with float64, which gave us back numbers, but they were incorrect. Now we keep full precision as long as our numbers fit into int64, which was not the case previously, since we were casting to float64 beforehand; imo this is more important.

cc @jorisvandenbossche
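The precision point can be illustrated outside of pandas entirely: int64 is exact over its full range, while float64 only carries a 53-bit mantissa, so casting large integers to float silently rounds them (an illustrative sketch, not code from this PR):

```python
import numpy as np

val = 10**16 + 1  # fits easily in int64, but is NOT exactly representable in float64

# int64 arithmetic keeps the value exact
exact = int(np.int64(val) + np.int64(val))
print(exact)    # 20000000000000002

# casting to float64 first silently rounds away the trailing +1
rounded = int(np.float64(val)) * 2
print(rounded)  # 20000000000000000
```

This is exactly the class of "numbers that look plausible but are incorrect" that the float64 path produced.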

@phofl

@phofl

@phofl

@phofl

Conflicts:

pandas/core/groupby/ops.py

pandas/tests/groupby/test_groupby.py

@phofl

@phofl

Conflicts:

doc/source/whatsnew/v1.6.0.rst

@jorisvandenbossche

This is a general issue here: if we overflow int64, we get garbage. Previously we were working with float64, which gave us back numbers, but they were incorrect. Now we keep full precision as long as our numbers fit into int64, which was not the case previously, since we were casting to float64 beforehand; imo this is more important.

Comparing to the plain (non-grouped) sum/prod, those currently also overflow:

In [35]: pd.Series([int(1e16)]*100).sum()
Out[35]: 1000000000000000000

In [36]: pd.Series([int(1e16)]*1000).sum()
Out[36]: -8446744073709551616

In [40]: pd.Series([2]*62).prod()
Out[40]: 4611686018427387904

In [41]: pd.Series([2]*63).prod()
Out[41]: -9223372036854775808

So it seems sensible that the groupby variants follow this as well. In general, we should maybe document those constraints and expectations around overflow better (not sure if this is documented somewhere at the moment?)
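NumPy's cumulative ops wrap the same way as the plain ``sum``/``prod`` examples above, so aligning groupby with this is consistent across the stack (numpy-only sketch):

```python
import numpy as np

vals = np.full(63, 2, dtype=np.int64)
cp = vals.cumprod()

print(cp[-2])  # 2**62 = 4611686018427387904, still exact
print(cp[-1])  # 2**63 wraps around to -9223372036854775808
```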

jorisvandenbossche

@@ -100,6 +100,7 @@ Deprecations
Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
- Performance improvement in :meth:`.GroupBy.cumprod` for extension array dtypes (:issue:`37493`)


This also now uses int64 instead of float64 for the numpy dtypes? So that also changes behaviour in those cases regarding overflow?


Yes, should we mention this in the whatsnew?


I think so, yes. Maybe as notable bug fix, as it has some behaviour change?



jorisvandenbossche

@@ -641,10 +641,10 @@ def test_groupby_cumprod():
tm.assert_series_equal(actual, expected)
df = DataFrame({"key": ["b"] * 100, "value": 2})
df["value"] = df["value"].astype(float)


We can maybe keep this as int (or test both in addition), so we have a test for the silent overflow behaviour?


Added a new test explicitly testing that overflow is consistent with numpy
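Such a test could look roughly like the following (a hypothetical sketch, not the exact test added in the PR; it assumes a pandas version that includes this change, where groupby cumprod stays in int64 and wraps like numpy):

```python
import numpy as np
import pandas as pd

# 63 twos: the final cumulative product (2**63) overflows int64
df = pd.DataFrame({"key": ["b"] * 63, "value": 2})
result = df.groupby("key")["value"].cumprod()

# overflow should match numpy's wraparound semantics, value for value
expected = pd.Series(np.full(63, 2, dtype=np.int64).cumprod(), name="value")
pd.testing.assert_series_equal(result, expected)
```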

@@ -641,10 +641,10 @@ def test_groupby_cumprod():
tm.assert_series_equal(actual, expected)
df = DataFrame({"key": ["b"] * 100, "value": 2})
df["value"] = df["value"].astype(float)
actual = df.groupby("key")["value"].cumprod()
# if overflows, groupby product casts to float
# while numpy passes back invalid values


This comment can probably be updated


Done

@phofl

@phofl

@phofl

Conflicts:

doc/source/whatsnew/v1.6.0.rst

pandas/_libs/groupby.pyx

@phofl

Conflicts:

doc/source/whatsnew/v1.6.0.rst

@phofl

So this is the last one of the groupby algos. We can start refactoring the groupby ops code paths after this is through.

jorisvandenbossche

In previous versions we cast to float when applying ``cumsum`` and ``cumprod``, which
led to incorrect results even if the result could be held by the ``int64`` dtype.
Additionally, the aggregation now overflows consistently with numpy when the limit of


I would maybe mention that it is making it consistent with the DataFrame method as well? (without groupby)


Added a reference to the methods

@phofl

@jorisvandenbossche

I am still a bit uneasy about this change, since it silently changes the actual results you get: a previously somewhat correct result (an inexact float) could silently become completely incorrect (an overflowed int).
So it would be good to get some input from others.

To what extent would it be possible to split the overflow behaviour change from the mask introduction, so we could for example leave that behaviour change for 2.0? (Not sure myself whether this is worth it, just wondering.)

@phofl

It is a bit unfortunate, this is true. But we can now preserve precision where possible; this was buggy before, and since the behaviour is aligned with numpy and the regular DataFrame behaviour, this should be ok imo. In the end it probably does not matter how far off your values are, as long as they are off.

We could cast to float before calling the algos; this would keep the current behaviour, but would lose the performance gains and the precision fixes (it would also hit cumsum, which is already merged).

Since we intend to do 2.0 as the next release anyway, would it be ok to merge this and revert to casting to float before passing the array to the cython algos if we do an unexpected 1.6 next?
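The trade-off between the two code paths discussed here can be sketched with numpy alone (illustrative, not pandas internals): the int64 path is exact until it wraps, while the float64 pre-cast path never wraps but only approximates large values.

```python
import numpy as np

vals = np.full(1000, 10**16, dtype=np.int64)

# int64 path: wraps past the int64 limit, matching the Series.sum example above
int_total = vals.cumsum()[-1]
print(int_total)    # -8446744073709551616

# float64 pre-cast path: never wraps, but large results are only approximate
float_total = vals.astype(np.float64).cumsum()[-1]
print(float_total)  # 1e+19
```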

@jorisvandenbossche

Since we intend to do 2.0 as the next release anyway, would it be ok to merge this and revert to casting to float before passing the array to the cython algos if we do an unexpected 1.6 next?

Sounds good!

@phofl

Great! Thanks.

@mroeschke Would you mind having a look before merging?

mroeschke

mroeschke

@phofl

@phofl

Conflicts:

doc/source/whatsnew/v1.6.0.rst

mroeschke

@mroeschke

@phofl phofl deleted the groupby_cumprod_mask branch

September 20, 2022 08:22

noatamir pushed a commit to noatamir/pandas that referenced this pull request

Nov 9, 2022

@phofl @noatamir