ENH: Support mask in groupby cumprod by phofl · Pull Request #48138 · pandas-dev/pandas
Conversation
- closes ENH: support masked arrays in groupby cython algos #37493
- Tests added and passed if fixing a bug or adding a new feature
- All code checks passed.
- Added type annotations to new arguments/methods/functions.
- Added an entry in the latest doc/source/whatsnew/vX.X.X.rst file if fixing a bug or adding a new feature.
This is a general issue here: if we overflow int64, we get garbage. Previously we were working with float64, which gave us back numbers, but they were incorrect. Now we keep full precision as long as the numbers fit into int64, which was not the case before since we were casting to float64 beforehand; imo this is more important.
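To make this concrete, a minimal sketch of the intended behaviour (assuming a pandas build that includes this change); the values mirror the plain prod examples further down:

In [1]: import pandas as pd

In [2]: pd.Series([2] * 62).groupby([0] * 62).cumprod().iloc[-1]
Out[2]: 4611686018427387904

In [3]: pd.Series([2] * 63).groupby([0] * 63).cumprod().iloc[-1]
Out[3]: -9223372036854775808

2**62 still fits into int64 and stays exact; one more factor of 2 wraps around, exactly like numpy.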
Conflicts:
pandas/core/groupby/ops.py
pandas/tests/groupby/test_groupby.py
Conflicts:
doc/source/whatsnew/v1.6.0.rst
Comparing to the plain (non-grouped) sum/prod, those currently also overflow:
In [35]: pd.Series([int(1e16)]*100).sum()
Out[35]: 1000000000000000000
In [36]: pd.Series([int(1e16)]*1000).sum()
Out[36]: -8446744073709551616
In [40]: pd.Series([2]*62).prod()
Out[40]: 4611686018427387904
In [41]: pd.Series([2]*63).prod()
Out[41]: -9223372036854775808
So it seems sensible that the groupby variants follow this as well. In general, we should probably document those constraints and expectations around overflow better (not sure whether this is documented anywhere at the moment?)
@@ -100,6 +100,7 @@ Deprecations
Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
- Performance improvement in :meth:`.GroupBy.cumprod` for extension array dtypes (:issue:`37493`)
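As a hedged illustration of what the masked fast path covers (the exact repr may vary by version): nullable integer input no longer round-trips through float64, and pd.NA is handled natively:

In [1]: import pandas as pd

In [2]: df = pd.DataFrame({"key": ["a"] * 3, "value": pd.array([2, pd.NA, 3], dtype="Int64")})

In [3]: df.groupby("key")["value"].cumprod()
Out[3]:
0       2
1    <NA>
2       6
Name: value, dtype: Int64

The NA position stays NA, later values keep accumulating (skipna=True is the default), and the result dtype remains Int64 instead of becoming float64.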
This also now uses int64 instead of float64 for the numpy dtypes? So that also changes behaviour in those cases regarding overflow?
Yes, should we mention this in the whatsnew?
I think so, yes. Maybe as notable bug fix, as it has some behaviour change?
@@ -641,10 +641,10 @@ def test_groupby_cumprod():
tm.assert_series_equal(actual, expected)
df = DataFrame({"key": ["b"] * 100, "value": 2})
df["value"] = df["value"].astype(float)
We can maybe keep this as int (or test both in addition), so we have a test for the silent overflow behaviour?
Added a new test explicitly testing that overflow is consistent with numpy
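Roughly along these lines (the test name and exact shape here are illustrative, not necessarily what the PR adds):

import numpy as np
import pandas as pd
import pandas._testing as tm

def test_groupby_cumprod_overflow_matches_numpy():
    # 2**100 overflows int64; the grouped result should wrap exactly like numpy
    df = pd.DataFrame({"key": ["b"] * 100, "value": 2})
    actual = df.groupby("key")["value"].cumprod()
    expected = pd.Series(np.full(100, 2, dtype=np.int64).cumprod(), name="value")
    tm.assert_series_equal(actual, expected)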
@@ -641,10 +641,10 @@ def test_groupby_cumprod():
tm.assert_series_equal(actual, expected)
df = DataFrame({"key": ["b"] * 100, "value": 2})
df["value"] = df["value"].astype(float)
actual = df.groupby("key")["value"].cumprod()
# if overflows, groupby product casts to float
# while numpy passes back invalid values
This comment can probably be updated
Done
Conflicts:
doc/source/whatsnew/v1.6.0.rst
pandas/_libs/groupby.pyx
Conflicts:
doc/source/whatsnew/v1.6.0.rst
So this is the last one of the groupby algos. We can start refactoring the groupby ops code paths after this is through.
In previous versions we cast to float when applying ``cumsum`` and ``cumprod``, which
led to incorrect results even if the result could be held by the ``int64`` dtype.
Additionally, the aggregation overflows consistently with numpy when the limit of
I would maybe mention that it is making it consistent with the DataFrame method as well? (without groupby)
Added a reference to the methods
I am still a bit uneasy about this change, since it silently changes the actual results you get (a previously somewhat correct result (an inexact float) could silently become completely incorrect (an overflowed int)).
So it would be good to get some input from others.
To what extent would it be possible to split the overflow behaviour change from the mask introduction, so we could for example leave that behaviour change for 2.0? (not sure myself whether this is worth it, just wondering)
It is a bit unfortunate, this is true. But we can now preserve precision where possible; this was buggy before, and since the behaviour is aligned with numpy and the regular DataFrame behaviour, this should be ok imo. In the end, it probably does not matter how far off your values are once they are off at all.
We could cast to float before calling the algos; this would keep the current behaviour but would lose the performance gains and the precision fixes (it would also hit cumsum, which is already merged).
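For users who prefer the old float semantics, the explicit opt-in remains available (a hypothetical user-side workaround, not something this PR adds):

In [1]: df = pd.DataFrame({"key": ["b"] * 100, "value": 2})

In [2]: df["value"].astype("float64").groupby(df["key"]).cumprod()

This loses exactness for large products (inexact float) but avoids the int64 wraparound.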
Since we intend to do 2.0 as the next release anyway, would it be ok to merge this and revert to casting to float before passing the array to the cython algos, if we do an unexpected 1.6 next?
Sounds good!
Great! Thanks.
@mroeschke Would you mind having a look before merging?
Conflicts:
doc/source/whatsnew/v1.6.0.rst
phofl deleted the groupby_cumprod_mask branch
noatamir pushed a commit to noatamir/pandas that referenced this pull request
ENH: Support mask in groupby cumprod
Add whatsnew
Move whatsnew
Adress review
Fix example
Clarify
Change dtype access