Issue 26288: Optimize PyLong_AsDouble for single-digit longs

Created on 2016-02-05 01:55 by yselivanov, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Files
File name        Uploaded
as_double.patch  yselivanov, 2016-02-05 01:54
Messages (15)
msg259615 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2016-02-05 01:54
The attached patch drastically speeds up PyLong_AsDouble for single-digit longs:

    -m timeit -s "x=2" "x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + (x+0.1)/(x-0.1)*2 + (x+10)*(x-30)"

    with patch: 0.414
    without:    0.612

spectral_norm: 1.05x faster.

The results are even better when paired with the patch from issue #21955.
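A minimal sketch of the fast-path idea under discussion, assuming CPython's internal PyLongObject/digit layout (the function name is illustrative and the include path may differ between CPython versions; this is not the committed changeset):

    #include <Python.h>
    #include "longintrepr.h"   /* PyLongObject digit layout */

    /* A long whose absolute size is 0 or 1 stores its whole value in a
       single digit, which is small enough to convert to a C double
       exactly, so the general multi-digit accumulation loop can be
       skipped. */
    static double
    single_digit_as_double_sketch(PyObject *v)
    {
        Py_ssize_t size = Py_SIZE(v);
        if (size == 0)
            return 0.0;                       /* zero has no digits */
        if (size == 1 || size == -1) {
            digit d = ((PyLongObject *)v)->ob_digit[0];
            return size < 0 ? -(double)d : (double)d;
        }
        /* Multi-digit values fall back to the general conversion. */
        return PyLong_AsDouble(v);
    }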
msg259703 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2016-02-06 00:40
New changeset 986184c355e8 by Yury Selivanov in branch 'default': Issue #26288: Optimize PyLong_AsDouble. https://hg.python.org/cpython/rev/986184c355e8
msg259704 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2016-02-06 00:40
Thanks a lot for the review, Serhiy!
msg259715 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2016-02-06 02:24
Nice enhancement.

    /* Fast path; single digit will always fit decimal.
       This improves performance of FP/long operations
       by at least 20%. This is even visible on
       macro-benchmarks. */

I'm not sure that "spectral_norm" can be qualified as a macro-benchmark. It's more a microbenchmark on numerics, no? I'm just nitpicking, your patch is fine ;-)
msg259723 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2016-02-06 10:57
Actually, please fix the comment. We don't want someone wondering what those "macro-benchmarks" are.
msg259724 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2016-02-06 11:23
Additionally, "single digit will always fit a double"?
msg259736 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2016-02-06 17:07
> Actually, please fix the comment. We don't want someone wondering
> what those "macro-benchmarks" are.

If spectral-norm and nbody aren't good benchmarks, then let's remove them from our benchmarks suite. I'll remove that comment anyways, as it doesn't make a lot of sense :)

> Additionally, "single digit will always fit a double"?

What's wrong with that phrase? Would this be better: "It's safe to cast a single-digit long (31 bits) to double"?
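For background on that wording: a C double has a 53-bit mantissa, so any value that fits in a single 31-bit digit converts to double without loss. A small self-contained check of that claim (illustrative, not part of the patch):

    #include <assert.h>
    #include <stdint.h>

    int
    main(void)
    {
        /* The worst-case signed 31-bit value (2**31 - 1) still
           round-trips exactly through a double's 53-bit mantissa. */
        int32_t worst_case = INT32_MAX;
        double d = (double)worst_case;
        assert((int32_t)d == worst_case);
        return 0;
    }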
msg259737 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2016-02-06 17:18
Sorry, I was a bit brief: The current comment says "decimal" instead of "double". It should be changed to "double".
msg259738 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2016-02-06 17:18
On 06/02/2016 18:07, Yury Selivanov wrote:
>> Actually, please fix the comment. We don't want someone wondering
>> what those "macro-benchmarks" are.
>
> If spectral-norm and nbody aren't good benchmarks then let's remove
> them from our benchmarks suite.

Probably, yes.
msg259739 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2016-02-06 17:19
Actually, let me refine that. nbody and spectral-norm don't make sense for people running CPython. Perhaps people running PyPy might care about their performance... (though PyPy is supposed to support a subset of Numpy too)
msg259740 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2016-02-06 17:21
New changeset cfb77ccdc23a by Yury Selivanov in branch 'default': Issue #26288: Fix comment https://hg.python.org/cpython/rev/cfb77ccdc23a
msg259741 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2016-02-06 17:23
> Sorry, I was a bit brief: The current comment says "decimal" instead
> of "double". It should be changed to "double".

Oh, got it now, sorry. I rephrased the comment a bit; hopefully it's better now. Please check. Thanks!
msg259777 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2016-02-07 10:16
The comment looks good to me -- I'll stay out of the benchmarking issue, I didn't check any of that. :)
msg259785 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2016-02-07 11:56
Well, I *did* run the decimal/float milli-benchmark now, and it consistently shows at least a 15% improvement for floats. Given that the official benchmark suite does not seem to be very stable either (#21955), I actually prefer small and well-understood benchmarks.
msg259886 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2016-02-08 21:13
I'm not sure why this issue is open... Closing it.
History
Date User Action Args
2022-04-11 14:58:27 admin set github: 70476
2016-02-08 21:13:53 yselivanov set status: open -> closedmessages: +
2016-02-07 11:56:22 skrah set messages: +
2016-02-07 10:16:20 skrah set messages: +
2016-02-06 17:23:13 yselivanov set messages: +
2016-02-06 17:21:35 python-dev set messages: +
2016-02-06 17:19:26 pitrou set messages: +
2016-02-06 17:18:17 pitrou set messages: +
2016-02-06 17:18:07 skrah set messages: +
2016-02-06 17:12:32 mark.dickinson set nosy: + mark.dickinson
2016-02-06 17:07:13 yselivanov set messages: +
2016-02-06 11:23:54 skrah set nosy: + skrah; messages: +
2016-02-06 10:57:17 pitrou set status: closed -> open; assignee: yselivanov; messages: +
2016-02-06 02:24:52 vstinner set messages: +
2016-02-06 00:40:58 yselivanov set status: open -> closed; type: performance; messages: +; resolution: fixed; stage: resolved
2016-02-06 00:40:10 python-dev set nosy: + python-dev; messages: +
2016-02-05 01:55:00 yselivanov create