[Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed
Stefan Krah stefan at bytereef.org
Tue Mar 27 00:47:49 CEST 2012
- Previous message: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed
- Next message: [Python-Dev] [Python-checkins] cpython: Issue #7652: Integrate the decimal floating point libmpdec library to speed
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Victor Stinner <victor.stinner at gmail.com> wrote:
> The 80x is a ballpark figure for the maximum expected speedup for
> standard numerical floating point applications.
OK, but it's surprising when you read the What's New document: 72x and 80x look inconsistent.
Yes, indeed, I'll reword that.
> For huge numbers decimal is also faster than int:
>
> factorial(1000000):
>
> decimal, calculation time: 6.844487905502319
> decimal, tostr(): 0.033592939376831055
>
> int, calculation time: 17.96010398864746
> int, tostr(): ... still running ...
> Hum, with a resolution able to store the result with all digits?
Yes, you have to set context.prec (and context.Emax) to the maximum values; then the result is an exact integer. The conversion to string is so fast because there is no complicated base conversion.
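A minimal sketch of this setup (the helper name dec_factorial is mine, not from the thread; it assumes the decimal module constants MAX_PREC, MAX_EMAX and MIN_EMIN available since Python 3.3):

```python
import decimal
from decimal import Decimal, Context

# Give the context enough precision that every digit of the result fits,
# so multiplication of integers stays exact (no rounding occurs).
ctx = Context(prec=decimal.MAX_PREC,
              Emax=decimal.MAX_EMAX,
              Emin=decimal.MIN_EMIN)

def dec_factorial(n):
    # Hypothetical helper: factorial computed entirely in decimal.
    result = Decimal(1)
    for i in range(2, n + 1):
        result = ctx.multiply(result, Decimal(i))
    return result

# str() on the result is cheap: the digits are already stored in a
# base-10 representation, so no base conversion is needed.
print(str(dec_factorial(20)))
```

With the maximum context settings, none of the intermediate products is rounded, which is why the result is an exact integer rather than an approximation.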
> If yes, would it be possible to reuse the multiply algorithm of decimal
> (and maybe of other functions) for int? Or does it depend heavily on
> decimal internal structures?
Large parts of the Number Theoretic Transform could be reused, but there would still be quite a bit of work.
Stefan Krah