[Python-Dev] Decimal type question [Prothon]
Tim Peters tim.peters at gmail.com
Mon Aug 9 18:23:51 CEST 2004
[Martin v. Löwis]
... For example, would it be possible to automatically fall back to binary floating point if the result cannot be represented exactly (which would be most divide operations)? Would that help?
It's a puzzle. .NET Decimal is really more a fixed-point type than a floating-point type. It consists of a sign bit, a 96-bit binary integer, and a "scale factor" in 0..28, which is the power of 10 by which the integer is conceptually divided. The largest positive representable value is thus 2**96 - 1 == 79228162514264337593543950335. The smallest positive non-zero representable value is 1/10**28.
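To make that representation concrete, here's a minimal Python sketch modeling a .NET-style Decimal as a (sign, coefficient, scale) triple and computing its exact value; the function name and layout are mine, for illustration only:

    from fractions import Fraction

    def net_decimal_value(sign, coefficient, scale):
        """Exact value of a .NET-style Decimal: (-1)**sign * coefficient / 10**scale."""
        assert sign in (0, 1)
        assert 0 <= coefficient < 2**96   # 96-bit binary integer
        assert 0 <= scale <= 28           # power of 10 to divide by
        return (-1)**sign * Fraction(coefficient, 10**scale)

    # Largest positive value: maximal coefficient, scale 0.
    print(net_decimal_value(0, 2**96 - 1, 0))  # 79228162514264337593543950335

    # Smallest positive non-zero value: coefficient 1, scale 28.
    print(net_decimal_value(0, 1, 28))         # 1/10000000000000000000000000000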
So for something like 1/3, you get about 28 decimal digits of good result, which is much better than you can get with an IEEE double.
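For instance, Python's decimal module at 28 significant digits (its default precision, which happens to be close to what .NET Decimal delivers here) shows the kind of result meant:

    from decimal import Decimal, getcontext

    getcontext().prec = 28   # 28 significant digits, the module's default
    print(Decimal(1) / Decimal(3))
    # 0.3333333333333333333333333333

    print(1.0 / 3.0)  # an IEEE double is good to only ~15-16 decimal digits
    # 0.3333333333333333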
OTOH, something like 1/300000000000000000000 starts to make the "gradual underflow" nature of Decimal apparent: for numbers with absolute value less than 1, the number of significant digits shrinks as the absolute value shrinks, until at 1e-28 you have only 1 digit of precision (and, e.g., 1.49999e-28 "rounds to" 1e-28).
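A rough way to see this with Python's decimal module is to force every result onto .NET Decimal's fixed smallest quantum of 1e-28 by quantizing; to_net_decimal below is my own shorthand for that effect, not anything from .NET:

    from decimal import Decimal, getcontext

    getcontext().prec = 29  # room for .NET Decimal's up-to-29-digit coefficients

    # Crude emulation of .NET Decimal's smallest quantum (1e-28); a
    # hypothetical helper for illustration, not a .NET API.
    def to_net_decimal(x):
        return Decimal(x).quantize(Decimal('1e-28'))

    print(to_net_decimal(Decimal(1) / Decimal('3e20')))  # 3.3333333E-21: 8 digits survive
    print(to_net_decimal('1.49999e-28'))                 # 1E-28: a single digit left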
So it's a weird arithmetic as you approach its limits. But binary FP is too, and so is IBM's decimal spec. A primary difference is that binary FP has a much larger dynamic range, so you don't get near the limits nearly as often; and IBM's decimal has a gigantic dynamic range (the expectation is that essentially no real app will get anywhere near its limits, unless the app is buggy).
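For a rough sense of the ranges involved (my numbers, taken from the respective formats' definitions, not from this thread):

    import sys

    print(sys.float_info.max)  # ~1.8e308: IEEE double's largest finite value
    print(sys.float_info.min)  # ~2.2e-308: smallest normal double (subnormals go lower)
    print(2**96 - 1)           # ~7.9e28: .NET Decimal's largest value (vs. 1e-28 at the bottom)
    # IBM's decimal spec, in its decimal128 format, allows exponents out to
    # about +-6144 -- a dynamic range no plausible application should hit.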