[Python-Dev] Expert floats
Tim Peters tim.one at comcast.net
Tue Mar 30 17:38:23 EST 2004
- Previous message: [Python-Dev] PEP 328 -- relative and multi-line import
- Next message: [Python-Dev] Expert floats
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
[Andrew Koenig]
But if you're moving to a wider precision, surely there is an even better decimal approximation to the IEEE-rounded "1.1" than 1.1000000000000001 (with even more digits), so isn't the preceding paragraph a justification for using that approximation instead?
[Tim]
Like Ping, you're picturing typing in "1.1" by hand, so that you know decimal 1.1 on-the-nose is the number you "really want".
[Andrew]
No, I don't think so. I said ``the IEEE-rounded "1.1"'', by which I mean the IEEE floating-point number that is closest to (infinite-precision) 1.1.
Oops -- got it.
Let's call that number X. Now, of course X is a rational number, and one that can be exactly represented on any machine with at least as many bits in its floating-point representation as the machine that computed X.
On the original machine, converting 1.1 to floating-point yields exactly X, as does converting 1.1000000000000001. You claim that on a machine with more precision than the original machine, converting 1.1000000000000001 to floating-point will yield a value closer to X than converting 1.1 to floating-point will yield. I agree with you. However, I claim that there is probably another decimal number, with even more digits, that when converted to floating-point on that machine will yield even a closer approximation to X, so isn't your line of reasoning an argument for using that decimal number instead?
It is, but it wasn't practical. 754 requires that a float->string conversion done to 17 significant digits, then back to float again, reproduce the original float exactly. It doesn't require perfect rounding (there are different accuracy requirements over different parts of the domain -- it's complicated), and it doesn't require that a conforming float->string operation be able to produce more than 17 meaningful digits. For example, on Windows under 2.3.3:
>>> print "%.50f" % 1.1
1.10000000000000010000000000000000000000000000000000
It's fine by the 754 std that all digits beyond the 17th are 0. It would also be fine if all digits beyond the 17th were 1, 8, or chosen at random.
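The 17-digit round-trip guarantee is easy to check directly. A small sketch in modern Python syntax (the thread itself predates Python 3): formatting a double to 17 significant digits and converting back reproduces it exactly, and the two decimal spellings under discussion both land on the same double X.

```python
x = 1.1

# 754 round-trip guarantee: 17 significant digits suffice to
# reconstruct an IEEE double exactly.
s = "%.17g" % x
assert float(s) == x

# Both decimal spellings convert to the same nearest double X.
assert float("1.1") == float("1.1000000000000001")
```

Note that the guarantee is about string->float->string fidelity, not about the digits being the "true" decimal expansion -- beyond the 17th digit, anything goes.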
So long as Python relies on the platform C, it can't assume more than that is available. Well, it can't even assume that much, relying on C89, but almost all Python fp behavior is inherited from C, and as a "quality of implementation" issue I believed vendors would, over time, at least try to pay lip service to 754. That prediction was a good one, actually.
Here's another way to look at it. Suppose I want to convert 2**-30 to decimal. On a 64-bit machine, I can represent that value to 17 significant digits as 9.3132257461547852e-10. However, I can also represent it exactly as 9.31322574615478515625e-10.
On Windows (among others), not unless you write your own float->string routines to get those "extra" digits.
>>> print "%.50g" % (2**-30)
9.3132257461547852e-010
BTW, it's actually easy to write perfect-rounding float<->string routines in Python. The drawback is (lack of) speed.
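As a sketch of that claim (the function name and approach here are illustrative, not from the thread): every finite double is an integer m divided by a power of two 2**k, so multiplying m by 5**k yields the digits of the exact decimal expansion -- pure integer arithmetic, hence perfectly rounded by construction, and slow for the reasons integer bignum arithmetic is slow.

```python
def exact_decimal(x):
    """Exact decimal representation of a finite float, as a string.

    Illustrative sketch: float.as_integer_ratio() returns x as m / d
    where d is a power of two for any finite float.
    """
    m, d = x.as_integer_ratio()
    k = d.bit_length() - 1           # d == 2**k
    sign = "-" if m < 0 else ""
    digits = str(abs(m) * 5 ** k)    # x == (m * 5**k) / 10**k exactly
    if k == 0:
        return sign + digits
    digits = digits.rjust(k + 1, "0")
    return sign + digits[:-k] + "." + digits[-k:]
```

For example, exact_decimal(2**-30) produces 0.000000000931322574615478515625 -- all 21 significant digits of the exact value, which no %-formatting can deliver on a platform whose C runtime zero-fills past the 17th digit.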
If you are arguing that I can get a better approximation on a machine with more precision if I write the first of these representations, doesn't that argument suggest that the second of these representations is better still?
Yes. The difference is that no standard requires that C be able to produce the latter, and you only suggest David Gay's code because you haven't tried to maintain it <wink -- but it is a cross-platform mess>.
Remember that every binary floating-point number has an exact decimal representation (though the reverse, of course, is not true).
Yup.
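That last point can be made concrete with the standard library's fractions module, which recovers the exact dyadic rational behind any finite float (a sketch, not part of the original exchange):

```python
from fractions import Fraction

# Every finite binary float is exactly m / 2**k for integers m, k,
# so it always has a terminating (exact) decimal expansion.
f = Fraction(1.1)             # the exact rational value of the stored double
assert f.denominator == 2 ** 51
assert f != Fraction(11, 10)  # ...and it is not exactly 11/10
```

The converse fails because most decimal fractions (1/10, for one) have no finite binary expansion.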