[Python-Dev] small floating point number problem
Raymond Hettinger raymond.hettinger at verizon.net
Wed Feb 8 09:08:25 CET 2006
[Smith]
I just ran into a curious behavior with small floating points, trying to find the limits of them on my machine (XP). Does anyone know why the '0.0' is showing up for one case below but not for the other? According to my tests, the smallest representable float on my machine is much smaller than 1e-308: it is
2.470328229206234e-325 but I can only create it as a product of two numbers, not directly. Here is an attempt to create the much larger 1e-308:
>>> a = 1e-308
>>> a
0.0
The clue is that the two exponents differ by 17 orders of magnitude (325 - 308), which corresponds roughly to the 52 bits of a double's significand.
The interpreter builds 1e-308 using the underlying C library's string-to-float function, and that function does not construct numbers outside the normal range for floats. When you enter a value outside that range, it underflows the result to zero.
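A minimal way to see the two paths side by side (a sketch only, assuming a reasonably modern CPython; the particular values 1e-310, 1e-300, and 1e-10 are chosen just for illustration):

# Compare building a tiny value via the string-to-float path (a literal)
# with computing it from two in-range operands (the arithmetic path).
# On the 2006-era C runtime described above, the literal underflowed to 0.0
# while the product survived as a nonzero denormal; a modern strtod()
# usually parses subnormal literals correctly, so the gap may not reproduce.
from_literal = float('1e-310')
from_product = 1e-300 * 1e-10
print(from_literal)
print(from_product)
# The two may still differ by an ulp because the product is rounded twice.
print(from_literal == from_product)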
In contrast, your computed floats (such as .1*1e-307) produce a denormal result (the significand is stored with fewer bits than normal because the exponent is already at its minimum). That denormal result is not zero, and the C library's float-to-string conversion successfully generates a decimal string representation for it.
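One way to confirm that such a computed value really is a denormal is to look at its raw bit pattern; a rough sketch, assuming the struct module of a current Python:

import struct

# For a denormal, the 11-bit exponent field is all zeros and only the
# 52-bit fraction carries information, which is why precision falls off
# at the very bottom of the range.
f = .1 * 1e-307
bits = struct.unpack('<Q', struct.pack('<d', f))[0]
exponent_field = (bits >> 52) & 0x7FF
fraction_field = bits & ((1 << 52) - 1)
print(exponent_field)   # 0 -> the denormal (subnormal) encoding
print(fraction_field)   # nonzero, so the value itself is not zero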
The asymmetric handling of denormals by the C library's string-to-float (atof()) and float-to-string conversions is why you see a difference. A consequence of that asymmetry is that the expected eval(repr(f)) == f invariant breaks down:
>>> f = .1*1e-307
>>> eval(repr(f)) == f
False
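If you want to see where the invariant gives way on your own box, here is a rough probe (a sketch only; the starting point 1e-300 and the step of dividing by ten are arbitrary):

# Walk from the normal range down into the denormal range and report the
# first value whose repr/eval round trip fails. On the platform described
# in this thread it fails once values go subnormal; on a modern CPython it
# typically holds all the way down to about 5e-324.
value = 1e-300
while value > 0.0:
    if eval(repr(value)) != value:
        print('round trip failed at', repr(value))
        break
    value /= 10.0
else:
    print('round trip held for every value probed')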
Raymond