[Python-Dev] Expert floats

Tim Peters tim.one at comcast.net
Wed Mar 31 14:41:36 EST 2004


[Andrew Koenig]

... For example, be it binary or decimal, floating-point addition is still not associative, so even such a simple computation as a+b+c requires careful thought if you wish the maximum possible precision.

Not really for most everyday applications of decimal arithmetic. People work with decimal quantities in real life, and addition of fp decimals is exact (hence also associative) provided the total precision isn't exceeded. Since Decimal allows setting precision to whatever the user wants, it's very easy to pick a precision so obviously large that even adding, say, a billion dollars-and-cents inputs yields the exact result, regardless of addition order. For the truly paranoid, Decimal's "inexact flag" can be inspected at the end to see whether the exactness assumption was violated, and the absurdly paranoid can even ask that an exception get raised whenever an inexact result would have been produced.
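To make that concrete, here's a minimal sketch using the decimal module as it now exists in the standard library (modern Python syntax; the precision setting and the amounts are just illustrative, not taken from this thread):

    from decimal import Decimal, getcontext, Inexact

    ctx = getcontext()
    ctx.prec = 28                      # far more digits than any dollars-and-cents sum needs

    prices = [Decimal("19.99"), Decimal("0.05"), Decimal("1234567.89")]
    total = sum(prices, Decimal("0"))  # exact, in any order: nothing exceeds 28 digits

    print(total)                       # 1234587.93
    print(bool(ctx.flags[Inexact]))    # False -- the paranoid check: nothing was rounded

    # The absurdly-paranoid variant: raise instead of merely setting the flag.
    ctx.traps[Inexact] = True
    try:
        Decimal(1) / Decimal(3)        # can't be represented exactly in 28 digits
    except Inexact:
        print("inexact result trapped")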

Binary fp loses in these common cases just because the true inputs can't be represented, and the number printed at the end isn't even the true result of approximately adding the approximated inputs. Decimal easily avoids all of that.
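A short sketch of both halves of that (again modern Python, purely for illustration): the binary double stored for the literal 0.1 isn't 0.1, and what gets printed back is a rounded string rather than the true result of adding the approximations:

    from decimal import Decimal

    # What the binary double for the literal 0.1 actually holds:
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # The printed result hides the accumulated error until it grows large enough to show:
    print(0.1 + 0.2)                        # 0.30000000000000004
    print(Decimal("0.1") + Decimal("0.2"))  # 0.3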

Why are you not arguing against decimal floating-point if your goal is to expose users to the problems of floating-point as early as possible?

The overwhelmingly most common newbie binary fp traps today are failures to realize that the numbers they type aren't the numbers they get, and that the numbers they see aren't the results they got. Adding 0.1 to itself 10 times and not getting 1.0 exactly is universally considered to be "a bug" by newbies (but it is exactly 1.0 in decimal). OTOH, if they add 1./3. to itself 3 times under decimal and don't get exactly 1.0, they won't be surprised at all. It's the same principle at work in both cases, but they're already trained to expect 0.9...9 from the latter.
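The two cases side by side, as a quick sketch (modern Python; the digit counts are chosen only for illustration):

    from decimal import Decimal, getcontext

    # Binary: ten copies of the approximated 0.1 don't sum to exactly 1.0.
    total = sum([0.1] * 10)
    print(total == 1.0)                 # False
    print(total)                        # 0.9999999999999999

    # Decimal: the same sum is exact, because 0.1 is exactly representable.
    print(sum([Decimal("0.1")] * 10))   # 1.0

    # But 1/3 isn't exactly representable in decimal either, so adding it to
    # itself three times gives the string of 9s everyone already expects.
    getcontext().prec = 10
    third = Decimal(1) / Decimal(3)
    print(third + third + third)        # 0.9999999999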

The primary newbie difficulty with binary fp is that the simplest use case (just typing in an ordinary number) is already laced with surprises -- it already violates WYSIWYG, and insults a lifetime of "intuition" gained from by-hand and calculator math (of course it's not a coincidence that hand calculators use decimal arithmetic internally -- they need to be user-friendly).

You have to do things fancier than just typing in the prices of grocery items to get in trouble with Decimal.


