[Python-Dev] calculator module

Tim Peters tim.one at comcast.net
Wed Mar 10 13:46:10 EST 2004


[Raymond Hettinger]

> ... Also, I'll clarify the part about numerical accuracy. The other respondent took "pretty good" as meaning sloppy. What was meant was using the best algorithms and approximations that can be implemented cleanly and portably. This is a high standard, but much lower than would be expected of a system library (down to the last ULP or pretty close to it).

It's also very much lower than the standard met by the HP calculators you (& I) remember so fondly. A large part of why they were so pleasant to use is that HP went to extraordinary lengths to make them numerically robust. Indeed, they hired Kahan to design and implement robust algorithms for the HP Solver, HP's adaptive Romberg integration, and the HP financial calculators' IRR solvers (note that an IRR solver essentially finds the roots of a degree-N polynomial, so there are N roots, and it takes extreme care to isolate the one that makes physical sense; & if cash flows are both positive and negative, there can even be more than one root that makes physical sense). They were all solid advances in the numeric state of the art. Kahan wrote some articles about them which are worth tracking down. Alas, I think they appeared in the Hewlett-Packard Journal, and offhand I wasn't able to find them online just now.

A practical and important consequence of "pretty good" is that we can ignore bug reports that say "last two digits of gamma(3.1) don't agree with Matlab."

Another professional thing HP did was to document worst-case error bounds on all their function implementations. It's appalling that most C vendors don't (and Python inherits this sad story). It's fine by me if, for example, a numeric function is guaranteed good to only one digit, provided the documentation says so.

BTW, it should be much easier to write high-quality implementations using Decimal than using native binary fp. The library writer building on Decimal can easily request a temporary increase in working precision, then round the final extended-precision result back. Within limits, even the simplest numerically naive algorithms can often be coaxed into delivering high-quality results that way. 95% of the battle when using native binary fp is finding clever ways to rephrase a computation so that native-precision operations don't go insane.
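As a concrete (if toy) sketch of that pattern with today's decimal module -- the name dec_exp and the choice of 10 extra working digits are my own for illustration, not anything from this thread -- a numerically naive Taylor series delivers a cleanly rounded result with no cleverness at all:

    from decimal import Decimal, localcontext

    def dec_exp(x):
        # Sum the naive Taylor series for exp(x), x a Decimal.  The
        # extra working digits absorb the accumulated rounding error;
        # the final unary '+' rounds the extended-precision result
        # back to the caller's precision.
        with localcontext() as ctx:
            ctx.prec += 10
            total = term = Decimal(1)
            n = 0
            while True:
                n += 1
                term *= x / n
                if total + term == total:   # term no longer contributes
                    break
                total += term
        return +total

The same trick (bump ctx.prec, compute naively, round once at the end) works for most elementary functions.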

Simple example: hypot(x, y) = sqrt(x**2 + y**2). The naive implementation in native fp is useless, because it suffers spurious overflow and underflow across a huge section of the input domain: with IEEE doubles, squaring 1e200 overflows to infinity even though the true result, about 1.4e200, is perfectly representable. Use an extended-range, extended-precision arithmetic internally, though, and the naive implementation is both accurate and robust across the entire domain.
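Here's what that looks like with Decimal supplying the extended-range arithmetic (dec_hypot is just an illustrative name). Decimal's default exponent range (Emax = 999999) is so much wider than a binary double's that the naive squares can't overflow or underflow for any double-range input:

    from decimal import Decimal, localcontext

    def dec_hypot(x, y):
        # Naive hypot on Decimal inputs: sqrt(x*x + y*y).  The squares
        # stay far inside Decimal's exponent range, so there's no
        # spurious overflow/underflow to dodge.
        with localcontext() as ctx:
            ctx.prec += 10              # extra digits for the intermediates
            result = (x * x + y * y).sqrt()
        return +result                  # round back to the caller's precision

    >>> dec_hypot(Decimal('3e200'), Decimal('4e200'))
    Decimal('5E+200')

The float version, sqrt(3e200**2 + 4e200**2), overflows at the squaring step.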


