[Python-Dev] RE: [Python-checkins] python/dist/src/Objects complexobject.c,2.57,2.58

Tim Peters <tim.one@comcast.net>
Mon, 15 Apr 2002 19:22:37 -0400


[Guido]

I presume that you also object to allowing certain "int-requiring" operations (like use as a sequence index) on floats when the fractional part is zero.

Not if you know the fractional part is 0. If you don't know that for sure, then you're saying sequence indices are fuzzy, subject to change across platforms and releases. I'm acutely aware of this because I spent 10 years using Cray machines where int(6./3.) == 1. That doesn't happen in a 754 world, and 754 also supplies an "exact" flag so you can know for sure that 24./6. is exact and 24./5. is not.

The same kind of thing can still happen, though, if "3." is really the result of an fp computation that just happens to round to something a little bigger than 3.0 on the nose; this in turn could be due to a 1-bit difference in the platform exp() implementation for a given input. Over the years, I've tracked down dozens and dozens of x-platform program failures to such tiny platform differences (and, perhaps paradoxically, I spent more time per year doing that at Dragon than at Cray!).
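To make the hazard concrete, a sketch in today's Python (CPython doesn't expose the 754 inexact flag for binary floats, so the decimal module's context flags stand in for it; 24/7 stands in for 24./5., since 24/5 happens to be exact in decimal even though it isn't in binary):

    from decimal import Decimal, getcontext, Inexact

    # Ten innocent-looking additions drift by one ulp under 754 doubles:
    total = 0.0
    for _ in range(10):
        total += 0.1
    print(total)            # 0.9999999999999999
    print(int(total * 10))  # 9, not 10 -- truncation lands on the wrong side

    # decimal's Inexact flag plays the role of the 754 "exact" flag:
    ctx = getcontext()
    ctx.clear_flags()
    Decimal(24) / Decimal(6)
    print(bool(ctx.flags[Inexact]))  # False: the quotient is exact
    ctx.clear_flags()
    Decimal(24) / Decimal(7)
    print(bool(ctx.flags[Inexact]))  # True: digits were thrown away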

Unified numbers are rapidly becoming less attractive this way...

To the extent that they expose platform accidents, they've always been suspect. A fancy hand calculator gets away with it because it's a single platform where all the numerics are under the control of the system (HW and SW) designers, and they can do whatever it takes to avoid visible surprises. They don't care much about speed, either. A pure-SW equivalent would need to stay away from HW fp, or use it in limited known-safe ways.

... To me, unified numbers means that there's only one numeric type, and it always has the same properties. That means that all numbers must have an imag field, which is zero when it's not a complex number.

But it seems you want some other indicator that says it's never been part of a calculation involving complex numbers...

If you're to avoid platform accidents, it doesn't matter whether it's ever been complex; what matters is that, when you're "downcasting", you know whether that's safe or whether you're just guessing. CL refuses to guess; Scheme tries to make safe guesses possible, but half-heartedly.
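In Python terms, the CL stance is easy to sketch (exact_index is a hypothetical helper, not anything in the library):

    def exact_index(x):
        # Downcast x for use as a sequence index, refusing to guess:
        # raise unless the conversion is provably lossless.
        if isinstance(x, complex):
            if x.imag != 0.0:
                raise TypeError("imaginary part isn't exactly zero")
            x = x.real
        n = int(x)
        if n != x:  # exact comparison -- no fuzz
            raise TypeError("fractional part isn't exactly zero")
        return n

    seq = ['a', 'b', 'c']
    print(seq[exact_index(2.0)])        # 'c': provably safe
    # seq[exact_index(1.9999999999)]    # TypeError: no silent truncation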

[on an "exact?" bit]

Unified numbers are becoming even less attractive...

It's taken from Scheme: exactness is Scheme's way of trying to allow for guaranteed-safe downcasting (or "down towering") when possible. In practice, though, the Scheme standard says so little about how inexactness propagates that you can't count on it across Scheme implementations.
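For what it's worth, today's Python grew a rough analogue (the fractions module postdates this thread; purely illustrative): Fraction is "exact", float is "inexact", and mixing them loses exactness silently, much like Scheme's contagion rules:

    from fractions import Fraction

    x = Fraction(1, 3)    # exact rational
    print(x * 3 == 1)     # True, guaranteed: exactness propagates

    z = x * 3.0           # mixing with a float demotes to inexact
    print(type(z), z)     # <class 'float'> 1.0 -- right answer, but only by luck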

[CL pointers]

Too much to read. :-(

Approximate computer arithmetic is a bitch. If you want a system that's easy to define and to use, stay away from HW fp and it's almost easy. Else rounding errors are inescapable, and so have to be dealt with.
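For instance, the drifting sum from above is exact if you keep it out of HW fp entirely (again today's fractions module, used purely for illustration):

    from fractions import Fraction

    # The same ten additions, in software rationals instead of HW doubles:
    total = sum([Fraction(1, 10)] * 10)
    print(total, total == 1)   # 1 True -- on every platform, every release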

But the end result is that users will write trunc() or round() calls whenever they have a float value that they believe is an int and that they want to use in an int context -- and then, when it's not even close to an int, they won't notice.

I was wondering when someone would get to that. Implicit in that complaint is that, if we left magical float->int up to "the system", then the system could benevolently decide when a given float was or wasn't "close enough" to an exact int to make no difference. Been there, done that, too. IMO it just increases the complication, and the difficulty of writing correct portable code: it's another layer of hidden magic to trip over.

For example, if you decide that 1.9 is "close enough" to 2 that it should be treated as 2 on the nose, but that anything in (1.1, 1.9) is "too far away", then you've effectively just moved the boundaries where the tiniest possible 1-bit rounding accidents lead to radically different behavior. Systems that do stuff like that (APL is an example) then grow global "fuzz" settings so that users can override the system when, through painful experience, they learn the system can't guess reliably.
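A hypothetical fuzzy_int shows why: the cliff doesn't vanish, it just relocates to the fuzz boundary:

    def fuzzy_int(x, fuzz=1e-9):
        # APL-style snapping: treat x as the nearest int when it's
        # within fuzz; otherwise refuse.
        n = round(x)
        if abs(x - n) <= fuzz:
            return int(n)
        raise ValueError("not close enough to an int")

    print(fuzzy_int(2.0 + 1e-10))   # 2: inside the fuzz, silently snapped
    try:
        fuzzy_int(2.0 + 1e-8)       # just outside: the 1-bit cliff lives here now
    except ValueError as e:
        print(e)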

Some hand calculators deal with this by computing to several extra decimal digits internally, and then saying "it's close enough to an int" if and only if the extra-precision hidden result rounds to an exact normal-precision int. That's effective, but perhaps would break down if hand calculators normally did millions (billions, ...) of computations before needing to decide.
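That trick is easy to mimic with the decimal module (is_whole is a hypothetical sketch of the calculator's guard-digit test):

    from decimal import Decimal, localcontext

    def is_whole(compute, prec=12, guard=3):
        # Evaluate with extra hidden digits, round back to display
        # precision, and call it an int iff the rounded result is one.
        with localcontext() as ctx:
            ctx.prec = prec + guard
            hidden = compute()
        with localcontext() as ctx:
            ctx.prec = prec
            shown = +hidden            # unary + rounds to display precision
        return shown == shown.to_integral_value()

    # sqrt(2)**2 drifts in the hidden digits but displays as exactly 2:
    print(is_whole(lambda: Decimal(2).sqrt() ** 2))   # True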

Forget complex. How is sequence.getitem(x) to be defined for a float x (this is the under-the-covers "float"; whether that's a user-visible type distinction isn't material to the question here)? Scheme says that in (list-ref sequence x), x must be an "exact integer", where "integer" includes things like a complex with a 0 imaginary part, but "exact" says you've got to know it's a 0 imaginary part, not just an approximation to a 0 imaginary part. That would be sane, except that most Scheme implementations call all internal floats "inexact", so in practice it ends up meaning "an internal integer", and all the words allowing for other possibilities just confuse the issue. Common Lisp says "screw that -- if you think this is usable as an int, you convert it to an int yourself: your choice, your responsibility, and here are 4 convenient functions for choosing the rounding you want". Either is preferable to "don't worry, trust me".
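Those 4 functions map straight onto today's Python spellings, and a halfway case shows why the choice can't be made for you (note round's tie-breaking has changed since 2.x):

    import math

    x = -2.5
    print(math.floor(x))   # -3  (CL floor)
    print(math.ceil(x))    # -2  (CL ceiling)
    print(math.trunc(x))   # -2  (CL truncate: toward zero)
    print(round(x))        # -2  (CL round: to nearest, ties to even)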