[Python-Dev] Floor division

Tim Peters tim.peters at gmail.com
Fri Jan 26 04:28:03 CET 2007


[Armin Rigo]

Thanks for the clarification. Yes, it makes sense that mod, divmod and floordiv on float and decimal would eventually follow the same path as for complex (where they make even less sense and already raise a DeprecationWarning).

[Nick Maclaren]

Yes. Though them not doing so would also make sense. The difference is that they make no mathematical sense for complex, but the problems with float are caused by floating-point (and do not occur for the mathematical reals).

There is an argument for saying that divmod should return a long quotient and a float remainder,

It could, but who would have a (sane) use for a possibly 2000-bit quotient?

which is what C99 has specified for remquo (except that it requires only the last 3 bits of the quotient for reasons that completely baffle me).

C99 has no integral type capable of representing the full quotient, but the last few bits may suffice for performing argument reduction in the implementation of a periodic function. Surely you did that for trig functions in the bad old days? For example, express input x as N*(pi/2) + r, where |r| <= pi/4. Then N can be expressed as 4*n1 + n2, with n2 in [0, 1, 2, 3], and:

cos(x) =
cos((4*n1+n2)*(pi/2) + r) =
cos(n1*(2*pi) + n2*pi/2 + r) =  [can ignore integral multiples of 2*pi]
cos(n2*pi/2 + r)

Then the combination of n2 and the sign of r tell you which quadrant you're in, and various cos-specific rules can deliver the result from that and cos(r) or sin(r).

The point is that only the last 2 bits of the quotient are needed to determine n2, and all other bits in N are irrelevant to the result.
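The quadrant bookkeeping above can be sketched in Python. This is a toy reduction, not how a real libm does it: it assumes |x| is modest enough that computing r with ordinary float arithmetic loses only a few bits, whereas production implementations carry out the reduction in extended precision.

```python
import math

def cos_reduced(x):
    # Express x as n*(pi/2) + r with |r| <= pi/4 (approximately).
    n = round(x / (math.pi / 2))
    r = x - n * (math.pi / 2)
    # Only the last 2 bits of the quotient n matter -- this is the n2
    # from the derivation above; all higher bits of n are irrelevant.
    n2 = n % 4
    if n2 == 0:
        return math.cos(r)        # cos(r)
    elif n2 == 1:
        return -math.sin(r)       # cos(pi/2 + r)  = -sin(r)
    elif n2 == 2:
        return -math.cos(r)       # cos(pi + r)    = -cos(r)
    else:
        return math.sin(r)        # cos(3*pi/2 + r) = sin(r)
```

For inputs of moderate size this agrees with math.cos to near full precision, which is all the sketch is meant to show.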

Linux misimplemented that the last time I looked.

Personally, I think that it is bonkers, as it is fiendishly expensive compared to its usefulness - especially with Decimal!

Ah, but the IBM spec weasels out: it raises an exception if you try to compute "remainder" or "remainder-near" of inputs when the exponents differ by "too much".

This is a bit peculiar to me, because there are ways to compute "remainder" using a number of operations proportional to the log of the exponent difference. It could be that people who spend their life doing floating point forget how to work with integers ;-)

For example, what about 9e99999 % 3.14?

9e99999 = q*3.14 + r

if and only if (multiplying both sides by 100)

9e100001 = 314*q + 100*r

So what's mod(9 * 10**100001, 314)? Binary-method modular exponentiation goes fast:

>>> pow(10, 100001, 314)
148
>>> 148 * 9 % 314
76

So

9e100001 = 314*q + 76

exactly for some integer q, so (dividing both sides by 100)

9e99999 = 3.14*q + 0.76

exactly for the same integer q. Done. It doesn't require 100000 long-division steps, it only requires about two dozen modular multiplications wrt the relatively tiny modulus 314.
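The whole computation above fits in a few lines of Python; Fraction is used only to hold the exact scaled-back remainder:

```python
from fractions import Fraction

# 9e99999 = 3.14*q + r  iff  9 * 10**100001 = 314*q + 100*r
# (multiply both sides by 100).  Binary modular exponentiation
# makes the huge power cheap:
r_scaled = 9 * pow(10, 100001, 314) % 314   # 9*148 % 314 == 76
r = Fraction(r_scaled, 100)                 # exact remainder, 76/100
```

So r is exactly 0.76, and the big-integer brute force `9 * 10**100001 % 314` confirms the same 76 without ever forming the 100000-digit quotient q.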

OTOH, I don't know how to get the last bit of q with comparable efficiency, and that's required to implement the related "remainder-near" in "halfway cases".

But it isn't obviously WRONG.

For floats, fmod(x, y) is exactly congruent to x modulo y -- I don't think it's possible to get more right than exactly right ;-)
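The exactness claim is easy to check from Python, since Fraction arithmetic over the floats' exact values is itself exact (for positive arguments, fmod's truncating remainder coincides with Fraction's flooring %; the particular x and y here are just illustrative):

```python
import math
from fractions import Fraction

x, y = 1e17, 3.14
# fmod must return exactly x - n*y for the integer n = trunc(x/y);
# compare it against exact rational arithmetic on the same values.
exact = Fraction(x) % Fraction(y)
assert Fraction(math.fmod(x, y)) == exact
```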


