[Python-Dev] Mixing float and Decimal -- thread reboot

Greg Ewing greg.ewing at canterbury.ac.nz
Mon Mar 22 00:07:53 CET 2010


Raymond Hettinger wrote:

The question of where to place decimals in the numeric hierarchy was erroneously being steered by the notion that both decimal and binary floats are intrinsically inexact. But that is incorrect: inexactness is a taint; the numbers themselves are always exact.

I don't think that's correct. "Numbers are always exact" is a simplification due to choosing not to attach an inexactness flag to every value. Without such a flag, we don't really know whether any given value is exact or not, we can only guess.

The reason for regarding certain types as "implicitly inexact" is something like this: If you start with exact ints, and do only int operations with them, you must end up with exact ints. But the same is not true of float or Decimal: even if you start with exact values, you can end up with inexact ones.
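This distinction is easy to demonstrate with the standard `decimal` module, whose context tracks an `Inexact` flag: int arithmetic on exact ints always yields exact ints, while a Decimal division of two exact values can produce a result that had to be rounded.

```python
from decimal import Decimal, getcontext, Inexact

# Ints are closed under exact arithmetic: exact in, exact out.
assert (10**30 + 1) - 10**30 == 1

ctx = getcontext()
ctx.clear_flags()

# Start with two perfectly exact Decimal values...
result = Decimal(1) / Decimal(3)

# ...and the division raises the Inexact flag, because 1/3 has no
# finite decimal representation and had to be rounded to fit the
# context's precision (28 digits by default).
print(bool(ctx.flags[Inexact]))  # True
print(result)                    # 0.3333333333333333333333333333
```

So the "taint" is observable per operation via the context flags, even though no flag travels with the value itself.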

I really like Guido's idea of a context flag to control whether mixing of decimal and binary floats will issue a warning.
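For context: at the time of this thread no such flag existed, and in Python 3 mixed float/Decimal arithmetic is simply refused, while comparisons (allowed since Python 3.2) are performed exactly. A small illustration of the status quo the proposal would relax:

```python
from decimal import Decimal

# Mixed-type arithmetic is refused outright:
try:
    Decimal("1.1") + 1.1
except TypeError:
    print("mixed arithmetic refused")

# Mixed-type comparison is allowed and exact: the binary float 1.1
# is really 1.100000000000000088817841970..., not decimal 1.1.
print(Decimal("1.1") == 1.1)  # False
```

A context flag, as proposed, would let users choose between this hard error, a warning, and silent coercion.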

Personally I feel that far too much stuff concerning decimals is controlled by implicit context parameters. It gives me the uneasy feeling that I don't know what the heck any given decimal operation is going to do. It's probably justified in this case, though.
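Greg's unease is concrete: the result of a given Decimal expression depends on ambient context parameters (precision, rounding mode, traps) that are invisible at the call site. The same division yields different answers under different contexts:

```python
from decimal import Decimal, localcontext

# Under the default ambient context (28 significant digits):
print(Decimal(1) / Decimal(7))  # 0.1428571428571428571428571429

# The textually identical operation under a different context:
with localcontext() as ctx:
    ctx.prec = 4
    print(Decimal(1) / Decimal(7))  # 0.1429
```

Nothing in the expression `Decimal(1) / Decimal(7)` itself reveals which precision or rounding mode will govern it; that is the implicitness being objected to.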

-- Greg
