[Python-Dev] Mixing float and Decimal -- thread reboot

Guido van Rossum guido at python.org
Mon Mar 22 19:24:20 CET 2010


On Mon, Mar 22, 2010 at 10:22 AM, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:

On Mon, Mar 22, 2010 at 1:56 PM, Raymond Hettinger <raymond.hettinger at gmail.com> wrote:

>> On Mar 22, 2010, at 10:00 AM, Guido van Rossum wrote:
>>> Decimal + float --> Decimal
>>>
>>> If everybody associated with the Decimal implementation wants this I
>>> won't stop you; as I repeatedly said my intuition about this one (as
>>> opposed to the other two above) is very weak.
>>
>> That's my vote.
>
> I've been lurking on this thread so far, but let me add my +1 to this
> option. My reasoning is that Decimal is a "better" model of Real than
> float, and mixed operations should not degrade the result. "Better" can
> mean different things to different people, but to me the tie breaker is
> the support for contexts. I would not want precision to suddenly change
> in the middle of a calculation because I add 1.0 instead of 1.
>
> This behavior will also be familiar to users of other "enhanced" numeric
> types such as NumPy scalars. Note that in the older Numeric, it was the
> other way around, but after considerable discussion, the behavior was
> changed.

Thanks, "better" is a great way to express this.
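[A short illustration of the context argument above, not part of the original
thread: Decimal arithmetic honors the active context's precision, which is the
"tie breaker" Alexander describes, while an explicit conversion shows the exact
binary value a float would smuggle into a calculation. Note that in CPython as
ultimately shipped, mixed Decimal/float *arithmetic* still raises TypeError;
only comparisons between the two were later permitted, in 3.2.]

```python
from decimal import Decimal, localcontext

# Decimal arithmetic respects the active context's precision.
with localcontext() as ctx:
    ctx.prec = 6
    total = Decimal("1.0000000000000001") + Decimal(1)
    print(total)  # rounded to 6 significant digits: 2.00000

# Mixed Decimal/float arithmetic is rejected rather than silently coerced.
try:
    Decimal("1") + 1.0
except TypeError as exc:
    print("mixed arithmetic rejected:", exc)

# Converting the float explicitly exposes its exact binary value -- this is
# why writing 1.0 instead of 1 could perturb a Decimal calculation.
print(Decimal(1.1))
```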

--
--Guido van Rossum (python.org/~guido)
