[Python-Dev] Decimal data type issues

Tim Peters tim_one at email.msn.com
Fri Apr 16 00:17:24 EDT 2004


[Facundo Batista, on whether to limit exponent magnitude]

The feeling prior to this mail was that an artificial limit was silly if you don't gain anything from it.

Here are three valuable reasons for keeping the limits. So, if nobody opposes strongly, I'll put these reasons in the PEP and keep the limits.

I'm only +0 on keeping the limits, btw -- I think they do more good than harm, but it's not going to kill us either way.

...

Until now, Decimal does not use context at creation time (by "creation" I mean when you create an object by instantiating the class).

This means no information is lost at creation time, and the context is used only in operations. Do you want to change this behaviour?

No, I want to spell the spec's mandatory from-string operation (which does use and affect context, in the specified ways) via an optional bool argument to the Decimal constructor. This isn't at all useless, although if you're only thinking about a money type it may seem that way. In general number-crunching, literals may be given to high precision, but that precision isn't free and usually isn't needed: the operations in the spec don't round inputs back to the current context precision before computing, they only round outputs afterward, and digits feeding into a computation are expensive. So, in general, a scientific user will usually not want all the digits in

pi = Decimal("3.1415926535897932384626433832795", use_context=True)

to be used. Instead they'll want to set the working precision once in context, and have their literals automatically get rounded back to that precision. This pattern will be familiar to Fortran77 users, where Fortran REAL ("single precision") literals are usually given to DOUBLE precision, so that changing a single IMPLICIT (static typing) statement can make the same code usable as-is under both REAL and DOUBLE precision. Decimal supports an unbounded number of precisions, and so the need for pleasant precision control is that much more acute (and the spec's from-string operation is one aid toward that end).
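(For concreteness, here is a minimal sketch of that pattern as it can be spelled with the context machinery of the decimal module that eventually shipped -- getcontext(), Context.prec and Context.create_decimal() -- rather than the use_context flag proposed above; the API names are the later module's, not this thread's prototype, but the idea is the same:)

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 9      # working precision, set once in context
>>> Decimal("3.1415926535897932384626433832795")      # plain constructor keeps every digit
Decimal('3.1415926535897932384626433832795')
>>> getcontext().create_decimal("3.1415926535897932384626433832795")  # from-string op rounds to context
Decimal('3.14159265')
>>> +Decimal("3.1415926535897932384626433832795")     # unary plus also rounds back to context
Decimal('3.14159265')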

...

[on what repr() should do]

The only issue I see here is that you don't have a clear representation of the internals of the object,

But nobody cares except implementers. I'm perfectly happy to introduce an .as_tuple() method for them.

More generally, Python should (eventually, not necessarily at the start) implement an extension to context, specifying the desired str() and repr() behavior. One size doesn't fit all, and context is the natural place to record individual preferences.

Against "one size doesn't fit all", everyone can at least understand the to-sci-string format, while only a handful will ever be able to read the tuples with ease.

but yes, you have an exact representation of that number. Let me show an example:

There are more in the spec (for those interested).

>>> d = Decimal.Decimal((1, (1,2,3,4,0), 1))
>>> d
Decimal( (1, (1, 2, 3, 4, 0), 1L) )
>>> d.to_sci_string()
'-1.2340E+5'

There you have to count the decimals and use the exponent in the string to know what the real exponent is (but I don't know how important this is).

It's not important to end users; indeed, trying to tell most of them that "the real exponent" isn't 5 would be difficult.
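(The arithmetic behind that: the exponent a sci-string displays is the stored exponent plus the number of coefficient digits, minus one. A small sketch, using the as_tuple() and adjusted() accessors of the stdlib decimal module that later shipped -- names that postdate this thread:)

>>> from decimal import Decimal
>>> d = Decimal((1, (1, 2, 3, 4, 0), 1))
>>> d
Decimal('-1.2340E+5')
>>> d.as_tuple().exponent     # the stored ("internal") exponent
1
>>> d.adjusted()              # the exponent the sci-string shows: 1 + 5 - 1
5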

For most people most of the time, after entering 12345.67, output of Decimal("12345.67") will be much more understandable than output of

Decimal( (0, (1, 2, 3, 4, 5, 6, 7), -2) )
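(A sketch of the same round-trip in the later stdlib spelling, whose as_tuple() returns a named tuple rather than the bare tuple shown above, putting the two faces side by side:)

>>> from decimal import Decimal
>>> d = Decimal("12345.67")
>>> str(d)            # what end users see
'12345.67'
>>> d.as_tuple()      # what implementers can ask for
DecimalTuple(sign=0, digits=(1, 2, 3, 4, 5, 6, 7), exponent=-2)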

