[Python-Dev] Re: Decimal data type issues

Kevin Jacobs jacobs at theopalgroup.com
Wed Apr 21 12:56:35 EDT 2004


Tim Peters wrote:

[Kevin Jacobs] ...

Hopefully this is somewhat clearer.

Sorry, it really isn't to me. When you extract "a number" from one of your databases, what do you get from it, concretely? A triple of (decimal string with embedded decimal point, integer precision, integer scale)? An integer value with an integer scale? A decimal string w/o embedded decimal point and an integer scale? Etc.

Sorry for all of the unnecessary confusion. I am in and out of meetings all week and have been trying to keep up several technical conversations in five-minute breaks between sessions. As a result, my examples were flawed. I now have ten minutes to answer some of your questions, so hopefully I can explain a little better.

First, I get decimal numbers from many database adapters, flat files, and XML files, mostly in a variety of string formats. Virtually all are decimal string representations (i.e., a string of digits with an optional decimal point thrown in somewhere). Not all of them encode scale explicitly by adding trailing zeros, though most of the time they do conform to a given maximum precision. A few sources provide decimals as an integer coefficient with an explicit decimal scale exponent.
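Both shapes map directly onto constructors in the decimal module (using the API as it eventually shipped in Python 2.4); a minimal sketch:

    from decimal import Decimal

    # A decimal string representation; trailing zeros preserve scale:
    Decimal('123.4500')            # Decimal('123.4500')

    # An integer coefficient plus an explicit decimal scale exponent:
    coeff, scale = 1234500, 4
    Decimal(coeff).scaleb(-scale)  # Decimal('123.4500')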

Thus, I would like to create decimal instances that conform to those schemas -- i.e., they would be rounded appropriately, and an overflow error raised if they exceeded either the maximum precision or scale. e.g.:

Decimal('20000.001', precision=4, scale=0) === Decimal('20000')
Decimal('20000.001', precision=4, scale=0) raises an overflow exception

The inputs on those two lines look identical to me, so I'm left more lost than before -- you can't really want Decimal('20000.001', precision=4, scale=0) both to return 20000 and raise an overflow exception.

Clearly not. The first example was supposed to have a precision of 5:

Decimal('20000.001', precision=5, scale=0) === Decimal('20000')

In any case, that's not what the IBM standard supports. Context must be respected in its abstract from-string operation, and maximum precision is a component of context. If context's precision is 4, then from-string('20000.001') would round to the most-significant 4 digits (according to the rounding mode specified in context), and signal both the "inexact" and "rounded" conditions. What "signal" means: if the trap-enable flags are set in context for either or both of those conditions, an exception will be raised; if the trap-enable flags for both of those conditions are clear, then the inexact-happened and rounded-happened status flags in context are set, and you can inspect them or not (as you please).
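That behavior can be seen directly in the decimal module as it eventually shipped, where Context.create_decimal is the concrete spelling of the abstract from-string operation:

    from decimal import Context, Inexact, Rounded

    # Maximum precision 4, all traps disabled: conditions only set
    # status flags rather than raising exceptions.
    ctx = Context(prec=4, traps=[])

    d = ctx.create_decimal('20000.001')  # the from-string operation
    print(d)                   # 2.000E+4 -- rounded to 4 digits
    print(ctx.flags[Inexact])  # set: digits were discarded
    print(ctx.flags[Rounded])  # set: rounding occurred

    # With the traps enabled, the same conversion raises instead:
    strict = Context(prec=4, traps=[Inexact, Rounded])
    try:
        strict.create_decimal('20000.001')
    except (Inexact, Rounded) as exc:
        print('trapped:', type(exc).__name__)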

Yes -- this is what I would like to have happen, but with a shortcut to support this common operation. My previous comment about "great difficulty" was not about the implementation itself, but about the number of times it would have to be reimplemented independently if it were not readily available.

However, I am still not aware of a trivial way to enforce a given scale when creating decimal instances. As you point out in a separate e-mail, many operations in effect preserve scale, because arithmetic is unnormalized.
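A quick illustration with the shipped module: addition keeps the smaller exponent of its two operands, so trailing zeros survive, while multiplication adds the exponents and the scale grows.

    from decimal import Decimal

    print(Decimal('1.20') + Decimal('2.30'))  # 3.50   -- scale 2 preserved
    print(Decimal('1.20') * Decimal('1.20'))  # 1.4400 -- scale grows to 4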

However, this conversation is somewhat academic, since there does not seem to be a consensus that support for construction with scale and precision parameters is of general use. So I will create my own decimal subclass and/or utility function and be on my merry way.
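Such a utility function need not be long. A hypothetical sketch (decimal_with_schema is my own name, not a proposed API), built on the standard quantize operation with a half-up rounding choice:

    from decimal import Decimal, ROUND_HALF_UP

    def decimal_with_schema(value, precision, scale):
        # Round to `scale` fractional digits first (the rounding mode
        # is a policy choice; databases typically round half up).
        quantum = Decimal(1).scaleb(-scale)
        d = Decimal(value).quantize(quantum, rounding=ROUND_HALF_UP)
        # Then reject values whose coefficient needs more than
        # `precision` total digits.
        if len(d.as_tuple().digits) > precision:
            raise OverflowError(
                '%r does not fit in (precision=%d, scale=%d)'
                % (value, precision, scale))
        return d

    decimal_with_schema('20000.001', precision=5, scale=0)  # Decimal('20000')
    decimal_with_schema('20000.001', precision=4, scale=0)  # raises OverflowError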

Thanks, -Kevin


