[Python-Dev] Re: Decimal data type issues
Tim Peters tim.one at comcast.net
Wed Apr 21 21:04:44 EDT 2004
[Kevin Jacobs]
> Sorry for all of the unnecessary confusion. ...
No problem.
> First, I get decimal numbers from many database adapters, flat files, and XML files, mainly in a variety of string formats. Virtually all are decimal string representations (i.e., a string of digits with an optional decimal point thrown in somewhere).
Then what problem are you trying to address when reading these numbers in? Is it that you don't trust, e.g., that a column of a database declared with some specific (precision, scale) pair enforced its own restrictions? Using Decimal.Decimal(string) exactly as-is today, you'll get exactly whatever number a string-of-digits-possibly-with-a-decimal-point specifies.
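For concreteness, a tiny sketch (assuming the constructor stays as it is in the current decimal module, where plain construction is exact):

    from decimal import Decimal

    d = Decimal("20000.001")
    # Plain construction is exact: no digits are lost, and context
    # precision is not consulted at this step.
    assert str(d) == "20000.001"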
> Not all of them encode scale explicitly by adding trailing zeros, though most of the time they do conform to a given maximum precision. A few sources do provide decimals as an integer with an explicit decimal scale exponent.
The spec doesn't supply any shortcuts for changing the exponent, because multiplication and division by powers of 10 are exact (barring underflow and overflow). Perhaps a shortcut for that would be handy, but it's not semantically necessary.
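If it helps, here's what that looks like with the module as it stands; the only assumption is that multiplication and division behave as the spec describes:

    from decimal import Decimal

    d = Decimal("1.23")
    # Multiplying by a power of ten just moves the exponent; the
    # coefficient is untouched, so the operation is exact (barring
    # overflow/underflow).
    shifted = d * Decimal("1e2")
    assert shifted == Decimal("123")
    # Dividing by a power of ten is exact too.
    assert shifted / Decimal("100") == Decimal("1.23")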
> ...
> Clearly not. The first example was supposed to have a precision of 5:
> Decimal('20000.001', precision=5, scale=0) === Decimal('20000')
So you're really doing a data conversion step? That is, you don't really want the numbers your data source gives you, but want to transform them first on input? You can, of course; it just strikes me as an odd desire.
In any case, that's not what the IBM standard supports. Context must be respected in its abstract from-string operation, and maximum precision is a component of context. If context's precision is 4, then
from-string('20000.001') would round to the most-significant 4 digits (according to the rounding mode specified in context), and signal both the "inexact" and "rounded" conditions. What "signal" means: if the trap-enable flags are set in context for either or both of those conditions, an exception will be raised; if the trap-enable flags for both of those conditions are clear, then the inexact-happened and rounded-happened status flags in context are set, and you can inspect them or not (as you please).
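A sketch, assuming the context-honoring from-string ends up spelled like the create_decimal method in the current module (the exact spelling may change):

    from decimal import Context, Inexact, Rounded, ROUND_HALF_EVEN

    ctx = Context(prec=4, rounding=ROUND_HALF_EVEN)
    d = ctx.create_decimal("20000.001")   # context-honoring from-string
    assert str(d) == "2.000E+4"           # rounded to 4 significant digits
    # Both conditions were signaled; with traps clear, only flags are set.
    assert ctx.flags[Inexact] and ctx.flags[Rounded]

    # With the trap enabled instead, the same conversion raises:
    ctx.traps[Inexact] = True
    # ctx.create_decimal("20000.001") would now raise decimal.Inexact.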
> Yes -- this is what I would like to have happen, but with a short-cut to support this common operation.
What, exactly, is "this common operation"? Everything I described in that paragraph happens automatically as a result of a single from-string operation. Is it that you're reading in 10 numbers and want a unique precision for each one? That would surprise me. For example, if you're reading a column of a database, I'd expect that a single max precision would apply to each number in that column.
> My previous comment about "great difficulty" was not in terms of the implementation, but rather the number of times it would have to be developed independently, if not readily available.
Well, I don't know what "it" is, exactly, so I'll shut up.
> However, I am still not aware of a trivial way to enforce a given scale when creating decimal instances.
Sorry, I don't even know what "enforce a given scale" means. If, for example, you want to round every input to the nearest penny, set context to use the rounding method you mean by "the nearest", define a penny object:
    penny = Decimal("0.01")
and then pass that to the quantize() method on each number:
    to_pennies = [Decimal(n).quantize(penny) for n in input_strings]
Then every result will have exactly two digits after the decimal point.
If for some mysterious reason you actually want to raise an exception if any information is lost during this step, set context's inexact-trap flag once at the start. If you want to raise an exception even if just a trailing 0 is lost, set context's rounded-trap flag once at the start.
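Putting that together into something runnable (the input strings here are made up, and the rounding choice is just one way to spell "the nearest"):

    from decimal import Decimal, getcontext, Inexact, ROUND_HALF_UP

    ctx = getcontext()
    ctx.rounding = ROUND_HALF_UP          # one meaning of "the nearest"
    penny = Decimal("0.01")

    input_strings = ["12.345", "0.1", "7"]    # made-up inputs
    to_pennies = [Decimal(n).quantize(penny) for n in input_strings]
    assert [str(d) for d in to_pennies] == ["12.35", "0.10", "7.00"]

    # To make any information loss an error instead, set the trap once:
    ctx.traps[Inexact] = True
    # Decimal("12.345").quantize(penny) would now raise decimal.Inexact.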
> As you point out in a separate e-mail, many operations in effect preserve scale, due to unnormalized arithmetic.
Yes.
> However, this conversation is somewhat academic, since there does not seem to be a consensus that adding support for construction with scale and precision parameters is of general use.
Except I'd probably oppose them at this stage even if they were of universal use: we're trying to implement a specific standard here at the start.
Note that a construction method that honors context is required by the spec, and precision is part of context.
> So I will create my own decimal subclass and/or utility function and be on my merry way.
Don't forget to share it! I suspect that's the only way I'll figure out what you're after.
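In case it saves a round trip, here's a guess at the kind of helper that might be meant; the name, signature, and semantics are all invented, so adjust to taste:

    from decimal import Decimal, Context, ROUND_HALF_EVEN

    def decimal_with(value, precision, scale, rounding=ROUND_HALF_EVEN):
        # Hypothetical helper: cap the significant digits first, then
        # force a fixed number of places after the decimal point.
        ctx = Context(prec=precision, rounding=rounding)
        d = ctx.create_decimal(value)               # rounds to `precision`
        return d.quantize(Decimal("1e%d" % -scale), context=ctx)

    # Kevin's example from earlier in the thread:
    assert decimal_with("20000.001", precision=5, scale=0) == Decimal("20000")

Note that quantize will complain (InvalidOperation) if the requested scale can't be met within the given precision, which may or may not be the behavior you want.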