[Python-Dev] decimal API
Raymond Hettinger raymond.hettinger at verizon.net
Fri Jul 2 12:11:00 CEST 2004
Currently, calling the Decimal constructor with an invalid literal (such as Decimal("Fred")) returns a quiet NaN. This was done because the spec appeared to require it (in fact, there are IBM test cases to confirm that behavior).
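For illustration, a minimal interactive sketch of the behavior described above (the exact repr may vary with the implementation):

>>> from decimal import Decimal
>>> Decimal("Fred")        # invalid literal: no exception, just a quiet NaN
Decimal("NaN")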
I've discussed this with Mike Cowlishaw (author of the spec and test cases) and he has just clarified that, "... the intent here was not to disallow an exception. The analogy, perhaps, is to a divide-by-zero: the latter raises Invalid Operation and returns a qNaN. The string conversion is similar. (Again, in some implementations/languages, the result after such an exception is not available.) I'll see if I can clarify that, at least making it clear that Invalid Operation is OK at that point."
So, my question for the group is whether to:
- leave it as-is
- raise a ValueError just like float('abc') or int('abc')
- raise an Invalid Operation and return a quiet NaN.
Either of the last two involves editing the third-party test cases, which I am loath to do. The second is the most Pythonic but does not match Mike's clarification. The third stays within the spec but does not bode well for how Decimal interacts with the rest of Python. That issue is unavoidable to some degree because no other Python numeric type has context-sensitive operations, settable traps, and result flags.
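To make the third option concrete, here is a rough sketch of how it could look from the user's side, assuming the conversion is routed through the context so that its traps decide between raising and returning a quiet NaN (the Context.create_decimal method and the traps mapping are used here as assumptions about how the API would surface, not as a description of the current implementation):

>>> from decimal import getcontext, InvalidOperation
>>> ctx = getcontext()
>>> ctx.traps[InvalidOperation] = False   # untrapped: flag the condition, return a quiet NaN
>>> ctx.create_decimal("Fred")
Decimal("NaN")
>>> ctx.traps[InvalidOperation] = True    # trapped: the signal raises instead
>>> ctx.create_decimal("Fred")
Traceback (most recent call last):
  ...
InvalidOperation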
A separate question is determining the default precision. Currently, it is set at 9, which conveniently matches the test cases, the docstring examples, and the examples in the spec. It also keeps running time down.
Tim had suggested that 20 or so would handle many user requirements without needing a context change.
Mike had suggested default single and double precisions matching those proposed in 754R. The rationale behind those sizes has nothing to do with use cases; rather, they were chosen so that certain representations (not the ones we use) fit neatly into byte- and word-sized multiples (once again showing the hardware orientation of the spec).
No matter what the default, the precision is easy to change:
>>> getcontext().prec = 42
>>> Decimal(1) / Decimal(7)
Decimal("0.142857142857142857142857142857142857142857")
Raymond