[Python-Dev] Re: Decimal data type issues

Jewett, Jim J jim.jewett at EDS.COM
Mon Apr 19 14:34:10 EDT 2004


Kevin Jacobs:

#- Decimal('2.4000', precision=2, scale=1) == Decimal('2.4')
#- Decimal('2.4', precision=5, scale=4) == Decimal('2.4000')
#-
#- Remember, these literals are frequently coming from an
#- external source that must be constrained to a given schema.
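For context: the constructor signature Kevin proposes (precision/scale arguments) never made it into the module. With the `decimal` module as it eventually shipped, the same schema constraint can be sketched with `quantize` and a bounded context; the `constrain` helper below is a hypothetical illustration, not part of any API discussed in the thread:

```python
from decimal import Decimal, Context

def constrain(value, precision, scale):
    """Constrain a decimal literal the way a NUMERIC(precision, scale)
    column would.  Hypothetical helper, not part of the decimal module.
    quantize raises InvalidOperation if the value needs more than
    `precision` digits at the requested scale."""
    ctx = Context(prec=precision)
    quantum = Decimal(1).scaleb(-scale)   # e.g. scale=1 -> Decimal('0.1')
    return Decimal(value).quantize(quantum, context=ctx)

print(constrain('2.4000', precision=2, scale=1))   # 2.4
print(constrain('2.4', precision=5, scale=4))      # 2.4000
```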

Facundo Batista:

I like it a lot, but not for Decimal.

This is another face of "what to do with float".

Even if your default context is n digits, there is no reason to assume that all your inputs will be measured that precisely. If someone sends me a list of weights:

PartA	1105 kg
PartB	   3 kg

then I don't want to pretend that the 3kg part was weighed any more precisely just because the total is smaller.

On the other hand, if the weights are:

PartA	3 kg
PartB	3.0 kg

then I do want to assume that the second weight is more precise.

As an example:

    3000 g      3001 g
    +  2 g      +  1 g
    ------      ------
      3 kg      3002 g
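For what it's worth, the `decimal` module that later grew out of this discussion behaves the way the example above wants, at least in part: a value's exponent records how precisely it was stated, a sum keeps the finer exponent of its operands, and a narrow context precision will round a coarse sum back down. A small sketch with today's module (which this thread predates):

```python
from decimal import Decimal, Context

# '3' and '3.0' compare equal but carry different exponents,
# i.e. different claims about measurement precision:
a = Decimal('3')       # weighed to the nearest kg
b = Decimal('3.0')     # weighed to the nearest 0.1 kg
print(a == b)          # True
print(a + b)           # 6.0 -- the sum keeps the finer exponent

# With a 1-digit context, 3000 g + 2 g rounds back to 3E+3,
# matching the left-hand column of the example above:
ctx = Context(prec=1)
print(ctx.add(Decimal('3E+3'), Decimal('2')))   # 3E+3
```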

#- I assume that a new Decimal would normally be created
#- with as much precision as the context would need for
#- calculations. By passing a context/precision/position,
#- the user is saying "yeah, but this measurement wasn't
#- that precise in the first place. Use zeros for the
#- rest, no matter what this number claims."

I don't quite understand you, and I'm not sure you have the right concept. Maybe several examples will help:

>>> getcontext().prec = 5
>>> Decimal(124)
Decimal( (0, (1, 2, 4), 0) )
>>> +Decimal(124)
Decimal( (0, (1, 2, 4), 0) )
>>> Decimal('258547.368')
Decimal( (0, (2, 5, 8, 5, 4, 7, 3, 6, 8), -3) )
>>> +Decimal('258547.368')
Decimal( (0, (2, 5, 8, 5, 5), 1L) )
>>> Decimal.fromfloat(1.1)
Decimal( (0, (1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 8, 8, 1, 7, 8, 4, 1, 9, 7, 0, 0, 1, 2, 5, 2, 3, 2, 3, 3, 8, 9, 0, 5, 3, 3, 4, 4, 7, 2, 6, 5, 6, 2, 5), -51L) )
>>> +Decimal.fromfloat(1.1)
Decimal( (0, (1, 1, 0, 0, 0), -4L) )
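Readers with a current Python can reproduce this session; the method was later renamed `Decimal.from_float` and the repr changed, but the behaviour is the same. A sketch using an explicit `Context` (so no global state is mutated; `ctx.plus` is the method form of unary `+`):

```python
from decimal import Decimal, Context

ctx = Context(prec=5)

# Construction is exact; the context precision is not consulted:
exact = Decimal('258547.368')
print(exact)                      # 258547.368

# Unary plus (ctx.plus) applies the context, rounding to 5 digits:
rounded = ctx.plus(exact)
print(rounded)                    # 2.5855E+5

# from_float converts the exact binary value actually stored for 1.1 ...
f = Decimal.from_float(1.1)
print(f)   # 1.100000000000000088817841970012523233890533447265625

# ... and applying the context rounds it to the working precision:
print(ctx.plus(f))                # 1.1000
```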

. Facundo


