[Python-Dev] Mixing float and Decimal -- thread reboot

Pierre B. pierrebai at hotmail.com
Mon Mar 22 21:02:57 CET 2010


Sorry to intervene out of the blue, but I find the suggested rule for fraction-to-decimal conversion not as clean as I'd expect.

If fractions are converted to decimals when doing arithmetic, would it be worthwhile to at least guarantee a minimum of conversion fidelity? What I have in mind is the following rule:

When converting from fraction to decimal, always emit a whole number of repetitions of the repeating digit pattern, and always at least two of them (a sketch follows the examples below).

Examples, with a precision of 5 in Decimal:

1/2 -> 0.50000

1/3 -> 0.33333

1/11 -> 0.090909

Note that we produced 6 digits, because the repeating pattern contains 2 digits.

1/7 -> 0.142857142857

Always at least two full patterns.
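
To make the rule concrete, here is a minimal sketch of a converter that follows it, assuming we start from a fractions.Fraction and a minimum number of fractional digits. The name fraction_to_decimal_string is made up for illustration; it is not an existing or proposed API:

from fractions import Fraction

def fraction_to_decimal_string(frac, min_digits=5):
    # Hypothetical helper illustrating the proposed rule, not an actual API.
    # Terminating expansions are emitted exactly and zero-padded to
    # min_digits; repeating expansions are emitted as a whole number of
    # repeating cycles, at least two of them and at least min_digits
    # fractional digits in total.
    sign = '-' if frac < 0 else ''
    frac = abs(frac)
    whole, remainder = divmod(frac.numerator, frac.denominator)

    # Long division, remembering where each remainder was first seen so
    # that the start of the repeating cycle can be detected.
    digits = []
    seen = {}
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)
        digit, remainder = divmod(remainder * 10, frac.denominator)
        digits.append(str(digit))

    if not remainder:
        # Exact expansion, padded with zeros up to the requested precision.
        frac_part = ''.join(digits).ljust(min_digits, '0')
    else:
        start = seen[remainder]
        prefix, cycle = digits[:start], digits[start:]
        # A whole number of cycles: at least two, and enough of them to
        # reach min_digits fractional digits overall.
        needed = min_digits - len(prefix)
        reps = max(2, -(-needed // len(cycle)))
        frac_part = ''.join(prefix) + ''.join(cycle) * reps

    return '%s%d.%s' % (sign, whole, frac_part)

With the default minimum of 5 digits this reproduces the examples above: Fraction(1, 2) gives '0.50000', Fraction(1, 11) gives '0.090909' and Fraction(1, 7) gives '0.142857142857'.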

The benefits I see are that:

  1. If a number can be represented exactly, it will be converted exactly.

  2. The minimum precision requested is respected.

  3. The conversion yields something that converts back to the original
    fraction more precisely. Not perfectly, but see the next point.

  4. Since the repeating pattern is present at least twice at the end, one can increase the precision of the conversion by detecting the repetition and appending more copies of it. This detection is trivial (see the sketch below).
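
For illustration, here is a minimal sketch of that detection, assuming the input is the string of fractional digits produced by the rule above; extend_repeating_digits is again a made-up name, not a proposed API:

def extend_repeating_digits(frac_digits, extra_digits):
    # Hypothetical helper, not a proposed API.  frac_digits is the string
    # of fractional digits produced by the rule above, so the repeating
    # cycle appears at least twice at its end; append at least
    # extra_digits more digits of that cycle.
    n = len(frac_digits)
    for length in range(1, n // 2 + 1):
        # Take the shortest block that repeats at the very end as the cycle.
        # (A production version might want to verify the detected block
        # against a longer tail to avoid coincidental matches.)
        if frac_digits[n - length:] == frac_digits[n - 2 * length:n - length]:
            cycle = frac_digits[n - length:]
            reps = -(-extra_digits // length)   # ceiling division
            return frac_digits + cycle * reps
    return frac_digits  # no repetition found: terminating expansion

For example, extend_repeating_digits('090909', 6) returns '090909090909', correctly continuing the 2-digit pattern of 1/11.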


