cpython: a79b07e05d0d
changeset 76909:a79b07e05d0d
Issue #14245: Merge changes from 3.2.
author | Mark Dickinson <mdickinson@enthought.com>
date | Sun, 13 May 2012 21:02:22 +0100
parents | 93748e2d64e3, 2b2a7861255d
children | 3d509c4a72bc
files | Doc/faq/design.rst Misc/ACKS
diffstat | 1 files changed, 29 insertions(+), 40 deletions(-) (Doc/faq/design.rst: 69 lines)
--- a/Doc/faq/design.rst
+++ b/Doc/faq/design.rst
@@ -43,56 +43,45 @@ Why am I getting strange results with si
 
 See the next question.
 
 
-Why are floating point calculations so inaccurate?
+Why are floating-point calculations so inaccurate?
 --------------------------------------------------
 
-People are often very surprised by results like this::
+Users are often surprised by results like this::
 
-   >>> 1.2 - 1.0
-   0.199999999999999996
+   >>> 1.2 - 1.0
+   0.19999999999999996
 
-and think it is a bug in Python.  It's not.  This has nothing to do with Python,
-but with how the underlying C platform handles floating point numbers, and
-ultimately with the inaccuracies introduced when writing down numbers as a
-string of a fixed number of digits.
-
-The internal representation of floating point numbers uses a fixed number of
-binary digits to represent a decimal number.  Some decimal numbers can't be
-represented exactly in binary, resulting in small roundoff errors.
-
-In decimal math, there are many numbers that can't be represented with a fixed
-number of decimal digits, e.g.  1/3 = 0.3333333333.......
+and think it is a bug in Python.  It's not.  This has little to do with Python,
+and much more to do with how the underlying platform handles floating-point
+numbers.
 
-In base 2, 1/2 = 0.1, 1/4 = 0.01, 1/8 = 0.001, etc.  .2 equals 2/10 equals 1/5,
-resulting in the binary fractional number 0.001100110011001...
-
-Floating point numbers only have 32 or 64 bits of precision, so the digits are
-cut off at some point, and the resulting number is 0.199999999999999996 in
-decimal, not 0.2.
+The :class:`float` type in CPython uses a C ``double`` for storage.  A
+:class:`float` object's value is stored in binary floating-point with a fixed
+precision (typically 53 bits) and Python uses C operations, which in turn rely
+on the hardware implementation in the processor, to perform floating-point
+operations. This means that as far as floating-point operations are concerned,
+Python behaves like many popular languages including C and Java.
 
-A floating point number's ``repr()`` function prints as many digits are
-necessary to make ``eval(repr(f)) == f`` true for any float f.  The ``str()``
-function prints fewer digits and this often results in the more sensible number
-that was probably intended::
+Many numbers that can be written easily in decimal notation cannot be expressed
+exactly in binary floating-point.  For example, after::
 
-   >>> 1.1 - 0.9
-   0.20000000000000001
-   >>> print(1.1 - 0.9)
-   0.2
+   >>> x = 1.2
 
-One of the consequences of this is that it is error-prone to compare the result
-of some computation to a float with ``==``.  Tiny inaccuracies may mean that
-``==`` fails.  Instead, you have to check that the difference between the two
-numbers is less than a certain threshold::
+the value stored for ``x`` is a (very good) approximation to the decimal value
+``1.2``, but is not exactly equal to it.  On a typical machine, the actual
+stored value is::
+
+   1.0011001100110011001100110011001100110011001100110011 (binary)
 
-   epsilon = 0.0000000000001  # Tiny allowed error
-   expected_result = 0.4
+which is exactly::
+
+   1.1999999999999999555910790149937383830547332763671875 (decimal)
 
-   if expected_result-epsilon <= computation() <= expected_result+epsilon:
-      ...
+The typical precision of 53 bits provides Python floats with 15-16
+decimal digits of accuracy.
+
+For a fuller explanation, please see the :ref:`floating point arithmetic
+<tut-fp-issues>` chapter in the Python tutorial.
 
-Please see the chapter on :ref:`floating point arithmetic <tut-fp-issues>` in
-the Python tutorial for more information.
 
 Why are Python strings immutable?
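
A quick check of the new text's claims, runnable at a Python 3 prompt. This is a minimal sketch, not part of the changeset; it assumes the usual case where the platform's C ``double`` is IEEE 754 binary64, so the exact digits may differ on unusual hardware::

   import sys
   from decimal import Decimal

   # A CPython float is stored in a C double: the "typically 53 bits".
   print(sys.float_info.mant_dig)   # 53 (bits of mantissa precision)
   print(sys.float_info.dig)        # 15 (decimal digits that always round-trip)

   # Decimal(float) converts the already-stored binary value exactly, so it
   # reproduces the long decimal expansion quoted in the diff:
   print(Decimal(1.2))  # 1.1999999999999999555910790149937383830547332763671875

   # The same stored value, written exactly in base 16:
   print((1.2).hex())               # 0x1.3333333333333p+0

   # The rounding error surfaces once subtraction cancels the leading digits:
   print(repr(1.2 - 1.0))           # 0.19999999999999996

   # The removed epsilon advice still applies: compare with a tolerance, not ==.
   print(abs((1.2 - 1.0) - 0.2) < 1e-9)   # True

Nothing here relies on CPython internals: ``sys.float_info`` exposes the C ``float.h`` constants (``DBL_MANT_DIG``, ``DBL_DIG``), which is why it agrees with the "15-16 decimal digits" figure in the new text.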