Truncation error (numerical integration)
Errors arising in numerical integration
Truncation errors in numerical integration are of two kinds:
- local truncation errors – the error caused by one iteration, and
- global truncation errors – the cumulative error caused by many iterations.
Suppose we have a continuous differential equation

$$y' = f(t, y), \qquad y(t_0) = y_0, \qquad t \geq t_0,$$

and we wish to compute an approximation $y_n$ of the true solution $y(t_n)$ at discrete time steps $t_1, t_2, \ldots, t_N$. For simplicity, assume the time steps are equally spaced:

$$h = t_n - t_{n-1}, \qquad n = 1, 2, \ldots, N.$$
Suppose we compute the sequence $y_n$ with a one-step method of the form

$$y_n = y_{n-1} + h A(t_{n-1}, y_{n-1}, h, f).$$

The function $A$ is called the increment function, and can be interpreted as an estimate of the slope $\frac{y(t_n) - y(t_{n-1})}{h}$.
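For instance, the explicit Euler method is the one-step method with increment function $A(t, y, h, f) = f(t, y)$. A minimal Python sketch of this framework (the function names are illustrative, not from any particular library):

```python
import math

def A_euler(t, y, h, f):
    """Increment function of the explicit Euler method: A(t, y, h, f) = f(t, y)."""
    return f(t, y)

def one_step_method(f, t0, y0, h, N, A):
    """Advance y_n = y_{n-1} + h * A(t_{n-1}, y_{n-1}, h, f) for N steps."""
    t, y = t0, y0
    for _ in range(N):
        y = y + h * A(t, y, h, f)
        t = t + h
    return y

# Example: y' = y, y(0) = 1, whose exact solution is y(t) = exp(t).
f = lambda t, y: y
approx = one_step_method(f, 0.0, 1.0, 0.001, 1000, A_euler)
print(approx, math.exp(1.0))  # the approximation is close to e
```

Other one-step methods (midpoint, classical Runge–Kutta) differ only in the choice of $A$.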
Local truncation error
The local truncation error $\tau_n$ is the error that our increment function, $A$, causes during a single iteration, assuming perfect knowledge of the true solution at the previous iteration. More formally, the local truncation error $\tau_n$ at step $n$ is computed from the difference between the left- and the right-hand side of the equation for the increment $y_n \approx y_{n-1} + h A(t_{n-1}, y_{n-1}, h, f)$:

$$\tau_n = y(t_n) - y(t_{n-1}) - h A(t_{n-1}, y(t_{n-1}), h, f).$$[1][2]
The numerical method is consistent if the local truncation error is $o(h)$; this means that for every $\varepsilon > 0$ there exists an $H$ such that $|\tau_n| < \varepsilon h$ for all $h < H$ (see little-o notation). If the increment function $A$ is continuous, then the method is consistent if, and only if, $A(t, y, 0, f) = f(t, y)$.[3]
Furthermore, we say that the numerical method has order $p$ if, for any sufficiently smooth solution of the initial value problem, the local truncation error is $O(h^{p+1})$; that is, there exist constants $C$ and $H$ such that $|\tau_n| < C h^{p+1}$ for all $h < H$.[4]
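The order of a method can be estimated numerically by evaluating $\tau_n$ at two step sizes. The sketch below does this for Euler's method on $y' = y$ with exact solution $y(t) = e^t$, where $\tau = y(t+h) - y(t) - h f(t, y(t))$; halving $h$ should divide $\tau$ by roughly $2^{p+1} = 4$, confirming order $p = 1$:

```python
import math

def tau_euler(t, h):
    """Local truncation error of Euler's method for y' = y, exact solution e^t."""
    y = math.exp
    return y(t + h) - y(t) - h * y(t)  # f(t, y) = y, so h*f(t, y(t)) = h*e^t

t = 0.5
tau1 = tau_euler(t, 0.01)
tau2 = tau_euler(t, 0.005)
print(tau1 / tau2)  # close to 4, consistent with tau = O(h^2), i.e. order p = 1
```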
Global truncation error
The global truncation error is the accumulation of the local truncation error over all of the iterations, assuming perfect knowledge of the true solution at the initial time step.
More formally, the global truncation error $e_n$ at time $t_n$ is defined by

$$e_n = y(t_n) - y_n = y(t_n) - \Big( y_0 + h A(t_0, y_0, h, f) + h A(t_1, y_1, h, f) + \cdots + h A(t_{n-1}, y_{n-1}, h, f) \Big).$$[5]
The numerical method is convergent if the global truncation error goes to zero as the step size goes to zero; in other words, the numerical solution converges to the exact solution:

$$\lim_{h \to 0} \max_n |e_n| = 0.$$[6]
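The global error can also be measured directly. For Euler's method (order $p = 1$) the global error is $O(h)$, so halving the step size should roughly halve the error at a fixed final time. A small sketch on $y' = y$, $y(0) = 1$:

```python
import math

def euler(f, t0, y0, h, n):
    """Run n Euler steps of size h from (t0, y0) and return the final value."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Global error at t = 1 for two step sizes; exact value is e.
f = lambda t, y: y
e1 = abs(math.e - euler(f, 0.0, 1.0, 0.01, 100))
e2 = abs(math.e - euler(f, 0.0, 1.0, 0.005, 200))
print(e1 / e2)  # close to 2: halving h halves the global error, so e_n = O(h)
```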
Relationship between local and global truncation errors
Sometimes it is possible to calculate an upper bound on the global truncation error if we already know the local truncation error. This requires that our increment function be sufficiently well-behaved.
The global truncation error satisfies the recurrence relation

$$e_{n+1} = e_n + h \Big( A(t_n, y(t_n), h, f) - A(t_n, y_n, h, f) \Big) + \tau_{n+1}.$$
This follows immediately from the definitions. Now assume that the increment function is Lipschitz continuous in the second argument, that is, there exists a constant $L$ such that for all $t$, $y_1$, and $y_2$ we have

$$|A(t, y_1, h, f) - A(t, y_2, h, f)| \leq L |y_1 - y_2|.$$
Then the global error satisfies the bound

$$|e_n| \leq \frac{\max_j \tau_j}{hL} \left( \mathrm{e}^{L(t_n - t_0)} - 1 \right).$$[7]
It follows from the above bound for the global error that if the function $f$ in the differential equation is continuous in the first argument and Lipschitz continuous in the second argument (the condition from the Picard–Lindelöf theorem), and the increment function $A$ is continuous in all arguments and Lipschitz continuous in the second argument, then the global error tends to zero as the step size $h$ approaches zero (in other words, the numerical method converges to the exact solution).[8]
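The bound can be verified numerically in a simple case. For Euler's method on $y' = y$ the increment function is $A = f$, which is Lipschitz with constant $L = 1$, and each $\tau_j$ can be computed exactly from the known solution $e^t$. The sketch below checks that the actual global error at $t = 1$ stays below the bound:

```python
import math

# Euler's method on y' = y, y(0) = 1, over [0, 1]; A = f, Lipschitz constant L = 1.
L, h, n = 1.0, 0.01, 100
y_exact = math.exp

y, t = 1.0, 0.0
taus = []
for j in range(n):
    # tau_{j+1} = y(t+h) - y(t) - h*f(t, y(t)), with f(t, y) = y
    taus.append(y_exact(t + h) - y_exact(t) - h * y_exact(t))
    y += h * y  # one Euler step
    t += h

e_n = abs(y_exact(1.0) - y)
bound = (max(taus) / (h * L)) * (math.exp(L * (1.0 - 0.0)) - 1)
print(e_n, bound)  # the global error is below the theoretical bound
```

The bound is pessimistic by roughly a factor of two here, which is typical: it accounts for worst-case error amplification at every step.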
Extension to linear multistep methods
Now consider a linear multistep method, given by the formula

$$y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n = h \bigl( b_s f(t_{n+s}, y_{n+s}) + b_{s-1} f(t_{n+s-1}, y_{n+s-1}) + \cdots + b_0 f(t_n, y_n) \bigr).$$

Thus, the next value for the numerical solution is computed according to

$$y_{n+s} = -\sum_{k=0}^{s-1} a_k y_{n+k} + h \sum_{k=0}^{s} b_k f(t_{n+k}, y_{n+k}).$$
The next iterate of a linear multistep method depends on the previous $s$ iterates. Thus, in the definition for the local truncation error, it is now assumed that the previous $s$ iterates all correspond to the exact solution:

$$\tau_n = y(t_{n+s}) + \sum_{k=0}^{s-1} a_k y(t_{n+k}) - h \sum_{k=0}^{s} b_k f(t_{n+k}, y(t_{n+k})).$$[9]
Again, the method is consistent if $\tau_n = o(h)$, and it has order $p$ if $\tau_n = O(h^{p+1})$. The definition of the global truncation error is also unchanged.

The relation between local and global truncation errors is slightly different from that in the simpler setting of one-step methods. For linear multistep methods, an additional concept called zero-stability is needed to explain the relation between local and global truncation errors. Linear multistep methods that satisfy the condition of zero-stability have the same relation between local and global errors as one-step methods. In other words, if a linear multistep method is zero-stable and consistent, then it converges. And if a linear multistep method is zero-stable and has local error $\tau_n = O(h^{p+1})$, then its global error satisfies $e_n = O(h^p)$.[10]
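As a concrete example of the multistep definition, the two-step Adams–Bashforth method $y_{n+2} = y_{n+1} + h\left(\tfrac{3}{2} f_{n+1} - \tfrac{1}{2} f_n\right)$ has order $p = 2$, so its local truncation error is $O(h^3)$ and halving $h$ should divide $\tau$ by roughly $2^3 = 8$. A hedged sketch on $y' = y$ with exact solution $e^t$:

```python
import math

def tau_ab2(t, h):
    """Local truncation error of two-step Adams-Bashforth for y' = y (y(t) = e^t):
    tau = y(t+2h) - y(t+h) - h*(1.5*f(t+h, y(t+h)) - 0.5*f(t, y(t))), f(t, y) = y."""
    y = math.exp
    return y(t + 2 * h) - y(t + h) - h * (1.5 * y(t + h) - 0.5 * y(t))

tau1 = tau_ab2(0.0, 0.02)
tau2 = tau_ab2(0.0, 0.01)
print(tau1 / tau2)  # close to 8, consistent with tau = O(h^3), i.e. order p = 2
```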
- ^ Gupta, G. K.; Sacks-Davis, R.; Tischer, P. E. (March 1985). "A review of recent developments in solving ODEs". Computing Surveys. 17 (1): 5–47. CiteSeerX 10.1.1.85.783. doi:10.1145/4078.4079.
- ^ Süli & Mayers 2003, p. 317, calls $\tau_n / h$ the truncation error.
- ^ Süli & Mayers 2003, pp. 321 & 322
- ^ Iserles 1996, p. 8; Süli & Mayers 2003, p. 323
- ^ Süli & Mayers 2003, p. 317
- ^ Iserles 1996, p. 5
- ^ Süli & Mayers 2003, p. 318
- ^ Süli & Mayers 2003, p. 322
- ^ Süli & Mayers 2003, p. 337, uses a different definition, essentially dividing this by h.
- ^ Süli & Mayers 2003, p. 340
- Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Bibcode:1996fcna.book.....I, ISBN 978-0-521-55655-2.
- Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0521007941.
- Notes on truncation errors and Runge-Kutta methods
- Truncation error of Euler's method