Predictor-corrector methods

In section 6.1 on pages [*] ff. various integration methods have been discussed. The elementary as well as the linear multistep methods (constructed in order to obtain more accurate methods) always assume $ a_{-1} = -1$ in their general form. Explicit methods are characterized by $ b_{-1} = 0$ and implicit methods by $ b_{-1}
\ne 0$. Explicit methods have been shown to have a limited area of stability, whereas implicit methods have a larger one. With increasing order $ k$ the interval of absolute stability of a linear multistep method (the intersection of the area of absolute stability in the complex plane with the real axis) shrinks, except for the implicit Gear formulae.

For these reasons implicit methods are the preferred choice for solving systems of ordinary differential equations describing so-called stiff problems. Considering e.g. the implicit Adams-Moulton formula of order 3

$\displaystyle x^{n+1} = x^{n} + h\cdot \dfrac{5}{12}\cdot f^{n+1} + h\cdot \dfrac{8}{12}\cdot f^{n} - h\cdot \dfrac{1}{12}\cdot f^{n-1}$ (6.30)

makes clear that $ f^{n+1}$ is required to calculate $ x^{n+1}$ (and vice versa). Every implicit integration method has this property. The above equation can be solved using a fixed point iteration, which converges provided the integration method is consistent and zero-stable. A linear multistep method that is at least first-order accurate is called consistent. Zero-stability and consistency are necessary for convergence, and the converse is true as well (this equivalence is known as Dahlquist's theorem).

The iteration introduces a second index $ m$.

$\displaystyle x^{n+1,m+1} = x^{n} + h\cdot \dfrac{5}{12}\cdot f^{n+1,m} + h\cdot \dfrac{8}{12}\cdot f^{n} - h\cdot \dfrac{1}{12}\cdot f^{n-1}$ (6.31)

This iteration converges for an arbitrary initial guess $ x^{n+1,0}$, limited only by the step size $ h$; for eq. (6.31) the iteration is a contraction roughly when $ h\cdot \dfrac{5}{12}\cdot \left\vert \partial f/\partial x\right\vert < 1$. In practice successive iterations are performed until

$\displaystyle \left\vert x^{n+1,m+1} - x^{n+1,m}\right\vert < \varepsilon_{abs} + \varepsilon_{rel}\cdot \left\vert x^{n+1,m}\right\vert$ (6.32)

The disadvantage of this method is that the number of iterations until convergence is not known in advance. Alternatively it is possible to use a fixed number of correction steps. A cheap way of providing a good initial guess $ x^{n+1,0}$ is to use an explicit integration method, e.g. the Adams-Bashforth formula of order 3.

$\displaystyle x^{n+1,0} = x^{n} + h\cdot \dfrac{23}{12}\cdot f^{n} - h\cdot \dfrac{16}{12}\cdot f^{n-1} + h\cdot \dfrac{5}{12}\cdot f^{n-2}$ (6.33)

Equation (6.33) requires no iteration and can be used to obtain the initial guess. The combination of evaluating a single explicit integration method (the predictor step) in order to provide a good initial guess for the successive evaluation of an implicit method (the corrector step) using iteration is called a predictor-corrector method. The motivation for using an implicit integration method is its suitability for solving stiff problems. The explicit method (though possibly unstable on its own) merely provides a good initial guess for the corrector steps.
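
The complete predictor-corrector step can be summarized in a few lines. The following Python sketch (a minimal illustration; the function and variable names are our own, not from the source) combines the Adams-Bashforth predictor (6.33) with the iterated Adams-Moulton corrector (6.31) and the convergence test (6.32):

    # Sketch of one AB3/AM3 predictor-corrector step for dx/dt = f(x, t).
    # The history values f_n = f^n, f_nm1 = f^{n-1} and f_nm2 = f^{n-2}
    # stem from earlier steps; eps_abs and eps_rel are the tolerances of
    # the convergence test in eq. (6.32).
    def pc_step(f, t_n, h, x_n, f_n, f_nm1, f_nm2,
                eps_abs=1e-12, eps_rel=1e-6, max_corr=50):
        # predictor step, eq. (6.33): explicit initial guess x^{n+1,0}
        x_m = x_n + h * (23.0 * f_n - 16.0 * f_nm1 + 5.0 * f_nm2) / 12.0
        x_0 = x_m                   # kept e.g. for Milne's estimate below
        # corrector iteration, eq. (6.31): implicit Adams-Moulton, order 3
        for _ in range(max_corr):
            x_next = x_n + h * (5.0 * f(x_m, t_n + h)
                                + 8.0 * f_n - f_nm1) / 12.0
            if abs(x_next - x_m) < eps_abs + eps_rel * abs(x_m):
                return x_next, x_0  # convergence test of eq. (6.32) passed
            x_m = x_next
        raise RuntimeError("corrector did not converge; reduce step size h")

For a scalar test problem such as $ \dot{x} = -x$ one would pass f = lambda x, t: -x. Being a multistep scheme, the method additionally needs a startup procedure (e.g. a few steps of a single-step method) to produce the first history values.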

Order and local truncation error

The order of an integration method results from the truncation error $ \varepsilon_T$ which is defined as

$\displaystyle \varepsilon_T = x\left(t^{n+1}\right) - x^{n+1}$ (6.34)

meaning the deviation of the exact solution $ x\left(t^{n+1}\right)$ from the approximate solution $ x^{n+1}$ obtained by the integration method. For explicit integration methods with $ b_{-1} = 0$ the local truncation error $ \varepsilon_{LTE}$ is exactly

$\displaystyle \varepsilon_{LTE} = x\left(t^{n+1}\right) - x^{n+1}$ (6.35)

and for implicit integration methods with $ b_{-1} \ne 0$ it is

$\displaystyle \varepsilon_{LTE} \approx x\left(t^{n+1}\right) - x^{n+1}$ (6.36)

Substituting the exact solution into equation (6.11) with $ a_{-1} = -1$, the local truncation error is defined as

$\displaystyle \varepsilon_{LTE} = \sum^p_{i=-1} a_i\cdot x\left(t^{n-i}\right) + h \sum^p_{i=-1} b_i\cdot f(x\left(t^{n-i}\right), t^{n-i})$ (6.37)

With the Taylor series expansions

$\displaystyle x\left(t^{n+i}\right) = x\left(t^{n}\right) + \dfrac{\left(ih\right)}{1!}\cdot \dot{x}\left(t^{n}\right) + \dfrac{\left(ih\right)^2}{2!}\cdot \ddot{x}\left(t^{n}\right) + \ldots$ (6.38)
$\displaystyle f(x\left(t^{n+i}\right), t^{n+i}) = \dot{x}\left(t^{n+i}\right) = \dot{x}\left(t^{n}\right) + \dfrac{\left(ih\right)}{1!}\cdot \ddot{x}\left(t^{n}\right) + \dfrac{\left(ih\right)^2}{2!}\cdot \dddot{x}\left(t^{n}\right) + \ldots$ (6.39)

the local truncation error as defined by eq. (6.37) can be written as

$\displaystyle \varepsilon_{LTE} = C_0\cdot x\left(t^n\right) + C_1 h\cdot \dot{x}\left(t^n\right) + C_2 h^2\cdot \ddot{x}\left(t^n\right) + \ldots$ (6.40)

The error terms $ C_0$, $ C_1$ and $ C_2$ in their general form can then be expressed by the following equation.

$\displaystyle C_{q} = -\dfrac{1}{q!}\cdot \sum_{i=-1}^{p-1} a_i\cdot \left(p - i\right)^{q} - \dfrac{1}{\left(q - 1\right)!}\cdot \sum_{i=-1}^{p-1} b_i\cdot \left(p - i\right)^{q - 1}$ (6.41)

A linear multistep integration method is of order $ k$ if

$\displaystyle \varepsilon_{LTE} = C_{k+1}\cdot h^{k+1}\cdot x^{(k+1)}\left(t^n\right) + O\left(h^{k+2}\right)$ (6.42)

The error constant $ C_{k+1}$ of a $ p$-step integration method of order $ k$ is then defined as

$\displaystyle C_{k+1} = -\dfrac{1}{\left(k + 1\right)!}\cdot \sum_{i=-1}^{p-1} a_i\cdot \left(p - i\right)^{k+1} - \dfrac{1}{k!}\cdot \sum_{i=-1}^{p-1} b_i\cdot \left(p - i\right)^{k}$ (6.43)

The practical computation of these error constants is now demonstrated using the Adams-Moulton formula of order 3 given by eq. (6.30). For this third-order method with $ a_{-1} = -1$, $ a_0=1$, $ b_{-1}=5/12$, $ b_0=8/12$ and $ b_1=-1/12$ the following values are obtained using eq. (6.41).

$\displaystyle C_0 = -\dfrac{1}{0!}\cdot\left(-1\cdot 2^0 + 1\cdot 1^0\right) = 0$ (6.44)
$\displaystyle C_1 = -\dfrac{1}{1!}\cdot\left(-1\cdot 2^1 + 1\cdot 1^1\right) - \dfrac{1}{0!}\cdot\left(\dfrac{5}{12}\cdot 2^0 + \dfrac{8}{12}\cdot 1^0 - \dfrac{1}{12}\cdot 0^0\right) = 0$ (6.45)
$\displaystyle C_2 = -\dfrac{1}{2!}\cdot\left(-1\cdot 2^2 + 1\cdot 1^2\right) - \dfrac{1}{1!}\cdot\left(\dfrac{5}{12}\cdot 2^1 + \dfrac{8}{12}\cdot 1^1 - \dfrac{1}{12}\cdot 0^1\right) = 0$ (6.46)
$\displaystyle C_3 = -\dfrac{1}{3!}\cdot\left(-1\cdot 2^3 + 1\cdot 1^3\right) - \dfrac{1}{2!}\cdot\left(\dfrac{5}{12}\cdot 2^2 + \dfrac{8}{12}\cdot 1^2 - \dfrac{1}{12}\cdot 0^2\right) = 0$ (6.47)
$\displaystyle C_4 = -\dfrac{1}{4!}\cdot\left(-1\cdot 2^4 + 1\cdot 1^4\right) - \dfrac{1}{3!}\cdot\left(\dfrac{5}{12}\cdot 2^3 + \dfrac{8}{12}\cdot 1^3 - \dfrac{1}{12}\cdot 0^3\right) = -\dfrac{1}{24}$ (6.48)

In a similar way it can be verified for each of the discussed linear multistep integration methods that

$\displaystyle C_q = 0 \;\;\;\; \forall \;\;\;\; 0 \le q \le k$ (6.49)
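
These checks are mechanical and can be automated. The following Python sketch (an illustration of eq. (6.41); the function name and coefficient lists are our own) evaluates $ C_q$ over all supplied coefficients with the bases $ \ldots, 2, 1, 0$ used in eqs. (6.44)-(6.48) and reproduces the values above for the order-3 Adams-Moulton formula:

    from math import factorial
    from fractions import Fraction

    def error_constant(q, a, b):
        # a = [a_{-1}, a_0, ...], b = [b_{-1}, b_0, ...] as in eq. (6.41)
        n = len(a)
        base = [Fraction(n - 1 - j) for j in range(n)]    # ..., 2, 1, 0
        c = -sum(a[j] * base[j] ** q for j in range(n)) / factorial(q)
        if q >= 1:                                        # C_0 has no b-term
            c -= sum(b[j] * base[j] ** (q - 1)
                     for j in range(n)) / factorial(q - 1)
        return c

    # order-3 Adams-Moulton coefficients from eq. (6.30)
    a = [Fraction(-1), Fraction(1), Fraction(0)]
    b = [Fraction(5, 12), Fraction(8, 12), Fraction(-1, 12)]
    for q in range(5):
        print(q, error_constant(q, a, b))   # prints 0, 0, 0, 0 and -1/24

With the order-3 Adams-Bashforth coefficients $ a_{-1}=-1$, $ a_0=1$, $ b_0=23/12$, $ b_1=-16/12$, $ b_2=5/12$ (and $ b_{-1}=0$) the same routine yields $ C_4 = \dfrac{3}{8}$, in agreement with the table below.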

The following table summarizes the error constants for the implicit Gear formulae (also called BDF - backward differentiation formulae).

  implicit Gear formulae (BDF)
  steps $n$                    1                 2                 3                  4                    5                    6
  order $k$                    1                 2                 3                  4                    5                    6
  error constant $ C_{k+1}$    $ -\dfrac{1}{2}$  $ -\dfrac{2}{9}$  $ -\dfrac{3}{22}$  $ -\dfrac{12}{125}$  $ -\dfrac{10}{137}$  $ -\dfrac{20}{343}$

The following table summarizes the error constants for the explicit Gear formulae.

  explicit Gear formulae
  steps $n$                    2       3       4       5       6       7
  order $k$                    1       2       3       4       5       6
  error constant $ C_{k+1}$    $ +1$   $ +1$   $ +1$   $ +1$   $ +1$   $ +1$

The following table summarizes the error constants for the explicit Adams-Bashforth formulae.

  explicit Adams-Bashforth
  steps $n$                    1                2                 3                4                    5                   6
  order $k$                    1                2                 3                4                    5                   6
  error constant $ C_{k+1}$    $ \dfrac{1}{2}$  $ \dfrac{5}{12}$  $ \dfrac{3}{8}$  $ \dfrac{251}{720}$  $ \dfrac{95}{288}$  $ \dfrac{19087}{60480}$

The following table summarizes the error constants for the implicit Adams-Moulton formulae.

  implicit Adams-Moulton
  steps $n$                    1                 1                  2                  3                    4                   5
  order $k$                    1                 2                  3                  4                    5                   6
  error constant $ C_{k+1}$    $ -\dfrac{1}{2}$  $ -\dfrac{1}{12}$  $ -\dfrac{1}{24}$  $ -\dfrac{19}{720}$  $ -\dfrac{3}{160}$  $ -\dfrac{863}{60480}$

Milne's estimate

The local truncation error of the predictor of order $ k^{*}$ may be defined as

$\displaystyle \varepsilon^{*}_{LTE} = C^{*}_{k^{*}+1}\cdot h^{k^{*}+1}\cdot x^{(k^{*}+1)}\left(t^n\right) + O\left(h^{k^{*}+2}\right)$ (6.50)

and that of the corresponding corrector method of order $ k$

$\displaystyle \varepsilon_{LTE} = C_{k+1}\cdot h^{k+1}\cdot x^{(k+1)}\left(t^n\right) + O\left(h^{k+2}\right)$ (6.51)

If a predictor and a corrector method of the same order $ k = k^{*}$ are used, the local truncation error of the predictor-corrector method yields

$\displaystyle \varepsilon_{LTE} \approx \dfrac{C_{k+1}}{C^{*}_{k+1} - C_{k+1}}\cdot \left(x^{n+1, m} - x^{n+1, 0}\right)$ (6.52)

This approximation is called Milne's estimate.
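
For example, pairing the order-3 Adams-Bashforth predictor (6.33) with the order-3 Adams-Moulton corrector (6.30), the tables above give $ C^{*}_{4} = \dfrac{3}{8}$ and $ C_{4} = -\dfrac{1}{24}$, hence

$\displaystyle \varepsilon_{LTE} \approx \dfrac{-\dfrac{1}{24}}{\dfrac{3}{8} + \dfrac{1}{24}}\cdot \left(x^{n+1, m} - x^{n+1, 0}\right) = -\dfrac{1}{10}\cdot \left(x^{n+1, m} - x^{n+1, 0}\right)$

The estimate comes at practically no extra cost because both the predicted value $ x^{n+1,0}$ and the corrected value $ x^{n+1,m}$ are available anyway.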


Adaptive step-size control

For all numerical integration methods used for the transient analysis of electrical networks the choice of a proper step-size is essential. If the step-size is too large, the results become inaccurate or even completely wrong once the region of absolute stability is left. If the step-size is too small, the computation takes more time than necessary without improving the accuracy. Usually an initially chosen step-size cannot be kept throughout the entire requested analysis time.

Basically a step-size $ h$ is chosen such that

$\displaystyle \varepsilon_{LTE} < \varepsilon_{abs} + \varepsilon_{rel}\cdot \left\vert x^{n+1,m}\right\vert$ (6.53)

Forming a step-error quotient

$\displaystyle q = \dfrac{\varepsilon_{LTE}}{\varepsilon_{abs} + \varepsilon_{rel}\cdot \left\vert x^{n+1,m}\right\vert}$ (6.54)

yields the following algorithm for the step-size control. The initial step-size $ h^0$ is chosen sufficiently small. After each integration step the step-error quotient is computed for every variable, and the largest one, $ q_{max}$, is checked.

If $ q_{max} > 1$, a reduction of the current step-size is necessary. The following expression is used as the new step-size

$\displaystyle h^n = \left(\dfrac{\varepsilon}{q_{max}}\right)^{\tfrac{1}{k + 1}}\cdot h^n$ (6.55)

with $ k$ denoting the order of the predictor-corrector method and $ \varepsilon < 1$ (e.g. $ \varepsilon \approx 0.8$). If necessary the process must be repeated.

If $ q_{max} \le 1$, then the value calculated in the current step is accepted and the new step-size is

$\displaystyle h^{n+1} = \left(\dfrac{\varepsilon}{q_{max}}\right)^{\tfrac{1}{k + 1}}\cdot h^n$ (6.56)
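
A minimal sketch of this control loop in Python (assuming a caller-supplied attempt(t, h) that performs one predictor-corrector step of order $ k$ and returns the new value together with a local truncation error estimate, e.g. Milne's estimate (6.52); all names are our own):

    # Repeat a trial step until the step-error quotient allows acceptance.
    def control_step(attempt, t, h, k,
                     eps_abs=1e-12, eps_rel=1e-6, safety=0.8):
        while True:
            x_new, lte = attempt(t, h)
            # step-error quotient, eq. (6.54); the max() guards against a
            # vanishing error estimate (for systems, take the largest q)
            q_max = max(abs(lte) / (eps_abs + eps_rel * abs(x_new)), 1e-12)
            h_next = (safety / q_max) ** (1.0 / (k + 1)) * h
            if q_max > 1.0:
                h = h_next                    # reject: shrink h, eq. (6.55)
            else:
                return x_new, t + h, h_next   # accept: eq. (6.56)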

