
Convergence

Similar to the DC analysis, convergence problems occur during the transient analysis as well (see section 3.3.2). In order to improve the overall convergence behaviour it is possible to improve the models on the one hand and/or to improve the algorithms on the other hand.

Convergence of the Newton-Raphson iteration, which repeatedly solves the linear equation system

$\displaystyle \left[A\left(x^k\right)\right] \cdot \left[x^{k+1}\right] = \left[z\left(x^k\right)\right]$ (6.165)

depends on continuous device model equations (with continuous derivatives as well), on the absence of floating nodes (they render the Jacobian matrix $ A$ singular) and on the quality of the initial guess $ x^0$. The convergence problems which occur in practice are local minima causing the matrix $ A$ to become singular, nearly singular matrices and overflow problems.
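To make the iteration concrete, the following minimal C++ sketch applies plain Newton-Raphson to the single node equation of a diode in series with a resistor driven by a 1V source. It is an illustration only, not simulator code; the device values $ I_S$, $ V_T$ and $ R$ are assumed for the example.

\begin{verbatim}
#include <cmath>
#include <cstdio>

int main() {
  const double Is = 1e-14, Vt = 0.0259, R = 1e3;
  double v = 0.0;                       // initial guess x^0
  for (int k = 0; k < 50; ++k) {
    // residual z(x^k) and 1x1 Jacobian A(x^k) of
    // f(v) = Is*(exp(v/Vt) - 1) + (v - 1.0)/R = 0
    double f  = Is * (std::exp(v / Vt) - 1.0) + (v - 1.0) / R;
    double df = Is / Vt * std::exp(v / Vt) + 1.0 / R;
    double dv = -f / df;                // Newton step
    v += dv;
    if (std::fabs(dv) < 1e-12) break;   // converged
  }
  std::printf("v = %.9f\n", v);
  return 0;
}
\end{verbatim}

Even this tiny example shows the danger: the very first iteration jumps to $ v \approx 1$, deep into the exponential region; with a higher supply voltage or a worse initial guess the exponential overflows, which motivates the limiting schemes below.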

\includegraphics[width=0.9\linewidth]{convprob1}

\includegraphics[width=0.9\linewidth]{convprob2}

Limiting schemes

The modified (damped) Newton-Raphson schemes are based on limiting the update of the solution vector $ x^k$ in each iteration.

$\displaystyle x^{k+1} = x^k + \alpha\cdot \Delta x^{k+1} \;\;\;\; \textrm{ with } \;\;\;\; \Delta x^{k+1} = x^{k+1} - x^k$ (6.166)

One possibility to choose a value for $ \alpha \in [0,1]$ is to limit the infinity norm of the update to a constant $ \gamma$:

$\displaystyle \alpha = \dfrac{\gamma}{\left\lVert\Delta x^{k+1}\right\rVert_{\infty}}$ (6.167)

This is a heuristic and does not ensure global convergence, but it can help to overcome some of the discussed problems. Another possibility is to pick a value $ \alpha^k$ which minimizes the $ L_2$ norm of the right hand side vector. This method performs a one-dimensional (line) search in the Newton direction and guarantees global convergence.

$\displaystyle x^{k+1} = x^k + \alpha^k \cdot \Delta x^{k+1} \;\;\;\; \textrm{ with } \;\;\;\; \min_{\alpha^k} \left\lVert z\left(x^k + \alpha^k \cdot \Delta x^{k+1}\right)\right\rVert_2$ (6.168)
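The following C++ sketch (again the assumed diode example, illustration only) shows both step controls: the update is first limited as in eq. (6.167) using an assumed constant $ \gamma$, then a simple backtracking search stands in for the one-dimensional minimization of eq. (6.168). The text presents the two as alternatives; they are combined here only to show both in one place.

\begin{verbatim}
#include <cmath>
#include <cstdio>

// residual z(x) and Jacobian A(x) of the diode/resistor example
static double f(double v) {
  const double Is = 1e-14, Vt = 0.0259, R = 1e3;
  return Is * (std::exp(v / Vt) - 1.0) + (v - 1.0) / R;
}
static double df(double v) {
  const double Is = 1e-14, Vt = 0.0259, R = 1e3;
  return Is / Vt * std::exp(v / Vt) + 1.0 / R;
}

int main() {
  const double gamma = 0.1;     // assumed limiting constant, eq. (6.167)
  double v = 0.0;
  for (int k = 0; k < 100; ++k) {
    double dv = -f(v) / df(v);
    // limiting scheme: alpha = gamma / ||dv||_inf, capped at 1
    double alpha = std::fmin(1.0, gamma / std::fabs(dv));
    // line search, eq. (6.168): halve alpha until ||z||_2 decreases
    while (std::fabs(f(v + alpha * dv)) >= std::fabs(f(v)) && alpha > 1e-6)
      alpha *= 0.5;
    v += alpha * dv;
    if (std::fabs(alpha * dv) < 1e-12) break;
  }
  std::printf("v = %.9f\n", v);
  return 0;
}
\end{verbatim}

In a real (vector-valued) implementation the scalar absolute values become the $ \left\lVert\cdot\right\rVert_{\infty}$ and $ \left\lVert\cdot\right\rVert_2$ norms of the update and residual vectors.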

The one remaining problem of this line search method for convergence improvement is that it may iterate into local minima where the Jacobian matrix is singular. The damped Newton-Raphson method ``pushes'' the matrix into singularity as depicted in fig. 6.11.

Figure 6.11: singular Jacobian problem
\includegraphics[width=0.55\linewidth]{convprob3}


Continuation schemes

The basic idea behind this Newton-Raphson modification is to generate a sequence of problems such that the solution of one problem is a good initial guess for the following one, because Newton-Raphson basically converges when given a sufficiently close initial guess.

The template algorithm for this modification is to solve the equation system

$\displaystyle \left[A\right] \cdot \left[x\right] - \left[z\right] = 0$ (6.169)

$\displaystyle F\left(x\left(\lambda\right), \lambda\right) = 0$ (6.170)

with the parameter $ \lambda \in [0,1]$, given that $ x\left(\lambda\right)$ is sufficiently smooth. $ F\left(x\left(0\right), 0\right)$ starts the continuation and $ F\left(x\left(1\right), 1\right)$ ends the continuation. The algorithm outline is as follows: First solve the problem $ F\left(x\left(0\right), 0\right)$, then set $ \lambda = \Delta\lambda = 0.01$ and try to solve $ F\left(x\left(\lambda\right), \lambda\right)$. If Newton-Raphson converged, increase $ \lambda$ by $ \Delta\lambda$ and double the step size, $ \Delta\lambda = 2\cdot \Delta\lambda$; otherwise halve the step size, $ \Delta\lambda = 0.5\cdot \Delta\lambda$, and set $ \lambda = \lambda_{prev} + \Delta\lambda$. Repeat this until $ \lambda = 1$ is reached.
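The template can be sketched in C++ as follows; the solver hook passed in is hypothetical and stands for a complete Newton-Raphson run on $ F\left(x\left(\lambda\right), \lambda\right)$ that starts from the previously found solution.

\begin{verbatim}
#include <algorithm>
#include <cstdio>
#include <functional>

// continuation template: adaptively step lambda from 0 to 1
void continuation(const std::function<bool(double)>& solveNewton) {
  double prev = 0.0, step = 0.01;      // start values from the text
  while (prev < 1.0) {
    double lambda = std::min(prev + step, 1.0);
    if (solveNewton(lambda)) {
      prev = lambda;                   // success: accept, double the step
      step *= 2.0;
    } else {
      step *= 0.5;                     // failure: halve the step, retry
    }
  }
}

int main() {
  // stub standing in for a circuit solve: pretend that only small
  // enough jumps in lambda let Newton-Raphson converge
  double last = 0.0;
  continuation([&](double lambda) {
    bool ok = (lambda - last) <= 0.16;
    std::printf("lambda=%.4f  %s\n", lambda, ok ? "ok" : "failed");
    if (ok) last = lambda;
    return ok;
  });
  return 0;
}
\end{verbatim}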

Source stepping

Applied to the solution of (non-linear) electrical networks one may think of $ \alpha \in [0,1]$ as a multiplier for the source vector $ S$, yielding $ S\left(\alpha\right) = \alpha S$. Varying $ \alpha$ from 0 to 1, the network is solved at each value of $ \alpha$; the actual circuit solution is obtained when $ \alpha = 1$. This method is called source stepping. The solution vector $ x\left(\alpha\right)$ is continuous in $ \alpha$ (hence the name continuation scheme).
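Source stepping can be sketched on the assumed diode example from above; for brevity $ \alpha$ is increased in fixed steps of 0.1 instead of being controlled adaptively as in section 6.6.2.

\begin{verbatim}
#include <cmath>
#include <cstdio>

int main() {
  const double Is = 1e-14, Vt = 0.0259, R = 1e3, V0 = 1.0;
  double v = 0.0;                       // solution carried along alpha
  for (double a = 0.1; a <= 1.0 + 1e-9; a += 0.1) {
    for (int k = 0; k < 100; ++k) {     // plain Newton at S(alpha) = a*V0
      double f  = Is * (std::exp(v / Vt) - 1.0) + (v - a * V0) / R;
      double df = Is / Vt * std::exp(v / Vt) + 1.0 / R;
      double dv = -f / df;
      v += dv;
      if (std::fabs(dv) < 1e-12) break;
    }
    std::printf("alpha=%.1f  v=%.6f\n", a, v);
  }
  return 0;
}
\end{verbatim}

Each intermediate solution is a good initial guess for the next value of $ \alpha$, so the plain Newton-Raphson iteration inside the loop converges in a few steps.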

Minimal derivative $ \mathrm {g_{min}}$ stepping

Another possibility to improve convergence of nearly singular electrical networks is the so-called $ g_{min}$ stepping, i.e. adding a tiny conductance to ground at each node of the network. The continuation starts e.g. with $ g_{min} = 0.01$ and ends with $ g_{min} = 0$, reached by the algorithm described in section 6.6.2. The equation system is slightly modified by adding the conductance $ g_{min}$ to each diagonal element of the matrix $ A$.
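A sketch of $ g_{min}$ stepping on the same assumed diode example follows; for brevity $ g_{min}$ is reduced by a fixed factor of ten instead of using the adaptive algorithm of section 6.6.2. The extra conductance contributes the current $ g_{min}\cdot v$ to the residual and $ g_{min}$ to the diagonal of the Jacobian.

\begin{verbatim}
#include <cmath>
#include <cstdio>

int main() {
  const double Is = 1e-14, Vt = 0.0259, R = 1e3, V0 = 1.0;
  double v = 0.0;
  for (double gmin = 0.01; ; gmin = (gmin > 1e-12) ? gmin * 0.1 : 0.0) {
    for (int k = 0; k < 100; ++k) {     // Newton with gmin in place
      double f  = Is * (std::exp(v / Vt) - 1.0) + (v - V0) / R + gmin * v;
      double df = Is / Vt * std::exp(v / Vt) + 1.0 / R + gmin;
      double dv = -f / df;
      v += dv;
      if (std::fabs(dv) < 1e-12) break;
    }
    std::printf("gmin=%g  v=%.6f\n", gmin, v);
    if (gmin == 0.0) break;             // final solve without gmin
  }
  return 0;
}
\end{verbatim}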

