Biconjugate gradient stabilized method
In numerical linear algebra, the biconjugate gradient stabilized method, often abbreviated as BiCGSTAB, is an iterative method developed by H. A. van der Vorst for the numerical solution of nonsymmetric linear systems. It is a variant of the biconjugate gradient method (BiCG) and has faster and smoother convergence than the original BiCG as well as other variants such as the conjugate gradient squared method (CGS). It is a Krylov subspace method.

Unpreconditioned BiCGSTAB

To solve a linear system Ax = b, BiCGSTAB starts with an initial guess x_0 and proceeds as follows:
  1. r_0 = b - A x_0
  2. Choose an arbitrary vector \hat{r}_0 such that (\hat{r}_0, r_0) \neq 0, e.g., \hat{r}_0 = r_0
  3. \rho_0 = \alpha = \omega_0 = 1
  4. v_0 = p_0 = 0
  5. For i = 1, 2, 3, \ldots
    1. \rho_i = (\hat{r}_0, r_{i-1})
    2. \beta = (\rho_i / \rho_{i-1}) (\alpha / \omega_{i-1})
    3. p_i = r_{i-1} + \beta (p_{i-1} - \omega_{i-1} v_{i-1})
    4. v_i = A p_i
    5. \alpha = \rho_i / (\hat{r}_0, v_i)
    6. s = r_{i-1} - \alpha v_i
    7. t = A s
    8. \omega_i = (t, s) / (t, t)
    9. x_i = x_{i-1} + \alpha p_i + \omega_i s
    10. If x_i is accurate enough, then quit
    11. r_i = s - \omega_i t
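
The loop above translates almost line for line into code. The following is a minimal NumPy sketch of the unpreconditioned iteration; the function name bicgstab, the relative-residual stopping test and the dense-matrix format are illustrative assumptions rather than part of the method as stated above.

import numpy as np

def bicgstab(A, b, x0=None, tol=1e-8, max_iter=1000):
    # Minimal dense-matrix BiCGSTAB sketch following the listing above.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    r_hat = r.copy()                       # arbitrary shadow vector with (r_hat, r) != 0
    rho_old = alpha = omega = 1.0
    v = p = np.zeros(n)
    for _ in range(max_iter):
        rho = r_hat @ r
        beta = (rho / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s      # x_i = x_{i-1} + alpha p_i + omega_i s
        r = s - omega * t                  # r_i = s - omega_i t
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):   # "accurate enough" test
            break
        rho_old = rho
    return x

Only matrix-vector products with A are required, so A could equally be a sparse matrix or any object supporting the @ product.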

Preconditioned BiCGSTAB

Preconditioners are usually used to accelerate convergence of iterative methods. To solve a linear system Ax = b with a preconditioner K = K_1 K_2 \approx A, preconditioned BiCGSTAB starts with an initial guess x_0 and proceeds as follows:
  1. r_0 = b - A x_0
  2. Choose an arbitrary vector \hat{r}_0 such that (\hat{r}_0, r_0) \neq 0, e.g., \hat{r}_0 = r_0
  3. \rho_0 = \alpha = \omega_0 = 1
  4. v_0 = p_0 = 0
  5. For i = 1, 2, 3, \ldots
    1. \rho_i = (\hat{r}_0, r_{i-1})
    2. \beta = (\rho_i / \rho_{i-1}) (\alpha / \omega_{i-1})
    3. p_i = r_{i-1} + \beta (p_{i-1} - \omega_{i-1} v_{i-1})
    4. y = K^{-1} p_i
    5. v_i = A y
    6. \alpha = \rho_i / (\hat{r}_0, v_i)
    7. s = r_{i-1} - \alpha v_i
    8. z = K^{-1} s
    9. t = A z
    10. \omega_i = (K_1^{-1} t, K_1^{-1} s) / (K_1^{-1} t, K_1^{-1} t)
    11. x_i = x_{i-1} + \alpha y + \omega_i z
    12. If x_i is accurate enough, then quit
    13. r_i = s - \omega_i t
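
As with the unpreconditioned loop, the preconditioned iteration maps directly onto code. Below is a minimal NumPy sketch assuming the preconditioner K = K_1 K_2 is supplied through two hypothetical callables solve_K1 and solve_K2 that apply K_1^{-1} and K_2^{-1}; these names and the stopping test are illustrative choices, not part of the algorithm itself.

import numpy as np

def bicgstab_prec(A, b, solve_K1, solve_K2, x0=None, tol=1e-8, max_iter=1000):
    # Preconditioned BiCGSTAB sketch; solve_K1 and solve_K2 apply K1^{-1} and K2^{-1}.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    r_hat = r.copy()                       # shadow vector with (r_hat, r) != 0
    rho_old = alpha = omega = 1.0
    v = p = np.zeros(n)
    solve_K = lambda u: solve_K2(solve_K1(u))   # K^{-1} = K2^{-1} K1^{-1}
    for _ in range(max_iter):
        rho = r_hat @ r
        beta = (rho / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        y = solve_K(p)                     # y = K^{-1} p_i
        v = A @ y
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        z = solve_K(s)                     # z = K^{-1} s
        t = A @ z
        k1t, k1s = solve_K1(t), solve_K1(s)
        omega = (k1t @ k1s) / (k1t @ k1t)  # omega_i in the K1^{-1}-transformed inner product
        x = x + alpha * y + omega * z
        r = s - omega * t
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_old = rho
    return x

With solve_K1 = solve_K2 = (lambda u: u) this reduces to the unpreconditioned loop above; a purely left preconditioner corresponds to taking K_2 = I.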


This formulation is equivalent to applying unpreconditioned BiCGSTAB to the explicitly preconditioned system
  \tilde{A} \tilde{x} = \tilde{b}

with \tilde{A} = K_1^{-1} A K_2^{-1}, \tilde{x} = K_2 x and \tilde{b} = K_1^{-1} b. In other words, both left- and right-preconditioning are possible with this formulation.
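
As a quick check of this equivalence (assuming K_1 and K_2 are nonsingular), substituting the definitions gives
  \tilde{A} \tilde{x} = K_1^{-1} A K_2^{-1} (K_2 x) = K_1^{-1} A x   and   \tilde{b} = K_1^{-1} b,

so \tilde{A} \tilde{x} = \tilde{b} holds exactly when A x = b.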

BiCG in polynomial form

In BiCG, the search directions p_i and \hat{p}_i and the residuals r_i and \hat{r}_i are updated using the following recurrence relations:
  p_i = r_{i-1} + \beta_i p_{i-1},
  \hat{p}_i = \hat{r}_{i-1} + \beta_i \hat{p}_{i-1},
  r_i = r_{i-1} - \alpha_i A p_i,
  \hat{r}_i = \hat{r}_{i-1} - \alpha_i A^T \hat{p}_i.


The constants \alpha_i and \beta_i are chosen to be
  \alpha_i = \rho_i / (\hat{p}_i, A p_i),
  \beta_i = \rho_i / \rho_{i-1},

where \rho_i = (\hat{r}_{i-1}, r_{i-1}), so that the residuals and the search directions satisfy biorthogonality and biconjugacy, respectively, i.e., for i \neq j,
  (\hat{r}_i, r_j) = 0,
  (\hat{p}_i, A p_j) = 0.


It is straightforward to show that
  r_i = P_i(A) r_0,
  \hat{r}_i = P_i(A^T) \hat{r}_0,
  p_{i+1} = T_i(A) r_0,
  \hat{p}_{i+1} = T_i(A^T) \hat{r}_0,

where P_i(A) and T_i(A) are ith-degree polynomials in A. These polynomials satisfy the following recurrence relations:
  P_i(A) = P_{i-1}(A) - \alpha_i A T_{i-1}(A),
  T_i(A) = P_i(A) + \beta_{i+1} T_{i-1}(A).
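
As a worked instance of the first recurrence (using only the definitions above), substituting r_{i-1} = P_{i-1}(A) r_0 and p_i = T_{i-1}(A) r_0 into the BiCG update r_i = r_{i-1} - \alpha_i A p_i gives
  r_i = P_{i-1}(A) r_0 - \alpha_i A T_{i-1}(A) r_0 = (P_{i-1}(A) - \alpha_i A T_{i-1}(A)) r_0,

so P_i(A) = P_{i-1}(A) - \alpha_i A T_{i-1}(A); the recurrence for T_i(A) follows in the same way from p_{i+1} = r_i + \beta_{i+1} p_i.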

Derivation of BiCGSTAB from BiCG

It is unnecessary to explicitly keep track of the residuals and search directions of BiCG. In other words, the BiCG iterations can be performed implicitly. In BiCGSTAB, one wishes to have recurrence relations for
  \tilde{r}_i = Q_i(A) P_i(A) r_0,

where Q_i(A) = (I - \omega_1 A)(I - \omega_2 A) \cdots (I - \omega_i A) with suitable constants \omega_j, instead of r_i = P_i(A) r_0, in the hope that Q_i(A) will enable faster and smoother convergence in \tilde{r}_i than in r_i.

It follows from the recurrence relations for P_i(A) and T_i(A) and the definition of Q_i(A) that
  Q_i(A) P_i(A) r_0 = (I - \omega_i A) (Q_{i-1}(A) P_{i-1}(A) r_0 - \alpha_i A Q_{i-1}(A) T_{i-1}(A) r_0),

which entails the necessity of a recurrence relation for Q_i(A) T_i(A) r_0. This can also be derived from the BiCG relations:
  Q_i(A) T_i(A) r_0 = Q_i(A) P_i(A) r_0 + \beta_{i+1} (I - \omega_i A) Q_{i-1}(A) T_{i-1}(A) r_0.


Similarly to defining \tilde{r}_i, BiCGSTAB defines
  \tilde{p}_{i+1} = Q_i(A) T_i(A) r_0.


Written in vector form, the recurrence relations for \tilde{p}_i and \tilde{r}_i are
  \tilde{p}_i = \tilde{r}_{i-1} + \beta_i (I - \omega_{i-1} A) \tilde{p}_{i-1},
  \tilde{r}_i = (I - \omega_i A) (\tilde{r}_{i-1} - \alpha_i A \tilde{p}_i).


To derive a recurrence relation for x_i, define
  s_i = \tilde{r}_{i-1} - \alpha_i A \tilde{p}_i.


The recurrence relation for \tilde{r}_i can then be written as
  \tilde{r}_i = \tilde{r}_{i-1} - \alpha_i A \tilde{p}_i - \omega_i A s_i,


which corresponds to
  x_i = x_{i-1} + \alpha_i \tilde{p}_i + \omega_i s_i.
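
To see this correspondence explicitly (a one-step check using \tilde{r}_i = b - A x_i), the recurrence above can be rewritten as
  b - A x_i = \tilde{r}_{i-1} - \alpha_i A \tilde{p}_i - \omega_i A s_i = b - A (x_{i-1} + \alpha_i \tilde{p}_i + \omega_i s_i),

so the update x_i = x_{i-1} + \alpha_i \tilde{p}_i + \omega_i s_i reproduces the residual recurrence.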

Determination of BiCGSTAB constants

Now it remains to determine the BiCG constants \alpha_i and \beta_i and to choose a suitable \omega_i.

In BiCG, \beta_i = \rho_i / \rho_{i-1} with
  \rho_i = (\hat{r}_{i-1}, r_{i-1}) = (P_{i-1}(A^T) \hat{r}_0, P_{i-1}(A) r_0).


Since BiCGSTAB does not explicitly keep track of \hat{r}_i or r_i, \rho_i is not immediately computable from this formula. However, it can be related to the scalar
  \tilde{\rho}_i = (Q_{i-1}(A^T) \hat{r}_0, P_{i-1}(A) r_0) = (\hat{r}_0, Q_{i-1}(A) P_{i-1}(A) r_0) = (\hat{r}_0, \tilde{r}_{i-1}).


Due to biorthogonality, r_{i-1} = P_{i-1}(A) r_0 is orthogonal to U_{i-2}(A^T) \hat{r}_0, where U_{i-2}(A^T) is any polynomial of degree i - 2 in A^T. Hence, only the highest-order terms of P_{i-1}(A^T) and Q_{i-1}(A^T) matter in the dot products (P_{i-1}(A^T) \hat{r}_0, P_{i-1}(A) r_0) and (Q_{i-1}(A^T) \hat{r}_0, P_{i-1}(A) r_0). The leading coefficients of P_{i-1}(A^T) and Q_{i-1}(A^T) are (-1)^{i-1} \alpha_1 \alpha_2 \cdots \alpha_{i-1} and (-1)^{i-1} \omega_1 \omega_2 \cdots \omega_{i-1}, respectively. It follows that
  \rho_i = (\alpha_1 \alpha_2 \cdots \alpha_{i-1} / \omega_1 \omega_2 \cdots \omega_{i-1}) \tilde{\rho}_i,

and thus
  \beta_i = \rho_i / \rho_{i-1} = (\tilde{\rho}_i / \tilde{\rho}_{i-1}) (\alpha_{i-1} / \omega_{i-1}).


A simple formula for \alpha_i can be similarly derived. In BiCG,
  \alpha_i = \rho_i / (\hat{p}_i, A p_i) = (P_{i-1}(A^T) \hat{r}_0, P_{i-1}(A) r_0) / (T_{i-1}(A^T) \hat{r}_0, A T_{i-1}(A) r_0).


Similarly to the case above, only the highest-order terms of P_{i-1}(A^T) and T_{i-1}(A^T) matter in the dot products thanks to biorthogonality and biconjugacy. It happens that P_{i-1}(A^T) and T_{i-1}(A^T) have the same leading coefficient. Thus, they can be replaced simultaneously with Q_{i-1}(A^T) in the formula, which leads to
  \alpha_i = (Q_{i-1}(A^T) \hat{r}_0, P_{i-1}(A) r_0) / (Q_{i-1}(A^T) \hat{r}_0, A T_{i-1}(A) r_0) = \tilde{\rho}_i / (\hat{r}_0, A \tilde{p}_i).


Finally, BiCGSTAB selects \omega_i to minimize \tilde{r}_i = (I - \omega_i A) s_i in the 2-norm as a function of \omega_i. This is achieved when
  ((I - \omega_i A) s_i, A s_i) = 0,


giving the optimal value
  \omega_i = (A s_i, s_i) / (A s_i, A s_i).
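
This is the usual one-dimensional least-squares step. As a short check, expanding the squared norm and setting its derivative with respect to \omega to zero gives
  d/d\omega \, \|(I - \omega A) s_i\|_2^2 = -2 (A s_i, s_i) + 2 \omega (A s_i, A s_i) = 0,

which is solved by \omega_i = (A s_i, s_i) / (A s_i, A s_i) and is exactly the orthogonality condition ((I - \omega_i A) s_i, A s_i) = 0 stated above.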

Generalization

BiCGSTAB can be viewed as a combination of BiCG and GMRES where each BiCG step is followed by a GMRES(1) step (i.e., GMRES restarted at each step) to repair the irregular convergence behavior of CGS, as an improvement of which BiCGSTAB was developed. However, due to the use of degree-one minimum residual polynomials, such repair may not be effective if the matrix A has large complex eigenpairs. In such cases, BiCGSTAB is likely to stagnate, as confirmed by numerical experiments.

One may expect that higher-degree minimum residual polynomials may better handle this situation. This gives rise to algorithms including BiCGSTAB2 and the more general BiCGSTAB(l). In BiCGSTAB(l), a GMRES(l) step follows every l BiCG steps. BiCGSTAB2 is equivalent to BiCGSTAB(l) with l = 2.

See also

  • Biconjugate gradient method
  • Conjugate gradient squared method
  • Conjugate gradient method

The source of this article is Wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.
 