Page 114 - Compact Numerical Methods For Computers
The algebraic eigenvalue problem 103
of x_1, we have

x_1 = Σ_{j=1}^n a_j φ_j    (9.2)
The first iteration of the power method then gives
y_1 = A x_1 = Σ_{j=1}^n a_j e_j φ_j    (9.3)

where e_j is the eigenvalue of A corresponding to φ_j. Denoting the reciprocal of
the norm of y_j by N_j, that is

N_j = ||y_j||^{-1}    (9.4)
the vector x_{i+1} is given by

x_{i+1} = (Π_{k=1}^i N_k) A^i x_1 = (Π_{k=1}^i N_k) Σ_{j=1}^n a_j e_j^i φ_j    (9.5)
The first factor in this expression is simply a normalisation to prevent numbers
becoming too large to work with in the computer. Presuming e_1 is the largest
eigenvalue in magnitude, x_{i+1} can also be written

x_{i+1} = e_1^i (Π_{k=1}^i N_k) Σ_{j=1}^n a_j (e_j / e_1)^i φ_j    (9.6)
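The normalised iteration just described can be sketched as follows. The function name, the convergence test, and the use of a Rayleigh quotient to estimate e_1 are illustrative choices, not taken from the text:

```python
import numpy as np

def power_method(A, x, iters=200, tol=1e-10):
    """Power iteration with per-step normalisation.

    Returns an approximate dominant eigenvalue and eigenvector.
    """
    x = x / np.linalg.norm(x)
    e_old = 0.0
    for _ in range(iters):
        y = A @ x                  # y_i = A x_i
        x = y / np.linalg.norm(y)  # x_{i+1} = N_i y_i, the normalisation step
        e = x @ A @ x              # Rayleigh-quotient estimate of e_1
        if abs(e - e_old) < tol * max(1.0, abs(e)):
            break
        e_old = e
    return e, x

# Small symmetric test matrix; its eigenvalues are (7 ± sqrt(5))/2.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
e1, v1 = power_method(A, np.array([1.0, 0.0]))
```

The normalisation inside the loop is exactly the factor N_i of equation (9.5): it changes only the scale of the iterate, never its direction, so the limit is unaffected.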
But since

|e_j / e_1| < 1    (9.7)

unless j = 1 (the case of degenerate eigenvalues is treated below), the coefficients
of φ_j, j ≠ 1, eventually become very small. The ultimate rate of convergence is
given by
r = |e_2 / e_1|    (9.8)

where e_2 is the eigenvalue having second largest magnitude. By working with the
matrix
A' = A – k 1_n    (9.9)

where 1_n is the unit matrix of order n, this rate of convergence can be improved if
some estimates of e_1 and e_2 are
known. Even if such information is not at hand, ad hoc shifts may be observed to
improve convergence and can be used to advantage. Furthermore, shifts permit
(i) the selection of the most positive eigenvalue or the most negative eigenvalue
and, in particular,
(ii) evasion of difficulties when these two eigenvalues are equal in magnitude.
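Point (ii) can be sketched with an assumed test matrix whose extreme eigenvalues are +3 and –3, equal in magnitude: the unshifted iteration oscillates, but a shift k separates the magnitudes. The shift value and the tiny power-iteration helper are illustrative assumptions, not from the text:

```python
import numpy as np

def power_method(A, x, iters=500):
    # Plain power iteration with normalisation (illustrative helper).
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)
    return x @ A @ x, x            # Rayleigh-quotient eigenvalue estimate

A = np.array([[0.0, 3.0],
              [3.0, 0.0]])         # eigenvalues +3 and -3: equal in magnitude
k = 2.0                            # ad hoc shift; A - k*1_n has eigenvalues 1, -5
e_shift, v = power_method(A - k * np.eye(2), np.array([1.0, 0.3]))
e = e_shift + k                    # undo the shift; eigenvectors are unchanged
# e = -5 + 2 = -3, the most negative eigenvalue of A.  A shift k = -2 would
# instead make +3 dominant, selecting the most positive eigenvalue.
```

The shift also illustrates the rate (9.8): for the shifted matrix r = |1/(-5)| = 0.2, so the iteration settles quickly even though the unshifted ratio is 1.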
Degenerate eigenvalues present no difficulty to the power method except that it
now converges to a vector in the subspace spanned by all eigenvectors corresponding
to e_1. Specific symmetry or other requirements on the eigenvector must
be imposed separately.
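A minimal illustration of the degenerate case, using an assumed diagonal test matrix: the iterate loses its component along the non-dominant eigenvector, but within the dominant subspace its direction is simply inherited from the starting vector.

```python
import numpy as np

A = np.diag([3.0, 3.0, 1.0])     # e_1 = 3 is a degenerate (double) eigenvalue
x = np.array([2.0, 1.0, 1.0])    # arbitrary starting vector
for _ in range(100):
    y = A @ x                    # power step
    x = y / np.linalg.norm(y)    # normalise
# x is now (2, 1, 0)/sqrt(5): the component along the third axis has died out,
# but within the degenerate subspace the direction was fixed by the start vector.
```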
In the above discussion the possibility that a_1 = 0 in the expansion of x_1 has
been conveniently ignored, that is, some component of x_1 in the direction of φ_1 is
assumed to exist. The usual advice to users is, ‘Don’t worry, rounding errors will