
in (9.2); then

$$A' y_i = (A - kB) y_i = B x_i = \sum_j c_j B \phi_j \qquad (9.16)$$

or, on writing $y_i = \sum_j d_j \phi_j$ and using $(A - kB)\phi_j = (e_j - k) B \phi_j$,

$$\sum_j d_j (e_j - k) B \phi_j = \sum_j c_j B \phi_j . \qquad (9.17)$$

Therefore

$$d_j = c_j / (e_j - k) \qquad (9.18)$$

and the eigenvector(s) corresponding to the eigenvalue closest to k very quickly
dominate(s) the expansion. Indeed, if k is an eigenvalue, A' is singular, and after
solution of the linear equations (9.12a) (this can be forced to override the
singularity) the coefficient of the eigenvector φ_j corresponding to k should be of
the order of 1/eps, where eps is the machine precision. Peters and Wilkinson
(1971, pp 418-20) show this 'full growth' to be the only reliable criterion for
convergence in the case of non-symmetric matrices. The process then converges
in one step, and obtaining full growth implies that the component of the eigenvector in
the expansion (9.2) of x_1 is not too small. Wilkinson proposes choosing different
vectors x_1 until one gives full growth. The program code to accomplish this is
                      quite involved, and for symmetric matrices repetition of the iterative step is
                      simpler and, because of the nature of the symmetric matrix eigenproblem, can
                      also be shown to be safe. The caution concerning the choice of starting vector for
                      matrices which are exactly representable should still be heeded, however. In the
                      case where k is not an eigenvalue, inverse iteration cannot be expected to
                      converge in one step. The algorithm given below therefore iterates until the
                      vector x has converged.
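  To make the convergence and 'full growth' behaviour concrete, the following is a
minimal sketch of shifted inverse iteration on a small symmetric problem, written in
Python with NumPy. It is an illustration only, not the algorithm as coded in this book:
the test matrices, the shift k, the growth threshold and the tolerance are arbitrary
assumed choices.

```python
# Sketch of shifted inverse iteration for A*phi = e*B*phi (symmetric case).
# Illustrative assumptions: matrices, shift k, tolerance and limits.
import numpy as np

def inverse_iteration(A, B, k, tol=1e-10, max_iter=50):
    n = A.shape[0]
    A_shift = A - k * B                       # A' = A - kB of (9.12a)
    eps = np.finfo(float).eps
    x = np.ones(n) / np.sqrt(n)               # starting vector x_1
    for _ in range(max_iter):
        y = np.linalg.solve(A_shift, B @ x)   # solve A' y_i = B x_i
        norm_y = np.linalg.norm(y)
        x_new = y / norm_y                    # x_{i+1} = y_i / ||y_i||
        # 'Full growth': ||y_i|| of order 1/eps signals that k is
        # numerically an eigenvalue and the process has converged.
        grown = norm_y > 0.01 / eps
        settled = min(np.linalg.norm(x_new - x),
                      np.linalg.norm(x_new + x)) < tol  # sign may flip
        x = x_new
        if grown or settled:
            break
    e = (x @ (A @ x)) / (x @ (B @ x))         # Rayleigh quotient estimate
    return e, x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
e, phi = inverse_iteration(A, np.eye(3), k=1.2)  # shift near smallest eigenvalue
print(e, phi)
```

With B the identity this reduces to the ordinary eigenproblem; because the shift lies
close to one eigenvalue, the coefficient c_j/(e_j - k) of that eigenvector grows at each
step and the iterate settles in very few iterations.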
  The form of equation (9.12a) is amenable to transformation to simplify the
iteration. That is, pre-multiplication by a (non-singular) matrix Q gives

$$Q A' y_i = Q B x_i . \qquad (9.19)$$

Note that neither x_i nor y_i is affected by this transformation. If Q is taken to be
                       the matrix which accomplishes one of the decompositions of §2.5 then it is
                       straightforward to carry out the iteration. The Gauss elimination, algorithm 5, or
                       the Givens’ reduction, algorithm 3, can be used to effect the transformation for
                       this purpose. The matrix Q never appears explicitly. In practice the two matrices
                       A and B will be stored in a single working array W. Each iteration will correspond
                       to a back-substitution similar to algorithm 6. One detail which deserves attention
                       is the handling of zero diagonal elements in QA', since such zeros imply a division
                       by zero during the back-substitution which solves (9.19). The author has found
                       that replacement of zero (or very small) elements by some small number, say the
                       machine precision multiplied by the norm of A', permits the process to continue
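  The sketch below shows this decompose-once, substitute-each-iteration pattern
together with the small-pivot replacement just described. It is a sketch under stated
assumptions, not the book's code: SciPy's LU factorisation with partial pivoting stands
in for the eliminations of algorithms 3 and 5, and the choice of the 1-norm in the
guard eps·||A'|| is an arbitrary one.

```python
# Sketch: factor A' = A - kB once; each iteration is then only a
# substitution step. Near-zero pivots are replaced by eps*||A'|| so the
# back-substitution never divides by zero, as suggested in the text.
# Assumptions: SciPy LU in place of algorithms 3/5; 1-norm for ||A'||.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def guarded_factor(A_shift):
    # lu_factor warns but still returns factors when A' is exactly
    # singular, which 'forces through' the singular case of (9.12a).
    lu, piv = lu_factor(A_shift)
    guard = np.finfo(float).eps * np.linalg.norm(A_shift, 1)
    for j in np.where(np.abs(np.diag(lu)) < guard)[0]:
        lu[j, j] = guard if lu[j, j] >= 0 else -guard
    return lu, piv

def inverse_iterate(A, B, k, tol=1e-10, max_iter=50):
    lu, piv = guarded_factor(A - k * B)       # decomposition done once
    x = np.ones(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        y = lu_solve((lu, piv), B @ x)        # one substitution per iteration
        x_new = y / np.linalg.norm(y)
        if min(np.linalg.norm(x_new - x), np.linalg.norm(x_new + x)) < tol:
            return x_new
        x = x_new
    return x
```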