The algebraic eigenvalue problem 109
symmetries which cause the dominant ‘component’ of a vector to be a pair of
elements with equal magnitude but opposite sign. A program may therefore
choose first one, then the other, as dominant component, thereby calculating a
correct eigenvector but an eigenvalue having the wrong sign!
(iii) The limit on the number of iterations is a necessary precaution since the
convergence test may be too stringent. Unfortunately, the rate of convergence
may be very slow since it is governed by the eigenvalue distribution. If the
program can be controlled manually, one may prefer to allow the process to
continue until the approximate eigenvalue has settled on a value, then intervene
to halt execution if the convergence test does not become satisfied after a few
more iterations. In fully automatic computation, some limit, say 100 iterations,
does not seem unreasonable.
A comparison of Gauss elimination and Givens’ reduction algorithms for
inverse iteration using the nine test matrices of appendix 1 suggests that the
former is slightly preferable. Using matrices of order 5, 10, 15 and 20 and a shift
of k = 0, I timed the execution of inverse iteration including the triangularisation.
Of the 36 problems, two were theoretically impossible since the shift was exactly
between two eigenvalues. Both methods failed to converge in 101 iterations for
these two problems (the Wilkinson W− matrices of even order) and the vectors at
the point of termination had large residuals so these two cases were dropped from
the test. On the remaining 34, Gauss elimination yielded a smaller residual than
Givens’ reduction on 20, had a lower execution time on 21 for a total time (a sum
of the separate times) of 3489·3 seconds compared to 4160·9 seconds, and failed
to satisfy the convergence requirements in 101 iterations a total of 8 times, as
compared to 11 times out of 34 for Givens’ reduction. The above tests were
performed in BASIC on a Data General NOVA operating interpretively in six
hexadecimal (base 16) digit arithmetic.
Together with the overall timing, we should consider the fact that the methods
have very different relative performances depending on the matrix in question; in
particular, how many interchanges must be performed in Gauss elimination.
Furthermore, as we might expect, the Givens’ method is faster for sparse
matrices.
Note that algorithm 10 is not the most compact form for the ordinary algebraic
eigenvalue problem (2.62), since the Gauss elimination algorithm 5 gives
PA'x = LRx = Pex                                   (9.22)
by virtue of (6.27), where P is the permutation matrix defined by the inter-
changes resulting from pivoting. These can be stored, as discussed in §6.3, in a
single integer vector of indices q. Then to perform inverse iteration, it is only
necessary to store this vector plus a working array n by n instead of the n by 2n
array used in algorithm 10. Two vectors x and y are still needed. The elements of
the lower-triangular matrix L are

        L_ij  =  1      for i = j
                 0      for j > i                  (9.23)
                 m_ij   for j < i.
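This storage scheme can be sketched as follows. The names (lu_decompose, lu_solve, the working array W) are assumptions for illustration, not algorithm 5 itself: one Gauss elimination overwrites a single n by n array with the multipliers m_ij below the diagonal and R on and above it, records the interchanges in the integer vector q, and each inverse-iteration step then costs only one forward and one back substitution.

```python
# Hedged sketch of inverse iteration storage via packed LU factors:
# PA' = LR, with L unit lower triangular as in (9.23). The strictly
# lower triangle of the working array W holds the multipliers m_ij,
# the rest holds R, and q holds the pivot interchanges.

def lu_decompose(A):
    """Overwrite a copy of A with L and R packed together; return (W, q)."""
    n = len(A)
    W = [row[:] for row in A]
    q = []
    for col in range(n - 1):
        p = max(range(col, n), key=lambda r: abs(W[r][col]))
        q.append(p)                     # record the interchange
        W[col], W[p] = W[p], W[col]
        for r in range(col + 1, n):
            m = W[r][col] / W[col][col]
            W[r][col] = m               # store multiplier m_ij in place
            for c in range(col + 1, n):
                W[r][c] -= m * W[col][c]
    return W, q

def lu_solve(W, q, b):
    """Solve L R x = P b using the packed factors from lu_decompose."""
    n = len(b)
    x = b[:]
    for col in range(n - 1):            # apply P, then forward-substitute
        x[col], x[q[col]] = x[q[col]], x[col]
        for r in range(col + 1, n):
            x[r] -= W[r][col] * x[col]  # L has unit diagonal, (9.23)
    for r in range(n - 1, -1, -1):      # back-substitute with R
        s = sum(W[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (x[r] - s) / W[r][r]
    return x
```

With this arrangement, lu_decompose is called once on the shifted matrix and lu_solve once per iteration, so the storage is the n by n array W, the integer vector q and the two working vectors x and y, rather than the n by 2n array of algorithm 10.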