eigenvalues in addition to two n by n arrays to store the original matrix and the
eigenvectors. Thus 2n²+n elements appear to be needed. Rutishauser's program
requires 2n²+2n elements (that is, an extra vector of length n) as well as a
number of individual variables to handle the extra operations used to guarantee
the high precision of the computed eigensolutions. Furthermore, the program
does not order the eigenvalues and in a great many applications this is a necessary
requirement before the eigensolutions can be used in further calculations. The
ordering the author has most frequently required is from most positive eigenvalue
to most negative one (or vice versa).
The extra code required to order the eigenvalues and their associated eigenvec-
tors can be avoided by using different formulae to calculate intermediate results.
That is to say, a slightly different organisation of the program permits an extra
function to be incorporated without increasing the code length. This is illustrated
by the second column of table 10.1. It should be emphasised that the program
responsible for these results was a simple combination of Rutishauser’s algorithm
and some of the ideas that have been presented in §3.3 with little attention to how
well these meshed to preserve the high-precision qualities Rutishauser has care-
fully built into his routine. The results of the mongrel program are nonetheless
quite precise.
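To make the ordering task concrete, a minimal sketch follows (in Python; the
function name and the representation of the eigenvectors as columns of a
list-of-rows matrix are illustrative, not taken from the book's programs). It
sorts the eigenvalues from most positive to most negative and permutes the
eigenvector columns to match, which is the bookkeeping the extra code would
otherwise have to perform.

    # Order eigenvalues from most positive to most negative, carrying the
    # corresponding eigenvectors (stored as the columns of V) along.
    def order_eigensolutions(evals, V):
        n = len(evals)
        # Indices that sort the eigenvalues in descending order.
        idx = sorted(range(n), key=lambda i: evals[i], reverse=True)
        ordered_vals = [evals[i] for i in idx]
        # Permute the columns of V to match the new eigenvalue order.
        ordered_vecs = [[V[row][i] for i in idx] for row in range(n)]
        return ordered_vals, ordered_vecs

Reversing the sort direction gives the most-negative-first ordering mentioned
above.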
If the precision required is reduced a little further, both the extra vector of
length n and about a third to a half of the code can be removed. Here the
uncertainty in measuring the reduction in the code is due to various choices such
as which DIMENSION-ing, input-output or constant setting operations are included
in the count.
It may also seem attractive to save space by using the symmetry of the matrix A
as well as that of the intermediate matrices A^(k). This reduces the workspace by
n(n-1)/2 elements. Unfortunately, the modification introduces sufficient extra
code that it is only useful when the order, n, of A is greater than approximately
10. However, 10 is roughly the order at which the Jacobi methods become
non-competitive with approaches such as those mentioned earlier in this section.
Still worse, on a single-precision machine, the changes appear to reduce the
precision of the results, though the program used to produce column 4 of table
10.1 was not analysed extensively to discover and rectify sources of precision loss.
Note that at order 10, the memory capacity of a small computer may already be
used up, especially if the eigenproblem is part of a larger computation.
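The saving of n(n-1)/2 elements comes from holding only one triangle of the
symmetric matrix in a linear array of n(n+1)/2 entries. A minimal sketch of
the usual row-ordered packed indexing is given below (an illustration of the
general device, in Python; the book's own programs may organise the storage
differently). The index arithmetic and the symmetry test on every access are
the extra code referred to above.

    # Store only the lower triangle (i >= j) of a symmetric matrix, row
    # by row, in a linear array of n*(n+1)//2 elements.  This saves
    # n*(n-1)//2 locations relative to full n*n storage.
    def packed_index(i, j):
        if i < j:              # exploit symmetry: A[i][j] == A[j][i]
            i, j = j, i
        return i * (i + 1) // 2 + j

    def pack_symmetric(A):
        n = len(A)
        return [A[i][j] for i in range(n) for j in range(i + 1)]

For example, after packed = pack_symmetric(A), the expression
packed[packed_index(2, 1)] recovers A[2][1] (and equally A[1][2]).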
If the storage requirement is critical, then the methods of Hestenes (1958),
Chartres (1962) and Kaiser (1972) as modified by Nash (1975) should be
considered. This latter method is outlined in §§10.1 and 10.2, and is one which
transforms the original matrix A into another, B, whose columns are the eigenvec-
tors of A each multiplied by its corresponding eigenvalue, that is

    B = XE                                                        (10.43)

where E is the diagonal matrix of eigenvalues. Thus only n² storage locations are
required and the code is, moreover, very short. Column 5 of table 10.1 shows the
precision that one may expect to obtain, which is comparable to that found using
simpler forms of the traditional Jacobi method. Note that the residuals and
inner products for table 10.1 were all computed in single precision.
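To illustrate the flavour of the one-sided approach, here is a short Python
sketch in the spirit of the Hestenes/Chartres/Kaiser/Nash family. It is not a
transcription of the algorithm cited above: the names and tolerances are
illustrative, and for clarity it works on a copy of A rather than overwriting
it within n² locations, using the retained A to fix the eigenvalue signs by
Rayleigh quotients. Each plane rotation orthogonalises a pair of columns, and
the angle is chosen so that the larger column norm moves to the lower index,
so the eigenvalue magnitudes emerge in descending order; algebraic ordering
can then be imposed with a sort like the one sketched earlier.

    import math

    def one_sided_jacobi_eig(A, tol=1e-12, max_sweeps=50):
        # B is initialised to A and transformed by plane rotations acting
        # on its columns; at convergence column i of B approximates
        # e_i * x_i, so |e_i| is the column norm.
        n = len(A)
        B = [row[:] for row in A]
        for _ in range(max_sweeps):
            rotated = False
            for i in range(n - 1):
                for j in range(i + 1, n):
                    # Inner products of columns i and j.
                    p = sum(B[k][i] * B[k][j] for k in range(n))
                    q = sum(B[k][i] ** 2 for k in range(n))
                    r = sum(B[k][j] ** 2 for k in range(n))
                    if abs(p) <= tol * math.sqrt(q * r):
                        continue       # pair already orthogonal
                    rotated = True
                    # Angle that orthogonalises the pair and leaves the
                    # larger column norm in position i.
                    theta = 0.5 * math.atan2(2.0 * p, q - r)
                    c, s = math.cos(theta), math.sin(theta)
                    for k in range(n):
                        bi, bj = B[k][i], B[k][j]
                        B[k][i] = c * bi + s * bj
                        B[k][j] = -s * bi + c * bj
            if not rotated:
                break
        evals, evecs = [], []
        for i in range(n):
            norm = math.sqrt(sum(B[k][i] ** 2 for k in range(n)))
            if norm == 0.0:            # zero eigenvalue: column vanished
                evals.append(0.0)
                evecs.append([0.0] * n)
                continue
            x = [B[k][i] / norm for k in range(n)]
            # The Rayleigh quotient x'Ax recovers the sign of e_i.
            Ax = [sum(A[k][m] * x[m] for m in range(n)) for k in range(n)]
            evals.append(sum(x[k] * Ax[k] for k in range(n)))
            evecs.append(x)
        return evals, evecs

A residual check in the manner of table 10.1 might then read:

    A = [[4.0, 1.0, 0.0],
         [1.0, 3.0, 1.0],
         [0.0, 1.0, 2.0]]
    evals, evecs = one_sided_jacobi_eig(A)
    for e, x in zip(evals, evecs):
        # max |(A x - e x)_k| should be near the working precision
        res = max(abs(sum(A[k][m] * x[m] for m in range(3)) - e * x[k])
                  for k in range(3))
        print(e, res)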