Explicit analysis of the elements of equation (10.16) shows that (a) if S_ii ≠ S_jj, then Q_ij = 0, and (b) the commutation
QS = SQ    (10.17)
is true even in the degenerate eigenvalue case; thus,
AV = XSQ = XQS = XX^T VS = VS.    (10.18)
The corresponding result for U is shown in the same fashion.
10.2. EXTENSION TO MATRICES WHICH ARE NOT POSITIVE
DEFINITE
The above result shows that if A is a symmetric positive definite matrix, its
eigensolutions can be found via the singular-value decomposition algorithms 1
and 4. Moreover, only one of the matrices U or V is required, so that the
eigenvectors overwrite the original matrix. Furthermore, algorithm 1, for
example, generates the matrix B = US in performing an orthogonalisation of the
columns of A, so that the eigenvalues are kept implicitly as the norms of the
columns of the orthogonalised matrix and a separate vector to store the eigen-
values is not required.
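For a symmetric positive definite matrix this correspondence is easy to check numerically. The following is a minimal sketch only (in Python with a library SVD routine, not the book's algorithms 1 and 4; the test matrix is invented for illustration): the singular values are the eigenvalues, the columns of V are the eigenvectors, and the eigenvalues can also be recovered as the column norms of B = US.

```python
import numpy as np

# Invented symmetric positive definite test matrix
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])

U, s, Vt = np.linalg.svd(A)        # A = U diag(s) V^T
eigenvalues = s                    # for symmetric positive definite A
eigenvectors = Vt.T                # columns of V are eigenvectors of A

B = U * s                          # B = US: column j of U scaled by s[j]
print(np.linalg.norm(B, axis=0))   # column norms of B reproduce the eigenvalues
print(np.allclose(A @ eigenvectors, eigenvectors * eigenvalues))  # AV = VS
```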
What of the case where the matrix A is not positive definite? This is hardly any
extra trouble, since the matrix
A' = A - h1_n    (10.19)
has the same eigenvectors as A (as proved by direct substitution in equation
(10.1)) and has eigenvalues
E'_ii = E_ii - h    for i = 1, 2, . . . , n    (10.20)
where E_ii, i = 1, 2, . . . , n, are the eigenvalues of A. Thus it is always possible to
generate a positive definite matrix from A by adding an appropriate constant –h
to each of its diagonal elements. Furthermore, it is very simple to compute an
appropriate value for h from Gerschgorin’s theorem (Ralston 1965, p 467), which
states that all the eigenvalues of a matrix A (a general square matrix, real or
complex) are found in the domain which is the union of the circles having centres
A_ii, i = 1, 2, . . . , n, and respective radii
r_i = Σ_{j≠i} |A_ij|.    (10.21)
Because a symmetric matrix has real eigenvalues this implies
E_nn ≥ E = min_i (A_ii - r_i).    (10.22)
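As a rough illustration of equations (10.19)-(10.22), the sketch below (not the book's code; the test matrix and the helper name gerschgorin_bound are invented, and numpy is assumed) computes the bound E from the diagonal elements and the radii r_i and, when E is not positive, uses it as the shift h in A' = A - h1_n.

```python
import numpy as np

def gerschgorin_bound(A):
    """Lower bound E = min_i (A_ii - r_i) on the eigenvalues of a
    symmetric matrix A, from Gerschgorin's theorem (equation 10.22)."""
    r = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))  # radii r_i, eq. (10.21)
    return np.min(np.diag(A) - r)

# Invented indefinite symmetric example
A = np.array([[1.0,  2.0, 0.0],
              [2.0, -1.0, 3.0],
              [0.0,  3.0, 2.0]])

E = gerschgorin_bound(A)
h = min(E, 0.0)                           # shift only if the bound is not positive
A_shifted = A - h * np.eye(A.shape[0])    # A' = A - h1_n, eq. (10.19)

# A' has the same eigenvectors as A and eigenvalues E_ii - h (eq. 10.20),
# so its smallest eigenvalue is at least E_nn - E >= 0.
print(E, np.linalg.eigvalsh(A).min(), np.linalg.eigvalsh(A_shifted).min())
```

In practice a shift slightly more negative than E would be chosen so that A' is strictly, rather than merely non-negatively, definite.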
If E > 0, the matrix A is positive definite; otherwise a shift equal to E will make