
Chapter 10

                                   REAL SYMMETRIC MATRICES




                       10.1. THE EIGENSOLUTIONS OF A REAL SYMMETRIC MATRIX
                     The most common practical algebraic eigenvalue problem is that of determining
                     all the eigensolutions of a real symmetric matrix. Fortunately, this problem has
                      the most agreeable properties with respect to its solution (see, for instance,
                     Wilkinson 1965).
                      (i) All the eigenvalues of a real symmetric matrix are real.
                       (ii) It is possible to find a complete set of n eigenvectors for an order-n real
                       symmetric matrix, and these can be made mutually orthogonal. Usually they are
                       normalised so that the Euclidean norm (the square root of the sum of squares of
                       the elements) is unity.
                      Thus, the total eigenproblem can be expressed
                                              AX = XE                       (10.1)
                       where the matrix X has column j such that
                                              Ax_j = e_j x_j                (10.2)
                       where e_j is the jth eigenvalue. E is the diagonal matrix
                                              E_{ij} = e_j δ_{ij}           (10.3)
                       with δ_{ij} the familiar Kronecker delta (see §6.2, p 60).
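                       As a small numerical check of equations (10.1)-(10.3) (an illustration added
                       here, not part of the original text), the following Python sketch assumes NumPy
                       is available; the test matrix and random seed are arbitrary choices.

import numpy as np

# Build a small real symmetric matrix A = B + B^T (any symmetric A will do).
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B + B.T

# eigh is intended for real symmetric (or Hermitian) matrices: it returns the
# real eigenvalues e_j in ascending order and the eigenvectors as columns of X.
e, X = np.linalg.eigh(A)
E = np.diag(e)                       # the diagonal matrix E of equation (10.3)

# Equation (10.1): AX = XE, i.e. each column satisfies A x_j = e_j x_j.
print(np.allclose(A @ X, X @ E))     # expected: True
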
                        By virtue of the orthogonality, we have
                                              X^T X = 1_n                   (10.4)
                       where 1_n is the unit matrix of order n, but because X is composed of n
                       orthogonal and non-null vectors, it is of full rank and invertible. Thus from
                       equation (10.4) by left multiplication with X we get
                                              XX^T X = X                    (10.5)
                       so that right multiplication with X^{-1} gives
                                   XX^T XX^{-1} = XX^T = XX^{-1} = 1_n      (10.6)
                       showing that X^T is the inverse X^{-1} and that X is an orthogonal matrix.
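                       The orthogonality property can also be confirmed numerically. The sketch below
                       (again an added illustration, not from the original text) repeats the set-up of
                       the previous example and checks equations (10.4) and (10.6) directly.

import numpy as np

# Same set-up as the previous sketch: a small real symmetric matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B + B.T
e, X = np.linalg.eigh(A)

# Equations (10.4) and (10.6): X^T X = 1_n and X X^T = 1_n, so the transpose
# of X serves as its inverse (X is an orthogonal matrix).
I_n = np.eye(X.shape[0])
print(np.allclose(X.T @ X, I_n))              # expected: True
print(np.allclose(X @ X.T, I_n))              # expected: True

# Consequently A can be reconstructed as X E X^T without ever forming X^{-1}.
print(np.allclose(X @ np.diag(e) @ X.T, A))   # expected: True
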
                       (iii) If the matrix A is not only symmetric, so
                                              A^T = A                       (10.7)
                       but also positive definite (see §7.1, p 71), then from the singular-value
                       decomposition
                                              A = USV^T                     (2.53)