Page 292 - Advanced engineering mathematics

272    CHAPTER 9  Eigenvalues, Diagonalization, and Special Matrices

$$E^t E = \begin{pmatrix} e_1 & e_2 & \cdots & e_n \end{pmatrix} \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix} = \sum_{j=1}^{n} e_j e_j = \sum_{j=1}^{n} |e_j|^2.$$
                                 Therefore the conclusion of Lemma 9.1 can be written
$$\lambda = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}\, e_i e_j}{\sum_{j=1}^{n} |e_j|^2}.$$
Proof of Lemma 9.1  Since $AE = \lambda E$, then
$$E^t A E = \lambda E^t E,$$
yielding the conclusion of the lemma.
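The quotient in Lemma 9.1 can be checked numerically. The sketch below uses a small hand-picked matrix and eigenpair of our own (not taken from the text's examples): for $A$ as given, $E = (1, 1)^t$ is an eigenvector with eigenvalue $3$, and the quotient $E^t A E / E^t E$ recovers that eigenvalue.

```python
# Numerical check of Lemma 9.1: if A E = lambda E, then
# lambda = (E^t A E) / (E^t E).
# The matrix and eigenpair below are our own illustration.

def matvec(A, v):
    # Matrix-vector product A v.
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def dot(u, v):
    # Dot product u . v, i.e. u^t v for column vectors.
    return sum(x * y for x, y in zip(u, v))

A = [[2.0, 1.0],
     [1.0, 2.0]]
E = [1.0, 1.0]   # eigenvector of A with eigenvalue 3
lam = 3.0

# Confirm A E = lam E, then recover lam from the quotient of the lemma.
assert matvec(A, E) == [lam * e for e in E]
quotient = dot(E, matvec(A, E)) / dot(E, E)
print(quotient)  # 3.0
```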
                                    When we discuss diagonalization, we will need to know if the eigenvectors of a matrix
                                 are linearly independent. The following theorem answers this question for the special case
                                 that the n eigenvalues of A are distinct (the characteristic polynomial has no repeated
                                 roots).



                           THEOREM 9.2
                                 Suppose the n × n matrix A has n distinct eigenvalues. Then A has n linearly independent
                                 eigenvectors.
                                    To illustrate, in Example 9.4, A was 2×2 and had two distinct eigenvalues. The eigenvectors
                                 produced for each eigenvalue were linearly independent.
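The $2 \times 2$ case of Theorem 9.2 can also be verified directly. The sketch below uses a matrix of our own choosing (it is not the matrix of Example 9.4): its two eigenvalues are found from the characteristic polynomial, and linear independence of the two eigenvectors amounts to a nonzero determinant of the matrix having them as columns.

```python
# Illustration of Theorem 9.2 for a 2x2 matrix with distinct
# eigenvalues (our own example, not Example 9.4 from the text).

A = [[2.0, 1.0],
     [1.0, 2.0]]
a, b = A[0]
c, d = A[1]

# Characteristic polynomial of [[a,b],[c,d]] is
# t^2 - (a+d) t + (ad - bc); solve it with the quadratic formula.
tr, det = a + d, a * d - b * c
disc = (tr * tr - 4.0 * det) ** 0.5
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0

def eigenvector(lam):
    # For a 2x2 matrix with b != 0, (b, lam - a) solves (A - lam I) v = 0.
    return [b, lam - a]

V1, V2 = eigenvector(lam1), eigenvector(lam2)

# Independence test: determinant of the matrix with columns V1, V2.
indep = V1[0] * V2[1] - V2[0] * V1[1]
print(lam1, lam2, V1, V2, indep)  # 3.0 1.0 [1.0, 1.0] [1.0, -1.0] -2.0
```

Distinct eigenvalues ($3 \neq 1$) yield a nonzero determinant, so the eigenvectors are linearly independent, in agreement with the theorem.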
Proof  We will show by induction that any k distinct eigenvalues have associated with them k linearly independent eigenvectors. For k = 1 there is nothing to show. Thus suppose k ≥ 2 and the conclusion of the theorem is valid for any k − 1 distinct eigenvalues. This means that any k − 1 distinct eigenvalues have associated with them k − 1 linearly independent eigenvectors. Suppose A has k distinct eigenvalues $\lambda_1, \cdots, \lambda_k$ with corresponding eigenvectors $V_1, \cdots, V_k$. We want to show that these eigenvectors are linearly independent.
If they were linearly dependent, then there would be numbers $c_1, \cdots, c_k$, not all zero, such that
$$c_1 V_1 + c_2 V_2 + \cdots + c_k V_k = O.$$

By relabeling if necessary, we may assume for convenience that $c_1 \neq 0$. Multiply this equation by $\lambda_1 I_n - A$:
$$\begin{aligned}
O &= (\lambda_1 I_n - A)(c_1 V_1 + c_2 V_2 + \cdots + c_k V_k) \\
  &= c_1 (\lambda_1 I_n - A) V_1 + c_2 (\lambda_1 I_n - A) V_2 + \cdots + c_k (\lambda_1 I_n - A) V_k \\
  &= c_1 (\lambda_1 V_1 - \lambda_1 V_1) + c_2 (\lambda_1 V_2 - \lambda_2 V_2) + \cdots + c_k (\lambda_1 V_k - \lambda_k V_k) \\
  &= c_2 (\lambda_1 - \lambda_2) V_2 + \cdots + c_k (\lambda_1 - \lambda_k) V_k.
\end{aligned}$$
Now $V_2, \cdots, V_k$ are linearly independent by the inductive hypothesis, so these coefficients are all zero. But $\lambda_1 \neq \lambda_j$ for $j = 2, \cdots, k$ by the assumption that the eigenvalues are distinct. Therefore
$$c_2 = \cdots = c_k = 0.$$




                      Copyright 2010 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s).
                      Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it.
