
Then the QR method converges; that is, the lower triangular part of $A_k$ converges to
\[
\begin{pmatrix}
\lambda_1 &           &        &           \\
0         & \lambda_2 &        &           \\
\vdots    & \ddots    & \ddots &           \\
0         & \cdots    & 0      & \lambda_n
\end{pmatrix}.
\]
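To make the iteration behind this statement concrete, here is a minimal numerical sketch of the unshifted QR method in Python/NumPy. The test matrix, the iteration count, and the printed comparison are illustrative choices, not part of the theorem; the matrix was simply picked to be irreducible Hessenberg with eigenvalues of distinct moduli.

```python
import numpy as np

def qr_method(A, iterations=200):
    """Unshifted QR iteration: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k.

    Under the hypotheses of the theorem (irreducible Hessenberg form,
    eigenvalues of pairwise distinct moduli, Y admitting an LU factorization),
    the strictly lower triangular part tends to 0 and the diagonal tends to
    (lambda_1, ..., lambda_n).
    """
    Ak = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q          # similar to A_k, since R Q = Q^* (Q R) Q = Q^* A_k Q
    return Ak

# Illustrative irreducible Hessenberg matrix (nonzero subdiagonal entries).
A0 = np.array([[4.0, 1.0, 2.0],
               [1.0, 3.0, 0.5],
               [0.0, 0.7, 1.0]])
Ak = qr_method(A0)
print(np.round(Ak, 6))                        # nearly upper triangular
print(np.sort(np.linalg.eigvals(A0))[::-1])   # compare with the diagonal of Ak
```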
Proof
In the light of Theorem 10.2.1, it is enough to show that the matrix $Y$ in the previous proof admits an LU factorization. We have $YA = \mathrm{diag}(\lambda_1,\dots,\lambda_n)Y$. The rows of $Y$ are thus the left eigenvectors: $l_j A = \lambda_j l_j$.
If $x \in \mathbb{C}^n$ is nonzero, there exists a unique index $r$ such that $x_r \neq 0$ while $x_j = 0$ for every $j > r$. By induction, using the Hessenberg form and the irreducibility of $A$, we obtain $(A^m x)_{r+m} \neq 0$, while $j > r+m$ implies $(A^m x)_j = 0$. Hence the vectors $x, Ax, \dots, A^{n-r}x$ are linearly independent. A linear subspace that is stable under $A$ and contains $x$ is thus of dimension greater than or equal to $n - r + 1$.
Let $F$ be a linear subspace, stable under $A$, of dimension $p \ge 1$. Let $r$ be the smallest integer such that $F$ contains a nonzero vector $x$ with $x_{r+1} = \cdots = x_n = 0$. The minimality of $r$ implies that $x_r \neq 0$. Hence, we have $p \ge n - r + 1$. By construction, the intersection of $F$ with the linear subspace $[e^1,\dots,e^{r-1}]$ spanned by $e^1,\dots,e^{r-1}$ reduces to $\{0\}$. Thus we also have $p + (r-1) \le n$. Finally, $r = n - p + 1$, and we see that
\[
F \oplus [e^1,\dots,e^{n-p}] = \mathbb{C}^n.
\]
Let us choose $F = [l_1,\dots,l_q]^\perp$, which is stable under $A$. Then $p = n - q$, and we have
\[
[l_1,\dots,l_q]^\perp \oplus [e^1,\dots,e^q] = \mathbb{C}^n.
\]
This amounts to saying that $\det\bigl(l_j e^k\bigr)_{1\le j,k\le q} \neq 0$. In other words, the leading principal minor of order $q$ of $Y$ is nonzero. From Theorem 8.1.1, $Y$ admits an LU factorization.
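This step can also be checked numerically on a small example: with the eigenvalues ordered by decreasing modulus, the matrix $Y$ of left eigenvectors has nonzero leading principal minors. The sketch below reuses the illustrative Hessenberg matrix introduced earlier; it is an arbitrary test case, not taken from the text.

```python
import numpy as np

# Illustrative irreducible Hessenberg matrix with eigenvalues of distinct moduli.
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.7, 1.0]])

# Left eigenvectors: rows l_j with l_j A = lambda_j l_j, i.e. eigenvectors of A^T.
lam, V = np.linalg.eig(A.T)
order = np.argsort(-np.abs(lam))   # decreasing modulus, as in the theorem
Y = V[:, order].T                  # row j of Y is l_j

# All leading principal minors nonzero <=> Y admits an LU factorization.
for q in range(1, Y.shape[0] + 1):
    print(q, np.linalg.det(Y[:q, :q]))
```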
Corollary 10.2.1 If $A \in \mathbf{HPD}_n$ and if $A_0$ is a Hessenberg matrix, unitarily similar to $A$ (for example, a matrix obtained by Householder's method), then the sequence $A_k$ defined by the QR method converges to a diagonal matrix whose diagonal entries are the eigenvalues of $A$.
Indeed, $A_0$ is block-diagonal with irreducible diagonal blocks. We are thus reduced to the case of a Hermitian positive definite, tridiagonal, irreducible matrix. Such a matrix satisfies the hypotheses of Theorem 10.2.2. The lower triangular part converges, hence so does the whole matrix, since it is Hermitian.
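A minimal numerical illustration of the corollary, assuming SciPy's `scipy.linalg.hessenberg` is available for the Householder reduction; the random symmetric positive definite matrix and the iteration count are arbitrary choices made for the example.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)        # real symmetric (Hermitian) positive definite

# Householder reduction: A0 is tridiagonal and unitarily similar to A.
A0 = hessenberg(A)

# Unshifted QR iteration on the tridiagonal matrix A0.
Ak = A0.copy()
for _ in range(300):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q

print(np.round(Ak, 6))                        # numerically diagonal
print(np.sort(np.diag(Ak))[::-1])             # diagonal entries of the limit
print(np.sort(np.linalg.eigvalsh(A))[::-1])   # eigenvalues of A, for comparison
```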
Implementing the QR method: The QR method converges faster as $\lambda_n$, or merely $\lambda_n/\lambda_{n-1}$, becomes smaller. We can obtain this situation