
8. Matrix Factorizations
We use the notation of Theorem 7.7.1. From property 1, we obtain $S = SGS$, where $S := \operatorname{diag}(s_1, \dots, s_r)$. Since $S$ is nonsingular, we obtain $G = S^{-1}$. Next, property 3 implies $SH = 0$, that is, $H = 0$. Likewise, property 4 gives $JS = 0$, that is, $J = 0$. Finally, property 2 yields $K = JSH = 0$. We see, then, that $D^\dagger$ must equal (uniqueness)
\[
\begin{pmatrix} S^{-1} & 0 \\ 0 & 0 \end{pmatrix}.
\]
One easily checks that this matrix solves our problem (existence).
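As a quick numerical sanity check of this uniqueness/existence argument, here is a minimal sketch, assuming the four defining properties are the standard Moore–Penrose conditions and using NumPy's np.linalg.pinv as the generalized inverse; the values $s_1, \dots, s_r$ and the size are purely illustrative, and $D$ is taken square for simplicity.

```python
import numpy as np

# Block-diagonal D = diag(S, 0) with S = diag(s_1, ..., s_r) nonsingular
# (D is taken square here for simplicity; the rectangular case is analogous).
s = np.array([3.0, 2.0, 0.5])      # hypothetical diagonal entries s_1, ..., s_r
r, n = len(s), 5
D = np.zeros((n, n))
D[:r, :r] = np.diag(s)

# Candidate generalized inverse: diag(S^{-1}, 0).
X = np.zeros((n, n))
X[:r, :r] = np.diag(1.0 / s)

# The four Moore-Penrose conditions (existence).
print(np.allclose(D @ X @ D, D))                # D X D = D
print(np.allclose(X @ D @ X, X))                # X D X = X
print(np.allclose((D @ X).conj().T, D @ X))     # D X is Hermitian
print(np.allclose((X @ D).conj().T, X @ D))     # X D is Hermitian

# Agreement with NumPy's pseudoinverse (uniqueness, numerically).
print(np.allclose(np.linalg.pinv(D), X))
```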
Some obvious properties are stated in the following proposition. We warn the reader that, contrary to what happens for the standard inverse, the generalized inverse of $AB$ does not need to be equal to $B^\dagger A^\dagger$.
Proposition 8.4.1 The following equalities hold for the generalized inverse:
\[
(\lambda A)^\dagger = \frac{1}{\lambda}\, A^\dagger \quad (\lambda \neq 0), \qquad \bigl(A^\dagger\bigr)^\dagger = A, \qquad \bigl(A^*\bigr)^\dagger = \bigl(A^\dagger\bigr)^*.
\]
If $A \in \mathbf{GL}_n(\mathbb{C})$, then $A^\dagger = A^{-1}$.
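A short numerical sketch of these identities, and of the warning preceding the proposition, again treating the generalized inverse as the Moore–Penrose pseudoinverse computed by NumPy's np.linalg.pinv; the random test matrix and the tiny counterexample are of course just illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
pinv = np.linalg.pinv

# A generic complex rectangular matrix and a nonzero scalar.
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
lam = 2.0 - 1.0j

# The three identities of Proposition 8.4.1.
print(np.allclose(pinv(lam * A), pinv(A) / lam))         # (lambda A)^† = (1/lambda) A^†
print(np.allclose(pinv(pinv(A)), A))                     # (A^†)^† = A
print(np.allclose(pinv(A.conj().T), pinv(A).conj().T))   # (A*)^† = (A^†)*

# For a square invertible matrix, the generalized inverse is the usual inverse.
M = rng.standard_normal((3, 3))
print(np.allclose(pinv(M), np.linalg.inv(M)))

# Warning: (AB)^† need not equal B^† A^†.  A tiny counterexample:
A2 = np.array([[1.0, 1.0]])        # 1 x 2
B2 = np.array([[1.0], [0.0]])      # 2 x 1
print(pinv(A2 @ B2), pinv(B2) @ pinv(A2))   # [[1.]] versus [[0.5]]
```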
Since $(AA^\dagger)^2 = AA^\dagger$, the matrix $AA^\dagger$ is a projector, which can therefore be described in terms of its range and kernel. Since $AA^\dagger$ is Hermitian, these subspaces are orthogonal to each other. Obviously, $R(AA^\dagger) \subset R(A)$. But since $AA^\dagger A = A$, the reverse inclusion holds too. Finally, we have
\[
R(AA^\dagger) = R(A),
\]
and $AA^\dagger$ is the orthogonal projector onto $R(A)$. Likewise, $A^\dagger A$ is an orthogonal projector. Obviously, $\ker A \subset \ker A^\dagger A$, while the identity $AA^\dagger A = A$ implies the reverse inclusion, so that
\[
\ker A^\dagger A = \ker A.
\]
Finally, $A^\dagger A$ is the orthogonal projector onto $(\ker A)^\perp$.
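The following sketch (illustrative only; same NumPy conventions as above, with a hypothetical rank-deficient test matrix) checks numerically that $AA^\dagger$ and $A^\dagger A$ are Hermitian idempotents, that $AA^\dagger$ fixes $R(A)$, and that $A^\dagger A$ annihilates $\ker A$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5 x 4, rank <= 3

P = A @ np.linalg.pinv(A)   # candidate orthogonal projector onto R(A)
Q = np.linalg.pinv(A) @ A   # candidate orthogonal projector onto (ker A)^perp

# Both are idempotent and Hermitian.
print(np.allclose(P @ P, P), np.allclose(P, P.conj().T))
print(np.allclose(Q @ Q, Q), np.allclose(Q, Q.conj().T))

# P fixes the range of A: P(Ax) = Ax for every x.
x = rng.standard_normal(4)
print(np.allclose(P @ (A @ x), A @ x))

# Q annihilates ker A: the trailing right singular vectors span the kernel.
_, sing, Vh = np.linalg.svd(A)
rank = int(np.sum(sing > 1e-10))
Z = Vh[rank:].conj().T      # columns form a basis of ker A
print(np.allclose(Q @ Z, 0))
```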
8.4.1 Solutions of the General Linear System

Given a matrix $M \in M_{n \times m}(\mathbb{C})$ and a vector $b \in \mathbb{C}^n$, let us consider the linear system
\[
Mx = b. \tag{8.2}
\]
In (8.2), the matrix $M$ need not be square, nor even of full rank. From property 1, a necessary condition for the solvability of (8.2) is $MM^\dagger b = b$. Obviously, this is also sufficient, since it ensures that $x_0 := M^\dagger b$ is a solution. Hence, the generalized inverse plays one of the roles of the standard inverse, namely to provide one solution of (8.2) when it is solvable. To catch every solution of that system, it remains to solve the homogeneous system $Mx = 0$.
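A minimal numerical illustration of this solvability criterion; the shapes and right-hand sides below are hypothetical, and np.linalg.pinv again plays the role of $M^\dagger$.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 3))            # more equations than unknowns
Mp = np.linalg.pinv(M)

# A consistent right-hand side, i.e. b in R(M).
b_ok = M @ rng.standard_normal(3)
print(np.allclose(M @ Mp @ b_ok, b_ok))    # criterion M M^† b = b holds
x0 = Mp @ b_ok
print(np.allclose(M @ x0, b_ok))           # x_0 = M^† b indeed solves (8.2)

# A generic b is not in R(M): the criterion fails and the system has no solution.
b_bad = rng.standard_normal(5)
print(np.allclose(M @ Mp @ b_bad, b_bad))  # False (with probability one)
```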