Page 428 - Matrix Analysis & Applied Linear Algebra
424 Chapter 5 Norms, Inner Products, and Orthogonality
not be a continuous function of the entries of A. For example,
$$
A(x) = \begin{pmatrix} 1 & 0 \\ 0 & x \end{pmatrix}
\implies
A^{\dagger}(x) = \begin{cases}
\begin{pmatrix} 1 & 0 \\ 0 & 1/x \end{pmatrix} & \text{for } x \neq 0, \\[6pt]
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} & \text{for } x = 0.
\end{cases}
$$
Not only is $A^{\dagger}(x)$ discontinuous in the sense that $\lim_{x \to 0} A^{\dagger}(x) \neq A^{\dagger}(0)$, but it is discontinuous in the worst way because as $A(x)$ comes closer to $A(0)$ the matrix $A^{\dagger}(x)$ moves farther away from $A^{\dagger}(0)$. This type of behavior translates into insurmountable computational difficulties because small errors due to round-off (or anything else) can produce enormous errors in the computed $A^{\dagger}$, and as errors in $A$ become smaller the resulting errors in $A^{\dagger}$ can become greater. This diabolical fact is also true for the Drazin inverse (p. 399). The inherent numerical problems, coupled with the fact that it's extremely rare for an application to require explicit knowledge of the entries of $A^{\dagger}$ or $A^{D}$, constrain them to being theoretical or notational tools. But don't underestimate this role: go back and read Laplace's statement quoted in the footnote on p. 81.
Example 5.12.6
Another way to view the URV or SVD factorizations in relation to the Moore–Penrose inverse is to consider $A_{/R(A^T)}$ and $A^{\dagger}_{/R(A)}$, the restrictions of $A$ and $A^{\dagger}$ to $R(A^T)$ and $R(A)$, respectively. Begin by making the straightforward observations that $R(A^{\dagger}) = R(A^T)$ and $N(A^{\dagger}) = N(A^T)$ (Exercise 5.12.16). Since $\Re^n = R(A^T) \oplus N(A)$ and $\Re^m = R(A) \oplus N(A^T)$, it follows that

$$
R(A) = A(\Re^n) = A\bigl(R(A^T)\bigr)
\quad\text{and}\quad
R(A^T) = R(A^{\dagger}) = A^{\dagger}(\Re^m) = A^{\dagger}\bigl(R(A)\bigr).
$$

In other words, $A_{/R(A^T)}$ and $A^{\dagger}_{/R(A)}$ are linear transformations such that

$$
A_{/R(A^T)}: R(A^T) \to R(A)
\quad\text{and}\quad
A^{\dagger}_{/R(A)}: R(A) \to R(A^T).
$$
If $B = \{u_1, u_2, \ldots, u_r\}$ and $B' = \{v_1, v_2, \ldots, v_r\}$ are the first $r$ columns from $U = \bigl(U_1 \mid U_2\bigr)$ and $V = \bigl(V_1 \mid V_2\bigr)$ in (5.11.11), then $AV_1 = U_1 C$ and $A^{\dagger} U_1 = V_1 C^{-1}$ implies (recall (4.7.4)) that

$$
\bigl[A_{/R(A^T)}\bigr]_{BB'} = C
\quad\text{and}\quad
\bigl[A^{\dagger}_{/R(A)}\bigr]_{B'B} = C^{-1}. \tag{5.12.19}
$$
If left-hand and right-hand singular vectors from the SVD (5.12.2) are used in $B$ and $B'$, respectively, then $C = D = \mathrm{diag}\,(\sigma_1, \ldots, \sigma_r)$. Thus (5.12.19) reveals the exact sense in which $A$ and $A^{\dagger}$ are "inverses." Compare these results with the analogous statements for the Drazin inverse in Example 5.10.5 on p. 399.
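The coordinate statement (5.12.19) can be checked numerically (a sketch, not from the text: the matrix, rank threshold, and tolerance below are illustrative choices). In the singular-vector bases, the restriction of $A$ is represented by $\mathrm{diag}\,(\sigma_1,\ldots,\sigma_r)$ and the restriction of $A^{\dagger}$ by its inverse.

```python
# Sketch: verify that in the bases of right/left singular vectors,
# A restricted to R(A^T) is represented by D = diag(sigma_1..sigma_r)
# and A† restricted to R(A) is represented by D^{-1}.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank 3

U, s, Vt = np.linalg.svd(A)
r  = int(np.sum(s > 1e-10 * s[0]))     # numerical rank
U1 = U[:, :r]                          # left singular vectors: basis of R(A)
V1 = Vt[:r, :].T                       # right singular vectors: basis of R(A^T)
sr = s[:r]

Ap = np.linalg.pinv(A, rcond=1e-10)    # Moore-Penrose inverse (tiny sigmas cut)

C  = U1.T @ A  @ V1                    # coordinate matrix of A  restricted
Ci = V1.T @ Ap @ U1                    # coordinate matrix of A† restricted
print(np.allclose(C,  np.diag(sr)))        # True
print(np.allclose(Ci, np.diag(1.0 / sr)))  # True
```

This is the computational counterpart of (5.12.19): each restricted map acts diagonally on the singular-vector coordinates, scaling by $\sigma_i$ in one direction and by $1/\sigma_i$ in the other.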