5.7 Orthogonal Reduction
We now have four different ways to reduce a matrix to an upper-triangular
(or trapezoidal) form: (1) Gaussian elimination; (2) the Gram–Schmidt procedure;
(3) Householder reduction; and (4) Givens reduction. It’s natural to try to compare
them and to sort out the advantages and disadvantages of each.
First consider numerical stability. This is a complicated issue, but you can
nevertheless gain an intuitive feel for the situation by considering the effect of
applying a sequence of “elementary reduction” matrices to a small perturbation
of A. Let E bea matrix such that E is small relative to A (the
F F
Frobenius norm was introduced on p. 279), and consider
P k ··· P 2 P 1 (A + E)=(P k ··· P 2 P 1 A)+(P k ··· P 2 P 1 E)= PA + PE.
If each $P_i$ is an orthogonal matrix, then the product $P = P_k \cdots P_2 P_1$ is also an
orthogonal matrix (Exercise 5.6.5), and consequently $\|PE\|_F = \|E\|_F$ (Exercise
5.6.9). In other words, a sequence of orthogonal transformations cannot magnify
the magnitude of E, and you might think of E as representing the effects of
roundoff error. This suggests that Householder and Givens reductions should be
numerically stable algorithms. On the other hand, if the $P_i$'s are elementary
matrices of Type I, II, or III, then the product $P = P_k \cdots P_2 P_1$ can be any
nonsingular matrix—recall (3.9.3). Nonsingular matrices are not generally norm
preserving (i.e., it is possible that $\|PE\|_F > \|E\|_F$), so the possibility of $E$
being magnified is generally present in elimination methods, and this suggests
the possibility of numerical instability.
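To make the contrast concrete, here is a minimal numerical sketch (not from the text, using NumPy rather than the book's notation): an orthogonal matrix leaves $\|E\|_F$ unchanged, whereas a product of Type III elementary matrices with large multipliers can magnify it by orders of magnitude.

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
E = 1e-8 * rng.standard_normal((n, n))   # a small perturbation (roundoff-sized)

# An orthogonal P, obtained from a QR factorization of a random matrix
P_orth, _ = np.linalg.qr(rng.standard_normal((n, n)))

# A product of elementary Type III matrices (row additions) with large
# multipliers, mimicking elimination steps with no pivoting safeguards
P_elim = np.eye(n)
for i in range(1, n):
    T = np.eye(n)
    T[i, 0] = 1.0e4                      # large multiplier
    P_elim = T @ P_elim

fro = lambda M: np.linalg.norm(M, 'fro')
print(fro(P_orth @ E) / fro(E))          # equals 1 up to roundoff
print(fro(P_elim @ E) / fro(E))          # far larger than 1: E is magnified
```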
Strictly speaking, an algorithm is considered to be numerically stable
if, under floating-point arithmetic, it always returns an answer that is the exact
solution of a nearby problem. To give an intuitive argument that the Householder
or Givens reduction is a stable algorithm for producing the QR factorization of
$A_{n\times n}$, suppose that $Q$ and $R$ are the exact QR factors, and suppose that
floating-point arithmetic produces an orthogonal matrix Q + E and an upper-
triangular matrix R + F that are the exact QR factors of a different matrix
$$\tilde{A} = (Q + E)(R + F) = QR + QF + ER + EF = A + QF + ER + EF.$$
If E and F account for the roundoff errors, and if their entries are small relative
to those in A, then the entries in EF are negligible, and
$$\tilde{A} \approx A + QF + ER.$$
But since $Q$ is orthogonal, $\|QF\|_F = \|F\|_F$ and $\|A\|_F = \|QR\|_F = \|R\|_F$,
and this means that neither $QF$ nor $ER$ can contain entries that are large
relative to those in $A$. Hence $\tilde{A} \approx A$, and this is what is required to conclude
that the algorithm is stable.
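As a small numerical check of this backward-stability argument (a sketch that is not part of the text), NumPy's np.linalg.qr, which computes the factorization with Householder reflections via LAPACK, returns factors whose product reproduces $A$ essentially to machine precision:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n))

Q, R = np.linalg.qr(A)   # computed (slightly perturbed) factors Q + E and R + F

backward_error = np.linalg.norm(A - Q @ R, 'fro') / np.linalg.norm(A, 'fro')
orthogonality  = np.linalg.norm(Q.T @ Q - np.eye(n), 'fro')

print(backward_error)    # on the order of machine epsilon (about 1e-15)
print(orthogonality)     # Q is orthogonal to within roundoff
```

In other words, the computed factors are the exact QR factors of a matrix $\tilde{A}$ that differs from $A$ only at the roundoff level, which is precisely the "exact solution of a nearby problem" criterion stated above.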
Gaussian elimination is not a stable algorithm because, as alluded to in §1.5,
problems arise due to the growth of the magnitude of the numbers that can occur