This process yields a very compact algorithm for inverting a matrix in place,
using only the working storage needed for a single matrix. Alas, the lack of
pivoting may be disastrous. The algorithm will not work, for instance, on the matrix
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
which is its own inverse. Pivoting causes complications. In this example, inter-
changing rows to obtain a non-zero pivot implies that the columns of the resulting
inverse are also interchanged.
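This failure is easy to demonstrate. The following is a minimal Python sketch of the in-place inversion (the function name and the zero-pivot tolerance are illustrative, not taken from the book's programs):

```python
def gauss_jordan_invert(a):
    """Invert a matrix in place by Gauss-Jordan elimination with NO
    pivoting: the only working storage is the single n x n array."""
    n = len(a)
    a = [row[:] for row in a]          # work on a copy
    for k in range(n):
        p = a[k][k]
        if abs(p) < 1e-12:             # no pivoting, so a zero
            raise ZeroDivisionError(   # diagonal element is fatal
                "zero pivot at step %d" % k)
        for i in range(n):             # eliminate outside row/column k
            for j in range(n):
                if i != k and j != k:
                    a[i][j] -= a[i][k] * a[k][j] / p
        for j in range(n):             # scale the pivot row ...
            if j != k:
                a[k][j] /= p
        for i in range(n):             # ... and the pivot column
            if i != k:
                a[i][k] = -a[i][k] / p
        a[k][k] = 1.0 / p
    return a

print(gauss_jordan_invert([[2.0, 1.0], [1.0, 1.0]]))  # [[1,-1],[-1,2]]
gauss_jordan_invert([[0.0, 1.0], [1.0, 0.0]])         # raises: zero pivot
```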
The extra work involved in column interchanges which result from partial
pivoting is avoided if the matrix is symmetric and positive definite; this special
case is treated in detail in the next section. In addition, in this case complete
pivoting becomes diagonal pivoting, which does not disorder the inverse.
Therefore algorithms such as that discussed above are widely used to perform stepwise
regression, where the pivots are chosen according to various criteria other than
error growth. Typically, since the pivot represents a new independent variable
entering the regression, we may choose the variable which most reduces the
residual sum of squares at the current stage. The particular combination of (say) m
out of n independent variables chosen by such a forward selection rule is not
necessarily the combination of m variables of the n available which gives the
smallest residual sum of squares. Furthermore, the use of a sum-of-squares and
cross-products matrix is subject to all the criticisms of such approaches to
least-squares problems outlined in chapter 5.
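To make the pivot-choice rule concrete, the sketch below applies the symmetric ("sweep") form of the Gauss-Jordan step to a sums-of-squares-and-cross-products matrix and, at each stage, sweeps the unswept variable giving the largest reduction in the residual sum of squares. The function names and the demonstration data are invented for this sketch; they are not the book's code.

```python
def sweep(a, k):
    """Symmetric Gauss-Jordan ('sweep') step on pivot k of the
    square array a, modified in place."""
    n = len(a)
    d = a[k][k]
    for i in range(n):
        for j in range(n):
            if i != k and j != k:
                a[i][j] -= a[i][k] * a[k][j] / d
    for j in range(n):                     # scale pivot row and column,
        if j != k:                         # keeping the array symmetric
            a[k][j] /= d
            a[j][k] /= d
    a[k][k] = -1.0 / d

def forward_select(sscp, nvar, nstep):
    """Forward stepwise regression on an (nvar+1)x(nvar+1) sums-of-
    squares-and-cross-products matrix; the last row/column holds the
    cross-products with the dependent variable."""
    a = [row[:] for row in sscp]
    y = nvar                               # index of dependent variable
    chosen = []
    for _ in range(nstep):
        best, gain = None, 0.0
        for k in range(nvar):              # pivot giving largest drop
            if k not in chosen and a[k][k] > 0.0:   # in a[y][y] = RSS
                g = a[k][y] ** 2 / a[k][k]
                if g > gain:
                    best, gain = k, g
        if best is None:
            break
        sweep(a, best)
        chosen.append(best)
    coeffs = {k: a[k][y] for k in chosen}  # coefficients of swept variables
    return chosen, coeffs, a[y][y]         # variables, coeffs, RSS

# Hypothetical data: variable 0 explains y, variable 1 is noise.
Z = [[1.0, 0.10, 2.1], [2.0, -0.20, 3.9],
     [3.0, 0.05, 6.2], [4.0, 0.00, 7.8]]
sscp = [[sum(r[i] * r[j] for r in Z) for j in range(3)] for i in range(3)]
print(forward_select(sscp, nvar=2, nstep=1))   # picks variable 0 first
```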
As an illustration, consider the problem given in example 7.2. A Data General
ECLIPSE operating in six hexadecimal digit arithmetic gave a solution
$$x = \begin{pmatrix} -4.64529 \times 10^{-2} \\ 1.02137 \\ -0.160467 \\ -0.285955 \\ 206.734 \end{pmatrix}$$
when the pivots were chosen according to the residual sum-of-squares reduction
criterion. The average relative difference of these solution elements from those of
solution (a) of table 3.2 is 0·79%. Complete (diagonal) pivoting for the largest
element in the remaining submatrix of the Gauss-Jordan working array gave a
solution with an average relative difference (root mean square) of 0·41%. There
are, of course, 120 pivot permutations (the 5! orderings of the five variables), and
the differences measured for each solution ranged from 0·10% to 0·79%. Thus
pivot ordering does not appear to be
a serious difficulty in this example.
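The measure is not written out in the text; taking it to be the root-mean-square of the elementwise relative differences (an assumption), it would be
$$d = \left[ \frac{1}{n} \sum_{i=1}^{n} \left( \frac{x_i - x_i^{(a)}}{x_i^{(a)}} \right)^{2} \right]^{1/2}$$
where the $x_i^{(a)}$ are the elements of solution (a) of table 3.2 and $n = 5$.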
The operations of the Gauss-Jordan algorithm are also of utility in the solution
of linear and quadratic programming problems as well as in methods derived from
such techniques (for example, minimum absolute deviation fitting). Unfortunately,
these topics, while extremely interesting, will not be discussed further in this
monograph.