then dividing through by diagonal elements. This leaves

$\mathbf{1} x = f'$    (8.5)

that is, the solution to the set of equations.
Yet another way to look at this procedure is as a series of elementary row
operations (see Wilkinson 1965) designed to replace the pth column of an n by n
matrix A with the pth column of the unit matrix of order n, that is, $e_p$. To
accomplish this, the pth row of A must be divided by $A_{pp}$, and $A_{ip}$ times the
resulting pth row subtracted from every row i for $i \neq p$. For this to be
possible, of course, $A_{pp}$ cannot be zero.
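In outline, a single such step might be sketched as follows (an illustration in Python added here; the function name gauss_jordan_step and the use of the numpy library are my own choices, not the book's):

import numpy as np

def gauss_jordan_step(A, p):
    # Replace column p of A by e_p using elementary row operations.
    # Assumes A[p, p] is nonzero; no pivoting is attempted here.
    A = np.asarray(A, dtype=float).copy()   # work on a copy of the matrix
    A[p, :] = A[p, :] / A[p, p]             # divide the pth row by the pivot A[p][p]
    for i in range(A.shape[0]):
        if i != p:
            A[i, :] = A[i, :] - A[i, p] * A[p, :]   # subtract A[i][p] times the new pth row
    return A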
A combination of n such steps can be used to solve sets of linear equations. To
avoid build-up of rounding errors, some form of pivoting must be used. This will
involve one of a variety of schemes for recording pivot positions or else will use
explicit row interchanges. There are naturally some trade-offs here between
simplicity of the algorithm and possible efficiency of execution, particularly if the
set of equations is presented so that many row exchanges are needed.
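A minimal sketch of such a scheme, using explicit row interchanges for partial pivoting (again an illustration in Python with names of my own choosing, not a program from this book), is:

import numpy as np

def gauss_jordan_solve(A, b):
    # Solve A x = b by n Gauss-Jordan steps on the augmented array [A | b],
    # with partial pivoting carried out by explicit row interchanges.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = b.size
    M = np.hstack([A, b.reshape(n, 1)])      # working array [A | b]
    for p in range(n):
        q = p + np.argmax(np.abs(M[p:, p]))  # row of largest pivot candidate in column p
        if M[q, p] == 0.0:
            raise ValueError("matrix appears to be singular")
        M[[p, q], :] = M[[q, p], :]          # explicit row interchange
        M[p, :] = M[p, :] / M[p, p]          # divide the pivot row through by the pivot
        for i in range(n):
            if i != p:
                M[i, :] = M[i, :] - M[i, p] * M[p, :]   # eliminate column p from row i
    return M[:, n]                           # A reduced to the unit matrix; last column is x

Interchanging rows explicitly keeps the code simple at the cost of some data movement; recording pivot positions instead avoids the movement but complicates the bookkeeping.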
By using the Gauss-Jordan elimination we avoid the programming needed to
perform the back-substitution required in the Gauss elimination method. The
price we pay for this is that the amount of work rises from roughly $n^3/3$
operations for a single set of equations to $n^3/2$ as n becomes large (see Ralston 1965
p 401). For small n the Gauss-Jordan algorithm may be more efficient depending
on factors such as implementation and machine architecture. In particular, it is
possible to arrange to overwrite the ith column of the working matrix with the
corresponding column of the inverse. That is, the substitution operations of
equations (8.1) and (8.4) with 1 replaced by i give elimination formulae

$\tilde{A}_{ij} = A_{ij}/A_{ii}$    (8.1a)

$\tilde{A}_{kj} = A_{kj} - A_{ki}(A_{ij}/A_{ii})$    (8.4a)

for j = 1, 2, . . . , n, k = 1, 2, . . . , n, but $k \neq i$, with the tilde representing the
transformed matrix. However, these operations replace column i with $e_i$, the ith
column of the unit matrix $\mathbf{1}_n$, information which need not be stored.
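As a small numerical illustration (an example added here, not taken from the text), let i = 1 and

$$A = \begin{pmatrix} 2 & 4 \\ 3 & 5 \end{pmatrix}.$$

Then (8.1a) gives $\tilde{A}_{11} = 2/2 = 1$ and $\tilde{A}_{12} = 4/2 = 2$, while (8.4a) gives $\tilde{A}_{21} = 3 - 3(2/2) = 0$ and $\tilde{A}_{22} = 5 - 3(4/2) = -1$, so column 1 of the transformed matrix is indeed $e_1$ and can be overwritten without loss of information.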
The right-hand side b is transformed according to

$\tilde{b}_i = b_i/A_{ii}$    (8.1b)

$\tilde{b}_k = b_k - A_{ki}(b_i/A_{ii})$    (8.3a)

for k = 1, 2, . . . , n with $k \neq i$. To determine the inverse of a matrix, we could solve
the linear-equation problems for the successive columns of $\mathbf{1}_n$. But now all
columns $e_j$ for j > i will be unaltered by (8.1b) and (8.3a). At the ith stage of the
reduction, $e_i$ can be substituted for column i of the matrix by storing the pivot $A_{ii}$,
substituting the value of 1 in this diagonal position, then performing the division
implied in (8.1a). Row i of the working matrix now contains the multipliers
$\tilde{A}_{ij} = (A_{ij}/A_{ii})$. By performing (8.4a) row-wise, each value $A_{ki}$ can be saved, a
zero substituted from $e_i$, and the elements $\tilde{A}_{kj}$, j = 1, 2, . . . , n, computed.
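The whole in-place procedure might be sketched as follows (an illustration of the ideas behind equations (8.1a), (8.4a), (8.1b) and (8.3a), not the algorithm as published in this book; the function name gauss_jordan_invert is my own, and pivoting, which a robust program needs, is omitted):

import numpy as np

def gauss_jordan_invert(A):
    # Overwrite a copy of A with its inverse, one column per stage.
    W = np.asarray(A, dtype=float).copy()
    n = W.shape[0]
    for i in range(n):
        pivot = W[i, i]                  # store the pivot A[i][i]
        if pivot == 0.0:
            raise ValueError("zero pivot encountered; pivoting would be required")
        W[i, i] = 1.0                    # substitute the 1 from e_i on the diagonal
        W[i, :] = W[i, :] / pivot        # row i now holds the multipliers A[i][j]/A[i][i]
        for k in range(n):
            if k != i:
                saved = W[k, i]          # save A[k][i]
                W[k, i] = 0.0            # substitute the zero from e_i
                W[k, :] = W[k, :] - saved * W[i, :]   # apply (8.4a) row-wise
    return W

For a well conditioned test matrix, np.allclose(gauss_jordan_invert(A) @ A, np.eye(n)) provides a quick check of the result.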