with a residual sum of squares (using KW) over the seven periods of 4·15777E–7.
The same calculation with I₂ gives a residual sum of squares of 241·112, showing
that there is not a consistent set of weights. It is, of course, possible to find a
consistent set of weights even though index numbers have been computed using a
varying set; for instance, if our price data had two elements identical in one
period, any pair of weights for these prices whose sum was fixed would generate
the same index number.
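To see why, consider an invented period in which the first two prices coincide: any redistribution of weight between those two prices that preserves their sum leaves the index value unchanged. A brief check (in Python, with made-up numbers rather than the price data of the example):

    # hypothetical prices for one period; the first two are identical
    p = [4.0, 4.0, 7.5]
    # two weight sets differing only in how the first two weights split a fixed sum
    w_a = [0.2, 0.5, 0.3]
    w_b = [0.6, 0.1, 0.3]
    print(sum(w * q for w, q in zip(w_a, p)))   # 5.05
    print(sum(w * q for w, q in zip(w_b, p)))   # 5.05 again: the weights are not unique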
6.3. VARIATIONS ON THE THEME OF GAUSS ELIMINATION
Gauss elimination really presents only one type of difficulty to the programmer:
which of the many possible variations to implement. We have already touched
upon the existence of two of the better known ones, those of Crout and Doolittle
(see Dahlquist and Björck 1974, pp 157–8). While these methods are useful and
important, they require double-precision arithmetic to be used to full advantage,
so cannot be used effectively if the computing system at hand lacks this capability.
Bowdler et al (1966) present ALGOL versions of the Crout algorithm which
implicitly scale the rows of the coefficient matrix. These algorithms are compli-
cated by ALGOL's lack of double-length arithmetic, necessitating calls to machine
code procedures. (This comment applies to ALGOL-60, not ALGOL-68.)
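The flavour of implicit row scaling, that is, choosing pivots as though the rows had been equilibrated without ever multiplying them through, can be conveyed by the following sketch. It is written in Python purely for illustration, is not the algorithm of Bowdler et al, and omits their singularity tests and double-length accumulation of inner products.

    import numpy as np

    def gauss_solve_scaled(A, b):
        # Gauss elimination with implicitly scaled partial pivoting: pivot
        # candidates in column k are compared as |A[i,k]| / s[i], where s[i]
        # is the largest magnitude in row i; no row is actually rescaled.
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        n = b.size
        s = np.max(np.abs(A), axis=1)             # row scale factors
        for k in range(n - 1):
            p = k + int(np.argmax(np.abs(A[k:, k]) / s[k:]))
            if p != k:                            # interchange rows and scales
                A[[k, p]] = A[[p, k]]
                b[[k, p]] = b[[p, k]]
                s[[k, p]] = s[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)                           # back-substitution
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    # a hypothetical badly scaled system whose solution is (1, 1, 1)
    A = [[2.0, 1.0e5, 1.0], [1.0, 3.0, 2.0], [4.0, 1.0, 5.0]]
    b = [1.0e5 + 3.0, 6.0, 10.0]
    print(gauss_solve_scaled(A, b))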
By and large I have avoided scaling within my programs because of the great
difficulty of making any reliable general recommendations. Indeed, given any two
non-singular diagonal matrices D and E, the system of equations
DAE E⁻¹ x = D b                                                  (6.33)
has the same solution x as the equations
Ax = b. (2.2)
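A quick numerical check of this equivalence may be reassuring; the matrices D, E, A and the vector b below are invented solely for illustration.

    import numpy as np

    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([9.0, 8.0])
    D = np.diag([10.0, 0.1])       # arbitrary non-singular row scaling
    E = np.diag([0.5, 4.0])        # arbitrary non-singular column scaling

    x_direct = np.linalg.solve(A, b)              # solve  A x = b
    y = np.linalg.solve(D @ A @ E, D @ b)         # solve  (D A E) y = D b
    x_scaled = E @ y                              # recover x = E y

    print(np.allclose(x_direct, x_scaled))        # True: the same solution x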
In scaling the equations by row multiplication we are adjusting D, which adjusts
the pivot selection. It is often recommended that the scaling factors in D be
chosen to equilibrate the matrix A, that is, so that
maxⱼ |(DA)ᵢⱼ| = 1        for i = 1, 2, . . . , n                 (6.34)
where for the moment E is assumed to be a unit matrix. This is simply a dose of
common sense which attempts to avoid arithmetic involving numbers widely
different in magnitude.
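In practical terms, with E a unit matrix, (6.34) amounts to dividing each row of A by its element of largest magnitude, so the diagonal entries of D are simply the reciprocals of the row maxima. A brief sketch, again with invented values:

    import numpy as np

    A = np.array([[2.0, 3.0e4, 1.0],
                  [4.0,   5.0, 6.0],
                  [0.1,   0.2, 0.05]])

    d = 1.0 / np.max(np.abs(A), axis=1)    # d_i = 1 / max_j |A_ij|
    DA = np.diag(d) @ A

    print(np.max(np.abs(DA), axis=1))      # each row maximum is now 1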
However, as Dahlquist and Björck (1974, pp 181–3) point out, the scaling E⁻¹ of
the solution x can frustrate our efforts to stabilise the computation. Furthermore,
optimal scaling depends on knowledge of the matrix A⁻¹, which is not known.
They therefore suggest E be chosen to reflect ‘the
importance of the unknowns’. This statement is suitably amorphous to cover
whatever situations arise, so I shall venture the opinion that the magnitudes of the
solution elements
y = E⁻¹ x                                                        (6.35)
should be roughly equivalent. That is to say, the variables in the problem at hand
should be measured in units which give the expected solution elements