Page 98 - Compact Numerical Methods For Computers
we should set L_{im} = 0 for i > m. This is a relatively trivial modification to the
decomposition algorithm. L_{mm} is, of course, found as the square root of a quantity
which arises by subtraction, and in practice this quantity may be slightly negative
due to rounding error. One policy, adopted here, is to proceed with the
decomposition, setting all L_{im} = 0 for i > m even if this quantity is negative, thus
assuming the matrix A is non-negative definite. Since there may be very good
reasons for presupposing A to be non-negative definite, for instance if it has been
formed as a sum-of-squares and cross-products matrix in a least-squares regres-
sion calculation, this is not as dangerous as it appears. Furthermore the decision
to continue the decomposition in §7.1 when the computed square of L_{mm} is
positive, rather than greater than some tolerance for zero, has been made after
the following considerations.
(i) The decomposition is valid to the precision available in the arithmetic being
used. When zero diagonal elements arise in L they reflect linear dependencies in
the set of equations for which a solution is to be found, and any of the infinity of
solutions which exist is taken to be acceptable. However, in recognition of the
possibility that there may only be a near-linear dependence, it does not seem wise
to presume a small number is zero, since the Choleski decomposition, unlike the
singular-value decomposition (algorithm 1), does not allow the user to decide at
the time solutions are computed which small numbers are to be assumed zero.
(ii) The size of the computed square of L_{mm} is dependent on the scale of the
matrix A. Unless the tolerance for zero is proportional to a norm of A, its
application has an effect which is not consistent from one problem to another.
If the computed square of L_{mm} is non-positive, the mth column of L is
therefore set to zero

   L_{im} = 0   for i = m, (m + 1), . . . , n.                (7.29)
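The policy above can be sketched in Python with NumPy. This is a minimal illustration, not the book's own code; the function name choleski_nnd is an invention for this example, and the tolerance-free test (s <= 0) follows the decision discussed in (i) and (ii) to continue whenever the computed square is positive.

```python
import numpy as np

def choleski_nnd(A):
    """Choleski decomposition tolerating a singular, non-negative
    definite A: whenever the computed square of L[m, m] is non-positive,
    the whole mth column of L is set to zero, as in equation (7.29)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for m in range(n):
        # Quantity arising by subtraction; it may be slightly negative
        # through rounding error even when A is non-negative definite.
        s = A[m, m] - np.dot(L[m, :m], L[m, :m])
        if s <= 0.0:
            L[m:, m] = 0.0   # zero the column and carry on
            continue
        L[m, m] = np.sqrt(s)
        for i in range(m + 1, n):
            L[i, m] = (A[i, m] - np.dot(L[i, :m], L[m, :m])) / L[m, m]
    return L
```

Applied to a rank-one sum-of-squares matrix such as A = bb^T with b = (1, 2, 3)^T, the first computed square after column 0 is exactly zero, so columns 1 and 2 of L are zeroed, and L L^T still reproduces A.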
The forward- and back-substitutions to solve the linear equations
Ax = b (2.2)
are unfortunately not now possible since division by zero occurs. If, however, the
equations are consistent, that is, if b belongs to the column space of A, at least one
solution x exists (see, for example, Finkbeiner 1966, p 98).
Consider the forward-substitution to solve
Lv = b (7.30)
for v. If v_k is set to zero whenever L_{kk} = 0, then solutions of

   L^T x = v                                                  (7.31)

are solutions to (2.2) for arbitrary values of those x_k for which L_{kk} = 0. This is, of
course, only possible if
   b_k − Σ_{j=1}^{k−1} L_{kj} v_j = 0   for every k such that L_{kk} = 0    (7.32)
which is another way of stating the requirement of consistency for the equations.
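The modified forward- and back-substitutions can be sketched as follows. This is an illustrative implementation, not the book's algorithm as published; the name solve_choleski_singular and the tol parameter for the consistency test are assumptions of this sketch.

```python
import numpy as np

def solve_choleski_singular(L, b, tol=0.0):
    """Solve L L^T x = b when some diagonal elements of L are zero.
    Forward substitution sets v[k] = 0 whenever L[k, k] = 0; the
    residual r must then itself vanish (the consistency requirement
    of equation (7.32)). Back-substitution likewise takes x[k] = 0
    for those k, selecting one of the infinity of solutions."""
    n = L.shape[0]
    v = np.zeros(n)
    for k in range(n):
        r = b[k] - np.dot(L[k, :k], v[:k])
        if L[k, k] != 0.0:
            v[k] = r / L[k, k]
        elif abs(r) > tol:
            raise ValueError("equations are inconsistent: b is not "
                             "in the column space of A")
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        # Row k of L^T x = v reads L[k,k]*x[k] + sum_{i>k} L[i,k]*x[i].
        r = v[k] - np.dot(L[k + 1:, k], x[k + 1:])
        if L[k, k] != 0.0:
            x[k] = r / L[k, k]
        # else x[k] remains 0: an arbitrary choice, as the text notes
    return x
```

When b lies in the column space of A the residual at each zero pivot vanishes exactly (up to rounding) and a valid solution is returned; otherwise the consistency test fails and no solution exists.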