Compact Numerical Methods for Computers
Singular-value decomposition, and use in least-squares problems
3.2. A SINGULAR-VALUE DECOMPOSITION ALGORITHM
It may seem odd that the first algorithm to be described in this work is designed to
compute the singular-value decomposition (svd) of a matrix. Such computations are
topics well to the back of most books on numerical linear algebra. However, it was
the algorithm below which first interested the author in the capabilities of small
computers. Moreover, while the svd is somewhat of a sledgehammer method for
many nutshell problems, its versatility in finding the eigensolutions of a real
symmetric matrix, in solving sets of simultaneous linear equations or in computing
minimum-length solutions to least-squares problems makes it a valuable building
block in programs used to tackle a variety of real problems.
This versatility has been exploited in a single small program suite of approximately
300 lines of BASIC code to carry out the above problems as well as to find inverses
and generalised inverses of matrices and to solve nonlinear least-squares problems
(Nash 1984b, 1985).
The mathematical problem of the svd has already been stated in §2.5. However,
for computational purposes, an alternative viewpoint is more useful. This considers
the possibility of finding an orthogonal matrix V, n by n, which transforms the
real m by n matrix A into another real m by n matrix B whose columns are
orthogonal. That is, it is desired to find V such that
B = AV = (b_1, b_2, . . . , b_n)                              (3.1)
where

b_i^T b_j = S_i^2 δ_ij                                        (3.2)

and

V^T V = V V^T = 1_n .                                         (3.3)

The Kronecker delta takes values

δ_ij = { 1 for i = j
       { 0 for i ≠ j.                                         (3.4)
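The elementary step by which such a V can be built up is a plane rotation chosen to make one pair of columns orthogonal; one-sided Jacobi methods for the svd repeat this step over all column pairs. The sketch below is a minimal numerical illustration of that single step (the function name and the data are invented for the example; this is not the book's BASIC suite). The classical Jacobi choice of angle satisfies tan(2θ) = 2 x·y / (x·x − y·y).

```python
import math

def orthogonalize_pair(x, y):
    """Rotate the column pair (x, y) through an angle theta chosen so that
    the rotated columns are orthogonal; returns (b1, b2), the columns of AV
    for a single 2-by-2 plane rotation V."""
    p = 2.0 * sum(xi * yi for xi, yi in zip(x, y))            # 2 x.y
    q = sum(xi * xi for xi in x) - sum(yi * yi for yi in y)   # x.x - y.y
    theta = 0.5 * math.atan2(p, q)                            # tan(2t) = p/q
    c, s = math.cos(theta), math.sin(theta)
    b1 = [c * xi + s * yi for xi, yi in zip(x, y)]
    b2 = [-s * xi + c * yi for xi, yi in zip(x, y)]
    return b1, b2

b1, b2 = orthogonalize_pair([1.0, 2.0, 3.0], [2.0, 0.0, 1.0])
print(sum(u * v for u, v in zip(b1, b2)))  # essentially zero
```

Repeating this rotation over all pairs of columns, in sweeps, drives every off-diagonal inner product b_i^T b_j toward zero simultaneously.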
The quantities S_i may, as yet, be either positive or negative, since only their
square is defined by equation (3.2). They will henceforth be taken arbitrarily as
positive and will be called singular values of the matrix A. The vectors

u_j = b_j / S_j                                               (3.5)

which can be computed when none of the S_j is zero, are unit orthogonal vectors.
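Equation (3.5) can be checked with a small made-up example (the matrix here is hypothetical, not from the text): starting from a B whose columns are already orthogonal, the S_j are simply the column norms and the u_j = b_j / S_j come out orthonormal.

```python
import math

# B has orthogonal columns: (1,2,2) . (2,1,-2) = 2 + 2 - 4 = 0
B = [[1.0, 2.0], [2.0, 1.0], [2.0, -2.0]]

cols = list(zip(*B))                                   # columns b_1, b_2
S = [math.sqrt(sum(v * v for v in b)) for b in cols]   # S_j = ||b_j||
U = [[v / s for v in b] for b, s in zip(cols, S)]      # u_j = b_j / S_j (as rows)

print(S)                                               # [3.0, 3.0]
print(sum(u * v for u, v in zip(U[0], U[1])))          # essentially zero
```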
Collecting these vectors into a real m by n matrix, and the singular values into a
diagonal n by n matrix, it is possible to write
B = US (3.6)
where
U^T U = 1_n                                                   (3.7)

is a unit matrix of order n.
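Putting (3.1), (3.5) and (3.6) together gives the familiar factorization A = U S V^T. The toy round trip below (illustrative only; a single 2-by-2 plane rotation is the simplest orthogonal V, and the data are invented) forms B = AV, extracts S and U, and then reconstructs A from the three factors.

```python
import math

A = [[1.0, 2.0], [2.0, 0.0], [3.0, 1.0]]   # m = 3, n = 2 (made-up data)

# Plane rotation V chosen so the columns of B = AV are orthogonal
x = [row[0] for row in A]
y = [row[1] for row in A]
p = 2.0 * sum(xi * yi for xi, yi in zip(x, y))
q = sum(xi * xi for xi in x) - sum(yi * yi for yi in y)
t = 0.5 * math.atan2(p, q)
c, s = math.cos(t), math.sin(t)
V = [[c, -s], [s, c]]                       # orthogonal: V^T V = 1, eq. (3.3)

B = [[sum(A[i][k] * V[k][j] for k in range(2)) for j in range(2)]
     for i in range(3)]                     # B = AV, equation (3.1)
S = [math.sqrt(sum(B[i][j] ** 2 for i in range(3))) for j in range(2)]
U = [[B[i][j] / S[j] for j in range(2)] for i in range(3)]   # B = US, (3.6)

# Reconstruct A = U S V^T entry by entry and compare with the original
A_rec = [[sum(U[i][k] * S[k] * V[j][k] for k in range(2)) for j in range(2)]
         for i in range(3)]
err = max(abs(A[i][j] - A_rec[i][j]) for i in range(3) for j in range(2))
print(err < 1e-12)  # True
```

Since V V^T = 1_n, the reconstruction U S V^T = B V^T = A V V^T recovers A exactly up to rounding.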
In the case that some of the S_j are zero, equations (3.1) and (3.2) are still valid,
but the columns of U corresponding to zero singular values must now be