parameters $\boldsymbol{\beta} \in \mathbb{R}^N$,
$$
X = \begin{bmatrix}
x_1^{[1]} & x_2^{[1]} & \cdots & x_N^{[1]} \\
x_1^{[2]} & x_2^{[2]} & \cdots & x_N^{[2]} \\
\vdots & \vdots & & \vdots \\
x_1^{[M]} & x_2^{[M]} & \cdots & x_N^{[M]}
\end{bmatrix}
\qquad
y = \begin{bmatrix} y^{[1]} \\ y^{[2]} \\ \vdots \\ y^{[M]} \end{bmatrix} \in \mathbb{R}^M
\qquad
\boldsymbol{\beta} = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_N \end{bmatrix} \in \mathbb{R}^N
\tag{3.258}
$$
Using the rules of matrix multiplication, we can write the set of relationships $y^{[k]} = \beta_1 x_1^{[k]} + \beta_2 x_2^{[k]} + \cdots + \beta_N x_N^{[k]}$ as $y = X\boldsymbol{\beta}$,
$$
\begin{bmatrix} y^{[1]} \\ y^{[2]} \\ \vdots \\ y^{[M]} \end{bmatrix}
=
\begin{bmatrix}
x_1^{[1]} & x_2^{[1]} & \cdots & x_N^{[1]} \\
x_1^{[2]} & x_2^{[2]} & \cdots & x_N^{[2]} \\
\vdots & \vdots & & \vdots \\
x_1^{[M]} & x_2^{[M]} & \cdots & x_N^{[M]}
\end{bmatrix}
\begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_N \end{bmatrix}
=
\begin{bmatrix}
\beta_1 x_1^{[1]} + \beta_2 x_2^{[1]} + \cdots + \beta_N x_N^{[1]} \\
\beta_1 x_1^{[2]} + \beta_2 x_2^{[2]} + \cdots + \beta_N x_N^{[2]} \\
\vdots \\
\beta_1 x_1^{[M]} + \beta_2 x_2^{[M]} + \cdots + \beta_N x_N^{[M]}
\end{bmatrix}
\tag{3.259}
$$
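As a quick concrete check of this matrix form, the following MATLAB sketch (with small, made-up numbers; the variable names are purely illustrative) confirms that the product Xdemo*betaDemo reproduces the componentwise sums above.

% Hypothetical M = 3, N = 2 example: each row of Xdemo is one measurement x^[k].
Xdemo = [1 2; 3 4; 5 6];
betaDemo = [10; 20];
ydemo = Xdemo*betaDemo   % = [1*10 + 2*20; 3*10 + 4*20; 5*10 + 6*20] = [50; 110; 170]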
In Chapter 1, we computed the coefficients $\boldsymbol{\beta}$ of a linear model by solving the system $X^{\mathrm{T}} X \boldsymbol{\beta} = X^{\mathrm{T}} y$, obtained by premultiplying $y = X\boldsymbol{\beta}$ by $X^{\mathrm{T}}$. We also can obtain $\boldsymbol{\beta}$ using the pseudo-inverse from SVD,
$$
\boldsymbol{\beta} = V \tilde{W}^{-1} U^{\mathrm{T}} y
\tag{3.260}
$$
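A minimal MATLAB sketch of (3.260) follows, assuming a design matrix X and response vector y are already in the workspace. Reciprocals are taken only of singular values above a small tolerance, so that $\tilde{W}^{-1}$ contains zeros in place of $1/\sigma_j$ for the (numerically) zero singular values; the result agrees with MATLAB's built-in pinv(X)*y.

% Pseudo-inverse (least-squares) solution from the SVD, a sketch of (3.260).
[U, W, V] = svd(X, 'econ');       % X = U*W*V'
w = diag(W);                      % singular values sigma_j
tol = max(size(X))*eps(max(w));   % threshold for treating sigma_j as zero
winv = zeros(size(w));
winv(w > tol) = 1./w(w > tol);    % diagonal entries of W-tilde^(-1)
beta = V*diag(winv)*(U'*y);       % same estimate as pinv(X)*y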
The advantage of the SVD approach becomes evident when we do not have sufficient data to determine all coefficients $\beta_j$. Then, the right singular vectors $v^{[j]} \in \mathbb{R}^N$ with $\sigma_j = 0$ provide information about the "missing" data points $x$ that are necessary to determine all $\beta_j$. In particular, we should add new measurements $\{x^{[M+1]}, \ldots, x^{[M+P]}\}$ such that
$$
\operatorname{span}\{v^{[j]} \mid \sigma_j = 0\} \subset \operatorname{span}\{x^{[1]}, \ldots, x^{[M]}, x^{[M+1]}, \ldots, x^{[M+P]}\}
\tag{3.261}
$$
Then, the new design matrix with these P additional measurements will have no zero singular values, such that all coefficients $\beta_j$ can be estimated. Equation (3.256) tells us that the SVD solution (3.260) is the least-squares estimate that minimizes the sum of squared errors $|X\boldsymbol{\beta} - y|^2$. This subject will be discussed further in Chapter 8.
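As a hedged illustration of this diagnostic, the following sketch (assuming X is already defined) extracts the right singular vectors whose singular values are numerically zero; any new measurements should have components along these directions so that all $\beta_j$ become estimable.

% Identify right singular vectors v^[j] with sigma_j ~ 0 (a sketch).
[~, W, V] = svd(X);
w = zeros(size(X, 2), 1);
w(1:length(diag(W))) = diag(W);   % pad with zeros when M < N
tol = max(size(X))*eps(max(w));
missingDirs = V(:, w <= tol)      % directions not spanned by the existing x^[k]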
SVD in MATLAB
We wish to fit the linear model $y = \beta_0 + \beta_1\theta_1 + \beta_2\theta_2$ to the data in Table 3.1. Entering the corresponding design matrix and response vector,
X = [1 0 0; 1 1 1; 1 2 2; 1 3 3];
y = [1; 6; 11; 16];
we compute the SVD X = USV H
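A sketch of the corresponding MATLAB commands follows (the numerical output is not reproduced here). Because the $\theta_1$ and $\theta_2$ columns of this design matrix are identical, X has rank 2, so one singular value is (numerically) zero.

[U, S, V] = svd(X)   % returns U, S, V with X = U*S*V'
sigma = diag(S)      % for this X, the third singular value is essentially zero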