and (11.23) with the Cramér–Rao lower bounds defined in Section 9.2.2. In order to evaluate these lower bounds, a probability distribution of Y must be made available. Without this knowledge, however, we can still show, in Theorem 11.2, that the least squares technique leads to linear unbiased minimum-variance estimators for $\alpha$ and $\beta$; that is, among all unbiased estimators which are linear in Y, least-square estimators have minimum variance.
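As an informal illustration of this property (not part of the development in the text), the short Monte Carlo sketch below compares the sample variance of the least-squares slope estimator with that of another linear unbiased estimator of $\beta$, the two-point slope $(Y_n - Y_1)/(x_n - x_1)$. The parameter values, design points, and use of NumPy are assumptions made purely for the demonstration.

```python
import numpy as np

# Monte Carlo check of the minimum-variance property asserted by Theorem 11.2.
# All numerical values below are illustrative assumptions, not from the text.
rng = np.random.default_rng(0)
alpha, beta, sigma = 1.0, 2.0, 1.0
x = np.linspace(0.0, 10.0, 11)       # fixed design points x_1, ..., x_n
n_trials = 20_000

ls_slopes, alt_slopes = [], []
for _ in range(n_trials):
    # Simulate Y_i = alpha + beta * x_i + E_i with E_i ~ N(0, sigma^2).
    y = alpha + beta * x + rng.normal(0.0, sigma, size=x.size)

    # Least-squares estimate of (alpha, beta) in vector-matrix form.
    C = np.column_stack([np.ones_like(x), x])
    q_hat = np.linalg.solve(C.T @ C, C.T @ y)
    ls_slopes.append(q_hat[1])

    # An alternative estimator of beta that is also linear in Y and unbiased:
    # the slope through the first and last observations.
    alt_slopes.append((y[-1] - y[0]) / (x[-1] - x[0]))

print("Variance of least-squares slope:", np.var(ls_slopes))
print("Variance of two-point slope    :", np.var(alt_slopes))
# The least-squares variance should come out smaller, consistent with the theorem.
```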
Theorem 11.2: let random variable Y be defined by Equation (11.4). Given a sample $(x_1, Y_1), (x_2, Y_2), \ldots, (x_n, Y_n)$ of Y with its associated x values, least-square estimators $\hat{A}$ and $\hat{B}$ given by Equation (11.17) are minimum variance linear unbiased estimators for $\alpha$ and $\beta$, respectively.
Proof of Theorem 11.2: the proof of this important theorem is sketched
below with use of vector–matrix notation.
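For reference, a brief recap of the vector–matrix form assumed in what follows; the arrangement of $\mathbf{C}$, $\mathbf{q}$, and the least-squares estimator stated here is the standard one for the simple linear model, given only as a reminder of the notation.

\[
  \mathbf{Y} = \mathbf{C}\mathbf{q} + \mathbf{E}, \qquad
  \mathbf{C} = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}, \qquad
  \mathbf{q} = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \qquad
  \hat{\mathbf{Q}} = \begin{pmatrix} \hat{A} \\ \hat{B} \end{pmatrix}
  = (\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T\mathbf{Y},
\]

with $E\{\mathbf{E}\} = \mathbf{0}$ and $\mathrm{cov}\{\mathbf{E}\} = \sigma^2\mathbf{I}$, so that $E\{\mathbf{Y}\} = \mathbf{C}\mathbf{q}$.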
Consider a linear unbiased estimator of the form
\[
  \mathbf{Q}^* = [(\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T + \mathbf{G}]\mathbf{Y}. \tag{11.24}
\]
We thus wish to prove that $\mathbf{G} = \mathbf{0}$ if $\mathbf{Q}^*$ is to be of minimum variance.
The unbiasedness requirement leads to, in view of Equation (11.19),
\[
  \mathbf{G}\mathbf{C} = \mathbf{0}. \tag{11.25}
\]
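To spell out this step: assuming, as in the model leading to Equation (11.19), that $E\{\mathbf{Y}\} = \mathbf{C}\mathbf{q}$, unbiasedness of $\mathbf{Q}^*$ for every $\mathbf{q}$ requires

\[
  E\{\mathbf{Q}^*\}
  = [(\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T + \mathbf{G}]E\{\mathbf{Y}\}
  = \mathbf{q} + \mathbf{G}\mathbf{C}\mathbf{q} = \mathbf{q}
  \quad\text{for all } \mathbf{q},
\]

which holds only if $\mathbf{G}\mathbf{C} = \mathbf{0}$.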
Consider now the covariance matrix
\[
  \mathrm{cov}\{\mathbf{Q}^*\} = E\{(\mathbf{Q}^* - \mathbf{q})(\mathbf{Q}^* - \mathbf{q})^T\}. \tag{11.26}
\]
Upon using Equations (11.19), (11.24), and (11.25) and expanding the covari-
ance, we have
\[
  \mathrm{cov}\{\mathbf{Q}^*\} = \sigma^2[(\mathbf{C}^T\mathbf{C})^{-1} + \mathbf{G}\mathbf{G}^T].
\]
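The expansion itself is short. Writing $\mathbf{Y} = \mathbf{C}\mathbf{q} + \mathbf{E}$ with $E\{\mathbf{E}\} = \mathbf{0}$ and $\mathrm{cov}\{\mathbf{E}\} = \sigma^2\mathbf{I}$, as assumed for the model of Equation (11.4), and abbreviating $\mathbf{H} = (\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T + \mathbf{G}$, Equation (11.25) gives $\mathbf{H}\mathbf{C} = \mathbf{I}$ and hence $\mathbf{Q}^* - \mathbf{q} = \mathbf{H}\mathbf{E}$, so that

\[
  \mathrm{cov}\{\mathbf{Q}^*\} = \sigma^2\mathbf{H}\mathbf{H}^T
  = \sigma^2[(\mathbf{C}^T\mathbf{C})^{-1} + \mathbf{G}\mathbf{G}^T],
\]

the cross terms vanishing because $\mathbf{G}\mathbf{C} = \mathbf{0}$ and $\mathbf{C}^T\mathbf{G}^T = (\mathbf{G}\mathbf{C})^T = \mathbf{0}$.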
Now, in order to minimize the variances associated with the components of $\mathbf{Q}^*$, we must minimize each diagonal element of $\mathbf{G}\mathbf{G}^T$. Since the $ii$th diagonal element of $\mathbf{G}\mathbf{G}^T$ is given by
\[
  (\mathbf{G}\mathbf{G}^T)_{ii} = \sum_{j=1}^{n} g_{ij}^2,
\]
where $g_{ij}$ is the $ij$th element of $\mathbf{G}$, we must have

\[
  g_{ij} = 0, \quad \text{for all } i \text{ and } j,
\]

and we obtain

\[
  \mathbf{G} = \mathbf{0}. \tag{11.27}
\]