Then by Eqs. (7.62) and (7.63), we have
E[g_1(X)g_2(Y)] = E{E[g_1(X)g_2(Y) | X]} = E{g_1(X)E[g_2(Y) | X]}     (7.64)
Now, setting g_1(X) = g(X) and g_2(Y) = Y in Eq. (7.64), and using Eq. (7.18), we obtain
E[g(X)Y] = E[g(X)E(Y | X)] = E[g^2(X)]
Thus, the m.s. error is given by
e = E{[Y - g(X)]^2} = E(Y^2) - 2E[g(X)Y] + E[g^2(X)]
  = E(Y^2) - E[g^2(X)]
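The relation e = E(Y^2) - E[g^2(X)] can be checked numerically. The following Python sketch is illustrative only and not part of the text; the model Y = X + N, with X ~ Uniform(0, 1) and N ~ N(0, 1) independent, is an assumption chosen so that g(X) = E(Y | X) = X.

import numpy as np

# Illustrative check (assumed model, not from the text):
# Y = X + N with X ~ Uniform(0, 1), N ~ N(0, 1) independent, so E(Y | X) = X.
rng = np.random.default_rng(0)
n = 1_000_000
X = rng.uniform(0.0, 1.0, n)
N = rng.normal(0.0, 1.0, n)
Y = X + N

g = X                                             # g(X) = E(Y | X) = X for this model
mse_direct = np.mean((Y - g) ** 2)                # E{[Y - g(X)]^2}
mse_formula = np.mean(Y ** 2) - np.mean(g ** 2)   # E(Y^2) - E[g^2(X)]

print(mse_direct, mse_formula)                    # both close to Var(N) = 1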
7.19. Let Y = X^2 and X be a uniform r.v. over (-1, 1). Find the m.s. estimator of Y in terms of X and
its m.s. error.
By Eq. (7.18), the m.s. estimate of Y is given by
g(x) = E(Y | X = x) = E(X^2 | X = x) = x^2
Hence, the m.s. estimator of Y is
Ŷ = X^2
The m.s. error is
e = E{[Y - g(X)]^2} = E{[X^2 - X^2]^2} = 0
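As a quick numerical confirmation of this result (illustrative, not part of the text), the following Python sketch simulates X uniform over (-1, 1) and verifies that the estimator Ŷ = X^2 reproduces Y exactly, so the m.s. error is zero.

import numpy as np

# Illustrative check of Prob. 7.19: Y = X^2, X ~ Uniform(-1, 1).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 100_000)
Y = X ** 2
Y_hat = X ** 2                          # m.s. estimator g(x) = E(Y | X = x) = x^2
print(np.mean((Y - Y_hat) ** 2))        # prints 0.0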
LINEAR MEAN SQUARE ESTIMATION
7.20. Derive the orthogonality principle (7.21) and Eq. (7.22).
By Eq. (7.20), the m.s. error is
e(a, b) = E{[Y - (aX + b)]^2}
Clearly, the m.s. error e is a function of a and b, and it is minimum if ∂e/∂a = 0 and ∂e/∂b = 0. Now
∂e/∂a = E{2[Y - (aX + b)](-X)} = -2E{[Y - (aX + b)]X}
∂e/∂b = E{2[Y - (aX + b)](-1)} = -2E{[Y - (aX + b)]}
Setting ∂e/∂a = 0 and ∂e/∂b = 0, we obtain
E{[Y - (aX + b)]X} = 0     (7.68)
E[Y - (aX + b)] = 0        (7.69)
Note that Eq. (7.68) is the orthogonality principle (7.21).
Rearranging Eqs. (7.68) and (7.69), we get
E(X^2)a + E(X)b = E(XY)
E(X)a + b = E(Y)
Solving for a and b, we obtain Eq. (7.22); that is,
a = [E(XY) - E(X)E(Y)] / {E(X^2) - [E(X)]^2} = σ_XY / σ_X^2 = ρ_XY (σ_Y / σ_X)
b = E(Y) - aE(X) = μ_Y - aμ_X
where we have used Eqs. (2.31), (3.51), and (3.53).
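The coefficients in Eq. (7.22) can also be estimated from sample moments. The Python sketch below is illustrative and not part of the text; the data model Y = 3X + noise is an assumption, used only to show that a = σ_XY / σ_X^2 and b = μ_Y - aμ_X satisfy the orthogonality conditions (7.68) and (7.69).

import numpy as np

# Illustrative check of the linear m.s. coefficients (assumed data model).
rng = np.random.default_rng(0)
n = 500_000
X = rng.normal(1.0, 2.0, n)             # arbitrary test distribution for X
Y = 3.0 * X + rng.normal(0.0, 1.0, n)   # Y linearly related to X plus noise

# a = sigma_XY / sigma_X^2,  b = mu_Y - a * mu_X
sigma_XY = np.mean(X * Y) - np.mean(X) * np.mean(Y)
sigma_X2 = np.mean(X ** 2) - np.mean(X) ** 2
a = sigma_XY / sigma_X2
b = np.mean(Y) - a * np.mean(X)

err = Y - (a * X + b)
print(a, b)                             # close to 3 and 0 for this model
print(np.mean(err * X), np.mean(err))   # both near 0: Eqs. (7.68) and (7.69)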
7.21. Show that the m.s. error defined by Eq. (7.20) is minimum when Eqs. (7.68) and (7.69) are satisfied.