


           and the minimum m.s. error e_m is (Prob. 7.22)

               e_m = \sigma_Y^2 \left(1 - \rho_{XY}^2\right)

           where σ_XY = Cov(X, Y) and ρ_XY is the correlation coefficient of X and Y. Note that Eq. (7.21) states
           that the optimum linear m.s. estimator Ŷ = aX + b of Y is such that the estimation error Y - Ŷ =
           Y - (aX + b) is orthogonal to the observation X. This is known as the orthogonality principle. The
           line y = ax + b is often called a regression line.
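
           As a numerical check (an illustration added here, not part of the original text), the Python sketch
           below simulates a jointly distributed pair (X, Y), forms the standard linear m.s. coefficients
           a = σ_XY/σ_X^2 and b = μ_Y - a μ_X from sample moments, and verifies that the error Y - Ŷ is
           orthogonal to X and that the m.s. error matches σ_Y^2(1 - ρ_XY^2). The simulated model, parameter
           values, and variable names are assumptions made for the example.

```python
# Illustrative sketch only: verify the linear m.s. estimator and the
# orthogonality principle on simulated data (model and names are assumed).
import numpy as np

rng = np.random.default_rng(0)

n = 200_000
x = rng.normal(1.0, 2.0, n)               # observation X (mean 1, std 2)
y = 0.5 * x + rng.normal(0.0, 1.0, n)     # Y linearly related to X plus noise

mu_x, mu_y = x.mean(), y.mean()
var_x, var_y = x.var(), y.var()
cov_xy = np.cov(x, y, bias=True)[0, 1]    # sigma_XY = Cov(X, Y)
rho_xy = cov_xy / np.sqrt(var_x * var_y)  # correlation coefficient of X and Y

a = cov_xy / var_x                        # optimum linear m.s. coefficients
b = mu_y - a * mu_x
err = y - (a * x + b)                     # estimation error Y - Y_hat

print("E[(Y - Y_hat) X]   ~", np.mean(err * x))            # ~ 0 (orthogonality)
print("E[(Y - Y_hat)^2]   ~", np.mean(err ** 2))            # minimum m.s. error
print("sigma_Y^2(1-rho^2) =", var_y * (1.0 - rho_xy ** 2))  # matches the error above
```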
               Next, we consider the estimator Ŷ of Y given by a linear combination of the random sample
           (X_1, ..., X_n):

               \hat{Y} = \sum_{i=1}^{n} a_i X_i
           Again, we maintain that in order to produce the linear estimator with the minimum m.s. error, the
           coefficients a_i must be such that the following orthogonality conditions are satisfied (Prob. 7.35):

               E\left\{\left(Y - \sum_{i=1}^{n} a_i X_i\right) X_j\right\} = 0, \qquad j = 1, \ldots, n     (7.25)
           Solving Eq. (7.25) for the a_i, we obtain

               \mathbf{a} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} = R^{-1} \mathbf{r}

           where

               \mathbf{r} = \begin{bmatrix} E(Y X_1) \\ \vdots \\ E(Y X_n) \end{bmatrix}, \qquad R = [R_{ij}]_{n \times n}, \qquad R_{ij} = E(X_i X_j)

           and R^{-1} is the inverse of R.
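
           As an illustration (added here, not from the text), the following sketch forms R and r from sample
           averages, solves a = R^{-1} r numerically, and checks the orthogonality conditions of Eq. (7.25).
           The simulated model, the sample sizes, and names such as `design` and `true_a` are assumptions made
           for the example.

```python
# Illustrative sketch only: estimate the coefficients a_i of Y_hat = sum_i a_i X_i
# by solving a = R^{-1} r, with R_ij = E(X_i X_j) and r_j = E(Y X_j) replaced by
# sample averages (the data-generating model below is an assumption).
import numpy as np

rng = np.random.default_rng(1)

m, n = 100_000, 3                          # number of draws, number of observations X_1..X_n
design = rng.normal(size=(m, n))           # each row is one draw of (X_1, ..., X_n)
true_a = np.array([2.0, -1.0, 0.5])
y = design @ true_a + rng.normal(scale=0.1, size=m)

R = design.T @ design / m                  # R_ij ~ E(X_i X_j)
r = design.T @ y / m                       # r_j  ~ E(Y X_j)

a = np.linalg.solve(R, r)                  # a = R^{-1} r (solve rather than invert R)
print("a =", a)                            # close to true_a

residual = y - design @ a                  # Y - Y_hat
print("E[(Y - Y_hat) X_j] =", design.T @ residual / m)   # ~ 0 for every j, Eq. (7.25)
```

           Solving the linear system directly, rather than forming R^{-1} explicitly, is the usual numerical
           practice; the result is the same vector a = R^{-1} r.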




                                           Solved Problems


          PROPERTIES OF POINT ESTIMATORS
          7.1.   Let (X_1, ..., X_n) be a random sample of X having unknown mean μ. Show that the estimator of
                 μ defined by

                     M = \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i

                 is an unbiased estimator of μ. Note that X̄ is known as the sample mean (Prob. 4.64).
                    By Eq. (4.108),

                        E(M) = E\left(\frac{1}{n} \sum_{i=1}^{n} X_i\right) = \frac{1}{n} \sum_{i=1}^{n} E(X_i) = \frac{1}{n}(n\mu) = \mu

                 Thus, M is an unbiased estimator of μ.
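
                 A quick simulation (an added illustration, not part of the text) shows the same fact
                 empirically: averaging the sample mean M over many independent samples recovers μ. The normal
                 distribution and the parameter values below are arbitrary assumptions.

```python
# Illustrative check only: the sample mean M = (1/n) sum_i X_i is unbiased,
# so its average over many independent samples approaches the true mean mu.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, trials = 3.0, 2.0, 10, 200_000

samples = rng.normal(mu, sigma, size=(trials, n))  # `trials` samples, each of size n
M = samples.mean(axis=1)                           # sample mean of each sample

print("average of M over trials:", M.mean())       # ~ mu = 3.0
```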

          7.2.   Let (X_1, ..., X_n) be a random sample of X having unknown mean μ and variance σ^2. Show that
                 the estimator of σ^2 defined by