This completes the proof. The theorem stated above is a special case of the Gauss–Markov theorem.
   Another interesting comparison is that between the least-square estimators for $\alpha$ and $\beta$ and their maximum likelihood estimators with an assigned distribution for random variable $Y$. It is left as an exercise to show that the maximum likelihood estimators for $\alpha$ and $\beta$ are identical to their least-square counterparts under the added assumption that $Y$ is normally distributed.
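   A brief sketch of that exercise (not the text's worked solution): assume the $Y_i$ are independent and normally distributed with mean $\alpha + \beta x_i$ and common variance $\sigma^2$. The log-likelihood of observations $y_1, \ldots, y_n$ is then

$$\ln L(\alpha, \beta, \sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[\,y_i - (\alpha + \beta x_i)\,\right]^2.$$

For any fixed $\sigma^2 > 0$, maximizing $\ln L$ with respect to $\alpha$ and $\beta$ amounts to minimizing $\sum_{i=1}^{n}[\,y_i - (\alpha + \beta x_i)\,]^2$, which is precisely the least-square criterion; the maximizing values therefore coincide with $\hat{A}$ and $\hat{B}$.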



11.1.3  UNBIASED ESTIMATOR FOR $\sigma^2$

As we have shown, the method of least squares does not lead to an estimator for variance $\sigma^2$ of $Y$, which is in general also an unknown quantity in linear regression models. In order to propose an estimator for $\sigma^2$, an intuitive choice is

$$\widehat{\sigma^2} = k \sum_{i=1}^{n} \left[\, Y_i - (\hat{A} + \hat{B} x_i) \,\right]^2, \qquad (11.28)$$

where coefficient $k$ is to be chosen so that $\widehat{\sigma^2}$ is unbiased. In order to carry out the expectation of $\widehat{\sigma^2}$, we note that [see Equation (11.7)]
$$Y_i - \hat{A} - \hat{B} x_i = Y_i - (\bar{Y} - \hat{B}\bar{x}) - \hat{B} x_i = (Y_i - \bar{Y}) - \hat{B}(x_i - \bar{x}). \qquad (11.29)$$
           Hence, it follows that
$$\sum_{i=1}^{n} \left( Y_i - \hat{A} - \hat{B} x_i \right)^2 = \sum_{i=1}^{n} (Y_i - \bar{Y})^2 - \hat{B}^2 \sum_{i=1}^{n} (x_i - \bar{x})^2, \qquad (11.30)$$
           since [see Equation (11.8)]
$$\sum_{i=1}^{n} (x_i - \bar{x})(Y_i - \bar{Y}) = \hat{B} \sum_{i=1}^{n} (x_i - \bar{x})^2. \qquad (11.31)$$
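For completeness, Equation (11.30) follows by squaring and summing Equation (11.29),

$$\sum_{i=1}^{n} \left( Y_i - \hat{A} - \hat{B} x_i \right)^2 = \sum_{i=1}^{n} (Y_i - \bar{Y})^2 - 2\hat{B} \sum_{i=1}^{n} (x_i - \bar{x})(Y_i - \bar{Y}) + \hat{B}^2 \sum_{i=1}^{n} (x_i - \bar{x})^2,$$

and then replacing the middle sum by means of Equation (11.31), so that the last two terms combine into $-\hat{B}^2 \sum_{i=1}^{n} (x_i - \bar{x})^2$.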

           Upon taking expectations term by term, we can show that
$$E\{\widehat{\sigma^2}\} = k\, E\left\{ \sum_{i=1}^{n} (Y_i - \bar{Y})^2 - \hat{B}^2 \sum_{i=1}^{n} (x_i - \bar{x})^2 \right\} = k\,(n-2)\,\sigma^2.$$
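The result above implies that choosing $k = 1/(n-2)$ makes $\widehat{\sigma^2}$ an unbiased estimator for $\sigma^2$. The short Python sketch below is a numerical illustration of this point (it is not from the text; the model values $\alpha = 1$, $\beta = 2$, $\sigma^2 = 4$ and the design points are assumed for the example): it repeatedly simulates samples from $Y_i = \alpha + \beta x_i + E_i$, computes the least-square estimates and the residual sum of squares, and averages $\widehat{\sigma^2}$ with $k = 1/(n-2)$, which should come out close to $\sigma^2$.

    import numpy as np

    # Illustrative values (assumed for this sketch, not from the text).
    rng = np.random.default_rng(0)
    alpha, beta, sigma2 = 1.0, 2.0, 4.0
    x = np.linspace(0.0, 10.0, 15)     # fixed design points, n = 15
    n = x.size

    estimates = []
    for _ in range(20000):
        y = alpha + beta * x + rng.normal(0.0, np.sqrt(sigma2), n)
        # Least-square estimates of slope and intercept.
        b_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
        a_hat = y.mean() - b_hat * x.mean()
        rss = np.sum((y - (a_hat + b_hat * x)) ** 2)   # residual sum of squares
        estimates.append(rss / (n - 2))                # k = 1/(n - 2)

    print(np.mean(estimates))   # approximately sigma2 = 4.0, consistent with unbiasedness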






