which is the product of the individual density functions evaluated at each sample point $x_i$.

In most cases, to find the value $\hat{\theta}$ that maximizes the likelihood function, we take the derivative of $L$, set it equal to 0, and solve for $\theta$. Thus, we solve the following likelihood equation

$$\frac{d}{d\theta} L(\theta) = 0 . \tag{3.24}$$
It can be shown that the likelihood function, $L(\theta)$, and the logarithm of the likelihood function, $\ln L(\theta)$, have their maxima at the same value of $\theta$. It is sometimes easier to find the maximum of $\ln L(\theta)$, especially when working with an exponential function. However, keep in mind that a solution to the above equation does not imply that it is a maximum; it could be a minimum. It is important to ensure this is the case before using the result as a maximum likelihood estimator.
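
As an illustration of this idea, the following MATLAB sketch (not from the text; the exponential example, the simulated data, and the use of fminsearch are assumptions for illustration) maximizes the log-likelihood of an exponential sample by minimizing its negative. The closed-form maximum likelihood estimate of the exponential rate, $1/\bar{x}$, serves as a check that the value found is indeed the maximizer.

% A minimal sketch (illustration only): numerically maximize the
% exponential log-likelihood, ln L(lambda) = n*ln(lambda) - lambda*sum(x),
% by minimizing its negative.  Optimizing over t = ln(lambda) keeps
% lambda positive.  exprnd is from the Statistics Toolbox and is
% parameterized by the mean.
n = 100;
x = exprnd(2, n, 1);                 % simulated data with mean 2 (rate 0.5)
negloglik = @(t) -(n*t - exp(t)*sum(x));
tHat = fminsearch(negloglik, 0);     % minimize the negative log-likelihood
lambdaHat = exp(tHat);
[lambdaHat, 1/mean(x)]               % compare with the closed-form MLE 1/xbar
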
When a distribution has more than one parameter, the likelihood function is a function of all parameters that pertain to the distribution. In these situations, the maximum likelihood estimates are obtained by taking the partial derivatives of the likelihood function (or $\ln L(\theta)$), setting them all equal to zero, and solving the system of equations. The resulting estimators are called the joint maximum likelihood estimators. We see an example of this below, where we derive the maximum likelihood estimators for $\mu$ and $\sigma^2$ for the normal distribution.
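
Before the derivation, here is a brief MATLAB sketch (an illustration, not part of the text) of the joint estimation idea: the negative log-likelihood of a normal sample is minimized over both parameters at once, and the numerical answer can be compared with the sample mean and the sample variance normalized by n, which are the closed-form estimators derived in Example 3.3. The simulated data and the fminsearch call are assumptions for illustration.

% A minimal sketch (illustration only): joint maximum likelihood for the
% normal parameters by numerical minimization of the negative
% log-likelihood over p = [mu, ln(sigma)]; the log transform keeps sigma
% positive, and additive constants are dropped since they do not affect
% the location of the maximum.
n = 200;
x = 10 + 3*randn(n, 1);                          % simulated data, mu = 10, sigma = 3
negloglik = @(p) n*p(2) + sum((x - p(1)).^2)/(2*exp(2*p(2)));
phat = fminsearch(negloglik, [0 0]);             % joint search over both parameters
muHat = phat(1);
sigma2Hat = exp(2*phat(2));
[muHat mean(x); sigma2Hat var(x, 1)]             % numerical versus closed-form MLEs
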

                             Example 3.3
                             In this example, we derive the maximum likelihood estimators for the
                             parameters of the normal distribution. We start off with the likelihood func-
                             tion for a random sample of size n given by

$$L(\theta) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}} \exp\left\{-\frac{(x_i - \mu)^2}{2\sigma^2}\right\} = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2\right\}.$$
                             Since this has the exponential function in it, we will take the logarithm to
                             obtain
$$\ln[L(\theta)] = \ln\left[\left(\frac{1}{2\pi\sigma^2}\right)^{n/2}\right] + \ln\left[\exp\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2\right\}\right].$$
                             This simplifies to



