
function is then given by
$$
p(x; \theta) = \frac{1}{\sigma^n (2\pi)^{n/2}} \exp\left\{-\frac{1}{2\sigma^2} \sum_{j=1}^{n} (x_j - \beta t_j)^2\right\}, \qquad x = (x_1, \ldots, x_n) \in \mathbf{R}^n;
$$
here $\theta = (\beta, \sigma)$ and $\Theta = \mathbf{R} \times \mathbf{R}^{+}$.
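A minimal numerical sketch of this density (not part of the original text): it assumes, as the formula indicates, independent normal observations with means $\beta t_j$ and common standard deviation $\sigma$; the function name and sample values are hypothetical.

```python
import numpy as np

def log_density(x, t, beta, sigma):
    """Log of p(x; theta) above, with theta = (beta, sigma) and
    fixed covariate values t_1, ..., t_n."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(t, dtype=float)
    n = x.size
    resid = x - beta * t
    return (-n * np.log(sigma)
            - (n / 2) * np.log(2 * np.pi)
            - np.sum(resid ** 2) / (2 * sigma ** 2))

# Hypothetical data: four observations with covariates t_j.
t = np.array([1.0, 2.0, 3.0, 4.0])
x = np.array([0.9, 2.1, 3.2, 3.8])
print(log_density(x, t, beta=1.0, sigma=0.5))
```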
                            Likelihood ratios
Likelihood ratios play an important role in statistical inference. Consider a parametric model for a random variable $X$ with model function $p(\cdot\,; \theta)$, $\theta \in \Theta$. A function of $X$ of the form
$$
\frac{p(X; \theta_1)}{p(X; \theta_0)},
$$
where $\theta_0, \theta_1 \in \Theta$, is called a likelihood ratio. For cases in which the true parameter value $\theta$ is unknown, the ratio $p(x; \theta_1)/p(x; \theta_0)$ may be used as a measure of the strength of the evidence supporting $\theta = \theta_1$ versus $\theta = \theta_0$, based on the observation of $X = x$.
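As a small added illustration, the sketch below evaluates a likelihood ratio for a hypothetical model in which $X$ is a single observation from an $N(\theta, 1)$ distribution; ratios greater than 1 indicate evidence favouring $\theta_1$ over $\theta_0$.

```python
from scipy.stats import norm

def likelihood_ratio(x, theta1, theta0):
    """p(x; theta1) / p(x; theta0) for one N(theta, 1) observation."""
    return norm.pdf(x, loc=theta1) / norm.pdf(x, loc=theta0)

# An observation near theta1 = 1 produces a ratio above 1, that is,
# evidence supporting theta = theta1 over theta = theta0.
print(likelihood_ratio(0.9, theta1=1.0, theta0=0.0))
```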
Note that
$$
E\left\{\frac{p(X; \theta_1)}{p(X; \theta_0)}; \theta_0\right\} = 1
$$
for all $\theta_1 \in \Theta$. To see this, suppose that $X$ has an absolutely continuous distribution with density function $p(\cdot\,; \theta_0)$. Then
$$
E\left\{\frac{p(X; \theta_1)}{p(X; \theta_0)}; \theta_0\right\}
= \int_{-\infty}^{\infty} \frac{p(x; \theta_1)}{p(x; \theta_0)}\, p(x; \theta_0)\, dx
= \int_{-\infty}^{\infty} p(x; \theta_1)\, dx = 1.
$$
A similar result holds for frequency functions.
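This identity is easy to check by simulation. The sketch below (an added illustration, using the same hypothetical $N(\theta, 1)$ model) averages the likelihood ratio over draws from the $\theta_0$ distribution; the sample mean should be close to 1.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta0, theta1 = 0.0, 1.0

# Draw X under theta0 and average p(X; theta1)/p(X; theta0);
# by the identity above, the expectation equals 1.
x = rng.normal(loc=theta0, scale=1.0, size=200_000)
ratios = norm.pdf(x, loc=theta1) / norm.pdf(x, loc=theta0)
print(ratios.mean())  # approximately 1
```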
                              Another important property of likelihood ratios is given in the following example.

Example 5.8 (Martingale property of likelihood ratios). Consider a sequence of real-valued random variables $Y_1, Y_2, \ldots$; in particular, we are interested in the case in which $Y_1, Y_2, \ldots$ are not independent, although the following analysis also applies in the case of independence. Suppose that, for each $n = 1, 2, \ldots$, the distribution of $Y_1, Y_2, \ldots, Y_n$ is absolutely continuous with density function $p_n(\cdot\,; \theta)$, where $\theta$ is a parameter taking values in a set $\Theta$. We assume that, for each $\theta \in \Theta$, the density functions $p_n(\cdot\,; \theta)$, $n = 1, 2, \ldots$, are consistent in the following sense. Fix $n$. For any $m < n$, the marginal density of $(Y_1, \ldots, Y_m)$ based on $p_n(\cdot\,; \theta)$ is equal to $p_m(\cdot\,; \theta)$. That is,
$$
p_m(y_1, \ldots, y_m; \theta) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} p_n(y_1, \ldots, y_m, y_{m+1}, \ldots, y_n; \theta)\, dy_{m+1} \cdots dy_n, \qquad \theta \in \Theta.
$$
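As an added aside, the consistency condition is immediate in the independent case: if the $Y_j$ are i.i.d. with marginal density $f(\cdot\,; \theta)$, then $p_n$ is a product of marginals and integrating out the trailing coordinates recovers $p_m$. The sketch below checks this numerically for a hypothetical $N(\theta, 1)$ marginal with $n = 2$ and $m = 1$.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Hypothetical i.i.d. N(theta, 1) case: p_2(y1, y2) = phi(y1 - theta) * phi(y2 - theta).
theta, y1 = 0.7, 1.3
p2 = lambda y2: norm.pdf(y1, loc=theta) * norm.pdf(y2, loc=theta)

marginal, _ = quad(p2, -np.inf, np.inf)    # integrate y2 out of p_2
print(marginal, norm.pdf(y1, loc=theta))   # both equal p_1(y1; theta)
```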
Let $\theta_0$ and $\theta_1$ denote distinct elements of $\Theta$ and define
$$
X_n = \frac{p_n(Y_1, \ldots, Y_n; \theta_1)}{p_n(Y_1, \ldots, Y_n; \theta_0)}, \qquad n = 1, 2, \ldots,
$$
where $Y_1, Y_2, \ldots$ are distributed according to the distribution with parameter $\theta_0$. Note that the event $p_n(Y_1, \ldots, Y_n; \theta_0) = 0$ has probability 0 and, hence, may be ignored.
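To preview the martingale property named in the example's title, the following sketch (again an added illustration, using a hypothetical i.i.d. $N(\theta, 1)$ model with $\theta_0 = 0$ and $\theta_1 = 1$) fixes $Y_1, \ldots, Y_5$, draws many candidate values of $Y_6$ under $\theta_0$, and checks that the average of $X_6$ over those draws is close to $X_5$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
theta0, theta1 = 0.0, 1.0

def X(y):
    """X_n = p_n(y_1, ..., y_n; theta1) / p_n(y_1, ..., y_n; theta0)
    for independent N(theta, 1) observations."""
    return np.prod(norm.pdf(y, loc=theta1) / norm.pdf(y, loc=theta0))

y_fixed = rng.normal(loc=theta0, size=5)    # observed Y_1, ..., Y_5
x5 = X(y_fixed)

# Average X_6 over fresh draws of Y_6 from the theta0 distribution;
# the conditional expectation of X_6 given Y_1, ..., Y_5 equals X_5.
y6 = rng.normal(loc=theta0, size=200_000)
x6 = x5 * norm.pdf(y6, loc=theta1) / norm.pdf(y6, loc=theta0)
print(x5, x6.mean())   # the two numbers should be close
```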