Page 166 - Elements of Distribution Theory
                            152                   Parametric Families of Distributions

  In some cases, the explanatory variables are random variables as well. Let (X_j, Y_j), j = 1, ..., n, denote independent random vectors such that the conditional distribution of Y_j given X_j = x_j is the element of P corresponding to parameter value

    λ_j = h(x_j; θ)

for some known function h. Then the model based on the conditional distribution of (Y_1, ..., Y_n) given (X_1, ..., X_n) = (x_1, ..., x_n) is identical to the regression model considered earlier. This approach is appropriate provided that the distribution of the covariate vector (X_1, ..., X_n) does not depend on θ, the parameter of interest. If the distribution of (X_1, ..., X_n) is also of interest, or depends on parameters that are of interest, then a model for the distribution of (X_1, Y_1), ..., (X_n, Y_n) would be appropriate.


Example 5.23. Let (X_1, Y_1), ..., (X_n, Y_n) denote independent, identically distributed pairs of real-valued random variables such that the conditional distribution of Y_j given X_j = x is a binomial distribution with frequency function of the form

    \binom{x}{y} \theta_1^{y} (1 - \theta_1)^{x - y},   y = 0, 1, ..., x,

where 0 < θ_1 < 1, and the marginal distribution of X_j is a Poisson distribution with mean θ_2, θ_2 > 0.
  If only the parameter θ_1 is of interest, then a statistical analysis can be based on the conditional distribution of (Y_1, ..., Y_n) given (X_1, ..., X_n) = (x_1, ..., x_n), which has model function

    \prod_{j=1}^{n} \binom{x_j}{y_j} \theta_1^{y_j} (1 - \theta_1)^{x_j - y_j}.
If both parameters θ_1 and θ_2 are of interest, a statistical analysis can be based on the distribution of (X_1, Y_1), ..., (X_n, Y_n), which has model function

    \prod_{j=1}^{n} \frac{\theta_2^{x_j} \exp(-\theta_2)}{y_j! \, (x_j - y_j)!} \, \theta_1^{y_j} (1 - \theta_1)^{x_j - y_j}.
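The hierarchical structure of Example 5.23 is easy to check by simulation. The sketch below (using NumPy, with illustrative parameter values θ_1 = 0.3 and θ_2 = 5 that are not from the text) draws X_j from a Poisson distribution and Y_j from the conditional binomial distribution, and then verifies a standard consequence of this model, Poisson thinning: marginally, Y_j has a Poisson distribution with mean θ_1 θ_2, so its empirical mean and variance should both be close to θ_1 θ_2.

```python
import numpy as np

rng = np.random.default_rng(0)
theta1, theta2, n = 0.3, 5.0, 200_000  # illustrative parameter values

# The hierarchical model of Example 5.23:
# X_j ~ Poisson(theta2), then Y_j | X_j = x_j ~ Binomial(x_j, theta1).
x = rng.poisson(theta2, size=n)
y = rng.binomial(x, theta1)

# Poisson thinning: marginally Y_j ~ Poisson(theta1 * theta2), so the
# empirical mean and variance of the Y_j should both be near 1.5.
print(y.mean(), y.var())
```

With n this large, both sample moments agree with θ_1 θ_2 = 1.5 to about two decimal places, consistent with the marginal Poisson(θ_1 θ_2) distribution of each Y_j.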
                                               5.6 Models with a Group Structure

For some models there is additional structure relating distributions with different parameter values, and it is often possible to exploit this structure to simplify the distribution theory of the model. The following example illustrates this possibility.

Example 5.24 (Normal distribution). Let X denote a random variable with a normal distribution with mean µ, −∞ < µ < ∞, and standard deviation σ, σ > 0. Using characteristic functions it is straightforward to show that the distribution of X is identical to the distribution of µ + σZ, where Z has a standard normal distribution. Hence, we may write X = µ + σZ.
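The representation X = µ + σZ can be illustrated numerically. The sketch below (a NumPy simulation with illustrative values µ = 2 and σ = 3 that are not from the text) draws Z from the standard normal distribution, forms µ + σZ, and checks that the result behaves like a normal sample with mean µ and standard deviation σ, and that P(X ≤ t) matches Φ((t − µ)/σ).

```python
from math import erf, sqrt

import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 3.0  # illustrative values

# Draw Z from the standard normal distribution and form X = mu + sigma * Z.
z = rng.standard_normal(500_000)
x = mu + sigma * z

# X should behave like a N(mu, sigma^2) sample: empirical P(X <= t)
# should match Phi((t - mu) / sigma), the standard normal CDF at the
# standardized point.
t = 4.5
emp_cdf = (x <= t).mean()
phi = 0.5 * (1.0 + erf((t - mu) / sigma / sqrt(2.0)))
print(x.mean(), x.std(), emp_cdf, phi)
```

The empirical mean and standard deviation are close to µ and σ, and the empirical CDF at t agrees with the standardized normal CDF, reflecting the fact that X and µ + σZ have the same distribution.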
  Let b and c denote constants with c > 0, and let Y = b + cX. Then we may write Y = cµ + b + (cσ)Z, so that Y has the same distribution as X except that µ is replaced by cµ + b and σ is replaced by cσ. Hence, many properties of the distribution of Y may be obtained