
positive integral values of ν. Recall that ν is referred to as the degree of freedom and X is often denoted by $\chi^2_{\nu}$. Then, in this special situation, we can summarize the following results:

$$E(X) = \nu, \qquad V(X) = 2\nu, \qquad M_X(t) = (1 - 2t)^{-\nu/2} \ \text{ for } t < 1/2.$$

These can again be checked easily from (2.3.23) as well as (2.3.26).
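As a quick cross-check (a sketch, assuming (2.3.23) records the Gamma(α, β) mgf in the usual form $M_X(t) = (1 - \beta t)^{-\alpha}$ for $t < 1/\beta$, with mean $\alpha\beta$ and variance $\alpha\beta^2$): a $\chi^2_{\nu}$ variable is Gamma with $\alpha = \nu/2$ and $\beta = 2$, so substitution gives

$$M_X(t) = (1 - 2t)^{-\nu/2} \ \text{ for } t < 1/2, \qquad E(X) = \alpha\beta = \nu, \qquad V(X) = \alpha\beta^2 = 2\nu.$$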



                                 2.4 Determination of a Distribution via MGF
Next, we emphasize the role of a moment generating function in uniquely determining the probability distribution of a random variable. We state an important and useful result below; its proof is beyond the scope of this book. We will have ample opportunities to rely upon this result in the sequel.
Theorem 2.4.1 Let $M(t)$ be a finite mgf for $|t| < a$ with some $a > 0$. Then $M(t)$ corresponds to the mgf associated with the probability distribution of a uniquely determined random variable.
This simple-looking result, however, has deep implications. Suppose that U is a discrete random variable and assume that somehow one knows the expression of its mgf $M_U(t)$. Now if $M_U(t)$, for example, looks exactly like the mgf of a Binomial random variable, then we can conclude that U indeed has the Binomial distribution. On the other hand, if $M_U(t)$ looks exactly like the mgf of a Poisson random variable, then again we will conclude that U has the Poisson distribution, and so on. If U has a continuous distribution instead and $M_U(t)$, for example, looks exactly like the mgf of a Gamma random variable, then U indeed must have a Gamma distribution. These conclusions are supported by Theorem 2.4.1. We will exploit such implications in the sequel.
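As a small illustration of this matching argument (the specific mgfs below are hypothetical, chosen only for the example): recall that the Poisson(λ) mgf is $e^{\lambda(e^t - 1)}$ and the Binomial(n, p) mgf is $(q + pe^t)^n$ with $q = 1 - p$. Hence,

$$M_U(t) = e^{3(e^t - 1)} \;\Longrightarrow\; U \sim \text{Poisson}(3), \qquad M_U(t) = (0.6 + 0.4e^t)^{10} \;\Longrightarrow\; U \sim \text{Binomial}(10, .4),$$

by Theorem 2.4.1, since each mgf is finite for all t and matches exactly one member of the corresponding family.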
Example 2.4.1 Suppose that a random variable X takes the possible values 0, 1 and 2 with the respective probabilities 1/8, 1/4 and 5/8. The mgf of X is obviously given by

$$M_X(t) = \tfrac{1}{8} + \tfrac{1}{4}e^{t} + \tfrac{5}{8}e^{2t} = \tfrac{1}{8}\left(1 + 2e^{t} + 5e^{2t}\right).$$
Observe how the mgf of a discrete random variable is formed and how easy it is to identify the probability distribution by inspecting the appearance of the terms which together build up the function $M_X(t)$. Now suppose that we have a random variable U whose mgf is $M_U(t) = \frac{1}{5}\left(1 + e^{t} + 3e^{2t}\right)e^{-t}$. What is the probability distribution of U? Distributing the factor $e^{-t}$, let us rewrite this mgf as

$$M_U(t) = \tfrac{1}{5}e^{-t} + \tfrac{1}{5} + \tfrac{3}{5}e^{t} = .2e^{-t} + .2 + .6e^{t}.$$

We can immediately claim that U takes the possible values –1, 0 and 1 with the respective probabilities .2, .2 and .6. In view of Theorem 2.4.1, we can conclude that this is indeed the distribution of U.
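To make the identification concrete, here is a minimal numerical sketch (my own, not from the text; `mgf_from_pmf` is a hypothetical helper name) that evaluates each closed-form mgf on a grid of t values and checks it against the defining sum $\sum_x p(x)e^{tx}$ for the claimed support and probabilities:

```python
import numpy as np

def mgf_from_pmf(support, probs, t):
    """Evaluate M(t) = sum_x p(x) * exp(t * x) for a discrete distribution."""
    support = np.asarray(support, dtype=float)
    probs = np.asarray(probs, dtype=float)
    # exp(outer(t, support)) has shape (len(t), len(support)); the matrix
    # product with probs returns M(t) at every grid point at once.
    return np.exp(np.outer(t, support)) @ probs

t = np.linspace(-1.0, 1.0, 201)

# Example 2.4.1: X takes 0, 1, 2 with probabilities 1/8, 1/4, 5/8.
closed_form_X = (1 + 2 * np.exp(t) + 5 * np.exp(2 * t)) / 8
assert np.allclose(mgf_from_pmf([0, 1, 2], [1/8, 1/4, 5/8], t), closed_form_X)

# Second example: M_U(t) = (1/5)(1 + e^t + 3e^{2t}) e^{-t}.
closed_form_U = (1 + np.exp(t) + 3 * np.exp(2 * t)) * np.exp(-t) / 5
assert np.allclose(mgf_from_pmf([-1, 0, 1], [0.2, 0.2, 0.6], t), closed_form_U)

print("Both mgfs match the claimed discrete distributions on the grid.")
```

Of course, such a numerical check only corroborates the algebra; it is Theorem 2.4.1 that guarantees no other distribution can share these mgfs.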